{ "paper_id": "P97-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:15:31.655003Z" }, "title": "A Flexible POS Tagger Using an Automatically Acquired Language Model*", "authors": [ { "first": "Llufs", "middle": [], "last": "Mhrquez", "suffix": "", "affiliation": { "laboratory": "", "institution": "LSI-UPC c/Jordi", "location": { "postCode": "1-3 08034", "settlement": "Girona, Barcelona", "region": "Catalonia" } }, "email": "padro@isi@upc.es" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.", "pdf_parse": { "paper_id": "P97-1031", "_pdf_hash": "", "abstract": [ { "text": "We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In NLP, it is necessary to model the language in a representation suitable for the task to be performed. The language models more commonly used are based on two main approaches: first, the linguistic approach, in which the model is written by a linguist, generally in the form of rules or constraints (Voutilainen and Jgrvinen, 1995) . Second, the automatic approach, in which the model is automatically obtained from corpora (either raw or annotated) 1 , and consists of n-grams (Garside et al., 1987; Cutting et ah, 1992) , rules (Hindle, 1989) or neural nets (Schmid, 1994) . In the automatic approach we can distinguish two main trends: The low-level data trend collects statistics from the training corpora in the form of n-grams, probabilities, weights, etc. The high level data trend acquires more sophisticated information, such as context rules, constraints, or decision trees (Daelemans et al., 1996; M/~rquez and Rodriguez, 1995; Samuelsson et al., 1996) . The acquisition methods range from supervised-inductivelearning-from-example algorithms (Quinlan, 1986; *This research has been partially funded by the Spanish Research Department (CICYT) and inscribed as TIC96-1243-C03-02 I When the model is obtained from annotated corpora we talk about supervised learning, when it is obtained from raw corpora training is considered unsupervised. 
Aha et al., 1991) to genetic algorithm strategies (Losee, 1994), through the transformation-based error-driven algorithm used in (Brill, 1995). Still another possibility is that of hybrid models, which try to join the advantages of both approaches (Voutilainen and Padró, 1997). [* This research has been partially funded by the Spanish Research Department (CICYT) and inscribed as TIC96-1243-C03-02.] [1 When the model is obtained from annotated corpora we talk about supervised learning; when it is obtained from raw corpora, training is considered unsupervised.]", "cite_spans": [ { "start": 301, "end": 333, "text": "(Voutilainen and Järvinen, 1995)", "ref_id": null }, { "start": 480, "end": 502, "text": "(Garside et al., 1987;", "ref_id": "BIBREF9" }, { "start": 503, "end": 523, "text": "Cutting et al., 1992)", "ref_id": null }, { "start": 532, "end": 546, "text": "(Hindle, 1989)", "ref_id": "BIBREF10" }, { "start": 562, "end": 576, "text": "(Schmid, 1994)", "ref_id": "BIBREF26" }, { "start": 886, "end": 910, "text": "(Daelemans et al., 1996;", "ref_id": "BIBREF8" }, { "start": 911, "end": 940, "text": "Màrquez and Rodríguez, 1995;", "ref_id": "BIBREF16" }, { "start": 941, "end": 965, "text": "Samuelsson et al., 1996)", "ref_id": "BIBREF25" }, { "start": 1056, "end": 1071, "text": "(Quinlan, 1986;", "ref_id": "BIBREF22" }, { "start": 1353, "end": 1370, "text": "Aha et al., 1991)", "ref_id": "BIBREF0" }, { "start": 1403, "end": 1416, "text": "(Losee, 1994)", "ref_id": "BIBREF14" }, { "start": 1483, "end": 1496, "text": "(Brill, 1995)", "ref_id": "BIBREF3" }, { "start": 1617, "end": 1629, "text": "Padró, 1997)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we present a hybrid approach that puts together both trends of the automatic approach and the linguistic approach. We describe a POS tagger, based on the work described in (Padró, 1996), that is able to use bi/trigram information, automatically learned context constraints and linguistically motivated manually written constraints. The sources and kinds of constraints are unrestricted, and the language model can be easily extended. The structure of the tagger is presented in figure 1.", "cite_spans": [ { "start": 181, "end": 194, "text": "(Padró, 1996)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We also present a constraint-acquisition algorithm that uses statistical decision trees to learn context constraints from annotated corpora, and we use the acquired constraints to feed the POS tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "The paper is organized as follows. In section 2 we describe our language model, in section 3 we describe the constraint acquisition algorithm, and in section 4 we present the tagging algorithm. Descriptions of the corpus used, the experiments performed and the results obtained can be found in sections 5 and 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "We will use a hybrid language model consisting of an automatically acquired part and a linguist-written part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" }, { "text": "The automatically acquired part is divided into two kinds of information: on the one hand, we have bigrams and trigrams collected from the annotated training corpus (see section 5 for details). On the other hand, we have context constraints learned from the same training corpus using statistical decision trees, as described in section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" },
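As a rough illustration of the bigram/trigram part of the model (a minimal sketch of our own, not the authors' code; it assumes the annotated training corpus is available as lists of (word, tag) pairs and uses a hypothetical "<S>" boundary marker), the counts underlying such a model could be collected as follows:

    from collections import Counter

    def collect_tag_ngrams(tagged_sentences):
        """Collect tag bigram and trigram counts from a tagged corpus.

        tagged_sentences: iterable of sentences, each a list of (word, tag)
        pairs; "<S>" is a hypothetical sentence-boundary marker.
        """
        bigrams, trigrams = Counter(), Counter()
        for sentence in tagged_sentences:
            tags = ["<S>"] + [tag for _, tag in sentence] + ["<S>"]
            for i in range(len(tags) - 1):
                bigrams[tags[i], tags[i + 1]] += 1
            for i in range(len(tags) - 2):
                trigrams[tags[i], tags[i + 1], tags[i + 2]] += 1
        return bigrams, trigrams

Relative frequencies derived from counts of this kind would play the role of the bi/trigram information mentioned above.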
{ "text": "The linguistic part is very small, since there were no available resources to develop it further, and covers only a few cases, but it is included to illustrate the flexibility of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" }, { "text": "A sample rule of the linguistic part:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" }, { "text": "10.0 (%vauxiliar%) (-[VBN IN , : JJ JJS JJR])+ ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" }, { "text": "This rule states that the tag past participle (VBN) is very compatible (10.0) with a left context consisting of a %vauxiliar% (a previously defined macro which includes all forms of \"have\" and \"be\"), provided that none of the words in between has any of the tags in the set [VBN IN , : JJ JJS JJR]. That is, this rule raises the support for the tag past participle when there is an auxiliary verb to the left, but only if there is no other candidate to be a past participle or an adjective in between. The tags [IN , :] prevent the rule from being applied when the auxiliary verb and the participle are in two different phrases (a comma, a colon or a preposition is considered to mark the beginning of another phrase).", "cite_spans": [ { "start": 272, "end": 296, "text": "[VBN IN , : JJ JJS JJR]", "ref_id": null }, { "start": 512, "end": 520, "text": "[IN , :]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" }, { "text": "The constraint language is able to express the same kinds of patterns as the Constraint Grammar formalism (Karlsson et al., 1995), although in a different notation. In addition, each constraint has a compatibility value that indicates its strength. In the medium term, the system will be adapted to accept CGs.", "cite_spans": [ { "start": 107, "end": 130, "text": "(Karlsson et al., 1995)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2" },
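To make the behaviour of such a constraint concrete, here is a small hypothetical sketch (our own illustration, not the tagger's actual constraint interpreter; the word list standing in for the %vauxiliar% macro and the function name are assumptions) of how the sample rule above could contribute its compatibility value to a VBN reading:

    # Hypothetical rendering of the sample rule above, not the real interpreter.
    AUXILIARY_FORMS = {"be", "am", "is", "are", "was", "were", "been", "being",
                       "have", "has", "had", "having"}   # stands in for %vauxiliar%
    BLOCKING_TAGS = {"VBN", "IN", ",", ":", "JJ", "JJS", "JJR"}

    def vbn_rule_support(left_context):
        """left_context: list of (word, candidate_tags) pairs, nearest word last.
        Returns 10.0 if an auxiliary is found to the left with no blocking
        candidate tag in between, 0.0 otherwise."""
        for word, candidate_tags in reversed(left_context):
            if word.lower() in AUXILIARY_FORMS:
                return 10.0          # rule fires: raise support for the VBN reading
            if BLOCKING_TAGS & set(candidate_tags):
                return 0.0           # competing VBN/JJ reading or phrase break in between
        return 0.0                   # no auxiliary to the left

For example, vbn_rule_support([("has", {"VBZ"}), ("recently", {"RB"})]) returns 10.0, whereas vbn_rule_support([("has", {"VBZ"}), ("interesting", {"JJ", "VBG"})]) returns 0.0 because of the intervening adjective candidate.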
{ "text": "Choosing, from a set of possible tags, the proper syntactic tag for a word in a particular context can be seen as a problem of classification. Decision trees, recently used in NLP basic tasks such as tagging and parsing (McCarthy and Lehnert, 1995; Daelemans et al., 1996; Magerman, 1996), are suitable for performing this task. A decision tree is an n-ary branching tree that represents a classification rule. Classifying a new object with a decision tree simply consists of following the appropriate path through the tree until a leaf is reached.", "cite_spans": [ { "start": 220, "end": 233, "text": "(McCarthy and", "ref_id": "BIBREF18" }, { "start": 234, "end": 272, "text": "Lehnert, 1995; Daelemans et al., 1996;", "ref_id": null }, { "start": 273, "end": 288, "text": "Magerman, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Constraint Acquisition", "sec_num": "3" }, { "text": "Statistical decision trees only differ from common decision trees in that leaf nodes define a conditional probability distribution on the set of classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint Acquisition", "sec_num": "3" }, { "text": "It is important to note that decision trees can be directly translated to rules by considering, for each path from the root to a leaf, the conjunction of all the questions involved in this path as the condition and the class assigned to the leaf as the consequence. Statistical decision trees would generate rules in the same manner, but assigning a certain degree of probability to each answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint Acquisition", "sec_num": "3" }, { "text": "So the learning process of contextual constraints is performed by means of learning one statistical decision tree for each class of POS ambiguity[2] and converting them to constraints (rules) expressing the compatibility/incompatibility of concrete tags in certain contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint Acquisition", "sec_num": "3" }, { "text": "The algorithm we used for constructing the statistical decision trees is a non-incremental supervised learning-from-examples algorithm of the TDIDT (Top Down Induction of Decision Trees) family. It constructs the trees in a top-down way, guided by the distributional information of the examples, but not by the order of the examples (Quinlan, 1986). Briefly, the algorithm works as a recursive process that starts by considering the whole set of examples at the root level and constructs the tree in a top-down way, branching at any non-terminal node according to a certain selected attribute. The different values of this attribute induce a partition of the set of examples into the corresponding subsets, to which the process is applied recursively in order to generate the different subtrees. The recursion ends, at a certain node, either when all (or almost all) the remaining examples belong to the same class, or when the number of examples is too small. These nodes are the leaves of the tree and contain the conditional probability distribution of their associated subset of examples over the possible classes.", "cite_spans": [ { "start": 326, "end": 341, "text": "(Quinlan, 1986)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": null },
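The recursive construction just described can be summarised by the following schematic sketch (a simplified illustration of our own; the stopping thresholds and the select_attribute argument, which stands for the heuristic function discussed next, are assumptions rather than the authors' actual settings):

    from collections import Counter

    MIN_EXAMPLES = 10   # hypothetical threshold for "too few examples"
    PURITY = 0.99       # hypothetical threshold for "(almost) all of the same class"

    def build_tree(examples, attributes, select_attribute):
        """examples: list of (features, cls) pairs, features being a dict
        attribute -> value.  Returns a statistical decision (sub)tree."""
        counts = Counter(cls for _, cls in examples)
        _, most_common = counts.most_common(1)[0]
        # Stop: the node becomes a leaf holding the class probability distribution.
        if (most_common / len(examples) >= PURITY
                or len(examples) < MIN_EXAMPLES or not attributes):
            return {"leaf": {c: n / len(examples) for c, n in counts.items()}}
        # Branch on the attribute chosen by the heuristic function.
        attr = select_attribute(examples, attributes)
        branches = {}
        for value in {features[attr] for features, _ in examples}:
            subset = [(f, c) for f, c in examples if f[attr] == value]
            branches[value] = build_tree(
                subset, [a for a in attributes if a != attr], select_attribute)
        return {"attribute": attr, "branches": branches}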
{ "text": "The heuristic function for selecting the most useful attribute at each step is of crucial importance in order to obtain simple trees, since no backtracking is performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": null }, { "text": "There exist two main families of attribute-selecting functions: information-based (Quinlan, 1986; López, 1991) and statistically based (Breiman et al., 1984).", "cite_spans": [ { "start": 82, "end": 96, "text": "(Quinlan, 1986", "ref_id": "BIBREF22" }, { "start": 97, "end": 110, "text": "; López, 1991", "ref_id": null }, { "start": 136, "end": 158, "text": "(Breiman et al., 1984)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": null }, { "text": "For each class of POS ambiguity, the initial example set is built by selecting from the training corpus all the occurrences of the words belonging to this ambiguity class. [2 Classes of ambiguity are determined by the groups of possible tags for the words in the corpus, i.e., noun-adjective, noun-adjective-verb, preposition-adverb, etc.] More particularly, the set of attributes that describe each example consists of the part-of-speech tags of the neighbouring words, and the information about the word itself (orthography and the proper tag in its context). The window considered in the experiments reported in section 6 is 3 words to the left and 2 to the right. The following are two real examples from the training set for the words that can be preposition and adverb at the same time (IN-RB conflict).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Set", "sec_num": null }, { "text": "VB DT NN <\"as\",IN> DT JJ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Set", "sec_num": null }, { "text": "NN IN NN <\"once\",RB> VBN TO", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Set", "sec_num": null }, { "text": "Approximately 90% of this set of examples is used for the construction of the tree. The remaining 10% is used as a fresh test corpus for the pruning process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Set", "sec_num": null },
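A sketch of how such training examples might be extracted for a given ambiguity class (our own illustration; the lexicon lookup, the padding symbol and the field names are assumptions, not the authors' data format):

    def extract_examples(tagged_sentence, ambiguity_class, lexicon):
        """Yield one example per word whose set of possible tags in `lexicon`
        equals `ambiguity_class` (e.g. {"IN", "RB"}).  Each example records the
        3 tags to the left, the word form, the 2 tags to the right and the
        correct tag, as in the two IN-RB examples shown above."""
        pad = [("<S>", "<S>")]
        padded = pad * 3 + list(tagged_sentence) + pad * 2
        for i in range(3, len(padded) - 2):
            word, correct_tag = padded[i]
            if lexicon.get(word.lower()) == ambiguity_class:
                yield {"left_tags": [t for _, t in padded[i - 3:i]],
                       "word": word,
                       "right_tags": [t for _, t in padded[i + 1:i + 3]],
                       "class": correct_tag}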
{ "text": "For the experiments reported in section 6 we used an attribute selection function due to López de Mántaras (López, 1991), which belongs to the information-based family. Roughly speaking, it defines a distance measure between partitions and selects for branching the attribute that generates the partition closest to the correct partition, namely the one that joins together all the examples of the same class.", "cite_spans": [ { "start": 106, "end": 119, "text": "(López, 1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attribute Selection Function", "sec_num": null }, { "text": "Let X be a set of examples, C the set of classes and P_C(X) the partition of X according to the values of C. The selected attribute will be the one that generates the closest partition of X to P_C(X). For that we need to define a distance measure between partitions. Let P_A(X) be the partition of X induced by the values of attribute A. The average information of such a partition is defined as I(P_A(X)) = -\\sum_{a \\in P_A(X)} p(X,a) \\log_2 p(X,a), where p(X,a) is the probability for an element of X of belonging to the set a, which is the subset of X whose examples have a certain value for the attribute A; it is estimated by the ratio |a| / |X|. This average information measure reflects the randomness of the distribution of the elements of X among the classes of the partition induced by A. If we now consider the intersection between two different partitions induced by attributes A and B, we obtain I(P_A(X) \\cap P_B(X)) = -\\sum_{a \\in P_A(X)} \\sum_{b \\in P_B(X)} p(X, a \\cap b) \\log_2 p(X, a \\cap b). The conditioned information of P_B(X) given P_A(X) is I(P_B(X)|P_A(X)) = I(P_A(X) \\cap P_B(X)) - I(P_A(X)) = -\\sum_{a \\in P_A(X)} \\sum_{b \\in P_B(X)} p(X, a \\cap b) \\log_2 ( p(X, a \\cap b) / p(X,a) ). It is easy to show that the measure d(P_A(X), P_B(X)) = I(P_B(X)|P_A(X)) + I(P_A(X)|P_B(X)) is a distance. Normalizing, we obtain d_N(P_A(X), P_B(X)) = d(P_A(X), P_B(X)) / I(P_A(X) \\cap P_B(X)), with values in [0,1]. So the selected attribute will be the one that minimizes the measure d_N(P_C(X), P_A(X)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute Selection Function", "sec_num": null },
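With the set probabilities estimated as relative frequencies, as above, the normalized distance d_N can be computed roughly as in the following sketch (our own illustration of the measure, not the authors' implementation):

    import math
    from collections import Counter

    def _information(counts, n):
        # I(P) = -sum_b p(b) log2 p(b), with p(b) estimated as |b| / n
        return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

    def normalized_distance(values_a, values_b):
        """d_N between the partitions induced by two attributes (or by an
        attribute and the class), given as parallel lists holding one value
        per example."""
        n = len(values_a)
        i_a = _information(Counter(values_a), n)
        i_b = _information(Counter(values_b), n)
        i_ab = _information(Counter(zip(values_a, values_b)), n)  # I(P_A intersect P_B)
        # d = I(P_B|P_A) + I(P_A|P_B) = 2*I(P_A intersect P_B) - I(P_A) - I(P_B)
        d = 2 * i_ab - i_a - i_b
        return d / i_ab if i_ab else 0.0

    # The attribute selected for branching is the one minimizing
    # normalized_distance(class_values, attribute_values).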
{ "text": "Usual TDIDT algorithms consider a branch for each value of the selected attribute. This strategy is not feasible when the number of values is big (or even infinite). In our case the greatest number of values for an attribute is 45 (the tag set size), which is considerably big (this means that the branching factor could be 45 at every level of the tree[3]). Some systems perform a previous recasting of the attributes in order to have only binary-valued attributes and to deal with binary trees (Magerman, 1996). This can always be done, but the resulting features lose their intuitive, direct interpretation, and explode in number. We have chosen a mixed approach which consists of splitting for all values and afterwards joining the resulting subsets into groups for which there is not enough statistical evidence of their being different distributions. This statistical evidence is tested with a chi-squared test at a 5% level of significance. In order to avoid zero probabilities, the following smoothing is performed: in a certain set of examples, the probability of a tag t_i is estimated by \\hat{p}(t_i) = (|t_i| + 1/m) / (n + 1), where |t_i| is the number of examples with tag t_i, m is the number of possible tags and n the number of examples.", "cite_spans": [ { "start": 497, "end": 513, "text": "(Magerman, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Branching Strategy", "sec_num": null }, { "text": "Additionally, all the subsets that don't imply a reduction in the classification error are joined together in order to have a bigger set of examples to be treated in the following step of the tree construction. The classification error of a certain node is simply: 1 - max_t