{ "paper_id": "P93-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:52:21.806019Z" }, "title": "HOW DO WE COUNT? THE PROBLEM OF TAGGING PHRASAL VERBS IN PARTS", "authors": [ { "first": "Nava", "middle": [ "A" ], "last": "Shaked", "suffix": "", "affiliation": { "laboratory": "", "institution": "The City University of New York", "location": { "addrLine": "33 West 42nd Street", "postCode": "10036", "settlement": "New York", "region": "NY" } }, "email": "nava@nynexst.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper examines the current performance of the stochastic tagger PARTS (Church 88) in handling phrasal verbs, describes a problem that arises from the statistical model used, and suggests a way to improve the tagger's performance. The solution involves a change in the definition of what counts as a word for the purpose of tagging phrasal verbs.", "pdf_parse": { "paper_id": "P93-1042", "_pdf_hash": "", "abstract": [ { "text": "This paper examines the current performance of the stochastic tagger PARTS (Church 88) in handling phrasal verbs, describes a problem that arises from the statistical model used, and suggests a way to improve the tagger's performance. The solution involves a change in the definition of what counts as a word for the purpose of tagging phrasal verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical taggers are commonly used to preprocess natural language. Operations like parsing, information retrieval, machine translation, and so on, are facilitated by having as input a text tagged with a part of speech label for each lexical item. In order to be useful, a tagger must be accurate as well as efficient. The claim among researchers advocating the use of statistics for NLP (e.g. Marcus et al. 92) is that taggers are routinely correct about 95% of the time. The 5% error rate is not perceived as a problem mainly because human taggers disagree or make mistakes at approximately the same rate. On the other hand, even a 5% error rate can cause a much higher rate of mistakes later in processing if the mistake falls on a key element that is crucial to the correct analysis of the whole sentence. One example is the phrasal verb construction (e.g. gun down, back off). An error in tagging this two element sequence will cause the analysis of the entire sentence to be faulty. An analysis of the errors made by the stochastic tagger PARTS (Church 88 ) reveals that phrasal verbs do indeed constitute a problem for the model.", "cite_spans": [ { "start": 396, "end": 413, "text": "Marcus et al. 92)", "ref_id": null }, { "start": 1053, "end": 1063, "text": "(Church 88", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "The basic assumption underlying the stochastic process is the notion of independence. Words are defined as units separated by spaces and then undergo statistical approximations. As a result the elements of a phrasal verb are treated as two individual words, each with its own lexical probability (i.e. the probability of observing part of speech i given word j). An interesting pattern emerges when we examine the errors involving phrasal verbs. A phrasal verb such as sum up will be tagged by PARTS as noun + preposition instead of verb + particle. This error influences the tagging of other words in the sentence as well. 
One typical error is found in infinitive constructions, where a phrase like to gun down is tagged as INTO NOUN IN (a prepositional 'to' followed by a noun followed by another preposition). Words like gun, back, and sum, in isolation, have a very high probability of being nouns as opposed to verbs, which results in the misclassification described above. However, when these words are followed by a particle, they are usually verbs, and in the infinitive construction, always verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PHRASAL VERBS", "sec_num": "2." }, { "text": "The error appears to follow from the operation of the stochastic process itself. In a trigram model the probability of each word is calculated by taking into consideration two elements: the lexical probability (the probability of the word bearing a certain tag) and the contextual probability (the probability of a word bearing a certain tag given the two previous parts of speech). As a result, if an element has a very high lexical probability of being a noun (gun is a noun in 99 out of 102 occurrences in the Brown Corpus), it will not only influence but will actually override the contextual probability, which might suggest a different assignment. In the case of to gun down, the ambiguity of to is compounded by the ambiguity of gun, and a mistake in tagging gun will automatically lead to an incorrect tagging of to as a preposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE HYPOTHESIS", "sec_num": "2.1." }, { "text": "It follows that the tagger should perform poorly on phrasal verbs in those cases where the ambiguous element occurs much more frequently as a noun (or any other element that is not a verb). The tagger will experience fewer problems handling this construction when the ambiguous element is a verb in the vast majority of instances. If this is true, the model should be changed to take into consideration the dependency between the verb and the particle in order to optimize the performance of the tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE HYPOTHESIS", "sec_num": "2.1." }, { "text": "3.1. DATA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE EXPERIMENT", "sec_num": "3." }, { "text": "The first step in testing this hypothesis was to evaluate the current performance of PARTS in handling the phrasal verb construction. To do this, a set of 94 Verb+Particle/Preposition pairs was chosen to represent a range of dominant frequencies from overwhelmingly noun to overwhelmingly verb. Twenty example sentences were randomly selected for each pair using an on-line corpus called MODERN, which is a collection of several corpora (Brown, WSJ, AP88-92, HANSE, HAROW, WAVER, DOE, NSF, TREEBANK, and DISS) totaling more than 400 million words. These sentences were first tagged manually to provide a baseline and then tagged automatically using PARTS. The a priori option of assuming only a verbal tag for all the pairs in question was also explored, in order to test whether this simple solution would be appropriate in all cases. The accuracy of the three tagging approaches was then evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE EXPERIMENT", "sec_num": "3." }, { "text": "Table 2 presents a sample of the pairs examined in the first column, PARTS performance for each pair in the second, and the results of assuming a verbal tag in the third. (The \"choice\" column is explained below.) 
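To make the interaction described in Section 2.1 concrete, the following sketch shows how a trigram-style score (lexical probability times contextual probability) can be dominated by a strong lexical skew. This is only a minimal illustration, not the PARTS implementation: the Brown Corpus counts for gun (99 noun occurrences out of 102) come from the text above, the remaining occurrences are treated as verbs for simplicity, and the tag names and contextual values are assumed purely for the example.

```python
# Illustrative sketch of a trigram-style tagging decision: the score for a
# tag is the lexical probability P(tag | word) times the contextual
# probability P(tag | two previous tags). The counts for "gun" (99 noun out
# of 102 in the Brown Corpus) come from the paper; the remaining 3
# occurrences are treated as verbs, and the contextual values are assumed.

lexical = {
    "gun": {"NN": 99 / 102, "VB": 3 / 102},
}

# After infinitival "to" a verb is far more likely than a noun, but this
# contextual preference is not strong enough to outweigh the lexical skew.
contextual = {
    ("VBD", "TO"): {"VB": 0.80, "NN": 0.05},
}

def best_tag(word, prev_two_tags):
    """Return the tag maximizing lexical * contextual probability."""
    scores = {
        tag: lexical[word][tag] * contextual[prev_two_tags].get(tag, 0.0)
        for tag in lexical[word]
    }
    return max(scores, key=scores.get)

# "... decided to gun down ...": NN scores 0.97 * 0.05 = 0.049, while VB
# scores only 0.03 * 0.80 = 0.024, so "gun" is mis-tagged as a noun and
# "to" is in turn mis-tagged as a preposition.
print(best_tag("gun", ("VBD", "TO")))   # -> 'NN'
```

Under these assumed numbers the lexical term wins, reproducing the INTO NOUN IN error described above.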
The average performance of PARTS for this task is 89%, which is lower than the general average performance of the tagger as claimed in Church 88. Yet we notice that simply assigning a verbal tag to all pairs actually degrades performance, because in some cases the content word is almost always a noun rather than a verb. For example, a phrasal verb like box in generally appears with an intervening object (to box something in), and thus when box and in are adjacent (except for those rare cases involving heavy NP shift) box is a noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": "3.2." }, { "text": "Thus we see that there is a need to distinguish between the cases where the two-element sequence should be considered as one word for the purpose of assigning the lexical probability (i.e., a phrasal verb) and the cases of a Noun + Preposition combination, where PARTS' analysis is preferred. Table 2 shows that allowing a choice between PARTS' analysis and a single verbal tag for the phrase, by taking the higher performance score, improves the performance of PARTS from 89% to 96% for this task and reduces the errors in other constructions involving phrasal verbs. When is this alternative needed? In the cases where PARTS had 10% or more errors, most of the verbs occur much more often as nouns or adjectives. This confirms my hypothesis that PARTS will have a problem solving the N/V ambiguity in cases where the lexical probability of the word points to a noun. These are the very cases that should be treated as one unit in the system: the lexical probability should be assigned to the pair as a whole rather than to the two elements separately. Table 1 lists the cases where tagging improves 10% or more when PARTS is given the additional choice of assigning a verbal tag to the whole expression. Frequency distributions of these tokens in the Brown Corpus are presented as well, which show why the statistical probabilities err in these cases. In order to tag these expressions correctly, we will have to capture additional information about the pair which is not available from the PARTS statistical model. This paper shows that for some cases of phrasal verbs it is not enough to rely on lexical probability alone: we must take into consideration the dependency between the verb and the particle in order to improve the performance of the tagger. The relationship between verbs and particles is deeply rooted in linguistics. Smith (1943) introduced the term phrasal verb, arguing that it should be regarded as a type of idiom because the elements behave as a unit. He claimed that phrasal verbs express a single concept that often has a one-word counterpart in other languages, yet does not always have compositional meaning. Some particles are syntactically more adverbial in nature and some more prepositional, but it is generally agreed that the phrasal verb constitutes a kind of integral functional unit. Perhaps linguistic knowledge can help solve the tagging problem described here and force a redefinition of the boundaries of phrasal verbs. For now we can redefine the word boundaries for the problematic cases that PARTS does not handle well. 
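One way to realize this redefinition of word boundaries is sketched below: the problematic verb + particle pairs are merged into single tokens before lexical probabilities are looked up, so that the pair as a whole receives one lexical probability. The pair list, token names, and probability values are hypothetical placeholders rather than the actual PARTS tables, and only pairs on which PARTS errs (such as gun down or sum up) would be merged, not Noun + Preposition combinations like box in.

```python
# Sketch of the proposed change in what counts as a word: problematic
# verb + particle pairs are merged into a single token before tagging, so
# the lexical probability is assigned to the pair as a whole rather than to
# the two elements independently. The pair list and probability values are
# hypothetical placeholders, not the actual PARTS tables.

PHRASAL_PAIRS = {("gun", "down"), ("sum", "up"), ("back", "off")}

def retokenize(words):
    """Merge listed verb + particle pairs into one token, e.g. 'gun_down'."""
    merged, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i].lower(), words[i + 1].lower()) in PHRASAL_PAIRS:
            merged.append(words[i] + "_" + words[i + 1])
            i += 2
        else:
            merged.append(words[i])
            i += 1
    return merged

# The merged token gets its own lexical probability, estimated from how the
# adjacent pair (not the isolated word) is tagged in the training corpus.
lexical = {
    "gun_down": {"VB+RP": 0.95, "NN+IN": 0.05},   # assumed values
    "gun": {"NN": 0.97, "VB": 0.03},
}

print(retokenize("They decided to gun down the project".split()))
# -> ['They', 'decided', 'to', 'gun_down', 'the', 'project']
```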
Future research should concentrate on the linguistic characteristics of this problematic construction to determine whether there are other cases where the current assumption that one word equals one unit interferes with successful processing.", "cite_spans": [ { "start": 1850, "end": 1862, "text": "Smith (1943)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 301, "end": 308, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1066, "end": 1073, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": "3.2." }, { "text": "I wish to thank my committee members Virginia Teller, Judith Klavans and John Moyne for their helpful comments and support. I am also indebted to Ken Church and Don Hindle for their guidance and help all along.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENT", "sec_num": "5." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Proc. Conf. on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. W. Church. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Proc. Conf. on Applied Natural Language Processing, 136-143, 1988.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Introduction to the Special Issue on Computational Linguistics Using Large Corpora", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. W. Church & R. Mercer. Introduction to the Special Issue on Computational Linguistics Using Large Corpora. To appear in Computational Linguistics, 1993.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the Interface of Morphology and Syntax: Evidence from Verb-Particle Combinations in Afrikaans", "authors": [ { "first": "C", "middle": [], "last": "le Roux", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. le Roux. On the Interface of Morphology and Syntax: Evidence from Verb-Particle Combinations in Afrikaans. SPIL 18, November 1988. MA thesis.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "First steps towards an annotated database of American English", "authors": [ { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "D", "middle": [], "last": "Magerman", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Marcus, B. Santorini & D. Magerman. First steps towards an annotated database of American English. 1992.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Words & Idioms: Studies in the English Language", "authors": [ { "first": "L", "middle": [ "P" ], "last": "Smith", "suffix": "" } ], "year": 1943, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. P. Smith. Words & Idioms: Studies in the English Language. 5th ed. 
London, 1943.", "links": null } }, "ref_entries": { "TABREF2": { "html": null, "num": null, "content": "
Table 2: A Sample of Performance Evaluation
", "type_str": "table", "text": "" } } } }