{ "paper_id": "P97-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:16:03.173949Z" }, "title": "Independence Assumptions Considered Harmful", "authors": [ { "first": "Alexander", "middle": [ "Franz" ], "last": "Sony", "suffix": "", "affiliation": { "laboratory": "Computer Science Laboratory &: D21 Laboratory", "institution": "Sony Corporation", "location": { "addrLine": "6-7-35 Kitashinagawa Shinagawa-ku", "postCode": "141", "settlement": "Tokyo", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many current approaches to statistical language modeling rely on independence a.~sumptions 1)etween the different explanatory variables. This results in models which are computationally simple, but which only model the main effects of the explanatory variables oil the response variable. This paper presents an argmnent in favor of a statistical approach that also models the interactions between the explanatory variables. The argument rests on empirical evidence from two series of experiments concerning automatic ambiguity resolution.", "pdf_parse": { "paper_id": "P97-1024", "_pdf_hash": "", "abstract": [ { "text": "Many current approaches to statistical language modeling rely on independence a.~sumptions 1)etween the different explanatory variables. This results in models which are computationally simple, but which only model the main effects of the explanatory variables oil the response variable. This paper presents an argmnent in favor of a statistical approach that also models the interactions between the explanatory variables. The argument rests on empirical evidence from two series of experiments concerning automatic ambiguity resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we present an empirical argument in favor of a certain approach to statistical natural language modeling: we advocate statistical natural language models that account for the interactions between the explanatory statistical variables, rather than relying on independence a~ssumptions. Such models are able to perform prediction on the basis of estimated probability distributions that are properly conditioned on the combinations of the individual values of the explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After describing one type of statistical model that is particularly well-suited to modeling natural language data, called a loglinear model, we present einpirical evidence fi'om a series of experiments on different ambiguity resolution tasks that show that the performance of the loglinear models outranks the performance of other models described in the literature that a~ssume independence between the explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By \"statistical language model\", we refer to a mathematical object that \"imitates the properties\" of some respects of naturM language, and in turn makes predictions that are useful from a scientific or engineer-ing point of view. Much recent work in this flamework hm~ used written and spoken natural language data to estimate parameters for statisticM models that were characterized by serious limitations: models were either limited to a single explanatory variable or. 
if more than one explanatory variable was considered, the variables were assumed to be independent. In this section, we describe a method for statistical language modeling that transcends these limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Language Modeling", "sec_num": null }, { "text": "Categorical data analysis is the area of statistics that addresses categorical statistical variables: variables whose values are one of a set of categories. An example of such a linguistic variable is PART-OF-SPEECH,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorical Data Analysis", "sec_num": "2.1" }, { "text": "whose possible values might include noun, verb, determiner, preposition, etc. We distinguish between a set of explanatory variables, and one response variable. A statistical model can be used to perform prediction in the following manner: Given the values of the explanatory variables, what is the probability distribution for the response variable, i.e., what are the probabilities for the different possible values of the response variable?", "cite_spans": [ { "start": 49, "end": 78, "text": "determiner, preposition, etc.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Categorical Data Analysis", "sec_num": "2.1" }, { "text": "The basic tool used in categorical data analysis is the contingency table (sometimes called the \"cross-classified table of counts\"). A contingency table is a matrix with one dimension for each variable, including the response variable. Each cell in the contingency table records the frequency of data with the appropriate characteristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Contingency Table", "sec_num": "2.2" }, { "text": "Since each cell concerns a specific combination of features, this provides a way to estimate probabilities of specific feature combinations from the observed frequencies, as the cell counts can easily be converted to probabilities. Prediction is achieved by determining the value of the response variable given the values of the explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Contingency Table", "sec_num": "2.2" }, { "text": "A loglinear model is a statistical model of the effect of a set of categorical variables and their combinations on the cell counts in a contingency table. It can be used to address the problem of sparse data, since it can act as a \"smoothing device, used to obtain cell estimates for every cell in a sparse array, even if the observed count is zero\" (Bishop, Fienberg, and Holland, 1975) .", "cite_spans": [ { "start": 351, "end": 388, "text": "(Bishop, Fienberg, and Holland, 1975)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "Marginal totals (sums for all values of some variables) of the observed counts are used to estimate the parameters of the loglinear model; the model in turn delivers estimated expected cell counts, which are smoother than the original cell counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "The mathematical form of a loglinear model is as follows. Let m_ijk... be the expected cell count for cell (i, j, k, ...) in the contingency table. The general form of a loglinear model is as follows:", "cite_spans": [ { "start": 104, "end": 118, "text": "(i, j, k, ...
)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "log m_ijk... = u + u1(i) + u2(j) + u3(k) + u12(ij) + ... (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "In this formula, u denotes the mean of the logarithms of all the expected counts, u + u1(i) denotes the mean of the logarithms of the expected counts with value i of the first variable, u + u2(j) denotes the mean of the logarithms of the expected counts with value j of the second variable, u + u12(ij) denotes the mean of the logarithms of the expected counts with value i of the first variable and value j of the second variable, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "Thus, the term u1(i) denotes the deviation of the mean of the expected cell counts with value i of the first variable from the grand mean u. Similarly, the term u12(ij) denotes the deviation of the mean of the expected cell counts with value i of the first variable and value j of the second variable from the grand mean u. In other words, u12(ij) represents the combined effect of the values i and j for the first and second variables on the logarithms of the expected cell counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "In this way, a loglinear model provides a way to estimate expected cell counts that depend not only on the main effects of the variables, but also on the interactions between variables. This is achieved by adding \"interaction terms\" such as u12(ij) to the model. For further details, see (Fienberg, 1980) .", "cite_spans": [ { "start": 290, "end": 306, "text": "(Fienberg, 1980)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Loglinear Model", "sec_num": "2.3" }, { "text": "For some loglinear models, it is possible to obtain closed forms for the expected cell counts. For more complicated models, the iterative proportional fitting algorithm for hierarchical loglinear models (Deming and Stephan, 1940) can be used. Briefly, this procedure works as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "Let the values for the expected cell counts that are estimated by the model be represented by the symbol m-hat_ijk.... The interaction terms in the loglinear model represent constraints on the estimated expected marginal totals. Each of these marginal constraints translates into an adjustment scaling factor for the cell entries. The iterative procedure has the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "1. Start with initial estimates for the estimated expected cell counts. For example, set all m-hat_ijk... = 1.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "2. Adjust each cell entry by multiplying it by the scaling factors. This moves the cell entries towards satisfaction of the marginal constraints specified by the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "3. 
Iterate through the adjustment steps until the maximum difference between the marginal totals observed in the sample and the estimated marginal totals falls below a certain minimum threshold e, e.g., e = 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "After each cycle, the estimates satisfy the constraints specified in the model, and the estimated expected marginal totals come closer to matching the observed totals. Thus, the process converges. This results in Maximum Likelihood estimates for both multinomial and independent Poisson sampling schemes (Agresti, 1990) .", "cite_spans": [ { "start": 304, "end": 319, "text": "(Agresti, 1990)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Iterative Estimation Procedure", "sec_num": "2.4" }, { "text": "For natural language classification and prediction tasks, the aim is to estimate a conditional probability distribution P(H|E) over the possible values of the hypothesis H, where the evidence E consists of a number of linguistic features e1, e2, .... Much of the previous work in this area assumes independence between the linguistic features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(H|e_i, e_j, ...) \u2248 P(H|e_i) x P(H|e_j) x ...", "eq_num": "(2)" } ], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "For example, a model to predict the Part-of-Speech of a word on the basis of its morphological affix and its capitalization might assume independence between the two explanatory variables as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "P(POS|AFFIX, CAPITALIZATION) \u2248 P(POS|AFFIX) x P(POS|CAPITALIZATION) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "This results in a considerable computational simplification of the model but, as we shall see below, leads to a considerable loss of information and a concomitant decrease in prediction accuracy. With a loglinear model, on the other hand, such independence assumptions are not necessary. The loglinear model provides a posterior distribution that is properly conditioned on the evidence, and maximizing the conditional probability P(H|E) leads to minimum error rate classification (Duda and Hart, 1973) . Predicting Part-of-Speech", "cite_spans": [ { "start": 481, "end": 502, "text": "(Duda and Hart, 1973)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "We will now turn to the empirical evidence supporting the argument against independence assumptions. In this section, we will compare two models for predicting the Part-of-Speech of an unknown word: a simple model that treats the various explanatory variables as independent, and a model using loglinear smoothing of a contingency table that takes into account the interactions between the explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Interactions", "sec_num": "2.5" }, { "text": "The model was constructed in the following way. 
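To make the estimation procedure of Section 2.4 concrete before turning to the features, here is a minimal sketch of iterative proportional fitting for a three-way contingency table under the loglinear model with all two-way interaction terms. It is an illustration only: the toy counts, the tolerance, and the helper names are assumptions, not the implementation used in the paper.

```python
# Sketch of iterative proportional fitting (IPF) as outlined in Section 2.4,
# for a 3-way table under the loglinear model with all two-way interactions.
# Toy data and names are illustrative assumptions, not the paper's code.
import numpy as np

def margin(table, kept_axes):
    """Marginal totals over the axes not listed in kept_axes."""
    drop = tuple(a for a in range(table.ndim) if a not in kept_axes)
    return table.sum(axis=drop)

def ipf_two_way(observed, tol=0.1, max_iter=100):
    fitted = np.ones_like(observed, dtype=float)   # step 1: all cells = 1.0
    constraints = [(0, 1), (0, 2), (1, 2)]         # two-way margins to match
    for _ in range(max_iter):
        for kept in constraints:
            drop_axis = [a for a in range(3) if a not in kept][0]
            obs_m, fit_m = margin(observed, kept), margin(fitted, kept)
            # step 2: scale the cells so this fitted margin matches the data
            scale = np.divide(obs_m, fit_m, out=np.ones_like(obs_m),
                              where=fit_m > 0)
            fitted *= np.expand_dims(scale, axis=drop_axis)
        # step 3: stop once every fitted margin is within tol of the data
        if all(np.abs(margin(observed, k) - margin(fitted, k)).max() < tol
               for k in constraints):
            break
    return fitted

# A 2x2x2 table with an observed zero cell: the smoothed table gives that
# cell a small positive expected count, which is the point of the smoothing.
counts = np.array([[[8., 2.], [3., 0.]],
                   [[4., 1.], [2., 1.]]])
print(ipf_two_way(counts).round(2))
```

The same scheme extends to the higher-dimensional tables used for the unknown-word and PP-attachment models; the smoothed cell estimates are then renormalized into a conditional distribution over the response variable.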
First, features that could be used to guess the PUS of a word were determined by examining the training portion of a text corpus. The initial set of features consisted of the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 INCLUDES-NUMBER. Does the word include a nunlber?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 CAPITALIZED. Is the word in sentence-initial position and capitalized, in any other position and capitalized, or in lower ca~e?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 INCLUDES-PERIOD. Does the word include a period?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 INCLUDES-COMMA. Does the word include a colnlna?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 FINAL-PERIOD. Is the last character of the word a period?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 INCLUDES-HYPHEN. Does the word include a hyphen?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 ALL-UPPER-CASE. Is the word in all upper case?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 SHORT. Is the length of the word three characters or less?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 INFLECTION. Does the word carry one of the English inflectional suffixes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 PREFIX. Does the word carry one of a list of frequently occurring prefixes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "\u2022 SUFFIX. Does the word carry one of a list of frequently occurring suffixes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "Next, exploratory data analysis was perfornled in order to determine relevant features and their values, and to approximate which features interact. Each word of the training data was then turned into a feature vector, and the feature vectors were crossclassified in a contingency table. The contingency table was smoothed using a loglinear models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing the Model", "sec_num": "3.1" }, { "text": "Training and evaluation data was obtained from the Penn Treebank Brown corpus (Marcus, Santorini, and Marcinkiewicz, 1993) . The characteristics of \"'rare\" words that might show up ms unknown words differ fi'om the characteristics of words in general. so a two-step procedure wa~ employed a first time to obtain a set of \"'rare\" words ms training data, and again a second time to obtain a separate set of \"'rare*\" words ms evMuation data. There were 17,000 words in the training data, and 21,000 words in the evaluation data. 
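As a concrete illustration of how the word-level features of Section 3.1 might be turned into the feature vectors that are cross-classified in the contingency table, here is a small sketch. The inflection, prefix, and suffix lists are stand-ins; the paper does not give the actual lists.

```python
# Sketch of the feature extraction of Section 3.1.  The affix lists below are
# illustrative assumptions; the paper's actual lists are not reproduced here.
INFLECTIONS = ("ed", "ing", "s", "er", "est")
PREFIXES = ("un", "re", "pre", "dis")
SUFFIXES = ("tion", "ness", "able", "ly", "ity")

def word_features(word, sentence_initial):
    """Map a word to the categorical features used by the unknown-word model."""
    lower = word.lower()
    if word[0].isupper() and sentence_initial:
        capitalized = "initial-cap"
    elif word[0].isupper():
        capitalized = "other-cap"
    else:
        capitalized = "lower"
    return {
        "INCLUDES-NUMBER": any(c.isdigit() for c in word),
        "CAPITALIZED": capitalized,
        "INCLUDES-PERIOD": "." in word,
        "INCLUDES-COMMA": "," in word,
        "FINAL-PERIOD": word.endswith("."),
        "INCLUDES-HYPHEN": "-" in word,
        "ALL-UPPER-CASE": word.isupper(),
        "SHORT": len(word) <= 3,
        "INFLECTION": any(lower.endswith(s) for s in INFLECTIONS),
        "PREFIX": any(lower.startswith(p) for p in PREFIXES),
        "SUFFIX": any(lower.endswith(s) for s in SUFFIXES),
    }

# Each training word's feature vector, together with its POS tag, is then
# cross-classified into the contingency table that the loglinear model smooths.
print(word_features("Well-known", sentence_initial=True))
```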
Ambiguity resolution accuracy was evaluated for the \"overall accuracy\" (the percentage of words for which the most likely POS tag is correct), and the \"cutoff factor accuracy\" (the accuracy of the answer set consisting of all POS tags whose probability lies within a factor F of the most likely POS (de Marcken, 1990)).", "cite_spans": [ { "start": 78, "end": 122, "text": "(Marcus, Santorini, and Marcinkiewicz, 1993)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "Accuracy Results (Weischedel et al., 1993) describe a model for unknown words that uses four features, but treats the features as independent. We reimplemented this model by using four features: POS, INFLECTION, CAPITALIZED, and HYPHENATED. In Figures 1 and 2, the results for this model are labeled 4 Independent Features. For comparison, we created a loglinear model with the same four features; the results for this model are labeled 4 Loglinear Features. The highest accuracy was obtained by the loglinear model that includes all two-way interactions and consists of two contingency tables with the following features:", "cite_spans": [ { "start": 17, "end": 42, "text": "(Weischedel et al., 1993)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "POS, ALL-UPPER-CASE, HYPHENATED, INCLUDES-NUMBER, CAPITALIZED, INFLECTION, SHORT, PREFIX, and SUFFIX. The results for this model are labeled 9 Loglinear Features. The parameters for all three unknown word models were estimated from the training data, and the models were evaluated on the evaluation data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "The accuracy of the different models in assigning the most likely POSs to words is summarized in Figure 1. In the left diagram, the two barcharts show two different accuracy measures: percent correct (Overall Accuracy), and percent correct within the F=0.4 cutoff factor answer set (F=0.4 Set Accuracy).", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 104, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "In both cases, the loglinear model with four features obtains higher accuracy than the method that assumes independence between the same four features. The loglinear model with nine ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "The performance of the loglinear model can be improved by adding more features, but this is not possible with the simpler model that assumes independence between the features. Figure 2 shows the performance of the two types of models with feature sets that ranged from a single feature to nine features.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Effect of Number of Features on Accuracy", "sec_num": "3.4" }, { "text": "As the diagram shows, the accuracies for both methods rise with the first few features, but then the two methods show a clear divergence. The accuracy of the simpler method levels off at around 50-55%, while the loglinear model reaches an accuracy of 70-75%. 
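The contrast between the 4 Independent Features and 9 Loglinear Features results can be stated in a few lines of code. The sketch below is illustrative only: it contrasts the product of per-feature conditionals (equation 2) with a distribution conditioned on the full feature combination; in the paper the joint table is additionally smoothed with the loglinear model rather than used raw.

```python
from collections import Counter, defaultdict

def train(pairs):
    """pairs: (feature_tuple, tag) observations collected from rare words."""
    joint = defaultdict(Counter)    # feature_tuple -> counts over tags
    single = defaultdict(Counter)   # (feature position, value) -> counts over tags
    for feats, tag in pairs:
        joint[feats][tag] += 1
        for i, v in enumerate(feats):
            single[(i, v)][tag] += 1
    return joint, single

def p_independent(feats, tag, single):
    """Equation (2): product of P(tag | each feature), features independent."""
    p = 1.0
    for i, v in enumerate(feats):
        c = single[(i, v)]
        p *= c[tag] / sum(c.values()) if c else 0.0
    return p

def p_conditioned(feats, tag, joint):
    """P(tag | full feature combination), read off the joint table
    (in the paper, off the loglinear-smoothed table rather than raw counts)."""
    c = joint[feats]
    return c[tag] / sum(c.values()) if c else 0.0
```

The smoothing is what allows the properly conditioned model to keep absorbing further features without running into empty cells.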
This divergence shows that the loglinear model is able to tolerate redundant features and use information from more features than the simpler method, and therefore achieves better results at ambiguity resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Number of Features on Accuracy", "sec_num": "3.4" }, { "text": "Next, we added a stochastic POS tagger (Charniak et al., 1993) to provide a model of context. A stochastic POS tagger assigns POS labels to words in a sentence by using two parameters:", "cite_spans": [ { "start": 42, "end": 65, "text": "(Charniak et al., 1993)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "\u2022 Lexical Probabilities: P(w|t) -- the probability of observing word w given that the tag t occurred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "\u2022 Contextual Probabilities: P(ti|ti-1, ti-2) -- the probability of observing tag ti given that the two previous tags ti-1, ti-2 occurred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "The tagger maximizes the probability of the tag sequence T = t1, t2, ..., tn given the word sequence W = w1, w2, ..., wn, which is approximated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "P(T|W) \u2248 \u220f_{i=1..n} P(wi|ti) P(ti|ti-1, ti-2) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "The accuracy of the combination of the loglinear model for local features and the stochastic POS tagger for contextual features was evaluated empirically by comparing three methods of handling unknown words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "\u2022 Unigram: Using the prior probability distribution P(t) of the POS tags for rare words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "\u2022 Probabilistic UWM: Using the probabilistic model that assumes independence between the features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "\u2022 Classifier UWM: Using the loglinear model for unknown words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "Separate sets of training and evaluation data for the tagger were obtained from the Penn Treebank Wall Street Journal corpus. Evaluation of the combined system was performed on different configurations of the POS tagger on 30-40 different samples containing 4,000 words each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "Since the tagger displays considerable variance in its accuracy in assigning POS to unknown words in context, we use boxplots to display the results. Figure 3 compares the tagging error rate on unknown words for the unigram method (left) and the loglinear method with nine features (labeled statistical classifier) at right. This shows that the loglinear model significantly improves the Part-of-Speech tagging accuracy of a stochastic tagger on unknown words. 
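To show where the unknown-word models enter the tagger, here is a small sketch that scores one candidate tag sequence according to equation (4). The boundary tags, the probability floor, and the function names are assumptions for the sketch; the real tagger searches over all tag sequences (e.g., with dynamic programming) rather than scoring a single one.

```python
import math

def sequence_log_prob(words, tags, lex_prob, ctx_prob, unknown_prob, lexicon):
    """log of equation (4): sum over i of log P(wi|ti) + log P(ti|ti-1, ti-2).

    lex_prob(w, t), ctx_prob(t, prev1, prev2), and unknown_prob(w, t) are
    assumed to be supplied by the tagger and by the unknown-word model."""
    logp = 0.0
    prev1 = prev2 = "<s>"          # boundary tags (an assumption of the sketch)
    for w, t in zip(words, tags):
        if w in lexicon:
            p_wt = lex_prob(w, t)
        else:
            # The only place the three unknown-word models differ:
            # unigram prior, independent-feature product, or loglinear model.
            p_wt = unknown_prob(w, t)
        logp += math.log(p_wt or 1e-12)                 # floor avoids log(0)
        logp += math.log(ctx_prob(t, prev1, prev2) or 1e-12)
        prev1, prev2 = t, prev1
    return logp
```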
The median error rate is lowered considerably, and samples with error rates over 32% are eliminated entirely. Since most of the lexical ambiguity resolution power of stochastic PUS tagging comes from the lexical probabilities, unknown words represent a significant source of error. Therefore, we investigated the effect of different types of models for unknown words on the error rate for tagging text with different proportions of unknown words. Samples of text that contained different proportions of unknown words were tagged using the three different methods for handling unknown words described above. The overall tagging error rate increases significantly as the proportion of new words increases. Figure 4 shows a graph of overall tagging accuracy versus percentage of unknown words in the text. The graph compares the three different methods of handling unknown words. The diagram shows that the loglinear model leads to better overall tagging performance than the simpler methods, with a clear separation of all samples whose proportion of new words is above approximately 10%.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 156, "text": "Figure", "ref_id": null }, { "start": 1165, "end": 1173, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Adding Context to the Model", "sec_num": "3.5" }, { "text": "In the second series of experiments, we compare the performance of different statistical models on the task of predicting Prepositional Phrase (PP) attachment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting PP Attachment", "sec_num": "4" }, { "text": "First, an initial set of linguistic features that could be useful for predicting PP attachment was determined. The initial set included the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "\u2022 PREPOSITION. Possible values of this feature include one of the more frequent prepositions in the training set, or the value other-prep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "* VERB-LEVEL. Lexical association strength between the verb and the preposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "\u2022 NOUN-LEVEL. Lexical association strength between the noun and the preposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "\u2022 NOUN-TAG. Part-of-Speech of the nominal attachment site. This is included to account for correlations between attachment and syntactic category of the nominal attachment site, such as \"PPs disfavor attachment to proper nouns.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "\u2022 NOUN-DEFINITENESS. Does the nominal attachment site include a definite determiner? 
This feature is included to account for a possible correlation between PP attachment to the nominal site and definiteness, which was derived by (Hirst, 1986) from the principle of presupposition minimization of (Crain and Steedman, 1985) .", "cite_spans": [ { "start": 229, "end": 242, "text": "(Hirst, 1986)", "ref_id": "BIBREF15" }, { "start": 296, "end": 322, "text": "(Crain and Steedman, 1985)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "\u2022 PP-OBJECT-TAG. Part-of-speech of the object of the PP. Certain types of PP objects favor attachment to the verbal or nominal site. For example, temporal PPs, such as \"in 1959\", where the prepositional object is tagged CD (cardinal), favor attachment to the VP, because the VP is more likely to have a temporal dimension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "The association strengths for VERB-LEVEL and NOUN-LEVEL were measured using the Mutual Information between the noun or verb, and the preposition. 1 The probabilities were derived as Maximum Likelihood estimates from all PP cases in the training data. The Mutual Information values were ordered by rank. Then, the association strengths were categorized into eight levels (A-H), depending on percentile in the ranked Mutual Information values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for PP Attachment", "sec_num": "4.1" }, { "text": "Training and evaluation data was prepared from the Penn Treebank. All 1.1 million words of parsed text in the Brown Corpus, and 2.6 million words of parsed WSJ articles, were used. All instances of PPs that are attached to VPs and NPs were extracted. This resulted in 82,000 PP cases from the Brown Corpus, and 89,000 PP cases from the WSJ articles. Verbs and nouns were lemmatized to their root forms if the root forms were attested in the corpus. If the root form did not occur in the corpus, then the inflected form was used. All the PP cases from the Brown Corpus, and 50,000 of the WSJ cases, were reserved as training data. The remaining 39,000 WSJ PP cases formed the evaluation pool. In each experiment, performance 1 Mutual Information provides an estimate of the magnitude of the ratio between the joint probability P(verb/noun, preposition) and the joint probability assuming independence P(verb/noun)P(preposition); see (Church and Hanks, 1990) . ", "cite_spans": [ { "start": 941, "end": 968, "text": "(Church and Hanks, 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data and Evaluation", "sec_num": "4.2" }, { "text": "Previous work on automatic PP attachment disambiguation has only considered the pattern of a verb phrase containing an object and a final PP. This leads to two possible attachment sites, the verb and the object of the verb. The pattern is usually further simplified by considering only the heads of the possible attachment sites, corresponding to the sequence \"Verb Noun1 Preposition Noun2\". The first set of experiments concerns this pattern. There are 53,000 such cases in the training data, and 16,000 such cases in the evaluation pool. A number of methods were evaluated on this pattern according to the 25-sample scheme described above. 
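The VERB-LEVEL and NOUN-LEVEL features can be illustrated with a short sketch that computes the Mutual Information between a head (verb or noun) and a preposition from Maximum Likelihood estimates and then buckets the ranked values into eight levels. The exact bucketing is an assumption; the text only states that the levels A-H are percentile-based.

```python
# Sketch of the association-strength levels of Section 4.1.  The percentile
# bucketing scheme below is an assumption, not the paper's exact procedure.
import math
from collections import Counter

def association_levels(pairs, labels="ABCDEFGH"):
    """pairs: (head, preposition) tuples from the training PP cases.
    Returns {(head, prep): level}, levels assigned by MI percentile."""
    pair_c = Counter(pairs)
    head_c = Counter(h for h, _ in pairs)
    prep_c = Counter(p for _, p in pairs)
    n = len(pairs)
    mi = {}
    for (h, p), c in pair_c.items():
        # I(head; prep) = log [ P(head, prep) / (P(head) P(prep)) ],
        # with all probabilities estimated by relative frequency.
        mi[(h, p)] = math.log((c / n) / ((head_c[h] / n) * (prep_c[p] / n)))
    ranked = sorted(mi, key=mi.get)
    levels = {}
    for rank, key in enumerate(ranked):
        bucket = min(len(labels) - 1, rank * len(labels) // len(ranked))
        levels[key] = labels[bucket]
    return levels
```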
The results are shown in Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 669, "end": 677, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experimental Results: Two Attachments Sites", "sec_num": "4.3" }, { "text": "Baseline: Right Association Prepositional phrases exhibit a tendency to attach to the most recent possible attachment site; this is referred to ms the principle of \"'Right Association\". For the \"V NP PP'\" pattern, this means preferring attachment to the noun phra~se. On the evaluation samples, a median of 65% of the PP cases were attached to the noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.3.1", "sec_num": null }, { "text": "Results of Lexical Association (Hindle and R ooth. 1993) described a method for obtaining estimates of lexical a.ssociation strengths between nouns or verbs and prepositions, and then using lexical association strength to predict. PP attachment. In our reimplementation of this lnethod. the probabilities were estimated fi'om all the PP cases in the training set. Since our training data are bracketed, it was possible to estimate tile lexical associations with much less noise than Hindle & R ooth, who were working with unparsed text. The median accuracy for our reimplementation of Hindle & Rooth's method was 81%. This is labeled \"Hindle & Rooth'\" in Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 655, "end": 663, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "4.3.2", "sec_num": null }, { "text": "Results of the Loglinear Model The loglinear model for this task used the features", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.3.3", "sec_num": null }, { "text": "and NOUN-DEFINITENESS, and it included all secondorder interaction terms. This model achieved a median accuracy of 82%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PREPOSITION. VERB-LEVEL, NOUN-LEVEL,", "sec_num": null }, { "text": "Hindle & Rooth's lexical association strategy only uses one feature (lexical aasociation) to predict PP attachment, but. ms the boxplot shows, the results from the loglinear model for the \"V NP PP\" pattern do not show any significant improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PREPOSITION. VERB-LEVEL, NOUN-LEVEL,", "sec_num": null }, { "text": "As suggested by (Gibson and Pearlmutter. 1994) , PP attachment for the \"'Verb NP PP\" pattern is relatively easy to predict because the two possible attachment sites differ in syntactic category, and therefore have very different kinds of lexical preferences. For example, most PPs with of attach to nouns, and most PPs with f,o and by attach to verbs.", "cite_spans": [ { "start": 16, "end": 46, "text": "(Gibson and Pearlmutter. 1994)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results: Three Attachment Sites", "sec_num": "4.4" }, { "text": "In actual texts, there are often more than two possible attachment sites for a PP. Thus, a second, more realistic series of experiments was perforlned that investigated different PP attachment strategies for the pattern \"'Verb Noun1 Noun2 Preposition Noun3\"' that includes more than two possible attachment sites that are not syntactically heterogeneous. There were 28,000 such cases in the training data. and 8000 ca,~es in the evaluation pool. Figures 6-7 . The baseline is again provided by attachment according to the principle of \"Right Attachment'; to the nmst recent possible site, i.e. 
attaclunent to Noun2. A median of 69% of the PP cases were attached to Noun2.", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 457, "text": "Figures 6-7", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results: Three Attachment Sites", "sec_num": "4.4" }, { "text": "Next, the lexical association method was evaluated on this pattern. First. the method described by Hindle & Rooth was reimplemented by using the lexical association strengths estimated from all PP cases. The results for this strategy are labeled \"Basic Lexical Association\" in Figure 6 . This method only achieved a median accuracy of 59%, which is worse than always choosing the rightmost attachment site. These results suggest that Hindle & R.ooth's scoring function worked well in the \"'Verb Noun1 Preposition Noun2\"' case not only because it was an accurate estimator of lexical associations between individual verbs/nouns and prepositions which determine PP attachment, but also because it accurately predicted the general verb-noun skew of prepositions.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 285, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Results of Lexical Association", "sec_num": "4.4.2" }, { "text": "Results of Enhanced Lexical Association It seems natural that this pattern calls for a combination of a structural feature with lexical association strength. To implement this, we modified Hindle & Rooth's method to estimate attachments to the verb, first noun. and second noun separately. This resulted in estimates that combine the structural feature directly with the lexical association strength. The modified method performed better than the original lexical association scoring function, but it still only obtained a median accuracy of 72%. This is labeled \"Split Hindle & Rooth\" in Figure 7 .", "cite_spans": [], "ref_spans": [ { "start": 589, "end": 597, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "4.4.3", "sec_num": null }, { "text": "To create a model that combines various structural and lexical features without independence assumptions, we implemented a loglinear model that includes the variables VERB-LEVEL FIRST-NOUN-LEVEL. and SECOND- The loglinear model also includes the variables PREPOSITION and PP-OBJECT-TAG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Loglinear Model", "sec_num": "4.4.4" }, { "text": "It, was smoothed with a loglinear model that includes all second-order interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Loglinear Model", "sec_num": "4.4.4" }, { "text": "This method obtained a median accuracy of 79%; this is labeled \"Loglinear Model\" in Figure 7 . As the boxplot shows, it performs significantly better than the methods that only use estimates of lexical a,~soclarion. Compared with the \"'Split Hindle Sz Rooth'\" method, the samples are a little less spread out, and there is no overlap at all between the central 50% of the samples from the two methods.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 92, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Results of Loglinear Model", "sec_num": "4.4.4" }, { "text": "The simpler \"V NP PP\" pattern with two syntactically different attachment sites yielded a null result: The loglinear method did not perform significantly better than the lexical association method. 
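For concreteness, here is one plausible reading of the difference between the basic lexical association score and the Split Hindle & Rooth variant on the three-site pattern; the exact estimator is not spelled out in the text, so the counts and the normalization below are assumptions.

```python
# Sketch of a 'split' lexical association strategy for the
# "Verb Noun1 Noun2 Preposition Noun3" pattern.  Illustrative assumptions
# only; this is not the paper's estimator, and smoothing is omitted.
from collections import Counter

def train_split(pp_cases):
    """pp_cases: (verb, noun1, noun2, prep, site) tuples, with site one of
    'verb', 'noun1', 'noun2' (the attachment chosen in the treebank)."""
    attach = Counter()   # how often (site, head, prep) was the chosen attachment
    offered = Counter()  # how often (site, head) was available as a site for prep
    for verb, noun1, noun2, prep, site in pp_cases:
        for cand, head in (("verb", verb), ("noun1", noun1), ("noun2", noun2)):
            offered[(cand, head, prep)] += 1
        winner = {"verb": verb, "noun1": noun1, "noun2": noun2}[site]
        attach[(site, winner, prep)] += 1
    return attach, offered

def choose_site(verb, noun1, noun2, prep, attach, offered):
    """Score each candidate by the relative frequency with which this head,
    in this structural position, attracted this preposition when available."""
    scores = {}
    for site, head in (("verb", verb), ("noun1", noun1), ("noun2", noun2)):
        seen = offered[(site, head, prep)]
        scores[site] = attach[(site, head, prep)] / seen if seen else 0.0
    return max(scores, key=scores.get)
```

Estimated this way, each candidate site carries its own attachment rate for the preposition, combining the structural preference and the lexical association in a single score; on the two-site pattern, by contrast, the single association score was already competitive, which matches the null result discussed next.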
This could mean that the results of the lexical association method can not be improved by adding other features, but it is also possible that the features that could result in improved accuracy were not identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "The lexical association strategy does not perform well on the more difficult pattern with three possible attachment sites. The loglinear model, on the other hand, predicts attachment with significantly higher accuracy, achieving a clear separation of the central 50% of the evaluation samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "We have contrasted two types of statistical language models: A model that derives a probability distribution over the response variable that is properly conditioned on the combination of the explanatory variable, and a simpler model that treats the explanatory variables as independent, and therefore models the response variable simply a~s the addition of the individual main effects of the explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "2These features use tile s~unc Mutual Informationba.~ed measure of lcxic',d a.sso(:iation a.s tim prc.vious loglinear model for two possibh~\" attachment sites, which wcrc estimated from all nomin'M azt(l vcrhal PP att~t(:hments in the corpus. The features FIRST-NOUN-LEVEL aaM SECOND-NOUN-LEVEL use the same estimates: in other words, in contrm~t to the \"split Lexi(:al Association\" method, they were not estimated sepaxatcly for the two different nominaJ, attachment sites.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The experimental results show that, with the same feature set, inodeling feature interactions yields better performance: such nmdels achieves higher accuracy, and its accura~,y can be raised with additional features. It is interesting to note that modeling variable interactions yields a higher perforlnanee gain than including additional explanatory variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "While these results do not prove that modeling feature interactions is necessary, we believe that they provide a strong indication. This suggests a mlmber of avenues for filrther research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "First, we could attempt to improve the specific models that were presented by incorporating additional features, and perhal)S by taking into account higher-order features. This might help to address the performance gap between our models and human subjects that ha,s been documented in the literature, z A more ambitious idea would be to use a statistical model to rank overall parse quality for entire sentences. This would be an improvement over schemes that a,ssnlne independence between a number of individual scoring fimctions, such ms (Alshawi and Carter, 1994) . 
If such a model were to include only a few general variables to account for such features a.~ lexical a.ssociation and recency preference for syntactic attachment, it might even be worthwhile to investigate it a.s an approximation to the human parsing mechanism.", "cite_spans": [ { "start": 541, "end": 567, "text": "(Alshawi and Carter, 1994)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Categorical Data Analysis", "authors": [ { "first": "Alan", "middle": [], "last": "Agresti", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agresti, Alan. 1990. Categorical Data Analysis.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Training and scaling preference functions for disambiguation", "authors": [ { "first": "Hiyan", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "David", "middle": [], "last": "Carter", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "635--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alshawi, Hiyan and David Carter. 1994. Training and scaling preference functions for disambigua- tion. Computational Linguistics, 20(4):635-648.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discrete Multivariate Analysis: Th, eory and Practice", "authors": [ { "first": "S", "middle": [ "Y M E" ], "last": "Bishop", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Fienberg", "suffix": "" }, { "first": "", "middle": [], "last": "Holland", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishop. Y. M., S. E. Fienberg, and P. W. Holland. 1975. Discrete Multivariate Analysis: Th, eory and Practice. MIT Press, Cambridge, MA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word a,~soeiation norms, mutual information, and lexicography", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Curtis", "middle": [], "last": "Hendrickson", "suffix": "" }, { "first": "Jacobson", "middle": [], "last": "Neil", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Perkowitz", "suffix": "" } ], "year": 1990, "venue": "AAAI-93", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, Eugene, Curtis Hendrickson, Neil ,Jacob- son, and Mike Perkowitz. 1993. Equations for part-of-speech tagging. In AAAI-93, pages 784~ 789. Church, Kenneth W. and Patrick Hanks. 1990. Word a,~soeiation norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On not being led up the garden path: The use of 3For cXaml)l(', If random s(;ntcnc(;s with \"V", "authors": [ { "first": "Stephen", "middle": [], "last": "Crain", "suffix": "" }, { "first": "Mark", "middle": [ "3" ], "last": "Steedman", "suffix": "" } ], "year": 1985, "venue": "Penn tr(',(;l)ank aa'(: tak(:n ms the gohl standard, then (Hindlc and Rooth, 1993) and", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crain, Stephen and Mark 3. Steedman. 1985. 
On not being led up the garden path: The use of 3For cXaml)l(', If random s(;ntcnc(;s with \"V('rb NP PP\" (:~(:s from th(: Penn tr(',(;l)ank aa'(: tak(:n ms the gohl standard, then (Hindlc and Rooth, 1993) and (Ratna-", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "cu-ra~:y. If the huma~l CXl)erts arc allow(:d to consult the whoh,\" scntcn(:(:, their accuracy judged against random Trc(}l)ank s(',ntclm(:s rises to al)l)roximatcly 93%. context by the psychological syntax processor", "authors": [], "year": null, "venue": "Ryn~r, aal(t Roukos. 1994) rcl)ort that human, (:xi)(;rts using only hca(t words obtain 85%-88% a", "volume": "", "issue": "", "pages": "320--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "arkhi, Ryn~r, aal(t Roukos. 1994) rcl)ort that human, (:xi)(;rts using only hca(t words obtain 85%-88% a('cu- ra~:y. If the huma~l CXl)erts arc allow(:d to consult the whoh,\" scntcn(:(:, their accuracy judged against random Trc(}l)ank s(',ntclm(:s rises to al)l)roximatcly 93%. context by the psychological syntax processor. In David R. Dowty, Lauri Karttunen, and An- rnold M. Zwicky, editors, Natural Language Pars- ing, pages 320-358, Cambridge, UK. Cambridge University Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Parsing the LOB corpus", "authors": [ { "first": "", "middle": [], "last": "De Marcken", "suffix": "" }, { "first": "G", "middle": [], "last": "Carl", "suffix": "" } ], "year": 1990, "venue": "Proceedings of A CL-90", "volume": "", "issue": "", "pages": "243--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Marcken, Carl G. 1990. Parsing the LOB corpus. In Proceedings of A CL-90, pages 243-251.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On a lea.st squares adjustment of a sampled frequency table when the expected marginal totals are known", "authors": [ { "first": "W", "middle": [ "E" ], "last": "Deming", "suffix": "" }, { "first": "F", "middle": [ "F" ], "last": "Stephan", "suffix": "" } ], "year": 1940, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deming, W. E. and F. F. Stephan. 1940. On a lea.st squares adjustment of a sampled frequency ta- ble when the expected marginal totals are known.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Pattern Classification and Scene Analysis", "authors": [ { "first": "Richard", "middle": [ "O" ], "last": "Duda", "suffix": "" }, { "first": "Peter", "middle": [ "E" ], "last": "Hart", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duda, Richard O. and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. John Wiley & Sons, New York.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Th.e Analysis of Cross-Classified Categorical Data", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Fienberg", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fienberg, Stephen E. 1980. Th.e Analysis of Cross- Classified Categorical Data. 
The MIT Press, Cambridge, MA, second edition edition.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatic Ambiguity Resolution in Natural Language Processing", "authors": [ { "first": "Alexander", "middle": [], "last": "Franz", "suffix": "" } ], "year": 1996, "venue": "Lecture Notes in Artificial Intelligence", "volume": "1171", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz, Alexander. 1996. Automatic Ambiguity Res- olution in Natural Language Processing. volume 1171 of Lecture Notes in Artificial Intelligence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A corpusba,sed analysis of psycholinguistic constraints on PP attachment", "authors": [ { "first": "Ted", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "Neal", "middle": [], "last": "Pearhnutter", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, Ted and Neal Pearhnutter. 1994. A corpus- ba,sed analysis of psycholinguistic constraints on PP attachment. In Charles Clifton Jr., Lyn", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Perspectives on Sentence Processing", "authors": [ { "first": "Keith", "middle": [], "last": "Frazier", "suffix": "" }, { "first": "", "middle": [], "last": "Rayner", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frazier, and Keith Rayner, editors, Perspectives on Sentence Processing. Lawrence Erlbaum Asso- ciates.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Structural ambiguity and lexical relations", "authors": [ { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "103--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, Donald and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19( 1 ): 103-120.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semantic Interpretation and the Resolution of Ambiguity", "authors": [ { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirst, Graeme. 1986. Semantic Interpretation and the Resolution of Ambiguity. Cambridge Univer- sity Press, Cambridge.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. 
Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A maximum entropy model for Prepositional Phra,se attachment", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "Jeff", "middle": [ "B" ], "last": "", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1994, "venue": "ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, Adwait, Jeff B ynar, and Salim Roukos. 1994. A maximum entropy model for Prepositional Phra,se attachment. In ARPA Workshop on Human Language Technology.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Coping with ambiguity and unknown words through probabilistic models", "authors": [ { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Palmucci", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "359--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weischedel, Ralph, Marie Meteer, Richard Schwartz, Lance Ramshaw, and Jeff Palmucci. 1993. Cop- ing with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19(2):359-382.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Performance of Different Models", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "-......... ~ ...... o-..... o ...... o ..... Error Rate on Unknown Words features further improves this score.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Results for Two Attachment SitesFigure 6: Three Attachment Sites: Right Association and Lexical Association was evaluated oil a series of 25 random samples of 100 PP cases fi'om the evaluation pool. in order to provide a characterization of the error variance.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "As in the first set of experiments, a number of methods were evaluated an the three attachment site pattern with 25 samples of 100 random PP cases. The results are shown in", "num": null, "uris": null, "type_str": "figure" } } } }