{ "paper_id": "P93-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:52:27.744807Z" }, "title": "Towards History-based Grammars: Using Richer Models for Probabilistic Parsing*", "authors": [ { "first": "Ezra", "middle": [], "last": "Black", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Fred", "middle": [], "last": "Jelinek", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "", "affiliation": {}, "email": "" }, { "first": "David", "middle": [ "M" ], "last": "Magerman", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Robert", "middle": [], "last": "Mercer", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ibm", "middle": [ "T J" ], "last": "Watson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Research", "middle": [], "last": "Center", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. This stands in contrast to the usual approach of further grammar tailoring via the usual linguistic introspection in the hope of generating the correct parse. In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.", "pdf_parse": { "paper_id": "P93-1005", "_pdf_hash": "", "abstract": [ { "text": "We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. This stands in contrast to the usual approach of further grammar tailoring via the usual linguistic introspection in the hope of generating the correct parse. In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Almost any natural language sentence is ambiguous in structure, reference, or nuance of meaning. Humans overcome these apparent ambiguities by examining the contez~ of the sentence. But what exactly is context? Frequently, the correct interpretation is apparent from the words or constituents immediately surrounding the phrase in question. 
This observation begs the following question: How much information about the context of a sentence or phrase is necessary and sufficient to determine its meaning? This question is at the crux of the debate among computational linguists about the application and implementation of statistical methods in natural language understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Previous work on disambiguation and probabilistic parsing has offered partial answers to this question. Hidden Markov models of words and *Thanks to Philip Resnik and Stanley Chen for their valued input. their tags, introduced in (5) and (5) and popularized in the natural language community by Church (5) , demonstrate the power of short-term n-gram statistics to deal with lexical ambiguity. Hindle and Rooth (5) use a statistical measure of lexical associations to resolve structural ambiguities. Brent (5) acquires likely verb subcategorization patterns using the frequencies of verbobject-preposition triples. Magerman and Marcus (5) propose a model of context that combines the n-gram model with information from dominating constituents. All of these aspects of context are necessary for disambiguation, yet none is sufficient.", "cite_spans": [ { "start": 295, "end": 305, "text": "Church (5)", "ref_id": null }, { "start": 500, "end": 509, "text": "Brent (5)", "ref_id": null }, { "start": 615, "end": 638, "text": "Magerman and Marcus (5)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We propose a probabilistic model of context for disambiguation in parsing, HBG, which incorporates the intuitions of these previous works into one unified framework. Let p(T, w~) be the joint probability of generating the word string w~ and the parse tree T. Given w~, our parser chooses as its parse tree that tree T* for which T\" =arg maxp (T, w~) (1)", "cite_spans": [ { "start": 342, "end": 349, "text": "(T, w~)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "where ~(w~) is the set of all parses produced by the grammar for the sentence w~. Many aspects of the input sentence that might be relevant to the decision-making process participate in the probabilistic model, providing a very rich if not the richest model of context ever attempted in a probabilistic parsing model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T6~(~)", "sec_num": null }, { "text": "In this paper, we will motivate and define the HBG model, describe the task domain, give an overview of the grammar, describe the proposed HBG model, and present the results of experiments comparing HBG with an existing state-ofthe-art model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T6~(~)", "sec_num": null }, { "text": "One goal of a parser is to produce a grammatical interpretation of a sentence which represents the syntactic and semantic intent of the sentence. To achieve this goal, the parser must have a mechanism for estimating the coherence of an interpretation, both in isolation and in context. Probabilistic language models provide such a mechanism. A probabilistic language model attempts to estimate the probability of a sequence of sentences and their respective interpretations (parse trees) occurring in the language, :P(SI TI S2 T2 ... 
S,, T,~).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation for History-based Grammars", "sec_num": null }, { "text": "The difficulty in applying probabilistic models to natural language is deciding what aspects of the sentence and the discourse are relevant to the model. Most previous probabilistic models of parsing assume the probabilities of sentences in a discourse are independent of other sentences. In fact, previous works have made much stronger independence assumptions. The P-CFG model considers the probability of each constituent rule independent of all other constituents in the sentence. The :Pearl (5) model includes a slightly richer model of context, allowing the probability of a constituent rule to depend upon the immediate parent of the rule and a part-of-speech trigram from the input sentence. But none of these models come close to incorporating enough context to disambiguate many cases of ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation for History-based Grammars", "sec_num": null }, { "text": "A significant reason researchers have limited the contextual information used by their models is because of the difficulty in estimating very rich probabilistic models of context. In this work, we present a model, the history-based grammar model, which incorporates a very rich model of context, and we describe a technique for estimating the parameters for this model using decision trees. The history-based grammar model provides a mechanism for taking advantage of contextual information from anywhere in the discourse history. Using decision tree technology, any question which can be asked of the history (i.e. Is the subject of the previous sentence animate? Was the previous sentence a question? etc.) can be incorporated into the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation for History-based Grammars", "sec_num": null }, { "text": "The history-based grammar model defines context of a parse tree in terms of the leftmost derivation of the tree. Following (5), we show in Figure 1 a contextfree grammar (CFG) for a'~b \"~ and the parse tree for the sentence aabb. The leftmost derivation of the tree T in Figure 1 is:", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 147, "text": "Figure 1", "ref_id": null }, { "start": 271, "end": 279, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\"P1 'r2 'P3 S --~ ASB --* aSB --~ aABB ~-~ aaBB ~-h aabB Y-~", "eq_num": "(2)" } ], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "where the rule used to expand the i-th node of the tree is denoted by ri. Note that we have in- dexed the non-terminal (NT) nodes of the tree with this leftmost order. We denote by ~-the sentential form obtained just before we expand node i. Hence, t~ corresponds to the sentential form aSB or equivalently to the string rlr2. 
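To make the derivation-order bookkeeping concrete, here is a minimal sketch (ours, not the authors' implementation) that replays a leftmost derivation and records each sentential form t_i seen just before node i is expanded. The grammar S -> A S B | A B, A -> a, B -> b is an assumption reconstructed from the aabb derivation above, since Figure 1 is not reproduced legibly in this version.

```python
# Minimal sketch: leftmost derivation and the histories t_i it induces.
# Grammar (assumed from the derivation of "aabb" shown above):
#   S -> A S B | A B,   A -> a,   B -> b
GRAMMAR = {"S": [["A", "S", "B"], ["A", "B"]], "A": [["a"]], "B": [["b"]]}

def leftmost_derivation(rule_choices, start="S"):
    """rule_choices[i] is the right-hand side chosen when expanding the
    i-th non-terminal node; returns the (t_i, r_i) pairs and the string."""
    form = [start]
    steps = []
    for rhs in rule_choices:
        i = next(k for k, sym in enumerate(form) if sym in GRAMMAR)  # leftmost NT
        steps.append(("".join(form), (form[i], tuple(rhs))))          # (t_i, r_i)
        form = form[:i] + rhs + form[i + 1:]
    return steps, "".join(form)

steps, w = leftmost_derivation(
    [["A", "S", "B"], ["a"], ["A", "B"], ["a"], ["b"], ["b"]])
assert w == "aabb"
assert steps[2][0] == "aSB"   # t_3: the sentential form before expanding node 3
```

The (t_i, r_i) pairs collected here are exactly the quantities that the decomposition p(T, w) = prod_i p(r_i | t_i), introduced below, conditions on.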
In a leftmost derivation we produce the words in left-toright order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "Using the one-to-one correspondence between leftmost derivations and parse trees, we can rewrite the joint probability in (1) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "~r~ p(T, w~) = H p(r, ]t[) i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "In a probabilistic context-free grammar (P-CFG), the probability of an expansion at node i depends only on the identity of the non-terminal Ni, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "p(r lq) = Thus v(T, = II i----1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "So in P-CFG the derivation order does not affect the probabilistic model 1. A less crude approximation than the usual P-CFG is to use a decision tree to determine which aspects of the leftmost derivation have a bearing on the probability of how node i will be expanded. In other words, the probability distribution p(ri ]t~) will be modeled by p (ri[E[t~] ) where E[t] is the equivalence class of the history ~ as determined by the decision tree. This allows our 1Note the abuse of notation since we denote by p(ri) the conditional probability of rewriting the nonterminal AT/. probabilistic model to use any information anywhere in the partial derivation tree to determine the probability of different expansions of the i-th non-terminal. The use of decision trees and a large bracketed corpus may shift some of the burden of identifying the intended parse from the grammarian to the statistical estimation methods. We refer to probabilistic methods based on the derivation as History-based Grammars (HBG).", "cite_spans": [ { "start": 346, "end": 355, "text": "(ri[E[t~]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "In this paper, we explored a restricted implementation of this model in which only the path from the current node to the root of the derivation along with the index of a branch (index of the child of a parent ) are examined in the decision tree model to build equivalence classes of histories. Other parts of the subtree are not examined in the implementation of HBG. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "[N It_PPH1 N] IV indicates_VVZ [Fn [Fn~whether_CSW [N a_AT1 call_NN1 N] [V completed_VVD successfully_RR V]Fn&] or_CC [Fn+ iLCSW [N some_DD error_NN1 N]@ [V was_VBDZ detected_VVN V] @[Fr that_CST [V caused_VVD IN the_AT call_NN1 N] [Ti to_TO fail_VVI Wi]V]Fr]Fn+] Fn]V]._.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The History-based Grammar Model", "sec_num": null }, { "text": "We have chosen computer manuals as a task domain. We picked the most frequent 3000 words in a corpus of 600,000 words from 10 manuals as our vocabulary. We then extracted a few million words of sentences that are completely covered by this vocabulary from 40,000,000 words of computer manuals. 
A randomly chosen sentence from a sample of 5000 sentences from this corpus is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Domain", "sec_num": null }, { "text": "396. It indicates whether a call completed successfully or if some error was detected that caused the call to fail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Domain", "sec_num": null }, { "text": "To define what we mean by a correct parse, we use a corpus of manually bracketed sentences at the University of Lancaster called the Treebank. The Treebank uses 17 non-terminal labels and 240 tags. The bracketing of the above sentence is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 255, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Task Domain", "sec_num": null }, { "text": "A parse produced by the grammar is judged to be correct if it agrees with the Treebank parse structurally and the NT labels agree. The grammar has a significantly richer NT label set (more than 10000) than the Treebank but we have defined an equivalence mapping between the grammar NT labels and the Treebank NT labels. In this paper, we do not include the tags in the measure of a correct parse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Domain", "sec_num": null }, { "text": "We have used about 25,000 sentences to help the grammarian develop the grammar with the goal that the correct (as defined above) parse is among the proposed (by the grammar) parses for sentence. Our most common test set consists of 1600 sentences that are never seen by the grammarian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Domain", "sec_num": null }, { "text": "The grammar used in this experiment is a broadcoverage, feature-based unification grammar. The grammar is context-free but uses unification to express rule templates for the the context-free productions. For example, the rule template:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": null }, { "text": "( 3): n unspec : n corresponds to three CFG productions where the second feature : n is either s, p, or : n. This rule template may elicit up to 7 non-terminals. The grammar has 21 features whose range of values maybe from 2 to about 100 with a median of 8. There are 672 rule templates of which 400 are actually exercised when we parse a corpus of 15,000 sentences. The number of productions that are realized in this training corpus is several hundred thousand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": null }, { "text": "While a NT in the above grammar is a feature vector, we group several NTs into one class we call a mnemonic represented by the one NT that is the least specified in that class. For example, the mnemonic VBOPASTSG* corresponds to all NTs that unify with: We use these mnemonics to label a parse tree and we also use them to estimate a P-CFG, where the probability of rewriting a NT is given by the probability of rewriting the mnemonic. So from a training set we induce a CFG from the actual mnemonic productions that are elicited in parsing the training corpus. Using the Inside-Outside algorithm, we can estimate P-CFG from a large corpus of text. 
But since we also have a large corpus of bracketed sentences, we can adapt the Inside-Outside algorithm to reestimate the probability parameters subject to the constraint that only parses consistent with the Treebank (where consistency is as defined earlier) contribute to the reestimation. From a training run of 15,000 sentences we observed 87,704 mnemonic productions, with 23,341 NT mnemonics of which 10,302 were lexical. Running on a test set of 760 sentences 32% of the rule templates were used, 7% of the lexical mnemonics, 10% of the constituent mnemonics, and 5% of the mnemonic productions actually contributed to parses of test sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P-CFG", "sec_num": null }, { "text": "pos--v 1 v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P-CFG", "sec_num": null }, { "text": "To evaluate the performance of a grammar and an accompanying model, we use two types of measurements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "\u2022 the any-consistent rate, defined as the percentage of sentences for which the correct parse is proposed among the many parses that the grammar provides for a sentence. We also measure the parse base, which is defined as the geometric mean of the number of proposed parses on a per word basis, to quantify the ambiguity of the grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "\u2022 the Viterbi rate defined as the percentage of sentences for which the most likely parse is consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "The any-contsistentt rate is a measure of the grammar's coverage of linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "The Instead of the usual approach where a grammarian tries to fine tune the grammar in the hope of improving the Viterbi rate we use the combination of a large Treebank and the resulting derivation histories with a decision tree building algorithm to extract statistical parameters that would improve the Viterbi rate. The grammarian's task remains that of improving the any-consistent rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "Viterbi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "The history-based grammar model is distinguished from the context-free grammar model in that each constituent structure depends not only on the input string, but also the entire history up to that point in the sentence. In HBGs, history is interpreted as any element of the output structure, or the parse tree, which has already been determined, including previous words, non-terminal categories, constituent structure, and any other linguistic information which is generated as part of the parse structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and Model Performance Metrics", "sec_num": null }, { "text": "Unlike P-CFG which assigns a probability to a mnemonic production, the HBG model assigns a probability to a rule template. 
Because of this the HBG formulation allows one to handle any grammar formalism that has a derivation process. For the HBG model, we have defined about 50 syntactic categories, referred to as Syn, and about 50 semantic categories, referred to as Sere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HBG Model", "sec_num": null }, { "text": "Each NT (and therefore mnemonic) of the grammar has been assigned a syntactic (Syn) and a semantic (Sem) category. We also associate with a non-terminal a primary lexical head, denoted by H1, and a secondary lexical head, denoted by H~. 2 When a rule is applied to a non-terminal, it indicates which child will generate the lexical primary head and which child will generate the secondary lexical head. The proposed generative model associates for each constituent in the parse tree the probability: p( Syn, Sern, R, H1, H2 [Synp, Setup, P~, Ipc, Hip, H2p ) In HBG, we predict the syntactic and semantic labels of a constituent, its rewrite rule, and its two lexical heads using the labels of the parent constituent, the parent's lexical heads, the parent's rule P~ that lead to the constituent and the constituent's index Ipc as a child of R~. As we discuss in a later section, we have also used with success more information about the derivation tree than the immediate parent in conditioning the probability of expanding a constituent.", "cite_spans": [ { "start": 524, "end": 530, "text": "[Synp,", "ref_id": null }, { "start": 531, "end": 537, "text": "Setup,", "ref_id": null }, { "start": 538, "end": 541, "text": "P~,", "ref_id": null }, { "start": 542, "end": 546, "text": "Ipc,", "ref_id": null }, { "start": 547, "end": 551, "text": "Hip,", "ref_id": null }, { "start": 552, "end": 557, "text": "H2p )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The HBG Model", "sec_num": null }, { "text": "2The primary lexical head H1 corresponds (roughly) to the linguistic notion of a lexicai head. The secondary lexical head H2 has no linguistic parallel. It merely represents a word in the constituent besides the head which contains predictive information about the constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HBG Model", "sec_num": null }, { "text": "We have approximated the above probability by the following five factors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HBG Model", "sec_num": null }, { "text": "1. p(Syn IP~, X~o, X~, Sy~, Se.~) 2. p ( Sern ISyn, Rv, /pc, Hip, H2p, Synp, Sern; ) 3. p ( R ]Syn, Sem, 1~, Ipc, Hip, H2p, Synp, Semi) 4. p(H IR, Sw, Sere, I o, 5. 
p(n2 IH1, 1< Sy , Sere, Ipc, Sy, p) While a different order for these predictions is possible, we only experimented with this one.", "cite_spans": [ { "start": 39, "end": 51, "text": "( Sern ISyn,", "ref_id": null }, { "start": 52, "end": 55, "text": "Rv,", "ref_id": null }, { "start": 56, "end": 60, "text": "/pc,", "ref_id": null }, { "start": 61, "end": 65, "text": "Hip,", "ref_id": null }, { "start": 66, "end": 70, "text": "H2p,", "ref_id": null }, { "start": 71, "end": 76, "text": "Synp,", "ref_id": null }, { "start": 77, "end": 82, "text": "Sern;", "ref_id": null }, { "start": 83, "end": 84, "text": ")", "ref_id": null }, { "start": 90, "end": 99, "text": "( R ]Syn,", "ref_id": null }, { "start": 100, "end": 104, "text": "Sem,", "ref_id": null }, { "start": 105, "end": 108, "text": "1~,", "ref_id": null }, { "start": 109, "end": 113, "text": "Ipc,", "ref_id": null }, { "start": 114, "end": 118, "text": "Hip,", "ref_id": null }, { "start": 119, "end": 123, "text": "H2p,", "ref_id": null }, { "start": 124, "end": 129, "text": "Synp,", "ref_id": null }, { "start": 130, "end": 146, "text": "Semi) 4. p(H IR,", "ref_id": null }, { "start": 147, "end": 150, "text": "Sw,", "ref_id": null }, { "start": 151, "end": 156, "text": "Sere,", "ref_id": null }, { "start": 157, "end": 161, "text": "I o,", "ref_id": null }, { "start": 162, "end": 174, "text": "5. p(n2 IH1,", "ref_id": null }, { "start": 175, "end": 182, "text": "1< Sy ,", "ref_id": null }, { "start": 183, "end": 188, "text": "Sere,", "ref_id": null }, { "start": 189, "end": 193, "text": "Ipc,", "ref_id": null }, { "start": 194, "end": 197, "text": "Sy,", "ref_id": null }, { "start": 198, "end": 200, "text": "p)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The HBG Model", "sec_num": null }, { "text": "We only have built a decision tree to the rule probability component (3) of the model. For the moment, we are using n-gram models with the usual deleted interpolation for smoothing for the other four components of the model. We have assigned bit strings to the syntactic and semantic categories and to the rules manually. Our intention is that bit strings differing in the least significant bit positions correspond to categories of non-terminals or rules that are similar. We also have assigned bitstrings for the words in the vocabulary (the lexical heads) using automatic clustering algorithms using the bigram mutual information clustering algorithm (see (5)). Given the bitsting of a history, we then designed a decision tree for modeling the probability that a rule will be used for rewriting a node in the parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": null }, { "text": "Since the grammar produces parses which may be more detailed than the Treebank, the decision tree was built using a training set constructed in the following manner. Using the grammar with the P-CFG model we determined the most likely parse that is consistent with the Treebank and considered the resulting sentence-tree pair as an event. Note that the grammar parse will also provide the lexical head structure of the parse. Then, we extracted using leftmost derivation order tupies of a history (truncated to the definition of a history in the HBG model) and the corresponding rule used in expanding a node. 
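The sketch below (ours; the tree encoding and the exact truncation of the history are assumptions, not the authors' data structures) illustrates this extraction step: it walks a parse tree top-down and left to right, i.e. in leftmost derivation order, and emits one (truncated history, rule) event per non-terminal node.

```python
# Minimal sketch of the event extraction described above.  The history
# kept here is only the path from the root to the node, as a sequence of
# (ancestor label, child index) pairs, which mirrors the truncation used
# by HBG; everything else about the tree encoding is an assumption.
class Node:
    def __init__(self, label, rule=None, children=()):
        self.label = label            # mnemonic / non-terminal (or word)
        self.rule = rule              # rule template used to expand this node
        self.children = list(children)

def extract_events(node, path=()):
    """Yield (history, rule) pairs in leftmost derivation order."""
    if node.rule is not None:         # non-terminal node
        yield path, node.rule
    for i, child in enumerate(node.children):
        yield from extract_events(child, path + ((node.label, i),))

# events = list(extract_events(parse_tree)) is the data set that feeds
# the decision-tree growing step described next.
```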
Using the resulting data set we built a decision tree by classifying histories to locally minimize the entropy of the rule template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": null }, { "text": "With a training set of about 9000 sentencetree pairs, we had about 240,000 tuples and we grew a tree with about 40,000 nodes. This required 18 hours on a 25 MIPS RISC-based machine and the resulting decision tree was nearly 100 megabytes. immediate parent is the constituent that immediately dominates the constituent being predicted. If the immediate parent of a constituent has a different syntactic type from that of the constituent, then the immediate parent is also the functional parent; otherwise, the functional parent is the functional parent of the immediate parent. The distinction between functional parents and immediate parents arises primarily to cope with unit productions. When unit productions of the form XP2 ~ XP1 occur, the immediate parent of XP1 is XP2. But, in general, the constituent XP2 does not contain enough useful information for ambiguity resolution. In particular, when considering only immediate parents, unit rules such as NP2 --* NP1 prevent the probabilistic model from allowing the NP1 constituent to interact with the VP rule which is the functional parent of NP1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": null }, { "text": "When the two parents are identical as it often happens, the duplicate information will be ignored. However, when they differ, the decision tree will select that parental context which best resolves ambiguities. Figure 3 shows an example of the representation of a history in HBG for the prepositional phrase \"with a list.\" In this example, the immediate parent of the N1 node is the NBAR4 node and the functional parent of N1 is the PP1 node.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Parameter Estimation", "sec_num": null }, { "text": "We compared the performance of HBG to the \"broad-coverage\" probabilistic context-free gram- In developing HBG, we experimented with similar models of varying complexity. One discovery made during this experimentation is that models which incorporated more context than HBG performed slightly worse than HBG. This suggests that the current training corpus may not contain enough sentences to estimate richer models. Based on the results of these experiments, it appears likely that significantly increasing the sise of the training corpus should result in a corresponding improvement in the accuracy of HBG and richer HBG-like models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "To check the value of the above detailed history, we tried the simpler model: 1. p(H1 [HI~, H~, P~, Z~o) 2. p (H2 [H~, H~p, H2p, 1%, Ip~) 3. p (syn IH , 4. v(Sem ISYn, H, , Ip, ) 5. p (R [Syn, Sere, H~, H2) This model corresponds to a P-CFG with NTs that are the crude syntax and semantic categories annotated with the lexical heads. The Viterbi rate in this case was 66%, a small improvement over the P-CFG model indicating the value of using more context from the derivation tree.", "cite_spans": [ { "start": 86, "end": 104, "text": "[HI~, H~, P~, Z~o)", "ref_id": null }, { "start": 143, "end": 152, "text": "(syn IH ,", "ref_id": null }, { "start": 153, "end": 167, "text": "4. 
v(Sem ISYn,", "ref_id": null }, { "start": 168, "end": 170, "text": "H,", "ref_id": null }, { "start": 171, "end": 172, "text": ",", "ref_id": null }, { "start": 173, "end": 176, "text": "Ip,", "ref_id": null }, { "start": 177, "end": 178, "text": ")", "ref_id": null }, { "start": 184, "end": 192, "text": "(R [Syn,", "ref_id": null }, { "start": 193, "end": 198, "text": "Sere,", "ref_id": null }, { "start": 199, "end": 202, "text": "H~,", "ref_id": null }, { "start": 203, "end": 206, "text": "H2)", "ref_id": null } ], "ref_spans": [ { "start": 110, "end": 137, "text": "(H2 [H~, H~p, H2p, 1%, Ip~)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "The success of the HBG model encourages future development of general history-based grammars as a more promising approach than the usual P-CFG. More experimentation is needed with a larger Treebank than was used in this study and with different aspects of the derivation history. In addition, this paper illustrates a new approach to grammar development where the parsing problem is divided (and hopefully conquered) into two subproblems: one of grammar coverage for the grammarian to address and the other of statistical modeling to increase the probability of picking the correct parse of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic Modeling for Automatic Speech Understanding", "authors": [ { "first": "J", "middle": [ "K" ], "last": "Baker", "suffix": "" } ], "year": 1975, "venue": "Speech Recognition", "volume": "", "issue": "", "pages": "521--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, J. K., 1975. Stochastic Modeling for Au- tomatic Speech Understanding. In Speech Recognition, edited by Raj Reddy, Academic Press, pp. 521-542.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic Acquisition of Subcategorization Frames from Untagged Freetext Corpora", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Brent", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent, M. R. 1991. Automatic Acquisition of Sub- categorization Frames from Untagged Free- text Corpora. In Proceedings of the 29th An- nual Meeting of the Association for Computa- tional Linguistics. Berkeley, California.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Deducing Linguistic Structure from the Statistics of Large Corpora", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" }, { "first": "D", "middle": [], "last": "Magerman", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, E., Magerman, D., Marcus, M., and San- torini, B. 1990. Deducing Linguistic Structure from the Statistics of Large Corpora. In Pro- ceedings of the June 1990 DARPA Speech and Natural Language Workshop. 
Hidden Valley, Pennsylvania.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Class-based ngram Models of Natural Language", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Desouza", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Lai", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1990, "venue": "Proceedings of ~he IBM Natural Language ITL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., Della Pietra, V. J., deSouza, P. V., Lai, J. C., and Mercer, R. L. Class-based n- gram Models of Natural Language. In Pro- ceedings of ~he IBM Natural Language ITL, March, 1990. Paris, France.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the Second Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the Second Conference on Ap- plied Natural Language Processing. Austin, Texas.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Poor Estimates of Context are Worse than None", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A. and Church, K. 1990. Poor Estimates of Context are Worse than None. In Proceed- ings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Introduction to Formal Language Theory", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Harrison", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harrison, M. A. 1978. Introduction to Formal Language Theory. Addison-Wesley Publishing Company.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Structural Ambiguity and Lexical Relations", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "M", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the :June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley", "volume": "", "issue": "", "pages": "Pennsyl-- vania", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D. and Rooth, M. 1990. Structural Am- biguity and Lexical Relations. In Proceedings of the :June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsyl- vania.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Self-organizing Language Modeling for Speech Recognition", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelinek, F. 1985. 
Self-organizing Language Model- ing for Speech Recognition. IBM Report.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Pearl: A Probabilistic Chart Parser", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Magerman", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the February 1991 DARPA Speech and Natural Language Workshop. Asilomar, California", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magerman, D. M. and Marcus, M. P. 1991. Pearl: A Probabilistic Chart Parser. In Proceedings of the February 1991 DARPA Speech and Nat- ural Language Workshop. Asilomar, Califor- nia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Probabilistic Grammar for Phonetic to French Transcription", "authors": [ { "first": "A", "middle": [], "last": "Derouault", "suffix": "" }, { "first": "B", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1985, "venue": "ICASSP 85 Proceedings", "volume": "", "issue": "", "pages": "1577--1580", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derouault, A., and Merialdo, B., 1985. Probabilis- tic Grammar for Phonetic to French Tran- scription. ICASSP 85 Proceedings. Tampa, Florida, pp. 1577-1580.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generating a Grammar for Statistical Training", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Sharman", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the :June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharman, R. A., :Jelinek, F., and Mercer, R. 1990. Generating a Grammar for Statistical Train- ing. In Proceedings of the :June 1990 DARPA Speech and Natural Language Workshop. Hid- den Valley, Pennsylvania.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Grammar and parse tree for aabb.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Sample bracketed sentence from Lancaster Treebank.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Sample representation of \"with a list\" in HBG model.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Parsing accuracy: P-CFG vs. HBG", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
rate evaluates the grammar's coverage with the statistical model imposed on the grammar. The goal of probabilistic modelling is to produce a Viterbi rate close to the any-consistent rate. The any-consistent rate is 90% when we require the structure and the labels to agree, and 96% when unlabeled bracketing is required. These results are obtained on 760 sentences from 7 to 17 words long from test material that has never been seen by the grammarian. The parse base is 1.35 parses/word. This translates to about 23 parses for a 12-word sentence. The unlabeled Viterbi rate stands at 64% and the labeled Viterbi rate is 60%. While we believe that the above Viterbi rate is close to, if not at, the state-of-the-art performance, there is room for improvement by using a more refined statistical model to achieve the labeled any-consistent rate of 90% with this grammar. There is a significant gap between the labeled Viterbi and any-consistent rates: 30 percentage points.
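For concreteness, here is a minimal sketch (ours, with an assumed per-sentence record format, not the authors' code) of how the any-consistent rate, the Viterbi rate, and the parse base defined above can be computed.

```python
import math

# Assumed record format: each sentence carries its word count, the number
# of parses the grammar proposed, and whether the correct parse was
# (a) among the proposals and (b) the most likely proposal.
def evaluate(records):
    n = len(records)
    any_consistent_rate = sum(r["any_consistent"] for r in records) / n
    viterbi_rate = sum(r["viterbi_consistent"] for r in records) / n
    # parse base: geometric mean of the number of proposed parses,
    # taken on a per-word basis
    total_words = sum(r["n_words"] for r in records)
    log_parses = sum(math.log(r["n_parses"]) for r in records)
    parse_base = math.exp(log_parses / total_words)
    return any_consistent_rate, viterbi_rate, parse_base
```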
", "type_str": "table", "text": "Viterbi rate close to the anty-contsistentt rate.", "html": null, "num": null }, "TABREF3": { "content": "
error rate.
Accuracy
P-CFG            59.8%
HBG              74.6%
Error Reduction  36.8%
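A quick consistency check on this table (our arithmetic, not a derivation given by the authors): the error rate drops from 100 - 59.8 = 40.2% under P-CFG to 100 - 74.6 = 25.4% under HBG, and (40.2 - 25.4) / 40.2 is approximately 0.368, i.e. the 36.8% error reduction shown above.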
", "type_str": "table", "text": "mar, P-CFG. The any-consistent rate of the grammar is 90% on test sentences of 7 to 17 words. The Vi$erbi rate of P-CFG is 60% on the same test corpus of 760 sentences used in our experiments. On the same test sentences, the HBG model has a Viterbi rate of 75%. This is a reduction of 37% in", "html": null, "num": null } } } }