{ "paper_id": "1991", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:35:30.853355Z" }, "title": "Pearl: A Probabilistic Chart Parser*", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Magerman", "suffix": "", "affiliation": { "laboratory": "", "institution": "CS Department Stanf o rd University Stanford", "location": { "postCode": "9430. 5", "region": "CA" } }, "email": "ma.german@cs.sta.nford.edu" }, { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "", "affiliation": { "laboratory": "", "institution": "CS Department Stanf o rd University Stanford", "location": { "postCode": "9430. 5", "region": "CA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a natural language pars ing algorithm for unrestricted text which uses a pr:obability-based scoring function to select the \"best\" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up cha.rt parser with Earley-type top-down prediction which pur sues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpre tation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on c6ntext to pre dict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word mod els, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, determining cate gories for unknown words, and selecting correct parses first using a very loosely fitting covering grammar. 1 *-This \u2022work was partially supported by DARPA grant No. N0014-85-K0018, ONR contract No. N00014-89-C-0l 71 by DARPA and AFOSR jointly under grant No. AFOSR-90-0066, and by ARO grant No. DAAL 03-89-C0031 PRI. Special thanks to Carl Weir and Lynette Hirschman at Unisys for their valued input, guidance and support. 1 The grammar used for our experiments is the string grammar used in Unisys' PUNDIT natural language un derstanding system.", "pdf_parse": { "paper_id": "1991", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a natural language pars ing algorithm for unrestricted text which uses a pr:obability-based scoring function to select the \"best\" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up cha.rt parser with Earley-type top-down prediction which pur sues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpre tation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on c6ntext to pre dict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word mod els, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. 
In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, determining categories for unknown words, and selecting correct parses first using a very loosely fitting covering grammar. 1 * This work was partially supported by DARPA grant No. N0014-85-K0018, ONR contract No. N00014-89-C-0171, by DARPA and AFOSR jointly under grant No. AFOSR-90-0066, and by ARO grant No. DAAL 03-89-C0031 PRI. Special thanks to Carl Weir and Lynette Hirschman at Unisys for their valued input, guidance and support. 1 The grammar used for our experiments is the string grammar used in Unisys' PUNDIT natural language understanding system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "All natural language grammars are ambiguous. Even tightly fitting natural language grammars are ambiguous in some ways. Loosely fitting grammars, which are necessary for handling the variability and complexity of unrestricted text and speech, are worse. The standard technique for dealing with this ambiguity, pruning grammars by hand, is painful, time-consuming, and usually arbitrary. The solution which many people have proposed is to use stochastic models to train statistical grammars automatically from a large corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Attempts at applying statistical techniques to natural language parsing have exhibited varying degrees of success. These successful and unsuccessful attempts have suggested to us that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 Stochastic techniques combined with traditional linguistic theories can (and indeed must) provide a solution to the natural language understanding problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 In order for stochastic techniques to be effective, they must be applied with restraint (poor estimates of context are worse than none [5]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 Interactive, interleaved architectures are preferable to pipeline architectures in NLU systems, because they use more of the available information in the decision-making process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We have constructed a stochastic parser, Pearl, which is based on these ideas. The development of the Pearl parser is an effort to combine the statistical models developed recently into a single tool which incorporates all of these models into the decision-making component of a parser. While we have only attempted to incorporate a few simple statistical models into this parser, Pearl is structured in a way which allows any number of syntactic, semantic, and other knowledge sources to contribute to parsing decisions. The current implementation of Pearl uses Church's part-of-speech assignment trigram model, a simple probabilistic unknown word model, and a 
conditional probability model for grammar rules based on part-of-speech trigrams and parent rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "By combining multiple knowledge sources and using a chart-parsing framework, Pearl attempts to handle a number of difficult problems. Pearl has the capability to parse word lattices, an ability which is useful in recognizing idioms in text processing, as well as in speech processing. The parser uses probabilistic training from a corpus to disambiguate between grammatically acceptable structures, such as determining prepositional phrase attachment and conjunction scope. In this paper, we will first explain our contribution to the stochastic models which are used in Pearl: a context-free grammar with context-sensitive conditional probabilities. Then, we will describe the parser's architecture and the parsing algorithm. Finally, we will give the results of some experiments we performed using Pearl which explore its capabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Recent work involving context-free and context-sensitive probabilistic grammars provides little hope for the success of processing unrestricted text using probabilistic techniques. Work by Chitrao and Grishman [3] and by Sharman, Jelinek, and Mercer [14] exhibits accuracy rates lower than 50% using supervised training. Supervised training for probabilistic CFGs requires parsed corpora, which are very costly in time and manpower [2].", "cite_spans": [ { "start": 211, "end": 214, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 222, "end": 230, "text": "Sharman,", "ref_id": null }, { "start": 231, "end": 239, "text": "Jelinek,", "ref_id": null }, { "start": 240, "end": 256, "text": "and Mercer [14]", "ref_id": null }, { "start": 437, "end": 440, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null }, { "text": "In our investigations, we have made two observations which attempt to explain the lackluster performance of statistical parsing techniques:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null }, { "text": "\u2022 Simple probabilistic CFGs provide general information about how likely a construct is to appear anywhere in a sample of a language. This average likelihood is often a poor estimate of probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null }, { "text": "\u2022 Parsing algorithms which accumulate probabilities of parse theories by simply multiplying them over-penalize infrequent constructs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null }, { "text": "Pearl avoids the first pitfall by using a context-sensitive conditional probability CFG, where the context of a theory is determined by the theories which predicted it and the part-of-speech sequences in the input sentence. To address the second issue, Pearl scores each theory by using the geometric mean of the contextual conditional probabilities of all of the theories which have contributed to that theory. This is equivalent to using the sum of the logs of these probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null },
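To make the contrast concrete, here is a minimal sketch (ours, not part of the paper) of the two scoring schemes: a plain product of the contributing probabilities versus the geometric mean Pearl uses, computed as the exponentiated average of logs. The probability values are invented for illustration.

```python
import math

def product_score(probs):
    """Joint score as a plain product; longer derivations are penalized
    simply for containing more rules."""
    return math.prod(probs)

def geometric_mean_score(probs):
    """Geometric mean of the contributing probabilities, i.e. the
    exponentiated mean of their logs, so the score does not shrink
    with the number of rules in the theory."""
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

# Invented conditional probabilities for two competing theories.
short_theory = [0.4, 0.4]            # two rules, each moderately predicted
long_theory = [0.6, 0.6, 0.6, 0.6]   # four rules, each better predicted

print(product_score(short_theory), product_score(long_theory))
# ~0.16 vs ~0.13: the product prefers the shorter theory
print(geometric_mean_score(short_theory), geometric_mean_score(long_theory))
# ~0.4 vs ~0.6: the geometric mean prefers the better-predicted theory
```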
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Statistics to Parse", "sec_num": null }, { "text": "Jn a very large parsed corp11s of English t.ex t, one fi nds that the most. freq uently occurring noun phrase structure in the text is a noun phrase containing a determiner followed by a noun. Simple probabilistic CFCs dictate that., given this information , \"determiner noun\" should be the most likely interpretation of a noun phrase. Now, consider only those noun phrases which oc cur as subjects of a sen tence. In a given corpus, you might find that pronouns occur just a5 frequently as \"determiner noun\"s in the subject position. This type of information can ea5ily be captured by conditional probabitities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CFG with context-sensit ive conditional probabilities", "sec_num": null }, { "text": "Finally, assume that the sentence begins with a pro noun followed by a verb. In this case, it is quite clear that, while you can probably concoct a sentence which fits this description and does not have a pronoun for a subject, the first theory which you should pursue is one which makes this hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CFG with context-sensit ive conditional probabilities", "sec_num": null }, { "text": "The context-sensitive conditional probabilities which Pearl uses take into account the immediate parent of a theory 3 and the part-of-speech trigram centered at the beginning of the theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CFG with context-sensit ive conditional probabilities", "sec_num": null }, { "text": "For example, consider the sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CFG with context-sensit ive conditional probabilities", "sec_num": null }, { "text": "My first love was named Pearl.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CFG with context-sensit ive conditional probabilities", "sec_num": null }, { "text": "A theory which tries to interpret \"love\" as a verb will be scored based on the part-of.:.speech trigram \"adjec tive verb verb\" and the parent theory,. probably \"S --1-NP VP.\" A theory which interprets \"love\" as a noun will be scored based on the trigram \"adjective noun verb.\" Although lexical probabilities favor \"love\" as a verb, the conditional probabilities will heavily favor \"love\" as a noun in this context. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(no subliminal propaganda intended)", "sec_num": null }, { "text": "According to probability theory, the likelihood of two independent events occurring at the same time is the product of their individual probabilities. Previous sta tistical parsing techniques apply this definition to the cooccurrence of two theories in a parse, and claim that the likelihood of the two theories being correct is the product of the probabilities of the two theories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "This application of probability theory ignores two vital observations a.bout the domain of statistical pars ing:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "\u2022 Two constructs occurring in the same sentence a . 
{ "text": "According to probability theory, the likelihood of two independent events occurring at the same time is the product of their individual probabilities. Previous statistical parsing techniques apply this definition to the cooccurrence of two theories in a parse, and claim that the likelihood of the two theories being correct is the product of the probabilities of the two theories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "This application of probability theory ignores two vital observations about the domain of statistical parsing:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "\u2022 Two constructs occurring in the same sentence are not necessarily independent (and frequently are not).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "If the independence assumption is violated, then the product of individual probabilities has no meaning with respect to the joint probability of two events. \u2022 Since statistical parsing suffers from sparse data, probability estimates of low-frequency events will usually be inaccurate estimates. Extreme underestimates of the likelihood of low-frequency events will produce misleading joint probability estimates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "From these observations, we have determined that estimating joint probabilities of theories using individual probabilities is too difficult with the available data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "We have found that the geometric mean of these probability estimates provides an accurate assessment of a theory's viability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the Geometric Mean of Theory Scores", "sec_num": null }, { "text": "In a departure from standard practice, and perhaps against better judgment, we will include a precise description of the theory scoring function used by Pearl. This scoring function tries to solve some of the problems noted in previous attempts at probabilistic parsing [3] [14]:", "cite_spans": [ { "start": 271, "end": 274, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "The Actual Theory Scoring Function", "sec_num": null }, { "text": "\u2022 Theory scores should not depend on the length of the string which the theory spans. \u2022 Sparse data (zero-frequency events) and even zero-probability events do occur and should not result in zero-scoring theories. \u2022 Theory scores should not discriminate against unlikely constructs when the context predicts them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Actual Theory Scoring Function", "sec_num": null }, { "text": "The raw score of a theory, \u03b8, is calculated by taking the product of the conditional probability of that theory's CFG rule given the context (where context is a part-of-speech trigram and a parent theory's rule) and the score of the trigram: SC_raw(\u03b8) = P(rule_\u03b8 | (P0P1P2), rule_parent) \u00b7 sc(P0P1P2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Actual Theory Scoring Function", "sec_num": null }, { "text": "Here, the score of a trigram is the product of the mutual information of the part-of-speech trigram, 5 P0P1P2, and the lexical probability of the word at the location of P1 being assigned that part-of-speech P1. 6 In the case of ambiguity (part-of-speech ambiguity or multiple parent theories), the maximum value of this product is used. The score of a partial theory or a complete theory is the geometric mean of the raw scores of all of the theories which are contained in that theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Actual Theory Scoring Function", "sec_num": null }, { "text": "5 The mutual information of the part-of-speech trigram P0P1P2 is defined to be P(P0P1P2) / (P(P0xP2) P(P1)), where x is any part-of-speech. See [4] for further explanation. 6 The trigram scoring function actually used by the parser is somewhat more complicated than this.", "cite_spans": [ { "start": 82, "end": 85, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null },
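The raw-score formula above can be rendered schematically as follows. This is our reconstruction for illustration only: the function names and numbers are invented, mutual information is taken as the plain ratio given in footnote 5, the lexical term is simplified (footnote 6 notes the real trigram score is more complicated), and the add-0.5 helper anticipates the low-frequency estimator described under Low-frequency Events below.

```python
def smoothed(count, total, outcomes):
    """Simple low-frequency estimate: add 0.5 to every frequency count
    (the technique the paper reluctantly attributes to Church); the
    normalization over possible outcomes is our simplification."""
    return (count + 0.5) / (total + 0.5 * outcomes)

def mutual_information(p_trigram, p_outer, p_middle):
    """MI of tag trigram p0 p1 p2, reconstructed from footnote 5 as
    P(p0 p1 p2) / (P(p0 x p2) * P(p1)), with x ranging over all tags."""
    return p_trigram / (p_outer * p_middle)

def trigram_score(p_trigram, p_outer, p_middle, p_tag_given_word):
    """sc(p0 p1 p2): trigram mutual information times the lexical probability
    of the middle word taking tag p1 (simplified)."""
    return mutual_information(p_trigram, p_outer, p_middle) * p_tag_given_word

def raw_score(p_rule_given_context, tri_score):
    """SC_raw(theta) = P(rule_theta | p0 p1 p2, parent rule) * sc(p0 p1 p2)."""
    return p_rule_given_context * tri_score

# Invented numbers for the "love" example: the noun reading's trigram context
# (adjective noun verb) outweighs the verb reading's (adjective verb verb).
sc_noun = trigram_score(p_trigram=0.012, p_outer=0.030, p_middle=0.25, p_tag_given_word=0.3)
sc_verb = trigram_score(p_trigram=0.002, p_outer=0.030, p_middle=0.20, p_tag_given_word=0.7)
print(raw_score(smoothed(30, 50, 12), sc_noun))  # noun reading scores higher
print(raw_score(smoothed(4, 50, 12), sc_verb))   # verb reading scores lower
```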
{ "text": "Theory Length Independence This scoring function, although heuristic in derivation, provides a method for evaluating the value of a theory, regardless of its length. When a rule is first predicted (Earley-style), its score is just its raw score, which represents how much the context predicts it. However, when the parse process hypothesizes interpretations of the sentence which reinforce this theory, the geometric mean of all of the raw scores of the rule's subtree is used, representing the overall likelihood of the theory given the context of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Low-frequency Events Although some statistical natural language applications employ backing-off estimation techniques [12][5] to handle low-frequency events, Pearl uses a very simple estimation technique, reluctantly attributed to Church [8]. This technique estimates the probability of an event by adding 0.5 to every frequency count. 7 Low-scoring theories will be predicted by the Earley-style parser. And, if no other hypothesis is suggested, these theories will be pursued. If a high-scoring theory advances a theory with a very low raw score, the resulting theory's score will be the geometric mean of all of the raw scores of theories contained in that theory, and thus will be much higher than the low-scoring theory's score.", "cite_spans": [ { "start": 240, "end": 243, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "As an example of how the conditional-probability-based scoring function handles ambiguity, consider the sentence Fruit flies like a banana.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example of Scoring Function", "sec_num": null }, { "text": "in the domain of insect studies. Lexical probabilities should indicate that the word \"flies\" is more likely to be a plural noun than an active verb. This information is incorporated in the trigram scores. However, when the interpretation S \u2192 . NP VP is proposed, two possible NPs will be parsed, NP \u2192 noun (fruit) and NP \u2192 noun noun (fruit flies). Since this sentence is syntactically ambiguous, if the first hypothesis is tested first, the parser will interpret this sentence incorrectly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example of Scoring Function", "sec_num": null }, { "text": "However, this will not happen in this domain. Since \"fruit flies\" is a common idiom in insect studies, the score of its trigram, noun noun verb, will be much greater than the score of the trigram, noun verb verb. Thus, not only will the lexical probability of the word \"flies/verb\" be lower than that of \"flies/noun,\" but also the raw score of \"NP \u2192 noun (fruit)\" will be lower than that of \"NP \u2192 noun noun (fruit flies),\" because of the differential between the trigram scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example of Scoring Function", "sec_num": null }, { "text": "So, \"NP \u2192 noun noun\" will be used first to advance the \"S \u2192 . NP VP\" rule. 
Further, even if the parser advances both NP hypotheses, the \"S \u2192 NP . VP\" rule using \"NP \u2192 noun noun\" will have a higher score than the \"S \u2192 NP . VP\" rule using \"NP \u2192 noun.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example of Scoring Function", "sec_num": null }, { "text": "Interleaved Architecture in Pearl However, if this fails, the semantic interpreter might be able to derive some meaning from the sentence if given non-overlapping noun, verb, and prepositional phrases. If a sentence fails to parse, requests for partial parses of the input string can be made by specifying a range which the parse tree should cover and the category (NP, VP, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example of Scoring Function", "sec_num": null }, { "text": "The ability to produce partial parses allows the system to handle multiple-sentence inputs. In both speech and text processing, it is difficult to know where the end of a sentence is. For instance, one cannot reliably determine when a speaker terminates a sentence in free speech. And in text processing, abbreviations and quoted expressions produce ambiguity about sentence termination. When this ambiguity exists, Pearl can be queried for partial parse trees for the given input, where the goal category is a sentence. Thus, if the word string is actually two complete sentences, the parser can return this information. However, if the word string is only one sentence, then a complete parse tree is returned at little extra cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearl is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction. The significant difference between Pearl and non-probabilistic bottom-up parsers is that instead of completely generating all grammatical interpretations of a word string,", "sec_num": null }, { "text": "Trainability One of the major advantages of the probabilistic parsers is trainability. The conditional probabilities used by Pearl are estimated by using frequencies from a large corpus of parsed sentences. The parsed sentences must be parsed using the grammar formalism which Pearl will use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearl is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction. The significant difference between Pearl and non-probabilistic bottom-up parsers is that instead of completely generating all grammatical interpretations of a word string,", "sec_num": null },
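As a sketch of how such frequencies might be harvested (the tree encoding and helpers below are our invention, not Pearl's actual training code), one can walk each tree in a parsed corpus and count every rule together with the parent rule that predicted it.

```python
from collections import Counter

def rule_of(node):
    """Rule name for a tree node, e.g. ("S", (NP_node, VP_node)) -> "S -> NP VP"."""
    label, children = node
    return label + " -> " + " ".join(c[0] if isinstance(c, tuple) else c for c in children)

def count_rules(node, parent_rule, counts):
    """Recursively count (rule, parent rule) pairs over a parse tree."""
    rule = rule_of(node)
    counts[(rule, parent_rule)] += 1
    for child in node[1]:
        if isinstance(child, tuple):   # non-terminal child; words are plain strings
            count_rules(child, rule, counts)

# A toy parsed corpus: trees are (label, children) tuples, leaves are tag strings.
corpus = [
    ("S", (("NP", ("pronoun",)), ("VP", ("verb", ("NP", ("det", "noun")))))),
]

counts = Counter()
for tree in corpus:
    count_rules(tree, "TOP", counts)
print(counts)
```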
{ "text": "Assuming the grammar is not recursive in an unconstrained way, the parser can be trained in an unsupervised mode. This is accomplished by running the parser without the scoring functions, and generating many parse trees for each sentence. Previous work 9 has demonstrated that the correct information from these parse trees will be reinforced, while the incorrect substructure will not. Multiple passes of re-training using frequency data from the previous pass should cause the frequency tables to converge to a stable state. This hypothesis has not yet been tested. 10 An alternative to completely unsupervised training is to take a parsed corpus for any domain of the same language using the same grammar, and use the frequency data from that corpus as the initial training material for the new corpus. This approach should serve only to minimize the number of unsupervised passes required for the frequency data to converge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearl is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction. The significant difference between Pearl and non-probabilistic bottom-up parsers is that instead of completely generating all grammatical interpretations of a word string,", "sec_num": null }, { "text": "While we have not yet done extensive testing of all of the capabilities of Pearl, we performed some simple tests to determine if its performance is at least consistent with the premises upon which it is based. The test sentences used for this evaluation are not from the training data on which the parser was trained. Using Pearl's context-free grammar, these test sentences produced an average of 64 parses per sentence, with some sentences producing over 100 parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation", "sec_num": null }, { "text": "To determine how Pearl handles unknown words, we removed five words from the lexicon, i, know, tee, describe, and station, and tried to parse the 40 sample sentences using the simple unknown word model previously described.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Part-of-speech Assignment", "sec_num": null }, { "text": "In this test, the pronoun, i, was assigned the correct part-of-speech 9 of 10 times it occurred in the test sentences. The nouns, tee and station, were correctly tagged 4 of 5 times. And the verbs, know and describe, were correctly tagged 3 of 3 times. pronoun 90%; noun 80%; verb 100%; overall 89%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Part-of-speech Assignment", "sec_num": null }, { "text": "While this accuracy is expected for unknown words in isolation, based on the accuracy of the part-of-speech tagging model, the performance is expected to degrade for sequences of unknown words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Performance on Unknown Words in Test Sentences", "sec_num": null }, { "text": "Accurately determining prepositional phrase attachment in general is a difficult and well-documented problem. However, based on experience with several different domains, we have found prepositional phrase attachment to be a domain-specific phenomenon for which training can be very helpful. For instance, in the direction-finding domain, from and to prepositional phrases generally attach to the preceding verb and not to any noun phrase. This tendency is captured in the training process for Pearl and is used to guide the parser to the more likely attachment with respect to the domain. This does not mean that Pearl will get the correct parse when the less likely attachment is correct; in fact, Pearl will invariably get this case wrong. However, based on the premise that this is the less likely attachment, this will produce more correct analyses than incorrect. 
And, using a more sophisticated statistical model, this performance can easily be improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prepositional Phrase Attachment", "sec_num": null }, { "text": "Pearl's performance on prepositional phrase attachment was very high (54/55 or 98.2% correct). The reason the accuracy rate was so high is that the direction-finding domain is very consistent in its use of individual prepositions. The accuracy rate is not expected to be as high in other domains, although it certainly should be higher than 50% and we would expect it to be greater than 75%. Of the two sentences which did not parse, one used passive voice, which only occurred in one sentence in the training corpus. While the other sentence, How can I get from cafe sushi to Cambridge City Hospital by walking, did not produce a parse for the entire word string, it could be processed using Pearl's partial parsing capability. By accessing the chart produced by the failed parse attempt, the parser can find a parsed sentence containing the first eleven words, and a prepositional phrase containing the final two words. This information could be used to interpret the sentence properly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prepositional Phrase Attachment", "sec_num": null }, { "text": "The Pearl parser takes advantage of domain-dependent information to select the most appropriate interpretation of an input. However, the statistical measure used to disambiguate these interpretations is sensitive to certain attributes of the grammatical formalism used, as well as to the part-of-speech categories used to label lexical entries. All of the experiments performed on Pearl thus far have been using one grammar, one part-of-speech tag set, and one domain (direction finding).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": null }, { "text": "The probabilistic parser which we have described provides a platform for exploiting the useful information made available by statistical models in a manner which is consistent with existing grammar formalisms and parser designs. Pearl can be trained to use any context-free grammar, accompanied by the appropriate training material. And, the parsing algorithm is very similar to a standard bottom-up algorithm, with the exception of using theory scores to order the search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "More thorough testing is necessary to measure Pearl's performance in terms of parsing accuracy, part-of-speech assignment, unknown word categorization, idiom processing capabilities, and even word selection in speech processing. With the exception of word selection, preliminary tests show Pearl performs these tasks with a high degree of accuracy. But, in the absence of precise performance estimates, we still propose that the architecture of this parser is preferable to traditional pipeline architectures. Only by using an interleaved architecture can a speech recognizer efficiently make use of complex grammatical information to select from among hypothesized words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "The parent of a theory is defined as a 
theory with a CF rule which contains the left-hand side of the theory. For instance, if \"S \u2192 NP VP\" and \"NP \u2192