|
{ |
|
"paper_id": "1993", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:36:52.100379Z" |
|
}, |
|
"title": "Evaluation of TTP Parser: A Preliminary Report", |
|
"authors": [ |
|
{ |
|
"first": "Tomek", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": { |
|
"addrLine": "715 Broadway, rm. 704",

"postCode": "10003",

"settlement": "New York",

"region": "NY"
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"G N" |
|
], |
|
"last": "Scheyen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "TTP (Tagged Text Parser) is a fast and robust natural language parser specifically designed to process vast quantities of unrestricted text. TTP can analyze written text at the speed of approximately 0.3 sec/sentence, or 73 words per second. An important novel feature of the TTP parser is that it is equipped with a skip-and-fit recovery mechanism that allows for fast closing of more difficult sub-constituents after a preset amount of time has elapsed without producing a parse. Although a complete analysis is attempted for each sentence, the parser may occasionally ignore fragments of input to resume \"normal\" processing after skipping a few words. These fragments are later analyzed separately and attached as incomplete constituents to the main parse tree. TTP has recently been evaluated against several leading parsers. While no formal numbers were released (a formal evaluation is planned later this year), TTP has performed surprisingly well. The main argument of this paper is that TTP can provide a substantial gain in parsing speed while giving up relatively little in terms of the quality of the output it produces. This property allows TTP to be used effectively in parsing large volumes of text.",
|
"pdf_parse": { |
|
"paper_id": "1993", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "TTP (Tagged Text Parser) is a fast and robust natural language parser specifically designed to process vast quantities of unrestricted text. TTP can analyze written text at the speed of approximately 0.3 sec/sentence, or 73 words per second. An important novel feature of the TTP parser is that it is equipped with a skip-and-fit recovery mechanism that allows for fast closing of more difficult sub-constituents after a preset amount of time has elapsed without producing a parse. Although a complete analysis is attempted for each sentence, the parser may occasionally ignore fragments of input to resume \"normal\" processing after skipping a few words. These fragments are later analyzed separately and attached as incomplete constituents to the main parse tree. TTP has recently been evaluated against several leading parsers. While no formal numbers were released (a formal evaluation is planned later this year), TTP has performed surprisingly well. The main argument of this paper is that TTP can provide a substantial gain in parsing speed while giving up relatively little in terms of the quality of the output it produces. This property allows TTP to be used effectively in parsing large volumes of text.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{

"text": "Recently, there has been a growing demand for fast and reliable natural language processing tools, capable of performing reasonably accurate syntactic analysis of large volumes of text within an acceptable time. A full sentential parser that produces a complete analysis of input may be considered reasonably fast if the average parsing time per sentence falls anywhere between 2 and 10 seconds. A large volume of text, perhaps a gigabyte or more, would contain as many as 7 million sentences. At the speed of, say, 6 sec/sentence, this much text would require well over a year to parse. While 7 million sentences is a lot of text, this much may easily be contained in a fair-sized text database. Therefore, the parsing speed would have to be increased by at least a factor of 10 to make such a task manageable. In this paper we describe TTP, a fast and robust natural language parser that can analyze written text and generate regularized parse structures at a speed of below 1 second per sentence. In experiments conducted on a variety of natural language texts, including technical prose, news messages, and newspaper articles, the average parsing time varied between 0.3 sec/sentence and 0.5 sec/sentence, or between 2500 and 4300 words per minute, as we tried to find an acceptable compromise between the parser's speed and precision (these results were obtained on a Sun SparcStation 2). Original experiments were performed within an information retrieval system, with the recall/precision statistics used to measure the effectiveness of the parser.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overview of This Paper",

"sec_num": null

},
|
{ |
|
"text": "In the second part of the paper, the linguistic accuracy of TTP is discussed based on the partial results of a quantitative evaluation of its output using the Parseval method (Black et al, 1991) . This method calculates three scores of \"closeness\" as it compares the bracketed parse structures returned by the parser against a pre-selected standard. These scores are: the crossings rate, which indicates how many constituents in the candidate parse are incompatible with those in the standard; recall, which is the percentage of candidate constituents found in the standard; and precision, which specifies the percentage of standard constituents in the candidate parse. Parseval may also be used to compare the performance of different parsers. In comparison with NYU's Proteus parser, for example, which is on average two orders of magnitude slower than TTP, the crossing score was only 6 to 27% higher for TTP, with recall 13% lower, and approximately the same precision for both parsers. In addition we discuss the relationships between the allotted parse time per sentence, the average parsing time, the crossings rate, and the recall and precision scores. and Hanks (1990) used partial parses generated by Fidditch to study word co-occurrence patterns in syntactic contexts. On the other hand, applications involving information extraction or retrieval from text will usually require more accurate parsers.",
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 198, |
|
"text": "(Black et al, 1991)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1157, |
|
"end": 1173, |
|
"text": "and Hanks (1990)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of This Paper", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "until finally S \u2192 S + \"and\" + S is reached on the stack. Subsequently, the parser skips input to find 'and', then resumes normal processing. The key point here is that TTP decides (or is forced) to reduce incomplete constituents rather than to backtrack or otherwise select an alternative analysis. However, this is done only after the parser is thrown into the panic mode, which in the case of TTP is induced by the time-out signal. In other words, while there is still time TTP will proceed in regular top-down fashion until the time-out signal is received. Afterwards, for some productions early reduction will be forced and fragments of input will be skipped if necessary. If this action does not produce a parse within a preset time (which is usually much longer than the original 'regular' parsing time), a second time-out signal is generated which forces the parser to finish even at the cost of introducing dummy constituents into the parse tree. The skipped-over fragments of input are quickly processed by a simple phrasal analyzer, and then attached to the main parse tree at the points where they were deleted.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An alternative is to create a parser that would attempt to produce a complete parse, and would resort to partial or approximate analysis only under exceptional conditions such as an extra-grammatical input or a severe time pressure. Encountering a construction that it couldn't handle, the parser would first try to produce an approximate analysis of the difficult fragment, and then resume normal processing for the rest of the input. The outcome is a kind of \"fitted\" parse, reflecting a compromise between the actual input and grammar-encoded preferences. One way to accomplish this is to adopt the following procedure: (1) close (reduce) the obstructing constituent (one which is being currently parsed), then possibly reduce a few of its parent constituents, removing corresponding productions from further consideration, until a production is reactivated for which a continuation is possible; (2) jump over the intervening material so as to restart processing of the remainder of the sentence using the newly reactivated production. As an example, consider the following sentence where the highlighted fragment is likely to cause problems, and may be better off ignored in the first pass: \"The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees, in order to construct a program satisfying certain specifications a theorem induced by those specifications is proved, and the desired program is extracted from the proof.\" Assuming that the parser now reads the article 'a' following the string 'certain specifications', it may proceed to reduce the current NP, then S1 \u2192 \"to\" + V + NP, S1 \u2192 SA, SA \u2192 NP + V + NP + SA,",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An example parse structure returned by TTP is shown below. Note the (vrbtm X) brackets which surround all un-parsed tokens in the input.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An alternative is to create a parser that would attempt to produce a complete parse, and would resort to partial or approximate analysis only under exceptional conditions such as an extra-grammatical input or a severe time pressure. Encountering a construction that it couldn't handle, the parser would first try to produce an approximate analysis of the difficult fragment, and then resume normal processing for the rest of the input. The outcome is a kind of \"fitted\" parse, reflecting a compromise between the actual input and grammar-encoded preferences. One way to accomplish this is to adopt the following procedure: (1) close (reduce) the obstructing constituent (one which is being currently parsed), then possibly reduce a few of its parent constituents, removing corresponding productions from further consideration, until a production is reactivated for which a continuation is possible; (2) jump over the intervening material so as to restart processing of the remainder of the sentence using the newly reactivated production. As an example, consider the following sentence where the highlighted fragment is likely to cause problems, and may be better off ignored in the first pass: \"The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees, in order to construct a program satisfying certain specifications a theorem induced by those specifications is proved, and the desired program is extracted from the proof.\" Assuming that the parser now reads the article 'a' following the string 'certain specifications', it may proceed to reduce the current NP, then S1 \u2192 \"to\" + V + NP, S1 \u2192 SA, SA \u2192 NP + V + NP + SA,",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Mrs . ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(n progress)))) (pp (prep in) (np (name malaysia . ) ) ) ) (vrbtm \"))))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As may be expected, this kind of action involves a great deal of indeterminacy which, in the case of natural language strings, is compounded by the high degree of lexical ambiguity. If the purpose of this skip-and-fit technique is to get the parser smoothly through even the most complex strings, the amount of additional backtracking caused by the lexical-level ambiguity is certain to defeat it. Without lexical disambiguation of the input, the parser's performance will deteriorate, even if the skipping is limited only to certain types of adverbial adjuncts. The most common cases of lexical ambiguity are those of a plural noun (nns) vs. a singular verb (vbz), a singular noun (nn) vs. a plural or infinitive verb (vbp,vb) , and a past tense verb (vbd) vs. a past participle (vbn), as illustrated in the following example.",
|
"cite_spans": [ |
|
{ |
|
"start": 721, |
|
"end": 729, |
|
"text": "(vbp,vb)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"The notation used (vbn or vbd?) We use a stochastic tagger to process the input text prior to parsing. The tagger, developed at BBN (see Meteer et al., 1991), is based upon a bi-gram model and selects the most likely tag for a word given co-occurrence probabilities computed from a relatively small training set. The input to TTP looks more like the following: \"The/dt notation/nn used/vbn explicitly/rb associates/vbz a/dt data/nns structure/nn shared/vbn by/in concurrent/jj processes/nns with/in operations/nns defined/vbn on/in it/pp ./.\"",
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 32, |
|
"text": "(vbn or vbd?)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 159, |
|
"text": "BBN (see Meteer et al., 1991)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In 'normal' operation, TTP produces a regularized representation of each parsed sentence that reflects the sentence's logical structure. This representation may differ considerably from a standard parse tree, in that constituents get moved around (e.g., de-passivization), and the phrases are organized recursively around their head elements. However, for the purpose of the evaluation with Parseval an 'input-bracketing' version has been created. In this version the skipped-over material is simply left unbracketed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As the parsing proceeds, each sentence receives a new slot of time during which its parse is to be returned. The amount of time allotted to any particular sentence can be regulated to obtain an acceptable compromise between the parser's speed and accuracy. In our experiments we found that a 0.5 sec/sentence time slot was appropriate for the Wall Street Journal articles (the average length of a sentence in our WSJ collection is 17 words). We must note here that giving the parser more time per sentence doesn't always mean that a better (more accurate) parse will be obtained. For complex or extra-grammatical structures we are likely to be better off if we do not allow the parser to wander around for too long: the most likely interpretation of an unexpected input is probably the one generated early (the grammar rule ordering enforces some preferences). In fact, our experiments indicate that as the 'normal' parsing time is extended, the accuracy of the produced parse increases at an ever slowing pace, peaking at a certain value, then declining slightly to eventually stabilize at a constant level. This final leveling off indicates, we believe, an inherent limit in the coverage of the underlying grammar.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The time-out mechanism is implemented using straightforward parameter passing and is limited to only a subset of the nonterminals used by the grammar. Suppose that X is such a nonterminal, and that it appears on the right-hand side of a production S \u2192 X Y Z. The set of \"starters\" is computed for Y, which consists of the word tags that can occur as the left-most constituent of Y. This set is passed as a parameter while the parser attempts to recognize X in the input. If X is recognized successfully within a preset time, then the parser proceeds to parse a Y, and nothing else happens. On the other hand, if the parser cannot determine whether there is an X in the input or not, that is, it neither succeeds nor fails in parsing X before being timed out, the unfinished X constituent is closed (reduced) with a partial parse, and the parser is restarted at the closest element from the starters set for Y that can be found in the remainder of the input. If Y rewrites to an empty string, the starters for Z to the right of Y are added to the starters for Y and both sets are passed as a parameter to X. As an example consider the following clauses in the TTP parser:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "sentence(P) :- assertion([], P).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "assertion(SR,P) :- clause(SR,P1), s_coord(SR,P1,P).\nclause(SR,P) :- sa([pdt,dt,cd,pp,ppS,jj,jjr,jjs,nn,nns,np,nps],P2), subject([vbd,vbz,vbp],P1), verbphrase(SR,P1,P2,P).\nthats(SR,P) :-",
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 143, |
|
"text": "([vbd,vbz,vbp]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "that, assertion(SR,P).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the above code, P, P1, and P2 represent partial parse structures, while SR is a set of starter word tags where the parsing will resume should the present nonterminal be timed out. The first arguments to 'assertion', 'sa', and 'subject' are also sets of starter tags. In the 'clause' production above, a (finite) clause rewrites into a left sentence adjunct ('sa'), a 'subject', and a 'verbphrase'. If 'sa' is aborted before its evaluation is complete, the parser will jump over some elements of the unparsed portion of the input looking for a word that could begin a subject phrase: a pre-determiner (pdt), a determiner (dt), a count word (cd), a pronoun (pp,ppS) , an adjective (jj, jjr, jjs), a noun (nn, nns), or a proper name (np, nps) . Likewise, when 'subject' is timed out, the parser will restart with 'verbphrase' at either vbz, vbd or vbp (finite forms of a verb). Note that if 'verbphrase' is timed out both 'verbphrase' and 'clause' will be closed, and the parser will restart at an element of the set SR passed down to 'clause' from 'assertion'. Note also that in the top-level production for a sentence the starter set for 'assertion' is initialized to be empty: if a failure occurs at this level, no continuation is possible.",
|
"cite_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 668, |
|
"text": "(pp,ppS)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 745, |
|
"text": "(np, nps)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The forced reduction and skip-over are carried out through special productions that are activated only after a preset amount of time has elapsed since the start of parsing. For example, 'subject' is defined as follows:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mech anism", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "When a non-terminal is timed out and the parser jumps over a non-zero-length fragment of input, it is assumed that the skipped part was some sub-constituent of the reduced non-terminal (e.g., subject). Accordingly, a placeholder (PG) is left in the parse structure under the node dominated by this non-terminal. This placeholder will later be filled by some material recovered from the skipped-over fragment, which is put aside by store (PG) . If 'subject' of 'ntovo' is timed out, the parser will first jump to \"to (New York)\", and only after failing to find a verb (\"New\") will redo 'skip' in order to take a longer leap to the next 'to'. This example shows that a great deal of indeterminacy still remains even in tagged text, and that the final selection of skip points and starter tags may require some training.",
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 442, |
|
"text": "(PG)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "subject(SR,PG) :- timed_out, !, skip(SR), store(PG).\nsubject(SR,P) :- noun_phrase(SR,P).",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A related problem is the selection of non-terminals that can be allowed to time out. This is normally restricted to various less-than-essential adjunct-type constituents, including adverbials, some prepositional phrases, relative clauses, etc. Major sentential constituents such as subject or verbphrase should not be timed out (though their sub-constituents can), or we risk generating very uninteresting parses. Note that a timed-out phrase is not lost, but its links with the main parse structure (e.g., traces in relative clauses) may be severed, though not necessarily beyond repair. Another important restriction is to avoid the introduction of spurious dummy phrases, for example, in order to force an object on a transitive verb. The time-out points must be placed in such a way that, while the above principles are observed, the parser is guaranteed a swift exit when in the skip-and-fit mode. In other words, we do not want the parser to get trapped in inadvertently created dead ends, hopelessly trying to fit the parse.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "subject(SR,PG) :- timed_out, !, skip(SR), store(PG).\nsubject(SR,P) :- noun_phrase(SR,P).",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As an additional safety valve, a second time-out signal can be issued to catch any processes still operating beyond a reasonable time after the first time-out. In this case, a relaxed skipping protocol is adopted, with skips only to major constituents, or outright to the end of the input.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "subject(SR,PG) :- timed_out, !, skip(SR), store(PG).\nsubject(SR,P) :- noun_phrase(SR,P).",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dummy constituents may be introduced if neces sary to close a parse fast. This, however, happens rarely if the parser is designed carefully. While parsing 4 million sentences (85 million words) of Wall Street Journal articles, the second time-out was invoked less than 10 times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TTP Time-out Mechanism",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Parseval is a quantitative method of parser evaluation which compares constituent bracketings in the parser's output with a set of 'standard' bracketings. A parse is understood as a system of labeled brackets imposed upon the input sentence, with no changes in either word order or form permitted. Using three separate scores of 'crossings', recall, and precision assigned to each parse (and explained in more detail below), the measure determines the parser's accuracy, indicating how close it is to the standard. For the purpose of this evaluation the Penn Treebank bracketings have been adopted as the standard.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parser Evaluation With Parseval", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the rest of this section we demonstrate how Parseval typically processes a sentence. The example used here is sentence 337 from the Brown Corpus, one of the set of 50 sentences used in the initial evaluation. In this example, the sentence has been processed with the TTP time-out set at 700 msecs. The second set of tests involved 100 sentences from the Wall Street Journal. Since WSJ sentences were usually longer and more complex than the 50 Brown sentences, we used time-outs of 250",
|
"cite_spans": [],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parser Evaluation With Parseval", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "points to the limitations of the underlying grammar used in these tests: initial correct hypotheses (enforced by preferences within the parser) are replaced by less likely ones when the parser is forced to backtrack.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "One should note that the per-sentence crossing ratio and recall score indicate how good the parser is at discovering the correct bracketings (precision is less useful as it already includes crossing errors). Clearly, both the crossing ratio and precision improve as we let the parser take more time to complete its job. On the other hand, recall, after an initial increase, declines somewhat for larger values of the time-out. This, we believe,",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "At present TTP is a part of a natural language information retrieval system. Along with a stochastic part-of-speech tagger, morpho-lexical stemmer and phrase extractor, it constitutes the linguistic pre-processor built on top of the statistical information retrieval system PRISE, developed at NIST. During the database creation process TTP is used to parse documents so that appropriate indexing phrases can be identified with a high degree of accuracy. Both phrases and single-word terms are selected, along with their immediate syntactic context, which is used to generate semantic word associations and create a domain-specific level-1 thesaurus. For the TREC-1 conference, concluded last November, a total of 500 MBytes of Wall Street Journal articles has been parsed. This is approximately 4 million sentences, and it took about 2 workstation-weeks to process. While the quality of parsing was less than perfect, it was nonetheless quite good. In various experiments with the final database we noted an increase of retrieval precision over the purely statistical base system that ranged from 6% (notable) to more than 13% (significant). Therefore, at least in applications such as document retrieval or information extraction, TTP-level parsing appears entirely sufficient, while its high speed and robustness make the otherwise impractical task of linguistic text processing quite manageable and, moreover, on par with the other statistical parts of the system. Further development of TTP will continue, especially expanding its base grammar to bring the coverage closer to Sager's LSP or Grishman's Proteus. We are also investigating ways of automated generation of skipping parsers like TTP out of any full-grammar parser, a process we call 'ttpization'. TTP has been made available for research purposes to several sites outside NYU, where it is used in a variety of applications ranging from information extraction from text to optical readers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The first two steps that the evaluator takes are to delete certain kinds of lesser constituents. This is done because of the great variety of treatment of these items across different parsers, and their relatively minor role in deciding the correctness of a parse. The first phase deletes the following types of token strings from the parse: 1. Auxiliaries - \"might have been understood\" \u2192 \"understood\" 2. \"Not\" - \"should not have come\" \u2192 \"come\" 3. Pre-infinitival \"to\" - \"not to answer\" \u2192 \"answer\"",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5. Possessive endings - \"Lori's mother\" \u2192 \"Lori mother\" 6. Word-external punctuation (quotes, commas, periods, dashes, etc.) The revised parse of sentence 337 is shown below. fiscal 1990)",
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 127, |
|
"text": "Word-ex. ternal punctuation ( quotes, com mas, periods, dashes, etc.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 192, |
|
"text": "fiscal 1990)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Null categories - (NP ()) \u2192 (NP)",
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The Health Insurance Association of America, an insurers' trade group, acknowledges that stiff competition among its members to insure businesses likely to be good risks during the first year of coverage has aggravated the problem in the small-business market. ( n market)))))))))))))) . ) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(t_pos (poss ((name philip morris)) 's)) ( n lead))))))))))))", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "risks)))))))))))))))))", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "In the first set, 50 sentences from the Brown Corpus were used. In a series of runs with the time-out value ranging from 100 to 2500 msecs, TTP scores varied from an average crossing rate of 1.38 (at the 100 msec time-out) to 0.82 (at 1700 msec); recall from a low of 59.37 (at 1800 msec!) to 62.02 (at 500 msec); and precision from 70.52 (at 100 msec) to 77.06 (at 1400 msec). The mean scores were: crossing rate 0.92, recall 60.57%, and precision 75.69%. These scores reflect both the quality of the parser as well as the differences between the grammar systems used in TTP and in preparing the standard parses. For example, average recall scores for hand-parsed Brown sentences .",
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "(vbn or vbd?) by concurrent processes (nns or vbz?) with operations defined (vbn or vbd?) on it.\"",
|
"num": null, |
|
"content": "<table><tr><td>explicitly associates (nns or vbz?) a data structure (vb or nn?) shared</td></tr></table>",
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>T/O</td><td>C TOT</td><td>C PER</td><td>R</td><td>P</td><td>TIME</td></tr><tr><td>250</td><td>71</td><td>2.91</td><td>55.08</td><td>61.50</td><td>305</td></tr><tr><td>500</td><td>70</td><td>2.60</td><td>55.22</td><td>63.16</td><td>438</td></tr><tr><td>600</td><td>72</td><td>2.68</td><td>54.20</td><td>62.51</td><td>477</td></tr><tr><td>750</td><td>69</td><td>2.57</td><td>55.22</td><td>63.85</td><td>540</td></tr><tr><td>1500</td><td>68</td><td>2.60</td><td>54.57</td><td>64.01</td><td>797</td></tr><tr><td>30000</td><td>59</td><td>2.17</td><td>51.79</td><td>66.26</td><td>2930</td></tr></table>",
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |