{
"paper_id": "J99-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:47:05.042038Z"
},
"title": "Supertagging: An Approach to Almost Parsing",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"postCode": "19104",
"settlement": "Philadelphia",
"region": "PA"
}
},
"email": "joshi@linc.cis.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. Our thesis is that the computation of linguistic structure can be localized iflexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. The supertags are designed such that only those elements on which the lexical item imposes constraints appear within a given supertag. Further, each lexical item is associated with as many supertags as the number of different syntactic contexts in which the lexical item can appear. This makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex, thus increasing the local ambiguity for a parser. But this local ambiguity can be resolved by using statistical distributions of supertag co-occurrences collected from a corpus of parses. We have explored these ideas in the context of the Lexicalized Tree-Adjoining Grammar (LTAG) framework. The supertags in LTAG combine both phrase structure information and dependency information in a single representation. Supertag disambiguation results in a representation that is effectively a parse (an almost parse), and the parser need \"only\" combine the individual supertags. This method of parsing can also be used to parse sentence fragments such as in spoken utterances where the disambiguated supertag sequence may not combine into a single structure.",
"pdf_parse": {
"paper_id": "J99-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. Our thesis is that the computation of linguistic structure can be localized iflexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. The supertags are designed such that only those elements on which the lexical item imposes constraints appear within a given supertag. Further, each lexical item is associated with as many supertags as the number of different syntactic contexts in which the lexical item can appear. This makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex, thus increasing the local ambiguity for a parser. But this local ambiguity can be resolved by using statistical distributions of supertag co-occurrences collected from a corpus of parses. We have explored these ideas in the context of the Lexicalized Tree-Adjoining Grammar (LTAG) framework. The supertags in LTAG combine both phrase structure information and dependency information in a single representation. Supertag disambiguation results in a representation that is effectively a parse (an almost parse), and the parser need \"only\" combine the individual supertags. This method of parsing can also be used to parse sentence fragments such as in spoken utterances where the disambiguated supertag sequence may not combine into a single structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we present a robust parsing approach called supertagging that integrates the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. The idea underlying the approach is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. This makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex, thus increasing the local ambiguity for a parser. However, this local ambiguity can be resolved by using statistical distributions of supertag co-occurrences collected from a corpus of parses. Supertag disambiguation results in a representation that is effectively a parse (an almost parse).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the linguistic context, there can be many ways of increasing the complexity of descriptions of lexical items. The idea is to associate lexical items with descriptions that allow for all and only those elements on which the lexical item imposes constraints to be within the same description. Further, it is necessary to associate each lexical item with as many descriptions as the number of different syntactic contexts in which the lexical item can appear. This, of course, increases the local ambiguity for the parser. The parser has to decide which complex description out of the set of descriptions associated with each lexical item is to be used for a given reading of a sentence, even before combining the descriptions together. The obvious solution is to put the burden of this job entirely on the parser. The parser will eventually disambiguate all the descriptions and pick one per lexical item, for a given reading of the sentence. However, there is an alternate method of parsing that reduces the amount of disambiguation done by the parser. The idea is to locally check the constraints that are associated with the descriptions of lexical items to filter out incompatible descriptions. 1 During this disambiguation, the system can also exploit statistical information that can be associated with the descriptions based on their distribution in a corpus of parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We first employed these ideas in the context of Lexicalized Tree Adjoining grammars (LTAG) in Joshi and Srinivas (1994) . Although presented with respect to LTAG, these techniques are applicable to other lexicalized grammars as well. In this paper, we present vastly improved supertag disambiguation results--from previously published 68% accuracy to 92% accuracy using a larger training corpus and better smoothing techniques. The layout of the paper is as follows: In Section 2, we present an overview of the robust parsing approaches. A brief introduction to Lexicalized Tree Adjoining grammars is presented in Section 3. Section 4 illustrates the goal of supertag disambiguation through an example. Various methods and their performance results for supertag disambiguation are discussed in detail in Section 5 and Section 6. In Section 7, we discuss the efficiency gained in performing supertag disambiguation before parsing. A robust and lightweight dependency analyzer that uses the supertag output is briefly presented in Section 8. In Section 9, we will discuss the applicability of supertag disambiguation to other lexicalized grammars.",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "Joshi and Srinivas (1994)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In recent years, there have been a number of attempts at robust parsing of natural language. They can be broadly categorized under two paradigms--finite-state-grammarbased parsers and statistical parsers. We briefly present these two paradigms and situate our approach to robust parsing relative to these paradigms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Approaches",
"sec_num": "2."
},
{
"text": "Finite-state-grammar-based approaches to parsing are exemplified by the parsing systems in Joshi, (1960) , Abney (1990) , Appelt et al. (1993) , Roche (1993) , Grishman (1995) , Hobbs et al. (1997) , Joshi and Hopely (1997) , and Karttunen et al. (1997) . These systems use grammars that are represented as cascaded finite-state regular expression recognizers. The regular expressions are usually hand-crafted. Each recognizer in the cascade provides a locally optimal output. The output of these systems is mostly in the form of noun groups and verb groups rather than constituent structure, often called a shallow parse. There are no clause-level attachments or modifier attachments in the shallow parse. These parsers always produce one output, since they use the longestmatch heuristic to resolve cases of ambiguity when more than one regular expression 1 The use of descriptions for primitives to capture constraints locally has a precursor in AI. The Waltz algorithm (Waltz 1975) for labeling vertices of polygonal solid objects can be thought of in these terms. Waltz made the description of vertices more complex by including information about the incident edges, associated surfaces and other information. This increases the local ambiguity but the local constraints on the complex descriptions are strong enough to efficiently disambiguate the descriptions. Of course, Waltz did not use statistical information for disambiguation. See also Joshi (1998) .",
"cite_spans": [
{
"start": 91,
"end": 104,
"text": "Joshi, (1960)",
"ref_id": "BIBREF26"
},
{
"start": 107,
"end": 119,
"text": "Abney (1990)",
"ref_id": null
},
{
"start": 122,
"end": 142,
"text": "Appelt et al. (1993)",
"ref_id": "BIBREF2"
},
{
"start": 145,
"end": 157,
"text": "Roche (1993)",
"ref_id": "BIBREF49"
},
{
"start": 160,
"end": 175,
"text": "Grishman (1995)",
"ref_id": "BIBREF21"
},
{
"start": 178,
"end": 197,
"text": "Hobbs et al. (1997)",
"ref_id": "BIBREF23"
},
{
"start": 200,
"end": 223,
"text": "Joshi and Hopely (1997)",
"ref_id": "BIBREF31"
},
{
"start": 230,
"end": 253,
"text": "Karttunen et al. (1997)",
"ref_id": "BIBREF38"
},
{
"start": 973,
"end": 985,
"text": "(Waltz 1975)",
"ref_id": "BIBREF65"
},
{
"start": 1450,
"end": 1462,
"text": "Joshi (1998)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-State-Grammar-based Parsers",
"sec_num": "2.1"
},
{
"text": "matches the input string at a given position. At present none of these systems use any statistical information to resolve ambiguity. The grammar itself can be partitioned into domain-independent and domain-specific regular expressions, which implies that porting to a new domain would involve rewriting the domain-dependent expressions. This approach has proved to be quite successful as a preprocessor in information extraction systems (Hobbs et al. 1995; Grishman 1995) .",
"cite_spans": [
{
"start": 437,
"end": 456,
"text": "(Hobbs et al. 1995;",
"ref_id": "BIBREF24"
},
{
"start": 457,
"end": 471,
"text": "Grishman 1995)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-State-Grammar-based Parsers",
"sec_num": "2.1"
},
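{
"text": "To make the cascade idea concrete, here is a minimal illustrative sketch (in Python) of a two-stage finite-state chunker over part-of-speech tags. The two regular expressions are hypothetical stand-ins for the large hand-crafted grammars of the cited systems, and the greedy matching approximates their longest-match heuristic.\nimport re\n\n# Each stage rewrites a maximal (greedy) match over POS tags into a chunk label.\nCASCADE = [\n    ('NG', re.compile(r'(DT |CD )?(JJ )*(NNS? )+')),  # noun group\n    ('VG', re.compile(r'(MD )?(VB[DGNPZ]? )+')),      # verb group\n]\n\ndef shallow_parse(pos_tags):\n    s = ' '.join(pos_tags) + ' '\n    for label, pattern in CASCADE:\n        s = pattern.sub(label + ' ', s)  # greedy matching stands in for longest-match\n    return s.split()\n\n# POS tags for: the purchase price includes two ancillary companies\nprint(shallow_parse(['DT', 'NN', 'NN', 'VBZ', 'CD', 'JJ', 'NNS']))\n# -> ['NG', 'VG', 'NG']: noun and verb groups only, with no clause-level or\n# modifier attachments, in the sense of the shallow parse described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-State-Grammar-based Parsers",
"sec_num": null
},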
{
"text": "Pioneered by the IBM natural language group (Fujisaki et al. 1989) and later pursued by, for example, Schabes, Roth, and Osborne (1993) , Jelinek et al. (1994 ), Magerman (1995 , Collins (1996) , and Charniak (1997) , this approach decouples the issue of wellformedness of an input string from the problem of assigning a structure to it. These systems attempt to assign some structure to every input string. The rules to assign a structure to an input are extracted automatically from hand-annotated parses of large corpora, which are then subjected to smoothing to obtain reasonable coverage of the language. The resultant set of rules are not linguistically transparent and are not easily modifiable. Lexical and structural ambiguity is resolved using probability information that is encoded in the rules. This allows the system to assign the most-likely structure to each input. The output of these systems consists of constituent analysis, the degree of detail of which is dependent on the detail of annotation present in the treebank that is used to train the system. There are also parsers that use probabilistic (weighting) information in conjunction with hand-crafted grammars, for example, Black et al. (1993) , Nagao (1994) , Alshawi and Carter (1994) , and Srinivas, Doran, and Kulick (1995) . In these cases the probabilistic information is primarily used to rank the parses produced by the parser and not so much for the purpose of robustness of the system.",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "(Fujisaki et al. 1989)",
"ref_id": "BIBREF17"
},
{
"start": 102,
"end": 135,
"text": "Schabes, Roth, and Osborne (1993)",
"ref_id": "BIBREF53"
},
{
"start": 138,
"end": 158,
"text": "Jelinek et al. (1994",
"ref_id": "BIBREF25"
},
{
"start": 159,
"end": 176,
"text": "), Magerman (1995",
"ref_id": null
},
{
"start": 179,
"end": 193,
"text": "Collins (1996)",
"ref_id": "BIBREF13"
},
{
"start": 200,
"end": 215,
"text": "Charniak (1997)",
"ref_id": "BIBREF11"
},
{
"start": 1199,
"end": 1218,
"text": "Black et al. (1993)",
"ref_id": "BIBREF3"
},
{
"start": 1221,
"end": 1233,
"text": "Nagao (1994)",
"ref_id": "BIBREF45"
},
{
"start": 1236,
"end": 1261,
"text": "Alshawi and Carter (1994)",
"ref_id": "BIBREF1"
},
{
"start": 1264,
"end": 1302,
"text": "and Srinivas, Doran, and Kulick (1995)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Parsers",
"sec_num": "2.2"
},
{
"text": "Lexicalized grammars are particularly well-suited for the specification of natural language grammars. The lexicon plays a central role in linguistic formalisms such as LFG (Kaplan and Bresnan 1983) , GPSG (Gazdar et al. 1985) , HPSG (Pollard and Sag 1987) , CCG (Steedman 1987) , Lexicon Grammar (Gross 1984) , LTAG (Schabes and Joshi 1991) , Link Grammar (Sleator and Temperley 1991) , and some version of GB (Chomsky 1992). Parsing, lexical semantics, and machine translation, to name a few areas, have all benefited from lexicalization. Lexicalization provides a clean interface for combining the syntactic and semantic information in the lexicon. We discuss the merits of lexicalization and other related issues in the context of partial parsing and briefly discuss Feature-based Lexicalized Tree Adjoining Grammars (LTAGs) as a representative of the class of lexicalized grammars.",
"cite_spans": [
{
"start": 172,
"end": 197,
"text": "(Kaplan and Bresnan 1983)",
"ref_id": "BIBREF36"
},
{
"start": 205,
"end": 225,
"text": "(Gazdar et al. 1985)",
"ref_id": "BIBREF19"
},
{
"start": 233,
"end": 255,
"text": "(Pollard and Sag 1987)",
"ref_id": null
},
{
"start": 262,
"end": 277,
"text": "(Steedman 1987)",
"ref_id": "BIBREF59"
},
{
"start": 296,
"end": 308,
"text": "(Gross 1984)",
"ref_id": "BIBREF22"
},
{
"start": 316,
"end": 340,
"text": "(Schabes and Joshi 1991)",
"ref_id": "BIBREF52"
},
{
"start": 356,
"end": 384,
"text": "(Sleator and Temperley 1991)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Grammars",
"sec_num": "3."
},
{
"text": "Feature-based Lexicalized Tree Adjoining Grammar (FB-LTAG) (Joshi, Levy, and Takahashi 1975; Vijay-Shanker 1987; Schabes, AbeillG and Joshi 1988; Vijay-Shanker and Joshi 1991; Joshi and Schabes 1996) is a tree-rewriting grammar formalism unlike context-free grammars and head grammars, which are string-rewriting formalisms. The primitive elements of FB-LTAGs are called elementary trees. Each elementary tree is associated with at least one lexical item on its frontier. The lexical item associated with an elementary tree is called the anchor of that tree. An elementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic (predicate argument) constraints. Elementary trees are of two kinds: (a) initial trees and (b) auxiliary trees. In an FB-LTAG grammar for natural language, initial trees are phrase structure trees of simple sentences containing no recursion, while recursive structures are represented by auxiliary trees. Elementary trees are combined by substitution and adjunction operations. The result of combining the elementary trees is the derived tree and the process of combining the elementary trees to yield a parse of the sentence is represented by the derivation tree. The derivation tree can also be interpreted as a dependency tree with unlabeled arcs between words of the sentence. A more detailed discussion of LTAGs with an example and some of the key properties of elementary trees is presented in Appendix A.",
"cite_spans": [
{
"start": 59,
"end": 92,
"text": "(Joshi, Levy, and Takahashi 1975;",
"ref_id": "BIBREF32"
},
{
"start": 93,
"end": 112,
"text": "Vijay-Shanker 1987;",
"ref_id": "BIBREF62"
},
{
"start": 113,
"end": 145,
"text": "Schabes, AbeillG and Joshi 1988;",
"ref_id": "BIBREF51"
},
{
"start": 146,
"end": 175,
"text": "Vijay-Shanker and Joshi 1991;",
"ref_id": "BIBREF63"
},
{
"start": 176,
"end": 199,
"text": "Joshi and Schabes 1996)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Grammars",
"sec_num": "3."
},
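{
"text": "The following Python sketch illustrates the two combination operations on elementary trees. The node representation and the example trees are our own simplifications, not the XTAG encoding, and feature unification is omitted.\nclass Node:\n    def __init__(self, label, children=None, subst=False, foot=False, anchor=None):\n        self.label, self.anchor = label, anchor\n        self.children = children or []\n        self.subst, self.foot = subst, foot  # substitution site / foot node\n\ndef walk(node):\n    yield node\n    for child in node.children:\n        yield from walk(child)\n\ndef substitute(site, initial_tree):\n    # replace a substitution node with an initial tree of the same label\n    assert site.subst and site.label == initial_tree.label\n    site.children, site.anchor = initial_tree.children, initial_tree.anchor\n    site.subst = False\n\ndef adjoin(node, aux_tree):\n    # splice an auxiliary tree in at a node with a matching label; the\n    # material below the node moves under the auxiliary tree's foot node\n    assert node.label == aux_tree.label\n    foot = next(n for n in walk(aux_tree) if n.foot)\n    foot.children, foot.anchor, foot.foot = node.children, node.anchor, False\n    node.children, node.anchor = aux_tree.children, aux_tree.anchor\n\n# initial tree for a transitive verb: (S (NP) (VP (V includes) (NP)))\nnp0, np1 = Node('NP', subst=True), Node('NP', subst=True)\nalpha = Node('S', [np0, Node('VP', [Node('V', anchor='includes'), np1])])\nsubstitute(np0, Node('NP', [Node('N', anchor='price')]))\nsubstitute(np1, Node('NP', [Node('N', anchor='companies')]))\n# recording which elementary tree combined into which, and at what node,\n# yields the derivation tree, readable as an unlabeled dependency tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Grammars",
"sec_num": null
},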
{
"text": "Part-of-speech disambiguation techniques (POS taggers) (Church 1988; Weischedel et al. 1993; Brill 1993) are often used prior to parsing to eliminate (or substantially reduce) the part-of-speech ambiguity. The POS taggers are all local in the sense that they use information from a limited context in deciding which tag(s) to choose for each word. As is well known, these taggers are quite successful.",
"cite_spans": [
{
"start": 55,
"end": 68,
"text": "(Church 1988;",
"ref_id": "BIBREF12"
},
{
"start": 69,
"end": 92,
"text": "Weischedel et al. 1993;",
"ref_id": "BIBREF67"
},
{
"start": 93,
"end": 104,
"text": "Brill 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supertags",
"sec_num": "4."
},
{
"text": "In a lexicalized grammar such as the Lexicalized Tree Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (tree). The elementary structures of LTAG localize dependencies, including long-distance dependencies, by requiring that all and only the dependent elements be present within the same structure. As a result of this localization, a lexical item may be (and, in general almost always is) associated with more than one elementary structure. We will call these elementary structures supertags, in order to distinguish them from the standard partof-speech tags. Note that even when a word has a unique standard part of speech, say a verb (V), there will usually be more than one supertag associated with this word. Since there is only one supertag for each word (assuming there is no global ambiguity) when the parse is complete, an LTAG parser (Schabes, Abeill6, and Joshi 1988 ) needs to search a large space of supertags to select the right one for each word before combining them for the parse of a sentence. It is this problem of supertag disambiguation that we address in this paper.",
"cite_spans": [
{
"start": 891,
"end": 924,
"text": "(Schabes, Abeill6, and Joshi 1988",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supertags",
"sec_num": "4."
},
{
"text": "Since LTAGs are lexicalized, we are presented with a novel opportunity to eliminate or substantially reduce the supertag assignment ambiguity by using local information, such as local lexical dependencies, prior to parsing. As in standard part-of-speech disambiguation, we can use local statistical information in the form of n-gram models based on the distribution of supertags in an LTAG parsed corpus. Moreover, since the supertags encode dependency information, we can also use information about the distribution of distances between a given supertag and its dependent supertags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supertags",
"sec_num": "4."
},
{
"text": "Note that as in standard part-of-speech disambiguation, supertag disambiguation could have been done by a parser. However, carrying out part-of-speech disambiguation prior to parsing makes the job of the parser much easier and therefore speeds it up. Supertag disambiguation reduces the work of the parser even further. After supertag disambiguation, we would have effectively completed the parse and the parser need \"only\" combine the individual structures; hence the term \"almost parsing.\" This method can also be used to associate a structure to sentence fragments and in cases where the supertag sequence after disambiguation may not combine into a single structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supertags",
"sec_num": "4."
},
{
"text": "LTAGs, by virtue of possessing the Extended Domain of Locality (EDL) property, 2 associate with each lexical item, one elementary tree for each syntactic environment that 2 EDL is described in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of Supertagging",
"sec_num": "4.1"
},
{
"text": "Examples of syntactic environments where the supertags shown in Figure 1 would be used.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Table 1",
"sec_num": null
},
{
"text": "Construction Figure 1 would be used. The example in Figure 2 illustrates the initial set of supertags assigned to each word of the sentence: the purchase price includes two ancillary companies. The order of the supertags for each lexical item in the example is not relevant. Figure 2 also shows the final supertag sequence assigned by the supertagger, which picks the best supertag sequence using statistical information (described in Section 6) about individual supertags and their dependencies on other supertags. The chosen supertags are combined to derive a parse. Without the supertagger, the parser would have to process combinations of the entire set of trees (at least the 17 trees shown); with it the parser need only process combinations of 7 trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 52,
"end": 60,
"text": "Figure 2",
"ref_id": null
},
{
"start": 275,
"end": 283,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supertag",
"sec_num": null
},
{
"text": "The structure of the supertag can be best seen as providing admissibility constraints on syntactic environments in which it may be used. Some of these constraints can be checked locally. The following are a few constraints that can be used to determine the admissibility of a syntactic environment for a supertag: 4 Supertag disambiguation for the sentence: the purchase price includes two ancillary companies. \u2022 Left (Right) span constraint: If the span of the supertag to the left (right) of the anchor is larger than the length of the string to the left (right) of the word that anchors the supertag, then the supertag cannot be used in any parse of the input string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Supertag Ambiguity Using Structural Information",
"sec_num": "5."
},
{
"text": "\u2022 Lexical items in the supertag: A supertag can be eliminated if the terminals appearing on the frontier of the supertag do not appear in the input string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Supertag Ambiguity Using Structural Information",
"sec_num": "5."
},
{
"text": "Supertags with the built-in lexical item by, that represent passive constructions are typically eliminated from being considered during the parse of an active sentence. More generally, these constraints can be used to eliminate supertags that cannot have their features satisfied in the context of the input string. An example of this is the elimination of supertag that requires a wh+ NP when the input string does not contain wh-words. Table 2 indicates the decrease in supertag ambiguity for 2,012 WSJ sentences (48,763 words) by using the structural constraints relative to the supertag ambiguity without the structural constraintsP These filters prove to be very effective in reducing supertag ambiguity. The graph in Figure 3 plots the number of supertags at the sentence level for sentences of length 2 to 50 words with and without the filters. As can be seen from the graph, the supertag ambiguity is significantly lower when the filters are used. The graph in Figure 4 shows the percentage drop in supertag ambiguity due to filtering for sentences of length 2 to 50 words. As can be seen, the average reduction in supertag ambiguity is about 50%. This means that given a sentence, close to 50% of the supertags can be eliminated even before parsing begins by just using structural constraints of the supertags. This reduction in supertag ambiguity speeds up the parser significantly. In fact, the supertag 5 WSJ Section 20 of the Penn Treebank. ",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 445,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 723,
"end": 731,
"text": "Figure 3",
"ref_id": null
},
{
"start": 969,
"end": 977,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reducing Supertag Ambiguity Using Structural Information",
"sec_num": "5."
},
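{
"text": "A minimal sketch of these two filters, assuming a simplified supertag record whose field names are illustrative rather than the XTAG representation: a supertag is dropped if its tree demands more words to the left or right of its anchor than the sentence provides, or if a built-in lexical item (such as the by of a passive tree) is absent from the input.\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass Supertag:\n    name: str\n    left_span: int    # words the tree requires to the left of its anchor\n    right_span: int   # words the tree requires to the right of its anchor\n    coanchors: tuple  # built-in lexical items, e.g. ('by',) for passives\n\ndef admissible(tag, position, sentence):\n    words_left = position\n    words_right = len(sentence) - position - 1\n    if tag.left_span > words_left or tag.right_span > words_right:\n        return False  # left or right span constraint fails\n    return all(w in sentence for w in tag.coanchors)  # lexical-item check\n\ndef filter_lattice(lattice, sentence):\n    # lattice[i] is the set of candidate supertags for sentence[i]\n    return [{t for t in tags if admissible(t, i, sentence)}\n            for i, tags in enumerate(lattice)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Supertag Ambiguity Using Structural Information",
"sec_num": null
},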
{
"text": "Comparison of number of supertags with and without filtering for sentences of length 2 to 50 words. ambiguity in XTAG system is so large that the parser is prohibitively slow without the use of these filters. Table 3 tabulates the reduction of supertag ambiguity due to the filters against various parts of speech: Verbs in all their forms contribute most to the problem of supertag ambiguity and most of the supertag ambiguity for verbs is due to light verbs and verb particles. The filters are very effective in eliminating over 50% of the verb anchored supertags.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Even though structural constraints are effective in reducing supertag ambiguity, the search space for the parser is still sufficiently large. In the next few sections, we present stochastic and rule-based approaches to supertag disambiguation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Percentage drop in the number of supertags with and without filtering for sentences of length 2 to 50 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Before proceeding to discuss the various models for supertag disambiguation, we would like to trace the time course of development of this work. We do this not only to show the improvements made to the early work reported in our 1994 paper (Joshi and Srinivas 1994) , but also to explain the rationale for choosing certain models of supertag disambiguation over others. We summarize the early work in the following subsection.",
"cite_spans": [
{
"start": 240,
"end": 265,
"text": "(Joshi and Srinivas 1994)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models, Data, Experiments, and Results",
"sec_num": "6."
},
{
"text": "As reported in Joshi and Srinivas (1994) , we experimented with a trigram model as well as the dependency model for supertag disambiguation. The trigram model that was trained on (part-of-speech, supertag) pairs, instead of (words, supertag) pairs, collected from the LTAG derivations of 5,000 WSJ sentences and tested on 100 WSJ sentences produced a correct supertag for 68% of the words in the test set. We have since significantly improved the performance of the trigram model by using a larger Table 3 The effect of filters on supertag ambiguity tabulated against part of speech. training set and incorporating smoothing techniques. We present a detailed discussion of the model and its performance on a range of corpora in Section 6.5. In Section 6.2, we briefly mention the dependency model of supertagging that was reported in the earlier work.",
"cite_spans": [
{
"start": 15,
"end": 40,
"text": "Joshi and Srinivas (1994)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 498,
"end": 505,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Early Work",
"sec_num": "6.1"
},
{
"text": "In an n-gram model for disambiguating supertags, dependencies between supertags that appear beyond the n-word window cannot be incorporated. This limitation can be overcome if no a priori bound is set on the size of the window but instead a probability distribution of the distances of the dependent supertags for each supertag is maintained. We define dependency between supertags in the obvious way: A supertag is dependent on another supertag if the former substitutes or adjoins into the latter. Thus, the substitution and the foot nodes of a supertag can be seen as specifying dependency requirements of the supertag. The probability with which a supertag depends on another supertag is collected from a corpus of sentences annotated with derivation structures. Given a set of supertags for each word and the dependency information between pairs of supertags, the objective of the dependency model is to compute the most likely dependency linkage that spans the entire string. The result of producing the dependency linkage is a sequence of supertags, one for each word of the sentence along with the dependency information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model",
"sec_num": "6.2"
},
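{
"text": "A schematic sketch of the statistics behind this model (not the authors' implementation): dependency probabilities between supertags, conditioned on a signed distance rather than a fixed n-word window, are estimated from a corpus of derivation structures and then used to score candidate linkages.\nfrom collections import Counter\n\ndep_counts, head_counts = Counter(), Counter()\n\ndef train(derivations):\n    # each derivation is a list of (head_supertag, dep_supertag, distance)\n    # links read off the substitutions and adjunctions of a derivation tree\n    for links in derivations:\n        for head, dep, dist in links:\n            dep_counts[(head, dep, dist)] += 1\n            head_counts[head] += 1\n\ndef link_prob(head, dep, dist):\n    return dep_counts[(head, dep, dist)] / head_counts[head] if head_counts[head] else 0.0\n\ndef linkage_score(links):\n    # score of one candidate linkage; the model selects the highest-scoring\n    # linkage whose links span the entire input string\n    score = 1.0\n    for head, dep, dist in links:\n        score *= link_prob(head, dep, dist)\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model",
"sec_num": null
},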
{
"text": "Since first reported in Joshi and Srinivas (1994) , we have not continued experiments using this model of supertagging, primarily for two reasons. We are restrained by the lack of a large corpus of LTAG parsed derivation structures that is needed to reliably estimate the various parameters of this model. We are currently in the process of collecting a large LTAG parsed WSJ corpus, with each sentence annotated with the correct derivation. A second reason for the disuse of the dependency model for supertagging is that the objective of supertagging is to see how far local techniques can be used to disambiguate supertags even before parsing begins. The dependency model, in contrast, is too much like full parsing and is contrary to the spirit of supertagging.",
"cite_spans": [
{
"start": 24,
"end": 49,
"text": "Joshi and Srinivas (1994)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model",
"sec_num": "6.2"
},
{
"text": "We have improved the performance of the trigram model by incorporating smoothing techniques into the model and training the model on a larger training corpus. We have also proposed some new models for supertag disambiguation. In this section, we discuss these developments in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Models with Smoothing",
"sec_num": "6.3"
},
{
"text": "Two sets of data are used for training and testing the models for supertag disambiguation. The first set has been collected by parsing the Wall Street Journal 7, IBM Manual, and ATIS corpora using the wide-coverage English grammar being developed as part of the XTAG system . The correct derivation from all the derivations produced by the XTAG system was picked for each sentence from these corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Models with Smoothing",
"sec_num": "6.3"
},
{
"text": "The second and larger data set was collected by converting the Penn Treebank parses of the Wall Street Journal sentences. The objective was to associate each lexical item of a sentence with a supertag, given the phrase structure parse of the sentence. This process involved a number of heuristics based on local tree contexts. The heuristics made use of information about the labels of a word's dominating nodes (parent, grandparent, and great-grandparent), labels of its siblings (left and right) and siblings of its parent. An example of the result of this conversion is shown in Figure 5 . It must be noted that this conversion is not perfect and is correct only to a first order of approximation owing mostly to errors in conversion and lack of certain kinds of information such as distinction between adjunct and argument preposition phrases, in the Penn Treebank parses. Even though the converted supertag corpus can be refined further, the corpus in its present form has proved to be an invaluable resource in improving the performance of the supertag models as is discussed in the following sections. \"publishing\") (\"NN\" \"group ",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "N-gram Models with Smoothing",
"sec_num": "6.3"
},
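{
"text": "The following is a minimal sketch of the flavor of these conversion heuristics. The rules and the tree-context features below are illustrative stand-ins for the actual heuristics; the supertag names are taken from the example in Figure 5.\ndef supertag_for(pos, parent, grandparent, right_sibling_labels):\n    # illustrative rules keyed on a word's local tree context\n    if pos.startswith('NN'):\n        # noun modifying a following noun vs. head noun of its NP\n        if any(l.startswith('NN') for l in right_sibling_labels):\n            return 'B_Nn'\n        return 'A_NXN'\n    if pos == 'DT':\n        return 'B_Dnx'  # determiner tree adjoining to an NP\n    if pos == 'IN' and parent == 'PP':\n        # the Treebank does not mark argument vs. adjunct PPs, so the\n        # attachment site is approximated from the grandparent label\n        return 'B_nxPnx' if grandparent == 'NP' else 'B_vxPnx'\n    return 'A_NXN'  # fallback for this sketch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Models with Smoothing",
"sec_num": null
},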
{
"text": "Mr.//NNP//B_Nn Vinken//NNP//A_NXN is//VBZ//B_Vvx chairman//NN//A_nx0Nl of//IN//B nxPnx Elsevier//NNP//B_Nn N.V.//NNP//A_NXN ,//,//B_nxPUnxpu the//DT//B_Dnx Dutch//NNP//B_Nn publishing//VBG//B_Vn group//NN//A_NXN .//.//",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Models with Smoothing",
"sec_num": "6.3"
},
{
"text": "The phrase structure tree and the supertags obtained from the phrase structure tree for the WSJ sentence: Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "Using structural information to filter out supertags that cannot be used in any parse of the input string reduces the supertag ambiguity but obviously does not eliminate it completely. One method of disambiguating the supertags assigned to each word is to order the supertags by the lexical preference that the word has for them. The frequency with which a certain supertag is associated with a word is a direct measure of its lexical preference for that supertag. Associating frequencies with the supertags and using them to associate a particular supertag with a word is clearly the simplest means of disambiguating supertags. Therefore a unigram model is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram Model",
"sec_num": "6.4"
},
{
"text": "where Supertag(wi) --tk 9 argmaxtkPr(tk I wi).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram Model",
"sec_num": "6.4"
},
{
"text": "(1) frequency( tk, wi) Pr(tk l wi) = frequency(wi)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram Model",
"sec_num": "6.4"
},
{
"text": "Thus, the most frequent supertag that a word is associated with in a training corpus is selected as the supertag for the word according to the unigram model. For the words that do not appear in the training corpus we back off to the part of speech of the word and use the most frequent supertag associated with that part of speech as the supertag for the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram Model",
"sec_num": "6.4"
},
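{
"text": "A minimal sketch of the unigram supertagger of Equations 1 and 2, with its part-of-speech backoff; the training corpus is assumed to be a stream of (word, POS, supertag) triples.\nfrom collections import Counter, defaultdict\n\nword_tags = defaultdict(Counter)  # word -> supertag frequencies\npos_tags = defaultdict(Counter)   # POS  -> supertag frequencies\n\ndef train(corpus):\n    for word, pos, supertag in corpus:\n        word_tags[word][supertag] += 1\n        pos_tags[pos][supertag] += 1\n\ndef unigram_supertag(word, pos):\n    if word in word_tags:\n        # the supertag most frequently seen with this word\n        return word_tags[word].most_common(1)[0][0]\n    # unseen word: back off to the most frequent supertag for its POS\n    return pos_tags[pos].most_common(1)[0][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram Model",
"sec_num": null
},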
{
"text": "Results from the unigram supertag model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4",
"sec_num": null
},
{
"text": "Training Set Test Set Top n Supertags % Success XTAG Parses 8,000 3,000 n = 1 73.4% n = 2 80.2% n = 3 80.8% Converted Penn Treebank Parses 1,000,000 47,000 n = I 77.2% n = 2 87.0% n = 3 91.5%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": null
},
{
"text": "6.4.1 Experiments and Results. We tested the performance of the unigram model on the previously discussed two sets of data. The words are first assigned standard parts of speech using a conventional tagger (Church 1988 ) and then are assigned supertags according to the unigram model. A word in a sentence is considered correctly supertagged if it is assigned the same supertag as it is associated with in the correct parse of the sentence. The results of these experiments are tabulated in Table 4 .",
"cite_spans": [
{
"start": 206,
"end": 218,
"text": "(Church 1988",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 491,
"end": 498,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": null
},
{
"text": "Although the performance of the unigram model for supertagging is significantly lower than the performance of the unigram model for part-of-speech tagging (91% accuracy), it performed much better than expected considering the size of the supertag set is much larger than the size of part-of-speech tag set. One of the reasons for this high performance is that the most frequent supertag for the most frequent words-determiners, nouns, and auxiliary verbs--is the correct supertag most of the time. Also, backing off to the part of speech helps in supertagging unknown words, which most often are nouns. The bulk of the errors committed by the unigram model is incorrectly tagged verbs (subcategorization and transformation), prepositions (noun attached vs. verb attached) and nouns (head vs. modifier noun).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": null
},
{
"text": "We first explored the use of trigram model of supertag disambiguation in Joshi and Srinivas (1994) . The trigram model was trained on (part-of-speech, supertag) pairs collected from the LTAG derivations of 5,000 WSJ sentences and tested on 100 WSJ sentences. It produced a correct supertag for 68% of the words in the test set. A major drawback of this early work was that it used no lexical information in the supertagging process as the training material consisted of (part-of-speech, supertag) pairs. Since that early work, we have improved the performance of the model by incorporating lexical information and sophisticated smoothing techniques, as well as training on larger training sets. In this section, we present the details and the performance evaluation of this model. In a unigram model, a word is always associated with the supertag that is most preferred by the word, irrespective of the context in which the word appears. An alternate method that is sensitive to context is the n-gram model. The n-gram model takes into account the contextual dependency probabilities between supertags within a window of n words in associating supertags to words. Thus, the most probable supertag sequence for an n-word sentence is given by:",
"cite_spans": [
{
"start": 73,
"end": 98,
"text": "Joshi and Srinivas (1994)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "= argmaxTPr(T1, T2 ..... TN) * Pr(W1, W2,..., WN I T1, T2 ..... TN) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "where Ti is the supertag for word Wi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "To compute this using only local information, we approximate, assuming that the probability of a word depends only on its supertag ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "Ti I Zi-2, Zi-1) (5) i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "The term Pr(Ti I Ti-2, Ti-1) is known as the contextual probability since it indicates the size of the context used in the model and the term Pr(Wi I Ti) is called the word emit probability since it is the probability of emitting the word Wi given the tag Ti.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "These probabilities are estimated using a corpus where each word is tagged with its correct supertag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
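{
"text": "A minimal Viterbi decoder for Equation 5, shown as a sketch: the contextual and word emit probability tables are assumed to be given (smoothed as described below), states are pairs of supertags, and a small probability floor stands in for the back-off estimates.\nimport math\n\ndef viterbi(words, candidates, contextual, word_emit):\n    # candidates[i] lists the supertags for words[i]; contextual maps\n    # (t2, t1, t) triples and word_emit maps (word, tag) pairs to probabilities\n    best = {('<s>', '<s>'): 0.0}  # log-prob of the best path ending in each state\n    back = []\n    for i, w in enumerate(words):\n        new, bp = {}, {}\n        for (t2, t1), score in best.items():\n            for t in candidates[i]:\n                s = (score + math.log(contextual.get((t2, t1, t), 1e-12))\n                     + math.log(word_emit.get((w, t), 1e-12)))\n                if s > new.get((t1, t), -math.inf):\n                    new[(t1, t)] = s\n                    bp[(t1, t)] = (t2, t1)\n        best = new\n        back.append(bp)\n    state = max(best, key=best.get)  # best final state (t_{n-1}, t_n)\n    tags = [state[1]]\n    for bp in reversed(back):\n        state = bp[state]\n        tags.append(state[1])\n    return list(reversed(tags[:-1]))  # drop the initial <s> marker",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": null
},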
{
"text": "The contextual probabilities were estimated using the relative frequency estimates of the contexts in the training corpus. To estimate the probabilities for contexts that do not appear in the training corpus, we used the Good-Turing discounting technique (Good 1953) combined with Katz's back off model (Katz 1987) . The idea here is to discount the frequencies of events that occur in the corpus by an amount related to their frequencies and utilize this discounted probability mass in the back off model to distribute to unseen events. Thus, the Good-Turing discounting technique estimates the frequency of unseen events based on the distribution of the frequency of the counts of observed events in the corpus. If r is the observed frequency of an event, and Nr is the number of events with the observed frequency r, and N is the total number of events, then the probability of an unseen event is given by N1/N. Furthermore, the frequencies of the observed events are adjusted so that the total probability of all events sums to one. The adjusted frequency for observed events, r*, is computed as Once the frequencies of the observed events are discounted and the frequencies for unseen events are estimated, Katz's back off model is used. In this technique, if the observed frequency of an < n-gram, supertag> sequence is zero then its probability is computed based on the observed frequency of an (n -1)-gram sequence. Thus, The word emit probability for the (word, supertag) pairs that appear in the training corpus is computed using the relative frequency estimates as shown in Equation 7. For the (word, supertag) pairs that do not appear in the corpus, the word emit probability is estimated as shown in Equation 8. Some of the word features used in our imple-mentation include prefixes and suffixes of length less than or equal to three characters, capitalization, and digit features.",
"cite_spans": [
{
"start": 303,
"end": 314,
"text": "(Katz 1987)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
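{
"text": "A minimal sketch of the Good-Turing computation just described: the unseen-event mass is N1/N, and an observed count r is adjusted to r* = (r + 1) N_{r+1} / N_r. In practice the N_r counts are themselves smoothed for large r, which this sketch omits.\nfrom collections import Counter\n\ndef good_turing(counts):\n    # counts: dict mapping observed events to their frequencies r\n    N = sum(counts.values())\n    Nr = Counter(counts.values())  # Nr[r] = number of events observed r times\n    p_unseen = Nr[1] / N           # total probability mass for unseen events\n    adjusted = {}\n    for event, r in counts.items():\n        if Nr[r + 1] > 0:\n            adjusted[event] = (r + 1) * Nr[r + 1] / Nr[r]\n        else:\n            adjusted[event] = r    # no higher count observed; keep the raw count\n    return adjusted, p_unseen",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": null
},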
{
"text": "N(Wi, Ti) Pr(WdTi) - N(Ti) if N(Wi, Ti) > 0 (7) = Pr(UNKITi) \u2022 Pr(word_features(Wi)[Ti) otherwise (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "The counts for the (word, supertag) pairs for the words that do not appear in the corpus is estimated using the leaving-one-out technique (Niesler and Woodland 1996; Ney, Essen, and Kneser 1995) . A token UNK is associated with each supertag and its count NUN K is estimated by:",
"cite_spans": [
{
"start": 138,
"end": 165,
"text": "(Niesler and Woodland 1996;",
"ref_id": "BIBREF47"
},
{
"start": 166,
"end": 194,
"text": "Ney, Essen, and Kneser 1995)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "NI(Tj) Pr(UNK[Tj) -N(Tj) + ~] Pr(UNKITj) \u2022 N(Tj) NUNK(Tj) = 1 --PF(UNKITj)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "where NI(Tj) is the number of words that are associated with the supertag Tj that appear in the corpus exactly once. N(Tj) is the frequency of the supertag Tj and NUNK(Tj) is the estimated count of UNK in Tj. The constant 7/is introduced so as to ensure that the probability is not greater than one, especially for supertags that are sparsely represented in the corpus. We use word features similar to the ones used in Weischedel et al. (1993) , such as capitalization, hyphenation, and endings of words, for estimating the word emit probability of unknown words. 6.5.1 Experiments and Results. We tested the performance of the trigram model on various domains such as the Wall Street Journal (WSJ), the IBM Manual corpus and the ATIS corpus. For the IBM Manual corpus and the ATIS domains, a supertag annotated corpus was collected using the parses of the XTAG system and selecting the correct analysis for each sentence. The corpus was then randomly split into training and test material. Supertag performance is measured as the percentage of words that are correctly supertagged by a model when compared with the key for the words in the test corpus.",
"cite_spans": [
{
"start": 419,
"end": 443,
"text": "Weischedel et al. (1993)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
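{
"text": "Putting Equations 7 and 8 together with the leaving-one-out UNK estimate above, a sketch of the word emit probability; the word-feature model feat_prob is assumed to be given and to cover the capitalization, hyphenation, and word-ending features mentioned above.\ndef word_emit(word, tag, pair_counts, tag_counts, once_counts, eta, feat_prob):\n    # pair_counts[(word, tag)] and tag_counts[tag] are corpus frequencies;\n    # once_counts[tag] is N1(T), the number of words seen exactly once with T\n    if pair_counts.get((word, tag), 0) > 0:\n        return pair_counts[(word, tag)] / tag_counts[tag]  # Equation 7\n    p_unk = once_counts[tag] / (tag_counts[tag] + eta)     # Pr(UNK | T)\n    return p_unk * feat_prob(word, tag)                    # Equation 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": null
},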
{
"text": "Experiment 1: (Performance on the Wall Street Journal corpus). We used the two sets of data, from the XTAG parses and from the conversion of the Penn Treebank parses to evaluate the performance of the trigram model. Table 5 shows the performance on the two sets of data. The first data set, data collected from the XTAG parses, was split into 8,000 words of training and 3,000 words of test material. The data collected from converting the Penn Treebank was used in two experiments differing in the size of the training corpus--200,000 words 8 and 1,000,000 words9--and tested on 47,000 words 1\u00b0. A total of 300 different supertags were used in these experiments. Performance of the supertagger on the IBM Manual corpus and ATIS corpus. correctly supertagged was used as the training corpus and a set of 1,000 words was used as a test corpus. The performance of the supertagger on this corpus is shown in Table 6 . Performance on the ATIS corpus was evaluated using a set of 1,500 words correctly supertagged as the training corpus and a set of 400 words as a test corpus. The performance of the supertagger on the ATIS corpus is also shown in Table 6 . As expected, the performance on the ATIS corpus is higher than that of the WSJ and the IBM Manual corpus despite the extremely small training corpus. Also, the performance of the IBM Manual corpus is better than the WSJ corpus when the size of the training corpus is taken into account. The baseline for the ATIS domain is remarkably high due to the repetitive constructions and limited vocabulary in that domain. This is also true for the IBM Manual corpus, although to a lesser extent. The trigram model of supertagging is attractive for limited domains since it performs quite well with relatively insignificant amounts of training material. The performance of the supertagger can be improved in an iterative fashion by using the supertagger to supertag larger amounts of training material, which can be quickly hand-corrected and used to train a better-performing supertagger.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 905,
"end": 912,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 1144,
"end": 1151,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "N-gram Model",
"sec_num": "6.5"
},
{
"text": "Information. Lexical information contributes most to the performance of a POS tagger, since the baseline performance of assigning the most likely POS for each word produces 91% accuracy (Brill 1993) . Contextual information contributes relatively a small amount towards the performance, improving it from 91% to 96-97%, a 5.5% improvement. In contrast, contextual information has greater effect on the performance of the supertagger. As can be seen, from the above experiments, the baseline performance of the supertagger is about 77% and the performance improves to about 92% with the inclusion of contextual information, an improvement of 19.5%. The relatively low baseline performance for the supertagger is a direct consequence of the fact that there are many more supertags per word than there are POS tags. Further, since many combinations of supertags are not possible, contextual information has a larger effect on the performance of the supertagger.",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "(Brill 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Lexical versus Contextual",
"sec_num": "6.5.2"
},
{
"text": "In an error-driven transformation-based (EDTB) tagger (Brill 1993) , a set of patternaction templates that include predicates that test for features of words appearing in the context of interest are defined. These templates are then instantiated with the appropriate features to obtain transformation rules. The effectiveness of a transformation rule to correct an error and the relative order of application of the rules are learned using a corpus. The learning procedure takes a gold corpus in which the words have been correctly annotated and a training corpus that is derived from the gold corpus by removing the annotations. The objective in the learning phase is to learn the optimum ordering of rule applications so as to minimize the number of tag mismatches between the training and the reference corpus.",
"cite_spans": [
{
"start": 54,
"end": 66,
"text": "(Brill 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error-driven Transformation-based Tagger",
"sec_num": "6.6"
},
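{
"text": "A schematic sketch of the learning loop (our simplification of Brill-style learning, not the actual implementation): at each round, the instantiated rule that removes the most supertag errors against the gold corpus is applied and recorded, yielding an ordered list of transformation rules.\ndef learn_transformations(tagged, gold, instantiated_rules, max_rules=100):\n    # tagged and gold are parallel lists of (word, supertag) pairs; a rule is\n    # a function that rewrites the whole sequence using a three-word window\n    def errors(seq):\n        return sum(a[1] != b[1] for a, b in zip(seq, gold))\n    learned = []\n    for _ in range(max_rules):\n        best_rule, best_gain = None, 0\n        for rule in instantiated_rules:\n            gain = errors(tagged) - errors(rule(tagged))\n            if gain > best_gain:\n                best_rule, best_gain = rule, gain\n        if best_rule is None:\n            break  # no rule reduces the error count any further\n        tagged = best_rule(tagged)\n        learned.append(best_rule)  # the application order is part of the model\n    return learned",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-driven Transformation-based Tagger",
"sec_num": null
},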
{
"text": "Results. A EDTB model has been trained using templates defined on a three-word window. We trained the templates on 200,000 words 11 and tested on 47,000 words 12 of the WSJ corpus. The model performed at an accuracy of 90%. The EDTB model provides a great deal of flexibility to integrate domain-specific and linguistic information into the model. However, a major drawback of this approach is that the training procedure is extremely slow, which prevented us from training on the 1,000,000 word corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and",
"sec_num": "6.6.1"
},
{
"text": "The output of the supertagger, an almost parse, has been used in a variety of applications including information retrieval (Chandrasekar and Srinivas 1997b , 1997c , 1997d and information extraction (Doran et al. 1997 ), text simplification (Chandrasekar, Doran, and Srinivas 1996, Chandrasekar and Srinivas 1997a) , and language modeling (Srinivas 1996) to illustrate that supertags provide an appropriate level of lexical description needed for most applications.",
"cite_spans": [
{
"start": 123,
"end": 155,
"text": "(Chandrasekar and Srinivas 1997b",
"ref_id": "BIBREF8"
},
{
"start": 156,
"end": 163,
"text": ", 1997c",
"ref_id": "BIBREF9"
},
{
"start": 164,
"end": 171,
"text": ", 1997d",
"ref_id": "BIBREF10"
},
{
"start": 199,
"end": 217,
"text": "(Doran et al. 1997",
"ref_id": "BIBREF15"
},
{
"start": 241,
"end": 266,
"text": "(Chandrasekar, Doran, and",
"ref_id": "BIBREF6"
},
{
"start": 267,
"end": 298,
"text": "Srinivas 1996, Chandrasekar and",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 314,
"text": "Srinivas 1997a)",
"ref_id": "BIBREF56"
},
{
"start": 339,
"end": 354,
"text": "(Srinivas 1996)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supertagging before Parsing",
"sec_num": "7."
},
{
"text": "The output of the supertagger has also been used as a front end to a lexicalized grammar parser. As mentioned earlier, a lexicalized grammar parser can be conceptualized to consist of two stages (Schabes, AbeillG and Joshi 1988) . In the first stage, the parser looks up the lexicon and selects all the supertags associated with each word of the sentence to be parsed. In the second stage, the parser searches the lattice of selected supertags in an attempt to combine them using substitution and adjunction operations so as to yield a derivation that spans the input string. At the end of the second stage, the parser would not only have parsed the input, but would have associated a small set of (usually one) supertags with each word.",
"cite_spans": [
{
"start": 195,
"end": 228,
"text": "(Schabes, AbeillG and Joshi 1988)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supertagging before Parsing",
"sec_num": "7."
},
{
"text": "The supertagger can be used as a front end to a lexicalized grammar parser so as to prune the search-space of the parser even before parsing begins. It should be clear that by reducing the number of supertags that are selected in the first stage, the search-space for the second stage can be reduced significantly and hence the parser can be made more efficient. Supertag disambiguation techniques, as discussed in the 11 WSJ Sections 15 to 18 of the Penn Treebank. 12 WSJ Section 20 of the Penn Treebank. previous sections, attempt to disambiguate the supertags selected in the first pass, based on lexical preferences and local lexical dependencies, so as to ideally select one supertag for each word. Once the supertagger selects the appropriate supertag for each word, the second stage of the parser is needed only to combine the individual supertags to arrive at the parse of the input. Tested on about 1,300 WSJ sentences with each word in the sentence correctly supertagged, the LTAG parser took approximately 4 seconds per sentence to yield a parse (combine the supertags and perform feature unification). In contrast, the same 1,300 WSJ sentences without the supertag annotation took nearly 120 seconds per sentence to yield a parse. Thus the parsing speedup gained by this integration is a factor of about 30.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supertagging before Parsing",
"sec_num": "7."
},
{
"text": "In the XTAG system, we have integrated the trigram supertagger as a front end to an LTAG parser to pick the appropriate supertag for each word even before parsing begins. However, a drawback of this approach is that the parser would fail completely if any word of the input is incorrectly tagged by the supertagger. This problem could be circumvented to an extent by extending the supertagger to produce n-best supertags for each word. Although this extension would increase the load on the parser, it would certainly improve the chances of arriving at a parse for a sentence. In fact, Table 7 presents the performance of the supertagger that selects, at most, the top three supertags for each word. The optimum number of supertags to output to balance the success rate of the parser against the efficiency of the parser must be determined empirically.",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 593,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Supertagging before Parsing",
"sec_num": "7."
},
{
"text": "A more serious limitation of this approach is that it fails to parse ill-formed and extragrammatical strings such as those encountered in spoken utterances and unrestricted texts. This is due to the fact that the Earley-style LTAG parser attempts to combine the supertags to construct a parse that spans the entire string. In cases where the supertag sequence for a string cannot be combined into a unified structure, the parser fails completely. One possible extension to account for ill-formed and extragrammatical strings is to extend the Earley parser to produce partial parses for the fragments whose supertags can be combined. An alternate method of computing dependency linkages robustly is presented in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supertagging before Parsing",
"sec_num": "7."
},
{
"text": "Supertagging associates each word with a unique supertag. To establish the dependency links among the words of the sentence, we exploit the dependency requirements encoded in the supertags. Substitution nodes and foot nodes in supertags serve as slots that must be filled by the arguments of the anchor of the supertag. A substitution slot of a supertag is filled by the complements of the anchor while the foot node of a supertag is filled by a word that is being modified by the supertag. These argument slots have a polarity value reflecting their orientation with respect to the anchor of the supertag. Also associated with a supertag is a list of internal nodes (including the root node) that appear in the supertag. Using the structural information coupled with the argument requirements of a supertag, a simple heuristic-based, linear time, deterministic algorithm (which we call a lightweight dependency analyzer (LDA)) produces dependency linkages not necessarily spanning the entire sentence. The LDA can produce a number of partial linkages, since it is driven primarily by the need to satisfy local constraints without being driven to construct a single dependency linkage that spans the entire input. This, in fact, contributes to the robustness of LDA and promises to be a useful tool for parsing sentence fragments that are rampant in speech utterances, as exemplified by the Switchboard corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lightweight Dependency Analyzer",
"sec_num": "8."
},
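The following is a highly simplified, hypothetical sketch in the spirit of the LDA, not the algorithm as implemented in the paper: each supertag lists its argument slots as (category, orientation) pairs, and each slot is filled greedily by the nearest word in that direction whose supertag supplies the required category. Unfilled slots are simply left unfilled, which is what yields partial linkages.

```python
# (word, supertag); each supertag carries a category and argument slots.
# Slot orientation: -1 = look left of the anchor, +1 = look right.
tagged = [
    ("the",      {"cat": "Det", "slots": []}),
    ("company",  {"cat": "NP",  "slots": []}),
    ("acquired", {"cat": "S",   "slots": [("NP", -1), ("NP", +1)]}),
    ("rivals",   {"cat": "NP",  "slots": []}),
]

def lda(tagged):
    links = []
    for i, (word, supertag) in enumerate(tagged):
        for cat, direction in supertag["slots"]:
            j = i + direction
            # Scan outward; stop at the first word of the right category.
            while 0 <= j < len(tagged):
                if tagged[j][1]["cat"] == cat:
                    links.append((tagged[j][0], word))  # dependent -> head
                    break
                j += direction
    return links  # unfilled slots simply contribute no link

print(lda(tagged))  # [('company', 'acquired'), ('rivals', 'acquired')]
```

Because each slot is resolved by a local scan, the whole pass is deterministic and linear in the sentence length, matching the description above.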
{
"text": "Tested on section 20 of the Wall Street Journal corpus, which contained 47,333 dependency links in the gold standard, the LDA, trained on 200,000 words, produced 38,480 dependency links correctly, resulting in a recall score of 82.3%. Also, a total of 41,009 dependency links were produced by the LDA, resulting in a precision score of 93.8%. A detailed evaluation of the LDA is presented in Srinivas (1997b) .",
"cite_spans": [
{
"start": 392,
"end": 408,
"text": "Srinivas (1997b)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lightweight Dependency Analyzer",
"sec_num": "8."
},
{
"text": "Although we have presented supertagging in the context of LTAG, it is applicable to other lexicalized grammar formalisms such as CCG (Steedrnan 1997), HPSG (Pollard and Sag 1987) , and LFG (Kaplan and Bresnan 1983) . We have implemented a broad coverage CCG grammar containing about 80 categories based on the XTAG English grammar. These categories have been used to tag the same training and test corpora used in the supertagging experiments discussed in this paper and a supertagger to disambiguate the CCG categories has been developed. We are presently analyzing the performance of the supertagger using the LTAG trees and the CCG categories.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Pollard and Sag 1987)",
"ref_id": null
},
{
"start": 189,
"end": 214,
"text": "(Kaplan and Bresnan 1983)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability of Supertagging to other Lexicalized Grammars",
"sec_num": "9."
},
{
"text": "The idea of supertagging can also be applied to a grammar in HPSG formalism indirectl~ by compiling the HPSG grammar into an LTAG grammar (Kasper et al. 1995) . A more direct approach would be to tag words with feature structures that represent supertags (Kempe 1994) . For LFG, the lexicalized subset of fragments used in the LFG-DOP model (Bod and Kaplan 1998 ) can be seen as supertags.",
"cite_spans": [
{
"start": 138,
"end": 158,
"text": "(Kasper et al. 1995)",
"ref_id": "BIBREF39"
},
{
"start": 255,
"end": 267,
"text": "(Kempe 1994)",
"ref_id": "BIBREF41"
},
{
"start": 341,
"end": 361,
"text": "(Bod and Kaplan 1998",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability of Supertagging to other Lexicalized Grammars",
"sec_num": "9."
},
{
"text": "An approach that is closely related to supertagging is the reductionist approach to parsing that is being carried out under the Constraint Grammar framework (Karlsson et al. 1994; Voutilainen 1994; Tapanainen and J/irvinen 1994) . In this framework, each word is associated with the set of possible functional tags that it may be assigned in the language. This constitutes the lexicon. The grammar consists of a set of rules that eliminate functional tags for words based on the context of a sentence. Parsing a sentence in this framework amounts to eliminating as many implausible functional tags as possible for each word, given the context of the sentence. The resultant output structure might contain significant syntactic ambiguity, which may not have been eliminated by the rule applications, thus producing almost parses. Thus, the reductionist approach to parsing is similar to supertagging in that both view parsing as tagging with rich descriptions. However, the key difference is that the tagging is done in a probabilistic setting in the supertagging approach while it is rule based in the constraint grammar approach.",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Karlsson et al. 1994;",
"ref_id": "BIBREF37"
},
{
"start": 180,
"end": 197,
"text": "Voutilainen 1994;",
"ref_id": "BIBREF64"
},
{
"start": 198,
"end": 228,
"text": "Tapanainen and J/irvinen 1994)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability of Supertagging to other Lexicalized Grammars",
"sec_num": "9."
},
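A schematic sketch of the reductionist idea (the words, tag sets, and the single rule are invented and do not use Constraint Grammar's actual rule syntax): start from all functional tags a word allows and let context rules remove tags; whatever ambiguity survives constitutes the almost parse.

```python
# Each word starts with every functional tag its lexicon entry allows.
candidates = [
    ("the",   {"DET"}),
    ("round", {"ADJ", "NOUN", "VERB"}),
    ("table", {"NOUN", "VERB"}),
]

def eliminate(cands):
    out = []
    for i, (word, tags) in enumerate(cands):
        tags = set(tags)
        prev = cands[i - 1][1] if i > 0 else set()
        # Toy rule: remove a finite-verb reading right after a determiner.
        if "DET" in prev:
            tags.discard("VERB")
        out.append((word, tags))
    return out

# "round" still keeps its ADJ/NOUN ambiguity after rule application:
# an "almost parse", as the text puts it.
print(eliminate(candidates))
```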
{
"text": "We are currently developing supertaggers for other languages. In collaboration with Anne Abeill~ and Marie-Helene Candito of the University of Paris, using their French TAG grammar, we have developed a supertagger for French. We are currently working on evaluating the performance of this supertagger. Also, the annotated corpora necessary for training supertaggers for Korean and Chinese are under development at the University of Pennsylvania.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability of Supertagging to other Lexicalized Grammars",
"sec_num": "9."
},
{
"text": "A version of the supertagger trained on the WSJ corpus is available under GNU Public License from http: / / www.cis.upenn.edu / ~xtag / swrelease.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability of Supertagging to other Lexicalized Grammars",
"sec_num": "9."
},
{
"text": "In this paper, we have presented a novel approach to robust parsing distinguished from the previous approaches to robust parsing by integrating the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. By associating rich descriptions (supertags) that impose complex constraints in a local context, we have been able to use local computational models for effective supertag disambiguation. A trigram supertag disambiguation model, trained on 1,000,000 (word, supertag) pairs of the Wall Street Journal corpus, performs at an accuracy level of 92.2%. After disambiguation, we have effectively completed the parse of the sentence, creating an almost parse, in that the parser need only combine the selected structures to arrive at a parse for the sentence. We have presented a lightweight dependency analyzer (LDA) that takes the output of the supertagger and uses the dependency requirements of the supertags to produce a dependency linkage for a sentence. This method can also serve to parse sentence fragments in cases where the supertag sequence after disambiguation may not combine to form a single structure. This approach is applicable to all lexicalized grammar parsers. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10."
},
{
"text": "Elementary trees for the sentence: the company is being acquired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "the top and the bottom. The bottom FS contains information relating to the subtree rooted at the node, and the top FS contains information relating to the supertree at that node. 13 Features may get their values from three different sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B2",
"sec_num": null
},
{
"text": "\u2022 Morphology of anchor: from the morphological information of the lexical items that anchor the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B2",
"sec_num": null
},
{
"text": "\u2022 Structural characteristics: from the structure of the tree itself (for (b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B2",
"sec_num": null
},
{
"text": "Substitution and adjunction in LTAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "example, the mode = ind/imp feature on the root node in the c~3 tree in Figure 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "The derivation process: from unification with features from trees that adjoin or substitute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "Elementary trees are combined by substitution and adjunction operations. Substitution inserts elementary trees at the substitution nodes of other elementary trees. Figure 7 (a) shows two elementary trees and the tree resulting from the substitution of one tree into the other. In this operation, a node marked for substitution in an elementary tree is replaced by another elementary tree whose root label matches the label of the node. The top FS of the resulting node is the result Of unification of the top features of the two original nodes, while the bottom FS of the resulting node is simply the bottom features of the root node of the substituting tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 172,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "In an adjunction operation, an auxiliary tree is inserted into an elementary tree. Figure 7(b) shows an auxiliary tree adjoining into an elementary tree and the result of the adjunction. The root and foot nodes of the auxiliary tree must match the node label at which the auxiliary tree adjoins. The node being adjoined to splits, and its top FS unifies with the top FS of the root node of the auxiliary tree, while its bottom FS unifies with the bottom FS of the foot node of the auxiliary tree. Figure 7(b) shows an auxiliary tree and an elementary tree, and the tree resulting from an adjunction operation. For a parse to be well-formed, the top and bottom FS at each node should be unified at the end of a parse.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 94,
"text": "Figure 7(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
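A minimal sketch of the two operations' feature-structure bookkeeping, using flat Python dicts as stand-ins for FB-LTAG feature structures (real feature structures and node addressing are richer than this):

```python
def unify(f, g):
    """Term-like unification of two flat feature structures (dicts)."""
    out = dict(f)
    for key, val in g.items():
        if key in out and out[key] != val:
            raise ValueError(f"feature clash on {key!r}")
        out[key] = val
    return out

def substitute(subst_node, subst_root):
    # Top FS: unify the two top FS; bottom FS: taken from the root of the
    # substituting tree (substitution nodes carry only a top FS).
    return {"top": unify(subst_node["top"], subst_root["top"]),
            "bot": dict(subst_root["bot"])}

def adjoin(node, aux_root, aux_foot):
    # The adjunction site splits: its top FS unifies with the top FS of
    # the auxiliary root; its bottom FS with the bottom FS of the foot.
    return {"top": unify(node["top"], aux_root["top"]),
            "bot": unify(node["bot"], aux_foot["bot"])}

# Invented feature values for illustration.
np_root = {"top": {"cat": "NP", "agr": "3sg"}, "bot": {"cat": "NP"}}
site = {"top": {"cat": "NP"}}
print(substitute(site, np_root))

vp = {"top": {"cat": "VP"}, "bot": {"cat": "VP", "mode": "ppart"}}
print(adjoin(vp, {"top": {"cat": "VP"}}, {"bot": {"cat": "VP"}}))
```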
{
"text": "The result of combining the elementary trees shown in Figure 6 is the derived tree, shown in Figure 8(a) . The process of combining the elementary trees to yield a parse of the sentence is represented by the derivation tree, shown in Figure 8(b) . The nodes of the derivation tree are the tree names that are anchored by the appropriate lexical items. The combining operation is indicated by the type of the arcs (a broken line indicates substitution and a bold line indicates adjunction) while the address of the operation is indicated as part of the node label. The derivation tree can also be interpreted as a dependency tree with unlabeled arcs between words of the sentence, as shown in Figure 8(c) .",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 6",
"ref_id": null
},
{
"start": 93,
"end": 104,
"text": "Figure 8(a)",
"ref_id": null
},
{
"start": 234,
"end": 245,
"text": "Figure 8(b)",
"ref_id": null
},
{
"start": 692,
"end": 703,
"text": "Figure 8(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
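A small sketch of reading the derivation tree as a dependency tree: drop the tree names and operation addresses and keep only the anchors and the parent-child arcs. The tuple encoding and the tree names below are invented for the example sentence.

```python
# (tree_name, anchor, [operation,] children) -- children carry the
# operation and address, which the dependency view simply ignores.
derivation = ("alpha3", "acquired", [
    ("alpha1", "company", "subst@NP0", [
        ("beta1", "the", "adjoin@NP", []),
    ]),
    ("beta2", "is", "adjoin@VP", []),
    ("beta3", "being", "adjoin@VP", []),
])

def dependencies(node, head=None, arcs=None):
    arcs = [] if arcs is None else arcs
    anchor, children = node[1], node[-1]
    if head is not None:
        arcs.append((anchor, head))  # unlabeled arc: word -> its head
    for child in children:
        dependencies(child, anchor, arcs)
    return arcs

print(dependencies(derivation))
# [('company', 'acquired'), ('the', 'company'), ('is', 'acquired'), ...]
```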
{
"text": "A broad-coverage grammar system, XTAG, has been implemented in the LTAG formalism. In this section, we briefly discuss some aspects related to XTAG for the sake of completeness. A more detailed report on XTAG can be found in XTAG-Group (1995). The XTAG system consists of a morphological analyzer, a part-of-speech tagger, a wide-coverage LTAG English grammar, a predictive left-to-right Earley-style parser for LTAG (Schabes 1990) , and an X-windows interface for grammar development . The input sentence is subjected to morphological analysis (a) Derived tree, (b) derivation tree, and (c) dependency tree for the sentence: the company is being acquired.",
"cite_spans": [
{
"start": 417,
"end": 431,
"text": "(Schabes 1990)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
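Schematically, the XTAG chain can be pictured as below; every component is a stub standing in for the real module, and the tags and tree names are invented.

```python
def morphological_analysis(sentence):
    return sentence.lower().split()      # stub: real XTAG does far more

def pos_tag(tokens):
    toy = {"the": "DT", "company": "NN", "is": "VBZ",
           "being": "VBG", "acquired": "VBN"}
    return [(t, toy.get(t, "NN")) for t in tokens]

def retrieve_trees(tagged):
    # Stub for lexical tree selection: each (word, POS) pair anchors a
    # set of elementary trees; the names here are invented.
    return {w: [f"tree_{w}_{i}" for i in range(2)] for w, _ in tagged}

def parse(trees):
    # Stub for the Earley-style parser combining trees by substitution
    # and adjunction.
    return f"<derivation over {sum(map(len, trees.values()))} trees>"

tokens = morphological_analysis("The company is being acquired")
print(parse(retrieve_trees(pos_tag(tokens))))
```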
{
"text": "and is tagged with parts of speech before being sent to the parser. The parser retrieves the elementary trees that the words of the sentence anchor and combines them by adjunction and substitution operations to derive a parse of the sentence. The grammar of XTAG has been used to parse sentences from ATIS, IBM Manual and WSJ corpora (TAG-Group 1995) . The resulting XTAG corpus contains sentences from these domains along with all the derivations for each sentence. (a) Tree for raising analysis, anchored by seems; (b) transitive tree; (c) object extraction tree for the verb hit.",
"cite_spans": [
{
"start": 334,
"end": 350,
"text": "(TAG-Group 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "(John in sentence 1) does not appear in the elementary tree for seem since it does not serve as an argument for seem. Figure 9 (b) shows the elementary tree anchored by the transitive verb hit in which both the subject NP and object NP are realized within the same elementary tree. LTAG is distinguished from other grammar formalisms by possessing part (2) of the EDL property. In LTAGs, there is one elementary tree for every syntactic environment that the anchor may appear in. Each elementary tree encodes the linear order of the arguments of the anchor in a particular syntactic environment. For example, a transitive verb such as hit is associated with both the elementary tree shown in Figure 9(b) for a declarative transitive sentence such as sentence 2, and the elementary tree shown in Figure 9 (c) for an object extracted transitive sentence such as sentence 3. Notice that the object noun phrase is realized to the left of the subject noun phrase in the object extraction tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 126,
"text": "Figure 9",
"ref_id": null
},
{
"start": 692,
"end": 703,
"text": "Figure 9(b)",
"ref_id": null
},
{
"start": 795,
"end": 803,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
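Part (2) of EDL can be pictured with a toy table, one (invented) elementary tree frontier per syntactic environment of hit, each fixing the linear order of the anchor's arguments; auxiliaries such as did adjoin separately and so do not appear in the anchor's tree.

```python
# Invented frontier encodings following Figure 9's analyses.
TRANSITIVE_TREES = {
    # Sentence 2 (declarative): subject NP - verb - object NP.
    "declarative":       ("NP0", "hit", "NP1"),
    # Sentence 3 (object extraction): object NP realized before subject NP.
    "object_extraction": ("NP1", "NP0", "hit"),
}

for environment, frontier in TRANSITIVE_TREES.items():
    print(f"{environment:18} {' '.join(frontier)}")
```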
{
"text": "As a consequence of the fact that LTAGs possess the part (2) of the EDL property, the derivation structures in LTAGs contain the information of a dependency structure. Another aspect of EDL is that the arguments of the anchor can be filled in any order. This is possible because the elementary structures allocate a slot for each argument of the anchor in each syntactic environment that the anchor appears in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "There can be many ways of constructing the elementary structures of a grammar so as to possess the EDL property. However, by requiring that the constructed elementary structures be \"minimal,\" the third property of LTAGs namely, factoring of recursion from the domain of dependencies, follows as a corollary of EDL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "Factoring of recursion from the domain of dependencies (FRD): Recursion is factored away from the domain for the statement of dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "In LTAGs, recursive constructs are represented as auxiliary trees. They combine with elementary trees by the operation of adjunction. Elementary trees define the domain for stating dependencies such as agreement, subcategorization, and filler-gap dependencies. Auxiliary trees, by adjunction to elementary trees, account for the longdistance behavior of these dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "An additional advantage of a grammar possessing FRD and EDL properties is that feature structures in these grammars are extremely simple. Since the recursion has been factored out of the domain of dependency, and since the domain is large enough for agreement, subcategorizafion, and filler-gap dependencies, feature structures in such systems do not involve any recursion. In fact they reduce to typed terms that can be combined by simple term-like unification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
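A minimal sketch of this last point: with recursion factored out, the feature structures behave like flat typed terms, so unification is a single pass over attribute-value pairs with no nesting to recurse into (the attribute names here are illustrative).

```python
def unify_terms(t1, t2):
    """Unify two flat attribute-value terms; None means 'unspecified'."""
    result = {}
    for key in set(t1) | set(t2):
        a, b = t1.get(key), t2.get(key)
        if a is not None and b is not None and a != b:
            return None          # clash: unification fails
        result[key] = a if a is not None else b
    return result

# e.g. subject-verb agreement checked as one flat unification:
print(unify_terms({"agr": "3sg", "case": None},
                  {"agr": "3sg", "mode": "ind"}))
print(unify_terms({"agr": "3sg"}, {"agr": "3pl"}))  # -> None
```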
{
"text": "For the purpose of this paper, we suppress the features associated with the supertags. 4 Mitch Marcus pointed out that these tests are similar to the generalized shaper tests used in the Harvard Predictive Analyzer(Kuno 1966).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The description of the part-of-speech tags is provided inMarcus, Santorini, and Marcinkiewicz (1993).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentences of length < 15 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Nodes marked for substitution are associated with only the top FS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done when the first author was at the University of Pennsylvania. It was partially supported by NSF grant NSF-STC SBR 8920230, ARPA grant N00014-94 and ARO grant DAAH04-94-G0426. We would like to thank Steven Abney, Raman Chandrasekar, Christine Doran, Beth Ann Hockey, Mark Liberman, Mitch Marcus, and Mark Steedman for useful comments and discussions which have helped shape this work. We also thank the reviewers for their insightful comments and suggestions to improve an earlier version of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Feature-based Lexicalized Tree Adjoining Grammar (FB-LTAG) is a tree-rewriting grammar formalism, unlike context-free Grammars and head grammars, which are stringrewriting formalisms. FB-LTAGs trace their lineage to Tree Adjunct Grammars (TAGs), which were first developed in Joshi, Lev36 and Takahashi (1975) and later extended to include unification-based feature structures (Vijay-Shanker 1987; Vijay-Shanker and Joshi 1991) and lexicalization (Schabes, AbeillG and Joshi 1988) . For a more recent and comprehensive reference, see Joshi and Schabes (1996) .The primitive elements of FB-LTAGs are called elementary trees. Each elementary tree is associated with at least one lexical item on its frontier. The lexical item associated with an elementary tree is called the anchor of that tree. An elementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic (predicate argument) constraints. Elementary trees are of two kinds: (a) Initial Trees and (b) Auxiliary Trees. In an FB-LTAG grammar for natural language, initial trees are phrase structure trees of simple sentences containing no recursion, while recursive structures are represented by auxiliary trees.Examples of initial trees (c~s) and auxiliary trees (fls) are shown in Figure 6 . Nodes on the frontier of initial trees are marked as substitution sites by a \"1\", while exactly one node on the frontier of an auxiliary tree, whose label matches the label of the root of the tree, is marked as a foot node by a \",\". The other nodes on the frontier of an auxiliary tree are marked as substitution sites.Each node of an elementary tree is associated with two feature structures (FS),",
"cite_spans": [
{
"start": 276,
"end": 309,
"text": "Joshi, Lev36 and Takahashi (1975)",
"ref_id": null
},
{
"start": 447,
"end": 480,
"text": "(Schabes, AbeillG and Joshi 1988)",
"ref_id": "BIBREF51"
},
{
"start": 534,
"end": 558,
"text": "Joshi and Schabes (1996)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 1325,
"end": 1333,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A: Feature-based Lexicalized Tree Adjoining Grammar",
"sec_num": null
},
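As a rough sketch of the inventory just described (the Python types and tree names are invented): an elementary tree pairs an anchor with a tree whose frontier carries substitution sites, and an auxiliary tree additionally has exactly one foot node matching its root label.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ElementaryTree:
    name: str
    anchor: str                      # lexical item anchoring the tree
    root: str                        # root category, e.g. "S" or "NP"
    subst_sites: List[str] = field(default_factory=list)  # frontier nodes marked "↓"
    foot: Optional[str] = None       # set only for auxiliary trees; matches root

    @property
    def is_auxiliary(self) -> bool:
        return self.foot is not None

alpha = ElementaryTree("alpha_nx0Vnx1", anchor="acquired", root="S",
                       subst_sites=["NP0", "NP1"])
beta = ElementaryTree("beta_Vvx", anchor="is", root="VP", foot="VP")
assert not alpha.is_auxiliary and beta.is_auxiliary
print(alpha.name, "->", alpha.subst_sites, "|", beta.name, "foot:", beta.foot)
```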
{
"text": "In this section, we define the key properties of LTAGs: lexicalization, Extended Domain of Locality (EDL), and factoring of recursion from the domain of dependency (FRD), and discuss how these properties are realized in natural language grammars written in LTAGs. A more detailed discussion about these properties is presented in Joshi (1985 Joshi ( , 1987 , Kroch and Joshi (1985) , Schabes, AbeillG and Joshi (1988) , and Joshi and Schabes (1996) .",
"cite_spans": [
{
"start": 330,
"end": 341,
"text": "Joshi (1985",
"ref_id": "BIBREF27"
},
{
"start": 342,
"end": 356,
"text": "Joshi ( , 1987",
"ref_id": "BIBREF28"
},
{
"start": 359,
"end": 381,
"text": "Kroch and Joshi (1985)",
"ref_id": "BIBREF42"
},
{
"start": 384,
"end": 417,
"text": "Schabes, AbeillG and Joshi (1988)",
"ref_id": "BIBREF51"
},
{
"start": 424,
"end": 448,
"text": "Joshi and Schabes (1996)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix B: Key Properties of LTAGs",
"sec_num": null
},
{
"text": "A grammar is lexicalized if it consists of:\u2022 a finite set of elementary structures (strings, trees, directed acyclic graphs, etc.), each structure anchored on a lexical item.\u2022 lexical items, each associated with at least one of the elementary structures of the grammar\u2022 a finite set of operations combining these structures.This property proves to be linguistically crucial since it establishes a direct link between the lexicon and the syntactic structures defined in the grammar. In fact, in lexicalized grammars all we have is the lexicon, which projects the elementary structures of each lexical item; there is no independent grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "The Extended Domain of Locality (EDL) property has two parts: . . Every elementary structure must contain all and only the arguments of the anchor in the same structure.For each lexical item, the grammar must contain an elementary structure for each syntactic environment the lexical item might appear in.Part (1) of EDL allows the anchor to impose syntactic and semantic constraints on its arguments directly since they appear in the same elementary structure that it anchors. Hence, all elements that appear within one elementary structure are considered to be local. This property also defines how large an elementary structure in a grammar can be. Figure 9 shows trees for the following example sentences:(1) (2) (3) John seems to like Mary.",
"cite_spans": [],
"ref_spans": [
{
"start": 652,
"end": 660,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Who did John hit? Figure 9(a) shows the elementary tree anchored by seem that is used to derive a raising analysis for sentence 1. Notice that the elements appearing in the tree are only those that serve as arguments to the anchor and nothing else. In particular, the subject NP probabilistic models. Computational Linguistics, 19(2):359-382, June. XTAG-Group, The. 1995. A lexicalized tree adjoining grammar for English. Technical Report IRCS 95-03, University of Pennsylvania.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 29,
"text": "Figure 9(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "John hit Mary.",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rapid incremental parsing with repair",
"authors": [
{
"first": "Abne~",
"middle": [],
"last": "Steven",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 6th New OED Conference: Electronic Text Research",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abne~ Steven. 1990. Rapid incremental parsing with repair. In Proceedings of the 6th New OED Conference: Electronic Text Research, pages 1-9, University of Waterloo, Waterloo, Ontario, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Training and scaling preference functions for disambiguation",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Carter",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "635--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alshawi, Hiyan and David Carter. 1994. Training and scaling preference functions for disambiguation. Computational Linguistics, 20(4):635-648.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "FASTUS: A finite-state processor for information extraction from real-world text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Israel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings oflJCAI-93",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Appelt, D., J. Hobbs, J. Bear, D. J. Israel, and M. Tyson. 1993. FASTUS: A finite-state processor for information extraction from real-world text. In Proceedings oflJCAI-93, Charnbery, France, September.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards History-based Grammars: Using Richer Models for Probabilistic Parsing",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Ann ual Meeting",
"volume": "",
"issue": "",
"pages": "31--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, Ezra, Fred Jelinek, John Lafferty, David M. Magerman, Robert Mercer, and Salim Roukos. 1993. Towards History-based Grammars: Using Richer Models for Probabilistic Parsing. In Proceedings of the 31st Ann ual Meeting, pages 31-37, Columbus, OH. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A probabilistic corpus-driven model for lexical-functional analysis",
"authors": [
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL \"98: 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bod, Rens and Ronald Kaplan. 1998. A probabilistic corpus-driven model for lexical-functional analysis. In Proceedings of COLING-ACL \"98: 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Montreal, Quebec, Canada, August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic grammar induction and parsing free text: A transformation-based approach",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill, Eric. 1993. Automatic grammar induction and parsing free text: A transformation-based approach. In Proceedings of the 31st Annual Meeting, Columbus, OH. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Motivations and methods for text simplification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING'96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R., Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th International Conference on Computational Linguistics (COLING'96), Copenhagen, Denmark, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic induction of rules for text simplification. Knowledge-based Systems",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "10",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R. and B. Srinivas. 1997a. Automatic induction of rules for text simplification. Knowledge-based Systems, 10:183-190.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gleaning information from the web: Using syntax to filter out irrelevant information",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of AAA11997 Spring Symposium on NLP on the World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R. and B. Srinivas. 1997b. Gleaning information from the web: Using syntax to filter out irrelevant information. In Proceedings of AAA11997 Spring Symposium on NLP on the World Wide Web.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using supertags in document filtering: The effect of increased context on information retrieval effectiveness",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of Recent Advances in NLP (RANLP) '97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R. and B. Srinivas. 1997c. Using supertags in document filtering: The effect of increased context on information retrieval effectiveness. In Proceedings of Recent Advances in NLP (RANLP) '97, Tzigov Chark, Bulgaria, September.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using syntactic information in document filtering: A comparative study of part-of-speech tagging and supertagging",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of RIAO'97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R. and B. Srinivas. 1997d. Using syntactic information in document filtering: A comparative study of part-of-speech tagging and supertagging. In Proceedings of RIAO'97, Montreal, Quebec, Canada, June.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Chomsk~ Noam. 1992. A Minimalist Approach to Linguistic Theory. MIT Working Papers in Linguistics",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence AAA",
"volume": "",
"issue": "",
"pages": "47--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, Eugene. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence AAA/, pages 47-66, Menlo Park, CA. Chomsk~ Noam. 1992. A Minimalist Approach to Linguistic Theory. MIT Working Papers in Linguistics, Occasional Papers in Linguistics, No. 1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A stochastic parts program and noun phrase parser for unrestricted text",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 1988,
"venue": "2nd Applied Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, Kenneth Ward. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In 2nd Applied Natural Language Processing Conference, pages 136-143, Austin, TX.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting, Santa Cruz, CA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "XTAG System--A wide coverage grammar for English",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "Dania",
"middle": [],
"last": "Egedi",
"suffix": ""
},
{
"first": "Beth",
"middle": [
"Ann"
],
"last": "Hockey",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Zaidel",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 17th International Conference on Computational Linguistics (COLING'94)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doran, Christine, Dania Egedi, Beth Ann Hockey, B. Srinivas, and Martin Zaidel. 1994. XTAG System--A wide coverage grammar for English. In Proceedings of the 17th International Conference on Computational Linguistics (COLING'94), Kyoto, Japan, August.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mother of Perl: A Multi-tier pattern description language",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Niv",
"suffix": ""
},
{
"first": "Breckenridge",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Reynar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the International Workshop on Lexically Driven Information Extraction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doran, Christine, Michael Niv, Breckenridge Baldwin, Jeffrey Reynar, and B. Srinivas. 1997. Mother of Perl: A Multi-tier pattern description language. In Proceedings of the International Workshop on Lexically Driven Information Extraction, Frascati, Italy, July.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A wide-coverage CCG parser",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings oJ: the 3rd TAG+ Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doran, Christine and B. Srinivas. 1994. A wide-coverage CCG parser. In Proceedings oJ: the 3rd TAG+ Conference, Paris, France.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A probabilistic parsing method for sentence disambiguation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fujisaki",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nishino",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 1st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fujisaki, T., F. Jelinek, J. Cocke, E. Black and T. Nishino. 1989. A probabilistic parsing method for sentence disambiguation. In Proceedings of the 1st",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annual International Workshop of Parsing Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual International Workshop of Parsing Technologies, Pittsburgh, PA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generalized Phrase Structure Grammar",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gazdar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Pullum",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gazdar, G., E. Klein, G. Pullum, and I. Sag. 1985. Generalized Phrase Structure Grammar.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The population frequenceis of species and the estimation of population parameters",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Cambridge",
"suffix": ""
},
{
"first": "I",
"middle": [
"J"
],
"last": "Good",
"suffix": ""
}
],
"year": 1953,
"venue": "Biometrika",
"volume": "40",
"issue": "",
"pages": "237--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harvard University Press, Cambridge, MA. Good, I. J. 1953. The population frequenceis of species and the estimation of population parameters. Biometrika 40 (3 and 4), pages 237-264.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Where's the syntax? The New York University MUC-6 System",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Sixth Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grishman, Ralph. 1995. Where's the syntax? The New York University MUC-6 System. In Proceedings of the Sixth Message Understanding Conference, Columbia, MD.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lexicon-grammar and the syntactic analysis of French",
"authors": [
{
"first": "Maurice",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the lOth International Conference on Computational Linguistics (COLING'84)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gross, Maurice. 1984. Lexicon-grammar and the syntactic analysis of French. In Proceedings of the lOth International Conference on Computational Linguistics (COLING'84), Stanford, CA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "FASTUS: A cascaded finite-state transducer for extracting information from natural-language text",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Israel Megumi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Kameyama",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Stickel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1997,
"venue": "Finite State Devices for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hobbs, Jerry R., Douglas Appelt, John Bear, David Israel Megumi Kameyama, Mark Stickel, and Mabry Tyson. 1997. FASTUS: A cascaded finite-state transducer for extracting information from natural-language text. In E. Roche and Y. Schabes, editors, Finite State Devices for Natural Language Processing. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SRI International FASTUS system MUC-6 test results and analysis",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "David",
"middle": [
"Israel"
],
"last": "Bear",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Kehler",
"suffix": ""
},
{
"first": "Megumi",
"middle": [],
"last": "Kamayama",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Sixth Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hobbs, Jerry R., Douglas E. Appelt, John Bear, David Israel Andy Kehler, Megumi Kamayama, David Martin, Karen Myers, and Mabry Tyson. 1995. SRI International FASTUS system MUC-6 test results and analysis. In Proceedings of the Sixth Message Understanding Conference, Columbia, MD.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Decision tree parsing using a hidden derivation model",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings from the ARPA Workshop on Human Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, Fred, John Lafferty, David M. Magerman, Robert Mercer, Adwait Ratnaparkhi, and Salim Roukos. 1994. Decision tree parsing using a hidden derivation model. In Proceedings from the ARPA Workshop on Human Language Technology Workshop, March.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Computation of syntactic structure",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1960,
"venue": "Advances in Documentation and Library Science",
"volume": "III",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. 1960. Computation of syntactic structure. In Advances in Documentation and Library Science, volume III, Part 2. Interscience Publishers, Inc., NY.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Tree adjoining grammars: How much context sensitivity is required to provide a reasonable structural description",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1985,
"venue": "Natural Language Parsing",
"volume": "",
"issue": "",
"pages": "206--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. 1985. Tree adjoining grammars: How much context sensitivity is required to provide a reasonable structural description? In D. Dowty, I. Karttunen, and A. Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, U.K., pages 206-250.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An introduction to tree adjoining grammars",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1987,
"venue": "Mathematics of Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. 1987. An introduction to tree adjoining grammars. In A. Manaster Ramer, editor, Mathematics of Language.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Role of constrained computational systems in natural language processing",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1998,
"venue": "Artificial Intelligence",
"volume": "103",
"issue": "",
"pages": "117--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. 1998. Role of constrained computational systems in natural language processing. Artificial Intelligence, 103:117-132.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A parser from antiquity",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Hopely",
"suffix": ""
}
],
"year": 1997,
"venue": "Natural Language Engineering",
"volume": "2",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. and Philip Hopely. 1997. A parser from antiquity. Natural Language Engineering, 2(4).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Tree adjunct grammars",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Takahashi",
"suffix": ""
}
],
"year": 1975,
"venue": "Journal of Computer and System Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K., L. Levy, and M. Takahashi. 1975. Tree adjunct grammars. Journal of Computer and System Sciences.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Tree-adjoining grammars",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1996,
"venue": "Handbook of Formal Languages and Automata",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. and Yves Schabes, 1996. Tree-adjoining grammars. In Handbook of Formal Languages and Automata.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Disambiguation of super parts of speech (or supertags): Almost parsing",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING'94)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind K. and B. Srinivas. 1994. Disambiguation of super parts of speech (or supertags): Almost parsing. In Proceedings of the 15th International Conference on Computational Linguistics (COLING'94), Kyoto, Japan, August.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Lexical-functional grammar: A formal system for grammatical representation",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaplan, Ronald and Joan Bresnan. 1983. Lexical-functional grammar: A formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text",
"authors": [
{
"first": "F",
"middle": [],
"last": "Karlsson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Voutilainen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Heikkil~i",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Anttila",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karlsson, F., A. Voutilainen, J. Heikkil~i, and A. Anttila. 1994. Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, Berlin and NY.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Regular expressions for language engineering",
"authors": [
{
"first": "L",
"middle": [],
"last": "Karttunen",
"suffix": ""
},
{
"first": "J-P",
"middle": [],
"last": "Chanod",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Schiller",
"suffix": ""
}
],
"year": 1997,
"venue": "Natural Language Engineering",
"volume": "2",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karttunen, L. J-P. Chanod, G. Grefenstette, and A. Schiller. 1997. Regular expressions for language engineering. Natural Language Engineering, 2(4).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Compilation of HPSG to TAG",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Kasper",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Netter",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kasper, Robert, Bernd Kiefer, Klaus Netter, and K. Vijay-Shanker. 1995. Compilation of HPSG to TAG. In Proceedings of the 33rd Annual Meeting, Cambridge, MA. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer",
"authors": [
{
"first": "Slava",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Transactions on Acoustics, Speech and SignalProcessing",
"volume": "35",
"issue": "3",
"pages": "400--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and SignalProcessing, 35(3):400-401.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Probabilistic Tagging with Feature Structures",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Kempe",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING'94)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kempe, Andre. 1994. Probabilistic Tagging with Feature Structures. In Proceedings of the 15th International Conference on Computational Linguistics (COLING'94), Kyoto, Japan, August.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The linguistic relevance of tree adjoining grammars",
"authors": [
{
"first": "Anthony",
"middle": [
"S"
],
"last": "Kroch",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Aravind",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kroch, Anthony S. and Aravind K. Joshi. 1985. The linguistic relevance of tree adjoining grammars. Technical Report MS-CIS-85-16, Department of Computer and Information Science, University of Pennsylvania.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Magerman, David M. 1995. Statistical decision-tree models for parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kuno",
"suffix": ""
}
],
"year": 1966,
"venue": "Readings in Automatic Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuno, S. 1966. Harvard predictive analyzer. In David G. Hays, editor, Readings in Automatic Language Processing. American Elsevier Pub. Co., NY. Magerman, David M. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"M"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell M., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Varieties of heuristics in sentence processing",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Current Issues in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nagao, Makoto. 1994. Varieties of heuristics in sentence processing. In Current Issues in Natural Language Processing: In Honour of Don Walker. Giardini with Kluwer.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "On the estimation of 'small' probabilities by leaving-one-out",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Ute",
"middle": [],
"last": "Essen",
"suffix": ""
},
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
}
],
"year": 1995,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "17",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ney, Herman, Ute Essen, and Reinhard Kneser. 1995. On the estimation of 'small' probabilities by leaving-one-out. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A variable-length category-based n-gram language model",
"authors": [
{
"first": "T",
"middle": [
"R"
],
"last": "Niesler",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niesler, T. R. and P. C. Woodland. 1996. A variable-length category-based n-gram language model. In Proceedings, IEEE ICASSP.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Analyse syntaxique transformationelle du francais par transducteurs et lexique-grammaire",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Roche",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roche, Emmanuel. 1993. Analyse syntaxique transformationelle du francais par transducteurs et lexique-grammaire. Ph.D. thesis, Universite Paris 7.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Mathematical and Computational Aspects of Lexicalized Grammars",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Yves. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Computer Science Department, University of Pennsylvania.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Parsing strategies with 'lexicalized' grammars: Application to Tree Adjoining Grammars",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Abeillg",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th International Conference on Computational Linguistics (COLING'88)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Yves, Anne AbeillG and Aravind K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to Tree Adjoining Grammars. In Proceedings of the 12th International Conference on Computational Linguistics (COLING'88), Budapest, Hungary, August.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "editor, Current Issues in Parsing Technologies",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Yves and Aravind K. Joshi. 1991. Parsing with lexicalized tree adjoining grammar. In M. Tomita, editor, Current Issues in Parsing Technologies. Kluwer Academic Publishers.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Parsing the Wall Street Journal with the inside-outside algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the European ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Y., M. Roth, and R. Osborne. 1993. Parsing the Wall Street Journal with the inside-outside algorithm. In Proceedings of the European ACL.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Parsing English with a Link Grammar",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Sleator",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Temperley",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sleator, Daniel and Davy Temperley. 1991. Parsing English with a Link Grammar. Technical Report CMU-CS-91-196, Department of Computer Sdence, Carnegie Mellon University.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Almost parsing\" technique for language modeling",
"authors": [
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of lCSLP96 Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas, B. 1996. \"Almost parsing\" technique for language modeling. In Proceedings of lCSLP96 Conference, Philadelphia, PA.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Complexity of Lexical Descriptions and its Relevance to Partial Parsing",
"authors": [
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas, B. 1997a. Complexity of Lexical Descriptions and its Relevance to Partial Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Performance evaluation of supertagging for partial parsing",
"authors": [
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas, B. 1997b. Performance evaluation of supertagging for partial parsing. In Proceedings of the International Workshop on Parsing Technologies, September.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Heuristics and parse ranking",
"authors": [
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 4th Annual International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas, B., Christine Doran, and Seth Kulick. 1995. Heuristics and parse ranking. In Proceedings of the 4th Annual International Workshop on Parsing Technologies, Prague, September.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Combinatory grammars and parasitic gaps",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1987,
"venue": "Natural Language and Linguistic Theory",
"volume": "5",
"issue": "",
"pages": "403--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 1987. Combinatory grammars and parasitic gaps. Natural Language and Linguistic Theory, 5:403--439.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "The Syntactic Interface",
"authors": [],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark editor. 1997. The Syntactic Interface. MIT Press, Cambridge, MA and London, England.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Syntactic analysis of natural language using linguistic rules and corpus-based patterns",
"authors": [
{
"first": "Pasi",
"middle": [],
"last": "Tapanainen",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "J~irvinen",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING'94)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapanainen, Pasi and Timo J~irvinen. 1994. Syntactic analysis of natural language using linguistic rules and corpus-based patterns. In Proceedings of the 15th International Conference on Computational Linguistics (COLING'94), Kyoto, Japan, August.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "A Study of Tree Adjoining Grammars",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay-Shanker, K. 1987. A Study of Tree Adjoining Grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Unification based tree adjoining grammars",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay-Shanker, K. and Aravind K. Joshi. 1991. Unification based tree adjoining grammars. In J. Wedekind, editor, Un~'cation-based Grammars. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Designing a Parsing Grammar. Publications of the Department of General Linguistics",
"authors": [
{
"first": "Atro",
"middle": [],
"last": "Voutilainen",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voutilainen, Atro. 1994. Designing a Parsing Grammar. Publications of the Department of General Linguistics, University of Helsinki.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Understanding line drawings of scenes with shadows",
"authors": [
{
"first": "D",
"middle": [],
"last": "Waltz",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waltz, D. 1975. Understanding line drawings of scenes with shadows. In P.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Psychology of Computer Vision",
"authors": [
{
"first": "Winston",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winston, editor, Psychology of Computer Vision, MIT Press.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Coping with ambiguity and unknown words through",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Palmucci",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel, Ralph, Richard Schwartz, Jeff Palmucci, Marie Meteer, and Lance Ramshaw. 1993. Coping with ambiguity and unknown words through",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "selection of the supertags associated with each word of the sentence: the purchase price includes two ancillary companies. Span of the supertag: Span of a supertag is the minimum number of lexical items that the supertag can coven Each substitution site of a supertag will cover at least one lexical item in the input. A simple rule can be used to eliminate supertags based on the span constraint: if the span of a supertag is larger than the input string, then the supertag cannot be used in any parse of the input string.",
"num": null
},
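The span constraint described in FIGREF0 lends itself to a direct implementation. Below is a minimal sketch of that filtering rule, assuming a hypothetical Supertag record that carries a precomputed minimum span; the class, field, and tree names are illustrative, not taken from the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Supertag:
    name: str       # e.g., an LTAG elementary-tree label (illustrative)
    min_span: int   # minimum lexical items the tree must cover:
                    # its anchor plus at least one item per substitution site

def filter_by_span(candidates, sentence_length):
    """Keep only supertags whose minimum span fits inside the input.

    Implements the rule in FIGREF0: a supertag whose span exceeds the
    input string's length cannot take part in any parse of that input.
    """
    return [t for t in candidates if t.min_span <= sentence_length]

# A three-word input rules out any supertag that must cover four items.
tags = [Supertag("alpha_nx0Vnx1", 3), Supertag("alpha_nx0Vnx1nx2", 4)]
print([t.name for t in filter_by_span(tags, 3)])  # -> ['alpha_nx0Vnx1']
```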
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "-SBJ\" (\"NNP .... Mr.\") (\"NNP .... Vinken\") ) (\"VP\" (\"VBZ .... is\") (\"NP-PRD\" (\"NP\" (\"NN .... chairman\") ) (\"PP\" (\"IN .... of\") (\"NP\" (\"NP\" (\"NNP .... Elsevier\") (\"NNP .... N.V.\") ) (-.... ,,) (\"NP\" (\"DT .... the\") (\"NNP .... Dutch\") (\"VBG\" \") ))))) (,,. .... .,,) ))",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Pr(W1, W2 ..... WN I T1, T2 ..... TN) ~ II Pr(Wi l Ti) an n-gram (trigram, in this case) approximation N Pr(T1, T2 ..... TN) ,,~ 1-I Pr(",
"num": null
},
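FIGREF2 factors the joint probability into a lexical term Pr(W_i | T_i) and a contextual trigram term Pr(T_i | T_{i-2}, T_{i-1}). A minimal sketch of scoring one candidate supertag sequence under this factorization, assuming smoothed probability estimates are supplied as callables (all names here are illustrative):

```python
import math

def sequence_log_prob(words, tags, p_word_given_tag, p_tag_trigram):
    """Score a supertag sequence under the factorization of FIGREF2:

        Pr(W | T) ~ prod_i Pr(W_i | T_i)
        Pr(T)     ~ prod_i Pr(T_i | T_{i-2}, T_{i-1})

    p_word_given_tag(w, t) and p_tag_trigram(t, t_prev2, t_prev1) are
    assumed to be smoothed estimates collected from a corpus of parses.
    """
    padded = ["<s>", "<s>"] + list(tags)  # boundary tags for the first contexts
    logp = 0.0
    for i, (w, t) in enumerate(zip(words, tags)):
        logp += math.log(p_word_given_tag(w, t))                       # lexical term
        logp += math.log(p_tag_trigram(t, padded[i], padded[i + 1]))   # contextual term
    return logp
```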
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "15r(T3IT1, T2) = Pr(T3]T1, T2) if Pr(T31T1, T2) > 0 = a(T1, T2) * Pr(T31T2) if Pr(T21T1) > 0 = Pr(T31T2) otherwise Pr(T2IT1) = Pr(T2IT1) if Pr(T2IT1) > 0 = fl(T1) * Prl(T2) otherwisewhere a(Ti, Tj) and fl(Tk) are constants to ensure that the probabilities sum to one.",
"num": null
},
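The backoff scheme in FIGREF4 falls through from trigram to bigram to unigram estimates when counts are missing. A minimal sketch of that fall-through, with the count-derived tables passed in as dictionaries; the fixed alpha and beta arguments are illustrative stand-ins for the context-dependent normalizing constants of the figure:

```python
def backoff_trigram(t3, t1, t2, tri, bi, uni, alpha=0.4, beta=0.4):
    """Backed-off estimate of Pr(t3 | t1, t2), following FIGREF4.

    tri and bi map n-gram tuples, and uni maps single tags, to
    relative-frequency estimates. alpha and beta are stand-ins for the
    weights alpha(T1, T2) and beta(T1) that make each distribution
    sum to one.
    """
    if tri.get((t1, t2, t3), 0.0) > 0.0:
        return tri[(t1, t2, t3)]
    if bi.get((t1, t2), 0.0) > 0.0:  # the bigram context was observed
        return alpha * backoff_bigram(t3, t2, bi, uni, beta)
    return backoff_bigram(t3, t2, bi, uni, beta)

def backoff_bigram(t3, t2, bi, uni, beta=0.4):
    """Backed-off estimate of Pr(t3 | t2)."""
    if bi.get((t2, t3), 0.0) > 0.0:
        return bi[(t2, t3)]
    return beta * uni.get(t3, 1e-9)  # unigram fallback; tiny floor for unseen tags
```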
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "For testing the performance of the trigram supertagger on the IBM Manual corpus, a set of 14,000 words 8 Sentences in wsJ Sections 15 through 18 of Penn Treebank. 9 Sentences in WSJ Sections 00 through 24, except Section 20 of Penn Treebank. 10 Sentences in WSJ Section 20 of Penn Treebank.",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "
System | Total # of Words | Average # of Supertags/Word |
Without Structural Constraints | 48,783 | 47.0 |
With Structural Constraints | 48,783 | 25.0 |
",
"text": "Supertag ambiguity with and without the use of structural constraints."
},
"TABREF9": {
"type_str": "table",
"html": null,
"num": null,
"content": "Data Set | Size of | Training | Size of | % Correct |
| Training Set | | Test Set | |
| (Words) | | (Words) | |
XTAG Parses | 8,000 | Unigram | 3,000 | 73.4% |
| | (Baseline) | | |
| | Trigram | 3,000 | 86.0% |
Converted | 200,000 | Unigram | | |
Penn Treebank | | (Baseline) | 47,000 | 75.3% |
Parses | | Trigram | 47,000 | 90.9% |
| 1,000,000 | Unigram | | |
| | (Baseline) | 47,000 | 77.2% |
| | Trigram | 47,000 | 92.2% |
",
"text": "Performance of the supertagger on the WSJ corpus."
},
"TABREF10": {
"type_str": "table",
"html": null,
"num": null,
"content": "",
"text": ""
},
"TABREF12": {
"type_str": "table",
"html": null,
"num": null,
"content": "Data Set | Size of | Size of | Training | % Correct |
| Test Set | Training Set | | |
| (Words) | (Words) | | |
Converted | 47,000 | 200,000 | Trigram | 90.9% |
Penn Treebank | | | (Best Supertag) | |
Parses | | | Trigram | 95.8% |
| | | (3-Best Supertags) | |
| | 1,000,000 | Trigram | 92.2% |
| | | (Best Supertag) | |
| | | Trigram | 97.1% |
| | | (3-Best Supertags) | |
",
"text": "Performance improvement of 3-best supertagger over the 1-best supertagger on the WSJ corpus."
}
}
}
}