{ "paper_id": "P95-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:34:09.834892Z" }, "title": "Some Novel Applications of Explanation-Based Learning to Parsing Lexicalized Tree-Adjoining Grammars\"", "authors": [ { "first": "B", "middle": [], "last": "Srinivas", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "postCode": "19104", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "postCode": "19104", "region": "PA", "country": "USA" } }, "email": "joshi@linc.cis.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present some novel applications of Explanation-Based Learning (EBL) technique to parsing Lexicalized Tree-Adjoining grammars. The novel aspects are (a) immediate generalization of parses in the training set, (b) generalization over recursive structures and (c) representation of generalized parses as Finite State Transducers. A highly impoverished parser called a \"stapler\" has also been introduced. We present experimental results using EBL for different corpora and architectures to show the effectiveness of our approach.", "pdf_parse": { "paper_id": "P95-1036", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present some novel applications of Explanation-Based Learning (EBL) technique to parsing Lexicalized Tree-Adjoining grammars. The novel aspects are (a) immediate generalization of parses in the training set, (b) generalization over recursive structures and (c) representation of generalized parses as Finite State Transducers. A highly impoverished parser called a \"stapler\" has also been introduced. We present experimental results using EBL for different corpora and architectures to show the effectiveness of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we present some novel applications of the so-called Explanation-Based Learning technique (EBL) to parsing Lexicalized Tree-Adjoining grammars (LTAG). EBL techniques were originally introduced in the AI literature by (Mitchell et al., 1986; Minton, 1988; van Harmelen and Bundy, 1988) . The main idea of EBL is to keep track of problems solved in the past and to replay those solutions to solve new but somewhat similar problems in the future. Although put in these general terms the approach sounds attractive, it is by no means clear that EBL will actually improve the performance of the system using it, an aspect which is of great interest to us here. Rayner (1988) was the first to investigate this technique in the context of natural language parsing. Seen as an EBL problem, the parse of a single sentence represents an explanation of why the sentence is a part of the language defined by the grammar. Parsing new sentences amounts to finding analogous explanations from the training sentences. As a special case of EBL, Samuelsson and *This work was partiaJly supported by ARC) grant DAAL03-89-0031, ARPA grant N00014-90-J-1863, NSF STC grsmt DIR-8920230, and Ben Franklin Partnership Program (PA) gremt 93S. 3078C-6 Rayner (1991) specialize a grammar for the ATIS domain by storing chunks of the parse trees present in a treebank of parsed examples. 
The idea is to reparse the training examples by letting the parse tree drive the rule expansion process and halting the expansion of a specialized rule if the current node meets a 'tree-cutting' criteria. However, the problem of specifying an optimal 'tree-cutting' criteria was not addressed in this work. Samuelsson (1994) used the information-theoretic measure of entropy to derive the appropriate sized tree chunks automatically. Neumann (1994) also attempts to specialize a grammar given a training corpus of parsed exampies by generalizing the parse for each sentence and storing the generalized phrasal derivations under a suitable index.", "cite_spans": [ { "start": 230, "end": 253, "text": "(Mitchell et al., 1986;", "ref_id": "BIBREF4" }, { "start": 254, "end": 267, "text": "Minton, 1988;", "ref_id": "BIBREF3" }, { "start": 268, "end": 297, "text": "van Harmelen and Bundy, 1988)", "ref_id": null }, { "start": 669, "end": 682, "text": "Rayner (1988)", "ref_id": "BIBREF7" }, { "start": 1230, "end": 1251, "text": "3078C-6 Rayner (1991)", "ref_id": null }, { "start": 1679, "end": 1696, "text": "Samuelsson (1994)", "ref_id": "BIBREF10" }, { "start": 1806, "end": 1820, "text": "Neumann (1994)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although our work can be considered to be in this general direction, it is distinct in that it exploits some of the key properties of LTAG to (a) achieve an immediate generalization of parses in the training set of sentences, (b) achieve an additional level of generalization of the parses in the training set, thereby dealing with test sentences which are not necessarily of the same length as the training sentences and (c) represent the set of generalized parses as a finite state transducer (FST), which is the first such use of FST in the context of EBL, to the best of our knowledge. Later in the paper, we will make some additional comments on the relationship between our approach and some of the earlier approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to these special aspects of our work, we will present experimental results evaluating the effectiveness of our approach on more than one kind of corpus. We also introduce a device called a \"stapler\", a considerably impoverished parser, whose only job is to do term unification and compute alternate attachments for modifiers. We achieve substantial speed-up by the use of \"stapler\" in combination with the output of the FST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. In Section 2 we provide a brief introduction to LTAG with the help of an example. In Section 3 we discuss our approach to using EBL and the advantages provided by LTAG. The FST representation used for EBL is illustrated in Section 4. In Section 5 we present the \"stapler\" in some detail. The results of some of the experiments based on our approach are presented in Section 6. In Section 7 we discuss the relevance of our approach to other lexicalized grammars. 
In Section 8 we conclude with some directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexicalized Tree-Adjoining Grammar (LTAG) (Schabes et al., 1988; Schabes, 1990) consists of ELE-MENTARY TREES, with each elementary tree having a lexical item (anchor) on its frontier. An elementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic (predicate-argument) constraints. Elementary trees are of two kinds -(a) INITIAL TREES and (b) AUX-ILIARY TREES. Nodes on the frontier of initial trees are marked as substitution sites by a '~'. Exactly one node on the frontier of an auxiliary tree, whose label matches the label of the root of the tree, is marked as a foot node by a '.'; the other nodes on the frontier of an auxiliary tree are marked as substitution sites. Elementary trees are combined by Substitution and Adjunction operations.", "cite_spans": [ { "start": 42, "end": 64, "text": "(Schabes et al., 1988;", "ref_id": "BIBREF11" }, { "start": 65, "end": 79, "text": "Schabes, 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexicalized Tree-Adjoining Grammar", "sec_num": "2" }, { "text": "Each node of an elementary tree is associated with the top and the bottom feature structures (FS). The bottom FS contains information relating to the subtree rooted at the node, and the top FS contains information relating to the supertree at that node. 1 The features may get their values from three different sources such as the morphology of anchor, the structure of the tree itself, or by unification during the derivation process. FS are manipulated by substitution and adjunction as shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 498, "end": 506, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Lexicalized Tree-Adjoining Grammar", "sec_num": "2" }, { "text": "The initial trees (as) and auxiliary trees (/3s) for the sentence show me the flights from Boston to Philadelphia are shown in Figure 2 . Due to the limited space, we have shown only the features on the al tree. The result of combining the elementary trees 1Nodes marked for substitution are associated with only the top FS.", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Lexicalized Tree-Adjoining Grammar", "sec_num": "2" }, { "text": "shown in Figure 2 is the derived tree, shown in Figure 2(a) . The process of combining the elementary trees to yield a parse of the sentence is represented by the derivation tree, shown in Figure 2 (b). The nodes of the derivation tree are the tree names that are anchored by the appropriate lexical items. The combining operation is indicated by the nature of the arcs-broken line for substitution and bold line for adjunction-while the address of the operation is indicated as part of the node label. 
The derivation tree can also be interpreted as a dependency tree 2 with unlabeled arcs between words of the sentence as shown in Figure 2 (c).", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 2", "ref_id": null }, { "start": 48, "end": 59, "text": "Figure 2(a)", "ref_id": null }, { "start": 189, "end": 197, "text": "Figure 2", "ref_id": null }, { "start": 632, "end": 640, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Lexicalized Tree-Adjoining Grammar", "sec_num": "2" }, { "text": "Elementary trees of LTAG are the domains for specifying dependencies. Recursive structures are specified via the auxiliary trees. The three aspects of LTAG -(a) lexicalization, (b)-extended domain of locality and (c) factoring of recursion, provide a natural means for generalization during the EBL pro-ce88.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicalized Tree-Adjoining Grammar", "sec_num": "2" }, { "text": "We are pursuing the EBL approach in the context of a wide-coverage grammar development system called XTAG (Doran et al., 1994) . The XTAG system consists of a morphological analyzer, a part-ofspeech tagger, a wide-coverage LTAG English grammar, a predictive left-to-right Early-style parser for LTAG (Schabes, 1990) and an X-windows interface for grammar development (Paroubek et al., 1992) . Figure 3 shows a flowchart of the XTAG system. The input sentence is subjected to morphological analysis and is parts-of-speech tagged before being sent to the parser. The parser retrieves the elementary trees that the words of the sentence anchor and combines them by adjunction and substitution operations to derive a parse of the sentence. Given this context, the training phase of the EBL process involves generalizing the derivation trees generated by XTAG for a training sentence and storing these generalized parses in the generalized parse 2There axe some differences between derivation trees and conventional dependency trees. However we will not discuss these differences in this paper as they are not relevant to the present work. Figure 4 . An index using the morphological features of the words in the input sentence is computed. Using this index, a set of generalized parses is retrieved from the generalized parse database created in the training phase. If the retrieval fails to yield any generalized parse then the input sentence is parsed using the full parser. However, if the retrieval succeeds then the generalized parses are input to the \"stapler\". Section 5 provides a description of the \"stapler\".", "cite_spans": [ { "start": 106, "end": 126, "text": "(Doran et al., 1994)", "ref_id": "BIBREF1" }, { "start": 300, "end": 315, "text": "(Schabes, 1990)", "ref_id": null }, { "start": 367, "end": 390, "text": "(Paroubek et al., 1992)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 393, "end": 401, "text": "Figure 3", "ref_id": null }, { "start": 1135, "end": 1143, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "3.1 Implications of LTAG representation for EBL An LTAG parse of a sentence can be seen as a sequence of elementary trees associated with the lexical items of the sentence along with substitution and adjunction links among the elementary trees. Also, the feature values in the feature structures of each node of every elementary tree are instantiated by the parsing process. 
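Concretely, such a parse can be encoded with one record per word; the field and function names in the sketch below are ours, purely for exposition, and not XTAG's actual data structures. The generalization step described next then amounts to blanking out the lexical anchor and the feature values it contributed:

from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class ParseItem:
    word: str            # lexical anchor, e.g. 'flights'
    pos: str             # part of speech of the anchor, e.g. 'N'
    tree: str            # elementary tree the word anchors, e.g. 'alpha4'
    link: tuple          # (host tree, operation, node address) for substitution/adjunction
    feats: dict = field(default_factory=dict)  # instantiated feature values

def feature_generalize(parse):
    # Uninstantiate the anchor and the feature values contributed by its
    # morphology and by the derivation; keep the tree names and the links.
    return [replace(item, word='', feats={}) for item in parse]
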
Given an LTAG parse, the generalization of the parse is truly immediate in that a generalized parse is obtained by (a) uninstantiating the particular lexical items that anchor the individual elementary trees in the parse and (h) uninstantiating the feature values contributed by the morphology of the anchor and the derivation process. This type of generalization is called feature-generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "In other EBL approaches (Rayner, 1988; Neumann, 1994; Samuelsson, 1994) it is necessary to walk up and down the parse tree to determine the appropriate subtrees to generalize on and to suppress the feature values. In our approach, the process of generalization is immediate, once we have the output of the parser, since the elementary trees anchored by the words of the sentence define the subtrees of the parse for generalization. Replacing the elementary trees with unistantiated feature values is all that is needed to achieve this generalization.", "cite_spans": [ { "start": 24, "end": 38, "text": "(Rayner, 1988;", "ref_id": "BIBREF7" }, { "start": 39, "end": 53, "text": "Neumann, 1994;", "ref_id": "BIBREF5" }, { "start": 54, "end": 71, "text": "Samuelsson, 1994)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "The generalized parse of a sentence is stored indexed on the part-of-speech (POS) sequence of the training sentence. In the application phase, the POS sequence of the input sentence is used to retrieve a generalized parse(s) which is then instantiated with the features of the sentence. This method of retrieving a generalized parse allows for parsing of sentences of the same lengths and the same POS sequence as those in the training corpus. However, in our approach there is another generalization that falls out of the LTAG representation which allows for flexible matching of the index to allow the system to parse sentences that are not necessarily of the same length as any sentence in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "Auxiliary trees in LTAG represent recursive structures. So if there is an auxiliary tree that is used in an LTAG parse, then that tree with the trees for its arguments can be repeated any number of times, or possibly omitted altogether, to get parses of sentences that differ from the sentences of the training corpus only in the number of modifiers. This type of generalization is called modifier-generalization. This type of generalization is not possible in other EBL approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "This implies that the POS sequence covered by the auxiliary tree and its arguments can be repeated zero or more times. As a result, the index of a generalized parse of a sentence with modifiers is no longer a string but a regular expression pattern on the POS sequence and retrieval of a generalized parse involves regular expression pattern matching on the indices. 
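As a rough sketch of this retrieval step (the database layout, pattern syntax, and helper names here are ours, purely for illustration; a concrete ATIS example follows immediately below):

import re

# Hypothetical generalized-parse database: each entry pairs a regular
# expression over the POS sequence with a stored generalized parse.
parse_db = [
    (re.compile(r'V N D N( P N)*'), 'generalized parse for: show me the flights ...'),
]

def retrieve(pos_tags):
    # pos_tags is the POS sequence of the pretagged test sentence,
    # e.g. ['V', 'N', 'D', 'N', 'P', 'N', 'P', 'N', 'P', 'N']
    key = ' '.join(pos_tags)
    return [gp for pattern, gp in parse_db if pattern.fullmatch(key)]

In the actual system the stored patterns are compiled into a finite state transducer whose arcs also carry the elementary-tree assignments (Section 4); the lookup above only illustrates the matching of POS indices.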
If, for example, the training example was", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "(1) Show/V me/N the/D fiights/N from/P Boston/N to/P Philadelphia/N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "then, the index of this sentence is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "(2) VNDN(PN)* since the two prepositions in the parse of this sentence would anchor (the same) auxiliary trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "The most efficient method of performing regular expression pattern matching is to construct a finite state machine for each of the stored patterns and then traverse the machine using the given test pattern. If the machine reaches the final state, then the test pattern matches one of the stored patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "Given that the index of a test sentence matches one of the indices from the training phase, the generalized parse retrieved will be a parse of the test sentence, modulo the modifiers. For example, if the test sentence, tagged appropriately, is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "(3) Show/V me/S the/D flights/N from/P Boston/N to/P Philadelphia/N on/P Monday/N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "then, Mthough the index of the test sentence matches the index of the training sentence, the generalized parse retrieved needs to be augmented to accommodate the additional modifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "To accommodate the additional modifiers that may be present in the test sentences, we need to provide a mechanism that assigns the additional modifiers and their arguments the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "1. The elementary trees that they anchor and 2. The substitution and adjunction links to the trees they substitute or adjoin into.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "We assume that the additional modifiers along with their arguments would be assigned the same elementary trees and the same substitution and adjunction links as were assigned to the modifier and its arguments of the training example. This, of course, means that we may not get all the possible attachments of the modifiers at this time. (but see the discussion of the \"stapler\" Section 5.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach to using EBL", "sec_num": "3" }, { "text": "The representation in Figure 6 combines the generalized parse with the POS sequence (regular expression) that it is indexed by. 
The idea is to annotate each of the finite state arcs of the regular expression matcher with the elementary tree associated with that POS and also indicate which elementary tree it would be adjoined or substituted into. This results in a Finite State Transducer (FST) representation, illustrated by the example below. Consider the sentence (4) with the derivation tree in Figure 5 . An alternate representation of the derivation tree that is similar to the dependency representation, is to associate with each word a tuple (this_tree, head_word, head_tree, number). The description of the tuple components is given in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 500, "end": 508, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 746, "end": 753, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "FST Representation", "sec_num": "4" }, { "text": "Following this notation, the derivation tree in : the tree anchored by the head word; \"-\" if the current word does not depend on any other word. number : a signed number that indicates the direction and the ordinal position of the particular head elementary tree from the position of the current word OR : an unsigned number that indicates the Gorn-address (i.e., the node address) in the derivation tree to which the word attaches OR : \"-\" if the current word does not depend on any other word. /(fl2, N, a4, 2 ", "cite_spans": [], "ref_spans": [ { "start": 496, "end": 511, "text": "/(fl2, N, a4, 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "FST Representation", "sec_num": "4" }, { "text": ") N/(a~, V,al,-1) N/(c~4,V, C~l,-1) N/(as, P, fl,-1))* N/(a6, P, fl,-1))*", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FST Representation", "sec_num": "4" }, { "text": "After generalization, the trees /h and f12 are no longer distinct so we denote them by ft. The trees a5 and a6 are also no longer distinct, so we denote them by a. With this change in notation, the two Kleene star regular expressions in (6) can be merged into one, and the resulting representation is (7) which can be seen as a path in an FST as in Figure 6 . This FST representation is possible due to the lexicalized nature of the elementary trees. This representation makes a distinction between dependencies between modifiers and complements. The number in the tuple associated with each word is a signed number if a complement dependency is being expressed and is an unsigned number if a modifier dependency is being expressed, s", "cite_spans": [], "ref_spans": [ { "start": 349, "end": 357, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "FST Representation", "sec_num": "4" }, { "text": "In this section, we introduce a device called \"stapler\", a very impoverished parser that takes as input the result of the EBL lookup and returns the parse(s) for the sentence. The output of the EBL lookup is a sequence of elementary trees annotated with dependency links -an almost parse. To construct a complete parse, the \"stapler\" performs the following tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stapler", "sec_num": "5" }, { "text": "\u2022 Identify the nature of link: The dependency links in the almost parse are to be distinguished as either substitution links or adjunction links. 
This task is extremely straightforward since the types (initial or auxiliary) of the elementary trees a dependency link connects identifies the nature of the link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stapler", "sec_num": "5" }, { "text": "\u2022 Modifier Attachment: The EBL lookup is not guaranteed to output all possible modifierhead dependencies for a give input, since the modifier-generalization assigns the same modifier-head link, as was in the training example, to all the additional modifiers. So it is the task of the stapler to compute all the alternate attachments for modifiers. \u2022 Address of Operation: The substitution and adjunction links are to be assigned a node address to indicate the location of the operation. The \"staPler\" assigns this using the structure of 3In a complement auxiliary tree the anchor subcategorizes for the foot node, which is not the case for a modifier auxiliaxy tree. the elementary trees that the words anchor and their linear order in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stapler", "sec_num": "5" }, { "text": "Feature Instantiation: The values of the features on the nodes of the elementary trees are to be instantiated by a process of unification. Since the features in LTAGs are finite-valued and only features within an elementary tree can be co-indexed, the \"stapler\" performs termunification to instantiate the features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stapler", "sec_num": "5" }, { "text": "We now present experimental results from two different sets of experiments performed to show the effectiveness of our approach. The first set of experiments, (Experiments l(a) through 1(c)), are intended to measure the coverage of the FST representation of the parses of sentences from a range of corpora (ATIS, IBM-Manual and Alvey). The results of these experiments provide a measure of repetitiveness of patterns as described in this paper, at the sentence level, in each of these corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": null }, { "text": "The details of the experiment with the ATIS corpus are as follows. A total of 465 sentences, average length of 10 words per sentence, which had been completely parsed by the XTAG system were randomly divided into two sets, a training set of 365 sentences and a test set of 100 sentences, using a random number generator. For each of the training sentences, the parses were ranked using heuristics 4 and the top three derivations were generMized and stored as an FST. The FST was tested for retrieval of a generalized parse for each of the test sentences that were pretagged with the correct POS sequence (In Experiment 2, we make use of the POS tagger to do the tagging). When a match is found, the output of the EBL component is a generalized parse that associates with each word the elementary tree that it anchors and the elementary tree into which it adjoins or substitutes into -an almost parse, s 4We axe not using stochastic LTAGs. For work on Stochastic LTAGs see (Resnik, 1992; Schabes, 1992) .", "cite_spans": [ { "start": 972, "end": 986, "text": "(Resnik, 1992;", "ref_id": "BIBREF8" }, { "start": 987, "end": 1001, "text": "Schabes, 1992)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment l(a):", "sec_num": null }, { "text": "SSee (Joshi and Srinivas, 1994) for the role of almost parse in supertag disaanbiguation. 
Experiment l(b) and 1(c): Similar experiments were conducted using the IBM-manual corpus and a set of noun definitions from the LDOCE dictionary that were used as the Alvey test set (Carroll, 1993) .", "cite_spans": [ { "start": 5, "end": 31, "text": "(Joshi and Srinivas, 1994)", "ref_id": "BIBREF2" }, { "start": 272, "end": 287, "text": "(Carroll, 1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment l(a):", "sec_num": null }, { "text": "Results of these experiments are summarized in Table 2 . The size of the FST obtained for each of the corpora, the coverage of the FST and the traversal time per input are shown in this table. The coverage of the FST is the number of inputs that were assigned a correct generalized parse among the parses retrieved by traversing the FST.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment l(a):", "sec_num": null }, { "text": "Since these experiments measure the performance of the EBL component on various corpora we will refer to these results as the 'EBL-Lookup times'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment l(a):", "sec_num": null }, { "text": "The second set of experiments measure the performance improvement obtained by using EBL within the XTAG system on the ATIS corpus. The performance was measured on the same set of 100 sentences that was used as test data in Experiment l(a). The FST constructed from the generalized parses of the 365 ATIS sentences used in experiment l(a) has been used in this experiment as well. Experiment 2(a): The performance of XTAG on the 100 sentences is shown in the first row of Table 3 . The coverage represents the percentage of sentences that were assigned a parse. Experiment 2(b): This experiment is similar to Experiment l(a). It attempts to measure the coverage and response times for retrieving a generalized parse from the FST. The results are shown in the second row of Table 3 . The difference in the response times between this experiment and Experiment l(a) is due to the fact that we have included here the times for morphological analysis and the POS tagging of the test sentence. As before, 80% of the sentences were assigned a generalized parse. However, the speedup when compared to the XTAG system is a factor of about 60. Experiment 2(c): The setup for this experiment is shown in Figure 7 . The almost parse from the EBL lookup is input to the full parser of the XTAG system. The full parser does not take advantage of the dependency information present in the almost parse, however it benefits from the elementary tree assignment to the words in it. This information helps the full parser, by reducing the ambiguity of assigning a correct elementary tree sequence for the words of the sentence. The speed up shown in the third row of Table 3 is entirely due to this ambiguity reduction. If the EBL lookup fails to retrieve a parse, which happens for 20% of the sentences, then the tree assignment ambiguity is not reduced and the full parser parses with all the trees for the words of the sentence. 
The drop in coverage is due to the fact that for 10% of the sentences, the generalized parse retrieved could not be instantiated to the features of the sentence.", "cite_spans": [], "ref_spans": [ { "start": 471, "end": 478, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 772, "end": 779, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1193, "end": 1201, "text": "Figure 7", "ref_id": "FIGREF8" }, { "start": 1648, "end": 1655, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiment l(a):", "sec_num": null }, { "text": "Coverage % Average time (in es) Figure 4 . In this experiment, the almost parse resulting from the EBL lookup is input to the \"stapler\" that generates all possible modifier attachments and performs term unification thus generating all the derivation trees. The \"stapler\" uses both the elementary tree assignment information and the dependency information present in the almost parse and speeds up the performance even further, by a factor of about 15 with further decrease in coverage by 10% due to the same reason as mentioned in Experiment 2(c). However the coverage of this system is limited by the coverage of the EBL lookup. The results of this experiment are shown in the fourth row of Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 40, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 692, "end": 699, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Relevance to other lexicalized grammars Some aspects of our approach can be extended to other lexicalized grammars, in particular to categorial grammars (e.g. Combinatory Categorial Grammar (CCG) (Steedman, 1987) ). Since in a categorial grammar the category for a lexical item includes its arguments, the process of generalization of the parse can also be immediate in the same sense of our approach. The generalization over recursive structures in a categorial grammar, however, will require further annotations of the proof trees in order to identify the 'anchor' of a recursive structure. If a lexical item corresponds to a potential recursive structure then it will be necessary to encode this information by making the result part of the functor to be X --+ X. Further annotation of the proof tree will be required to keep track of dependencies in order to represent the generalized parse as an FST.", "cite_spans": [ { "start": 196, "end": 212, "text": "(Steedman, 1987)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "7", "sec_num": null }, { "text": "In this paper, we have presented some novel applications of EBL technique to parsing LTAG. We have also introduced a highly impoverished parser called the \"stapler\" that in conjunction with the EBL resuits in a speed up of a factor of about 15 over a system without the EBL component. To show the effectiveness of our approach we have also discussed the performance of EBL on different corpora, and different architectures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "As part of the future work we will extend our approach to corpora with fewer repetitive sentence patterns. 
We propose to do this by generalizing at the phrasal level instead of at the sentence level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Practical Unification-based Parsing of Natural Language", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll. 1993. Practical Unification-based Parsing of Natural Language. University of Cambridge, Com- puter Laboratory, Cambridge, England.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "XTAG System -A Wide Coverage Grammar for English", "authors": [ { "first": "Christy", "middle": [], "last": "Doran", "suffix": "" }, { "first": "Dahlia", "middle": [], "last": "Egedi", "suffix": "" }, { "first": "Beth", "middle": [ "Ann" ], "last": "Hockey", "suffix": "" }, { "first": "B", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Zaidel", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 17 *h International Conference on Computational Linguistics (COLING '9~)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christy Doran, DahLia Egedi, Beth Ann Hockey, B. Srini- vas, and Martin Zaidel. 1994. XTAG System -A Wide Coverage Grammar for English. In Proceedings of the 17 *h International Conference on Computational Lin- guistics (COLING '9~), Kyoto, Japan, August.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Disambigu~-tion of Super Parts of Speech (or Supertags): Almost Parsing", "authors": [ { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "B", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Srinivas", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 17 th International Con-]erence on Computational Linguistics (COLING '9~)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi and B. Srinivas. 1994. Disambigu~- tion of Super Parts of Speech (or Supertags): Almost Parsing. In Proceedings of the 17 th International Con- ]erence on Computational Linguistics (COLING '9~), Kyoto, Japan, August.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Qunatitative Results concerning the utility of Explanation-Based Learning", "authors": [ { "first": "Steve", "middle": [], "last": "Minton", "suffix": "" } ], "year": 1988, "venue": "Proceedings of 7 ~h AAAI Conference", "volume": "", "issue": "", "pages": "564--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve Minton. 1988. Qunatitative Results concerning the utility of Explanation-Based Learning. In Proceed- ings of 7 ~h AAAI Conference, pages 564-569, Saint Paul, Minnesota.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Explanation-Based Generalization: A Unifying View", "authors": [ { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" }, { "first": "Richard", "middle": [ "M" ], "last": "Keller", "suffix": "" }, { "first": "Smadax", "middle": [ "T" ], "last": "Kedar-Carbelli", "suffix": "" } ], "year": 1986, "venue": "Machine Learning", "volume": "1", "issue": "", "pages": "47--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom M. Mitchell, Richard M. Keller, and Smadax T. Kedar-Carbelli. 1986. 
Explanation-Based Generaliza- tion: A Unifying View. Machine Learning 1, 1:47-80.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Application of Explanationbased Learning for Efficient Processing of Constraintbased Grammars", "authors": [ { "first": "Gfinter", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 1994, "venue": "10 th IEEE Conference on Artificial Intelligence for Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gfinter Neumann. 1994. Application of Explanation- based Learning for Efficient Processing of Constraint- based Grammars. In 10 th IEEE Conference on Artifi- cial Intelligence for Applications, Sazt Antonio, Texas.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Xtag -a graphical workbench for developing tree-adjoining grammars", "authors": [ { "first": "Patrick", "middle": [], "last": "Paroubek", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1992, "venue": "Third Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Paroubek, Yves Schabes, and Aravind K. Joshi. 1992. Xtag -a graphical workbench for developing tree-adjoining grammars. In Third Conference on Ap- plied Natural Language Processing, Trento, Italy.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Applying Explanation-Based Generalization to Natural Langua4ge Processing", "authors": [ { "first": "Manny", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the International Conference on Fifth Generation Computer Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manny Rayner. 1988. Applying Explanation-Based Generalization to Natural Langua4ge Processing. In Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Probabilistic tree-adjoining grammax as a framework for statistical natural language processing", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING '9~)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1992. Probabilistic tree-adjoining gram- max as a framework for statistical natural language processing. In Proceedings of the Fourteenth In- ternational Conference on Computational Linguistics (COLING '9~), Ntntes, France, July.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Quantitative Evaluation of Explanation-Based Learning as an Optimization Tool for Large-Scale Natural Laatguage System", "authors": [], "year": 1991, "venue": "Proceedings of the I~ h Interna. tional Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christer Samuelsson aJad Manny Rayner. 1991. Quan- titative Evaluation of Explanation-Based Learning as an Optimization Tool for Large-Scale Natural Laat- guage System. In Proceedings of the I~ h Interna. 
tional Joint Conference on Artificial Intelligence, Syd- ney, Australia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Grammar Specialization through Entropy Thresholds", "authors": [ { "first": "Chister", "middle": [], "last": "Samuelsson", "suffix": "" } ], "year": 1994, "venue": "32nd Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chister Samuelsson. 1994. Grammar Specialization through Entropy Thresholds. In 32nd Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "parsing strategies with 'lexicalized' grammars: Application to \"l~ee Adjoining Grammars", "authors": [ { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Abeill~", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Ajad", "suffix": "" }, { "first": "", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 12 *4 International Con/erence on Computational Linguistics ( COLIN G '88)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Schabes, Anne Abeill~, aJad Aravind K. Joshi. 1988. parsing strategies with 'lexicalized' grammars: Application to \"l~ee Adjoining Grammars. In Pro- ceedings of the 12 *4 International Con/erence on Com- putational Linguistics ( COLIN G '88), Budapest, Hun- gary, August.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mathematical and Computational Aspects of Lexicalized Grammars", "authors": [ { "first": "Yves", "middle": [], "last": "Sch&bes", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Sch&bes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Com- puter Science Department, University of Pennsylva- nia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stochastic lexicalized treeadjoining grammars", "authors": [ { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings o] the Fourteenth International Con]erence on Computational Linguistics (COLING '9~)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In Proceedings o] the Fourteenth International Con]erence on Computational Linguis- tics (COLING '9~), Nantes, Fr&ace, July.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Evaluating a wide-coverage grammar", "authors": [ { "first": "B", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Dora", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Kullck", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Srinivas, Christine Dora,s, Seth Kullck, and Anoop Sarkar. 1994. Evaluating a wide-coverage grammar. Manuscript, October.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combinatory Graanmaxs and Paxasitic Gaps. 
Natural Language and Linguistic Theory", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 1987, "venue": "", "volume": "5", "issue": "", "pages": "403--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 1987. Combinatory Graanmaxs and Paxasitic Gaps. Natural Language and Linguistic The- ory, 5:403-439.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Explemation-Based Generafization --Paxtial Evaluation", "authors": [], "year": null, "venue": "Artificial Intelligence", "volume": "36", "issue": "", "pages": "401--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Explemation-Based Generafization --Paxtial Evalua- tion. Artificial Intelligence, 36:401-412.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 1: Substitution and Adjunction in LTAG" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "(as and/~s) Elementary trees, (a) Derived Tree, (b) Derivation Tree, and (c) Dependency tree for the sentence: show me the flights from Boston to Philadelphia. Flowchart of the XTAG system with the EBL component database under an index computed from the morphological features of the sentence. The application phase of EBL is shown in the flowchart in" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "show me the flights from Boston to Philadelphia." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "(without the addresses of operations) is represented as in (5). Derivation Tree for the sentence: show me the flights from Boston to Philadelphia this_tree : the elementary tree that the word anchors head_word : the word on which the current word is dependent on; \"-\" if the current word does not depend on any other word. head_tree" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "al, -, -, -) the/(a3, flights, ~4,+1) from/(fll, flights, a4, 2) to/(fi2, flights,a4, 2) me/(a2, show,al,-l) fiights/ ( a4,show , ~I , -I ) Boston/(as, from, fll -1) Philadelphia/(as, to, f12,-1)Generalization of this derivation tree results in the representation in (6)." }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "Finite State Transducer Representation for the sentences: show me the flights from Boston to Philadelphia, show me the flights from Boston to Philadelphia on Monday, ..." }, "FIGREF7": { "type_str": "figure", "uris": null, "num": null, "text": "s ................. .i l~.ivsttm llm" }, "FIGREF8": { "type_str": "figure", "uris": null, "num": null, "text": "System Setup for Experiment 2(c)." }, "TABREF0": { "html": null, "type_str": "table", "num": null, "content": "", "text": "Description of the tuple components" }, "TABREF2": { "html": null, "type_str": "table", "num": null, "content": "
", "text": "Coverage and Retrieval times for various corpora" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "content": "
Experiment 2(d): The setup for this experiment is shown in
", "text": "Performance comparison of XTAG with and without EBL component" } } } }