{ "paper_id": "P94-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:19:15.564887Z" }, "title": "GENERALIZED CHART ALGORITHM: AN EFFICIENT PROCEDURE FOR COST-BASED ABDUCTION", "authors": [ { "first": "Yasuharu", "middle": [], "last": "Den", "suffix": "", "affiliation": { "laboratory": "ATR Interpreting Telecommunications Research Laboratories", "institution": "", "location": { "addrLine": "2-2 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-02", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an efficient procedure for cost-based abduction, which is based on the idea of using chart parsers as proof procedures. We discuss in detail three features of our algorithm-goal-driven bottom-up derivation, tabulation of the partial results, and agenda control mechanism-and report the results of the preliminary experiments, which show how these features improve the computational efficiency of cost-based abduction.", "pdf_parse": { "paper_id": "P94-1030", "_pdf_hash": "", "abstract": [ { "text": "We present an efficient procedure for cost-based abduction, which is based on the idea of using chart parsers as proof procedures. We discuss in detail three features of our algorithm-goal-driven bottom-up derivation, tabulation of the partial results, and agenda control mechanism-and report the results of the preliminary experiments, which show how these features improve the computational efficiency of cost-based abduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Spoken language understanding is one of the most challenging research areas in natural language processing. Since spoken language is incomplete in various ways, i.e., containing speech errors, ellipsis, metonymy, etc., spoken language understanding systems should have the ability to process incomplete inputs by hypothesizing the underlying information. The abduction-based approach (Hobbs et al., 1988) has provided a simple and elegant way to realize such a task.", "cite_spans": [ { "start": 384, "end": 404, "text": "(Hobbs et al., 1988)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Consider the following 3apanese sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "(1) Sfseki kat-ta (a famous writer) buy PAST This sentence contains two typical phenomena arising in spoken language, i.e., metonymy and the ellipsis of a particle. When this sentence is uttered under the situation where the speaker reports his experience, its natural interpretation is the speaker bought a SSseki novel. To derive this interpretation, we need to resolve the following problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 The metonymy implied by the noun phrase S6seki is expanded to a S6seki novel, based on the pragmatic knowledge that the name of a writer is sometimes used to refer to his novel. 
\u2022 The particle-less thematic relation between the verb katta and the noun phrase SSseki is determined to be the object case relation, based on the semantic knowledge that the object case relation between a trading action and a commodity can be linguistically expressed as a thematic relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "This interpretation is made by abduction. For instance, the above semantic knowledge is stated, in terms of the predicate logic, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "(2) sem(e,x) C trade(e) A commodity(x) A obj (e,x) Then, the inference process derives the consequent sem(e,x) by hypothesizing an antecedent obj(e,x), which is never proved from the observed facts. This process is called abduction.", "cite_spans": [ { "start": 45, "end": 50, "text": "(e,x)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Of course, there may be several other possibilities that support the thematic relation sem (e,x) . For instance, the thematic relation being determined to be the agent case relation, sentence (1) can have another interpretation, i.e., Sfseki bought something, which, under some other situations, might be more feasible than the first interpretation. To cope with feasibility, the abduction-based model usually manages the mechanism for evaluating the goodness of the interpretation. This is known as cost-based abduction (Hobbs et al., 1988) .", "cite_spans": [ { "start": 91, "end": 96, "text": "(e,x)", "ref_id": null }, { "start": 521, "end": 541, "text": "(Hobbs et al., 1988)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In cost-based abduction, each assumption bears a certain cost. For instance, the assumption obj(e,x), introduced by applying rule (2), is specified to have a cost of, say, $2. The goodness of the interpretation is evaluated by accumulating the costs of all the assumptions involved. The whole process of interpreting an utterance is depicted in the following schema:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "1. Find all possible interpretations, and 2. Select the one that has the lowest cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In our example, the interpretation that assumes the thematic relation to be the object case relation, with the metonymy being expanded to a S6seki novel, is cheaper than the interpretation that assumes the thematic relation to be the agent case relation; hence, the former is selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "An apparent problem here is the high computational cost; because abduction allows many possibilities, the schema involves very heavy computation. Particularly in the spoken language understanding task, we need to consider a great number of possibilities when hypothesizing various underlying information. This makes the abduction process computationally demanding, and reduces the practicality of abduction-based systems. The existing models do not provide any basic solution to this problem. 
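To make this schema concrete, the following is a purely illustrative sketch (not the procedure proposed in this paper): a naive prover, written in Python over a hand-ground, propositional version of the rules for example (1), that enumerates every abductive proof of sem(e,x) and then selects the cheapest. The rule base and all costs are invented for the illustration.

```python
from itertools import product

# Hand-ground, purely illustrative rule base mirroring example (1); the real
# system works with rules containing variables, which this sketch ignores.
RULES = {
    "sem(e,x)": [["trade(e)", "commodity(x)", "obj(e,x)"],
                 ["intend(e)", "person(x)", "agent(e,x)"]],
    "intend(e)":    [["trade(e)"]],
    "trade(e)":     [["buy(e)"]],
    "commodity(x)": [["book(x)"]],
    "book(x)":      [["novel(x)"]],
    "person(x)":    [["soseki(x)"]],
}
FACTS = {"buy(e)", "soseki(x)"}
# Assumable literals and their (invented) costs, e.g. $2 for obj(e,x).
ASSUMPTION_COST = {"obj(e,x)": 2, "agent(e,x)": 20, "novel(x)": 8}

def prove(goal):
    """Return every interpretation of `goal` as a (assumption set, cost) pair."""
    if goal in FACTS:
        return [(frozenset(), 0)]
    proofs = []
    for body in RULES.get(goal, []):
        # Combine one sub-proof for each antecedent (Cartesian product).
        for combo in product(*(prove(b) for b in body)):
            assumed = frozenset().union(*(s for s, _ in combo))
            proofs.append((assumed, sum(ASSUMPTION_COST[a] for a in assumed)))
    if goal in ASSUMPTION_COST:          # abduction: simply assume the goal
        proofs.append((frozenset([goal]), ASSUMPTION_COST[goal]))
    return proofs

# Step 1: find all interpretations; step 2: select the cheapest one.
print(min(prove("sem(e,x)"), key=lambda p: p[1]))
# -> (frozenset({'obj(e,x)', 'novel(x)'}), 10): the object-case reading wins
```

Because this naive enumeration multiplies out every combination of sub-proofs and recomputes shared subgoals, its cost grows rapidly with the number of rules and assumable literals; the algorithm developed below is aimed at exactly this blow-up.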
Charniak (Charniak and Husain, 1991; Charniak and Santos Jr., 1992) dealt with the problem, but those solutions are applicable only to the propositional case, where the search space is represented as a directed graph over ground formulas.", "cite_spans": [ { "start": 502, "end": 529, "text": "(Charniak and Husain, 1991;", "ref_id": "BIBREF0" }, { "start": 530, "end": 560, "text": "Charniak and Santos Jr., 1992)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In other words, they did not provide a way to build such graphs from rules, which, in general, contain variables and can be recursive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "This paper provides a basic and practical solution to the computation problem of cost-based abduction. The basic idea comes from the natural language parsing literature. As Pereira and Warren (1983) pointed out, there is a strong connection between parsing and deduction. They showed that parsing of DCG can be seen as a special case of deduction of Horn clauses; conversely, deduction can be seen as a generalization of parsing. Their idea of using chart parsers as deductive-proof procedures can easily be extended to the idea of using chart parsers as abductive-proof procedures. Because chart parsers have many advantages from the viewpoint of computational efficiency, chart-based abductive-proof procedures are expected to nicely solve the computation problem. Our algorithm, proposed in this paper, has the following features, which considerably enhance the computational efficiency of cost-based abduction:", "cite_spans": [ { "start": 173, "end": 198, "text": "Pereira and Warren (1983)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "1. Goal-driven bottom-up derivation, which reduces the search space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "2. Tabulation of the partial results, which avoids the recomputation of the same goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "3. Agenda control mechanism, which realizes various search strategies to find the best solution efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The rest of the paper is organized as follows. First, we explain the basic idea of our algorithm, and then present the details of the algorithm along with simple examples. Next, we report the results of the preliminary experiments, which clearly show how the above features of our algorithm improve the computational efficiency. Then, we compare our algorithm with Pereira and Warren's algorithm, and finally conclude the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Pereira and Warren showed that chart parsers can be used as proof procedures; they presented the Earley deduction proof procedure, that is a generalization of top-down chart parsers. However, they mentioned only top-down chart parsers, which is not always very efficient compared to bottom-up (left-corner) chart parsers. It seems that using leftcorner parsers as proof procedures is not so easy, . . . . . . . . . . . 
' :\"' \"' \" Let us begin with the general problems of Horn clause deduction with naive top-down and bottomup derivations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "\u2022 Deduction with top-down derivation is affected by the frequent backtracking necessitated by the inadequate selection of rules to be applied. \u2022 Deduction with bottom-up derivation is affected by the extensive vacuous computation, which never contributes to the proof of the initial goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "These are similar to the problems that typically arise in natural language parsing with naive top-down and bottom-up parsers. In natural language parsing, these problems are resolved by introducing a more sophisticated derivation mechanism, i.e., left-corner parsing. We have attempted to apply such a sophisticated mechanism to deduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Suppose that the proof of a goal g(x,y) can be represented in the manner in Figure 1 ; the first argument x of the goal g(x,y) is shared by all the formulas along the path from the goal g(z,y) to the left corner am (z,zm) . In such a case, we can think of a derivation process that is similar to leftcorner parsing. We call this derivation head-driven derivation, which is depicted as follows:", "cite_spans": [ { "start": 215, "end": 221, "text": "(z,zm)", "ref_id": null } ], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Step 1 Find a fact a(w,z) whose first argument w unifies with the first argument x of the goal g(x,y), and place it on the left corner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Step 2 Find a rule am-l(W,Zrn-l) C a(W,Zm)/~ BZ ^ ... A Bn whose leftmost antecedent a(W,Zm) unifies with the left-corner key a(x,z), and introduce the new goals B1, ..., and Bn. If all these goals are recursively derived, then create the consequent a,,~_ 1 ( z ,zm_ 1 ), which dominates a(x,zm), B1, ..., and Bn, and place it on the left corner instead of a(x,z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Step3 If the consequent am-l(x,zm_l) unifies with the goal g(z,y), then finish the process. 
Otherwise, go back to step2 with am-1 (x,zm_l) being the new left-corner key.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Left-corner parsing of DCG is just a special case of head-driven derivation, in which the input string is shared along the left border, i.e., the path from a nonterminal to the leftmost word in the string that is dominated by that nonterminal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Also, semantic-head-driven generation (Shieber el al., 1989; van Noord, 1990 ) and head-corner parsing ivan Noord, 1991; Sikkel and op den Akker, 1993) can be seen as head-driven derivation, when the semantic-head/syntactic-head is moved to the leftmost position in the body of each rule and the argument representing the semantic-feature/headfeature is moved to the first position in the argument list of each formula.", "cite_spans": [ { "start": 38, "end": 60, "text": "(Shieber el al., 1989;", "ref_id": null }, { "start": 61, "end": 76, "text": "van Noord, 1990", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "To apply the above procedures, all rules must", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "be in chain form arn--l(W,Zrn-~) C arn(W,Zm) A B1 A ... A Bn;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "that is, in every rule, the first argument of the leftmost antecedent must be equal to the first argument of the consequent. This is the condition under which left-corner parsers can be used as proof procedures. Because this condition is overly restrictive, we extend the procedures so that they allow non-chain rules, i.e., rules not in chain form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Step 1 is replaced by the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "Step 1 Find a non-chain rule a(w,z) C B1 A... A B~ such that the first argument w of the consequent a(w,z) unifies with the first argument z of the goal g(x,y), and introduce the new goals B1, ..., and /3,. A fact is regarded as a non-chain rule with an empty antecedent. If all these goals are recursively derived, then create the consequent a(z,z), which dominates B1, ..., and B,, and place it on the left corner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-driven Derivation", "sec_num": null }, { "text": "The idea given in the previous section realizes the goal-driven bottom-up derivation, which is the first feature of our algorithm. Then, we present a more refined algorithm based upon the idea, which realizes the other two features as well as the first one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Chart Algorithm", "sec_num": null }, { "text": "Like left-corner parsing, which has the drawback of repeatedly recomputing partial results, head-driven derivation will face the same problem when it is executed in a depth-first manner with backtracking. In the case of left-corner parsing, the problem is resolved by using the tabulation method, known as chart parsing (Kay, 1980) . A recent study by Haruno et al. 
(1993) has shown that the same method is applicable to semantic-headdriven generation. The method is also applicable to head-driven derivation, which is more general than semantic-head-driven generation.", "cite_spans": [ { "start": 320, "end": 331, "text": "(Kay, 1980)", "ref_id": "BIBREF2" }, { "start": 366, "end": 372, "text": "(1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Chart Parsing and its Generalization", "sec_num": null }, { "text": "To generalize charts to use in proof procedures, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chart Parsing and its Generalization", "sec_num": null }, { "text": "Figure 2: Example of Generalized Charts we first define the chart lexicons. In chart parsing, lexicons are the words in the input string, each of which is used as the index for a subset of the edges in the chart; each edge incident from (the start-point of) lexicon w represents the substructure dominating the sub-string starting from w. In our case, from the-similarity between leftcorner parsing and head-driven derivation, lexicons are the terms that occur in the first argument position of any formula; each edge incident from (the start-point of) lexicon x represents the substructure dominating the successive sequence of the derived formulas starting from the fact in which z occupies the first argument position. For example, in the chart representing the proof in Figure 1 , all the edges corresponding to the formulas on the left border, i.e. am(X,Zrn), am--l (Z, ..., al(x, zl) and g(z,y), are incident from (the start-point of) lexicon z, and, hence, x is the index for these edges.", "cite_spans": [ { "start": 871, "end": 874, "text": "(Z,", "ref_id": null }, { "start": 875, "end": 879, "text": "...,", "ref_id": null }, { "start": 880, "end": 885, "text": "al(x,", "ref_id": null }, { "start": 886, "end": 889, "text": "zl)", "ref_id": null } ], "ref_spans": [ { "start": 774, "end": 782, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "m( <[A],IB]>,[B,A])", "sec_num": null }, { "text": "Following this definition of the chart lexicons, there are two major differences between chart parsing and proof procedures, which Haruno also showed to be the differences between chart parsing and semantic-head-driven generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "m( <[A],IB]>,[B,A])", "sec_num": null }, { "text": "1. In contrast to chart parsing, where lexicons are determined immediately upon input, in proof procedures lexicons should be incrementally introduced. 2. In contrast to chart parsing, where lexicons are connected one by one in a linear sequence, in proof procedures lexicons should be connected in many-to-many fashion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "m( <[A],IB]>,[B,A])", "sec_num": null }, { "text": "In proof procedures, the chart lexicons are not determined at the beginning of the proof (because we don't know which formulas are actually used in the proof), rather they are dynamically extracted from the subgoals as the process goes. In addition, if the rules are nondeterministic, it sometimes happens that there are introduced, from one leftcorner key, a(x,z), two or more distinct successive subgoals, bl(wl,y~), b2(w2,y2), etc., that have different first arguments, w 1, w 2, etc. In such a case, one lexicon x should be connected to two or more distinct lexicons, w 1, w 2, etc. 
Furthermore, it can happen that two or more distinct left-corner keys, al (xl,zl), a2(x2,z2) , etc., incidentally introduce the successive subgoals, bl(w,yl), b2(w,y~), etc., with the same first argument w. In such a case, two or more distinct lexicons, x 1, x 2, etc., should be connected to one lexicon w. Therefore, the connections among lexicons should be manyto-many. Figure 2 shows an example of charts with many-to-many connections, where the connections are represented by pointers A, B; etc.", "cite_spans": [ { "start": 661, "end": 679, "text": "(xl,zl), a2(x2,z2)", "ref_id": null } ], "ref_spans": [ { "start": 960, "end": 968, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "m( <[A],IB]>,[B,A])", "sec_num": null }, { "text": "We, so far, have considered deduction but not abduction. Here, we extend our idea to apply to abduction, and present the definition of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Algorithm", "sec_num": null }, { "text": "The extension for abduction is very simple. First, we add a new procedure, which introduces an assumption G for a given goal G. An assumption is treated as if it were a fact. This means that an assumption, as well as a fact, is represented as a passive edge in terms of the chart algorithm. Second, we associate a set S of assumptions with each edge e in the chart; S consists of all the assumptions that are contained in the completed part of the (partial) proof represented by the edge e. More formally, the assumption set 5 associated with an edge e is determined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Algorithm", "sec_num": null }, { "text": "A, then S--{A}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If e is a passive edge representing an assumption", "sec_num": "1." }, { "text": "2. If e is a passive/active edge introduced from a non-chain rule, including fact, then S is empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If e is a passive edge representing an assumption", "sec_num": "1." }, { "text": "3. If e is a passive/active edge predicted from a chain rule with a passive edge e' being the leftcorner key, then S is equal to the assumption set S' of e'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If e is a passive edge representing an assumption", "sec_num": "1." }, { "text": "4. If e is a passive/active edge created by combining an active edge el and a passive edge e2, then ,-q = $1 U $2 where 81 and ~q2 are the assumption sets of el and e2, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If e is a passive edge representing an assumption", "sec_num": "1." }, { "text": "Taking these into account, the definition of our algorithm is as follows, f is a function that assigns a unique vertex to each chart lexicon. The notation A:S stands for the label of an edge e, where A is the label of e in an ordinary sense and S is the assumption set associated with e. Each passive edge T:S represents an answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If e is a passive edge representing an assumption", "sec_num": "1." }, { "text": "Here, we present a simple example of the application of our algorithm to spoken language understanding. Figure 3 provides the rules for spoken Japanese understanding, with which the sentence (1) is parsed and interpreted. 
They include the pragmatic, semantic and knowledge rules as well as the syntactic and lexical rules.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Examples", "sec_num": null }, { "text": "The syntactic rules allow the connection between a verb and a noun phrase with or with-Syntactic Rules s (i,k,e)Cvp(i,k,e) vp(i,k,e)Cnp(i,j,c,x) A vp(j,k,e) A depend ( (c,e,x)d) vp ( i,k,e)C np( i,j,x) A vp(j,k,e) A depend ( (c,e,X)d) np (i,k,c,x)Cnp(i,j,x) A p(j,k,c) depend ( (c,e,x) ( (c,e,x) , ,c)C trade(e) A commodity(z) ^ obj ( (e,x) ,) $~", "cite_spans": [ { "start": 105, "end": 144, "text": "(i,k,e)Cvp(i,k,e) vp(i,k,e)Cnp(i,j,c,x)", "ref_id": null }, { "start": 166, "end": 177, "text": "( (c,e,x)d)", "ref_id": null }, { "start": 181, "end": 201, "text": "( i,k,e)C np( i,j,x)", "ref_id": null }, { "start": 223, "end": 234, "text": "( (c,e,X)d)", "ref_id": null }, { "start": 238, "end": 268, "text": "(i,k,c,x)Cnp(i,j,x) A p(j,k,c)", "ref_id": null }, { "start": 276, "end": 285, "text": "( (c,e,x)", "ref_id": null }, { "start": 286, "end": 295, "text": "( (c,e,x)", "ref_id": null }, { "start": 333, "end": 340, "text": "( (e,x)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": null }, { "text": "Knowledge Rules person( x )C soseki( x ) w~ter(x)Csoseki(x) book(x)Cnovd(x) eommodity( ~ )C book(z)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": null }, { "text": "trade(e)Cbuy(e) intend( e)C trade( e) Figure 3 : Example of Rules out a particle, which permit structures like [VP[NpS6sek2][vpkatla] ]. Such a structure is evaluated by the pragmatic and semantic criteria. That is, the dependency between a verbal concept e and a nominal concept x is supported if there is an entity y such that x and y have a pragmatic relation, i.e., a metonymy relation, and e and y have a semantic relation, i.e., a thematic relation. The metonymy relation is defined by the pragmatic rules, based on certain knowledge, such as that the name of a writer is sometimes used to refer to his novel. Also, the thematic relation is defined by the semantic rules, based on certain knowledge, such as that the object case relation between a trading action and a commodity can be linguistically expressed as a thematic relation.", "cite_spans": [ { "start": 111, "end": 133, "text": "[VP[NpS6sek2][vpkatla]", "ref_id": null } ], "ref_spans": [ { "start": 38, "end": 46, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Examples", "sec_num": null }, { "text": "The subscript $c of a formula A represents the cost of assuming formula A. A is easy to assume when c is small, while A is difficult to assume when c is large. For instance, the cost of interpreting the thematic relation between a trading action and a commodity as the object case relation is low, say $2, while the cost of interpreting the thematic relation between an intentional action and a third person as the agent case relation is high, say $20. This assignment of costs is suitable for a situation in which the speaker reports his experience. In spite of the difficulty of assigning suitable costs in general, the cost-based interpretation is valuable, because it provides a uniform criteria for syntax, semantics and pragmatics. 
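To make the bookkeeping concrete, the following is a minimal sketch of how the four assumption-set rules given above, together with cost subscripts such as $c, might be realized on chart edges. The Edge representation and the function names are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A chart edge A:S -- the usual label A plus its assumption set S.
    `remaining` holds the antecedents still to be proved (empty = passive)."""
    label: str
    remaining: tuple = ()
    assumptions: frozenset = frozenset()   # set of (assumed literal, cost)

def assume(goal, cost_of_goal):
    # Rule 1: a passive edge representing an assumption A has S = {A}.
    return Edge(goal, (), frozenset([(goal, cost_of_goal)]))

def introduce(consequent, antecedents):
    # Rule 2: an edge introduced from a non-chain rule (or a fact) has S = {}.
    return Edge(consequent, tuple(antecedents), frozenset())

def predict(consequent, rest_of_body, key):
    # Rule 3: an edge predicted from a chain rule inherits S from the
    # passive left-corner key.
    return Edge(consequent, tuple(rest_of_body), key.assumptions)

def combine(active, passive):
    # Rule 4: combining an active edge with a passive one unions their sets.
    return Edge(active.label, active.remaining[1:],
                active.assumptions | passive.assumptions)

def cost(edge):
    # Goodness of the (partial) interpretation: total cost of its assumptions.
    return sum(c for _, c in edge.assumptions)
```

Because assumption sets are unioned rather than concatenated, an assumption that supports two different sub-proofs is counted only once; this is how a single assumption can be reused, as the novel(N) example below illustrates.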
Hopefully, several techniques, independently developed in these areas, e.g., stochastic parsing, example-based/corpus-based techniques for word sense/structural disambiguation, etc., will be usable for better cost assignment. Probability will also be a key technique for the cost assignment (Charniak and Shimony, 1990) . Figure 4 and Table 1 show the chart that is created when a sentence (1) is parsed and interpreted using our algorithm. Although the diagram seems complicated, it is easy to understand if we break down the diagram. Included are the syntactic parsing of the sentence (indicated by edges 2, 6, 7, 14, 52 and 53), the pragmatic interpretation of the metonymy by S6seki S (indicated by edges 17, 18, 20 and 24), the semantic interpretation of the thematic relation between a buying event B and a novel N written by S6seki (indicated by edges 42, 44, 45, 47, 48 and 50) , and so on. In the pragmatic interpretation, assumption novel(N) (edge 21) is introduced, which is reused in the semantic interpretation. In other words, a single assumption is used more than once. Such a tricky job is naturally realized by the nature of the chart algorithm.", "cite_spans": [ { "start": 1029, "end": 1057, "text": "(Charniak and Shimony, 1990)", "ref_id": null }, { "start": 1577, "end": 1623, "text": "(indicated by edges 42, 44, 45, 47, 48 and 50)", "ref_id": null } ], "ref_spans": [ { "start": 1060, "end": 1068, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 1073, "end": 1080, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Examples", "sec_num": null }, { "text": "Since the aim of cost-based abduction is to find out the best solution, not all solutions, it is reasonable to consider combining heuristic search strategies with our algorithm to find the best solution efficiently. Our algorithm facilitates such an extension by using the agenda control mechanism, which is broadly used in advanced chart parsing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agenda Control", "sec_num": null }, { "text": "The agenda is a storage for the edges created by any of the three procedures of the chart algorithm, out of which edges to be added to the chart are selected, one by one, by a certain criterion. The simplest strategy is to select the edge which has the minimal cost at that time, i.e., ordered search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agenda Control", "sec_num": null }, { "text": "Although ordered search guarantees that the first solution is the best one, it is not always very efficient. We can think of other search strategies, like best first search, beam search, etc., which are more practical than ordered search. To date, we have not investigated any of these practical search strategies. However, it is apparent that our chart algorithm, together with the agenda control mechanism, will provide a good way to realize these practical search strategies. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agenda Control", "sec_num": null }, { "text": "We conducted preliminary experiments to compare four methods of cost-based abduction: top-down algorithm (TD), head-driven algorithm (HD), generalized chart algorithm with full-search (GCF), and generalized chart algorithm with ordered search (GCO). The rules used for the experiments are in the spoken language understanding task, and they are rather small (51 chain rules + 35 non-chain rules). The test sentences include one verb and 1-4 noun phrases, e.g., sentence (1). 
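GCO below realizes the agenda control mechanism described above as ordered search. As a rough sketch (hypothetical names, not the experimental code), the agenda can be pictured as a priority queue keyed by accumulated assumption cost:

```python
import heapq
from itertools import count

def ordered_search(initial_edges, expand, is_answer, cost):
    """Agenda-driven ordered search.  `expand(edge, chart)` yields the new
    edges licensed by adding `edge` to the chart; candidates are popped
    cheapest-first, so the first answer popped is the minimal-cost one."""
    tie = count()                              # tie-breaker for equal costs
    agenda = [(cost(e), next(tie), e) for e in initial_edges]
    heapq.heapify(agenda)
    chart = []
    while agenda:
        _, _, edge = heapq.heappop(agenda)     # cheapest candidate first
        if is_answer(edge):
            return edge                        # first answer = best answer
        chart.append(edge)
        for new in expand(edge, chart):
            heapq.heappush(agenda, (cost(new), next(tie), new))
    return None
```

Changing the pop policy of the same loop, e.g., expanding only the k cheapest candidates at each point, yields the beam-search and best-first variants mentioned above.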
Table 2 shows the results. The performance of each method is measured by the number of computation steps, i.e., the number of derivation steps in TD and HD, and the number of passive and active edges in GCF and GCO. The decimals in parentheses show the ratio of the performance of each method to the performance of TD. The table clearly shows how the three features of our algorithm improve the computational efficiency. The improvement from TD to HD is due to the first feature, i.e., goal-driven bottom-up derivation, which eliminates about 50% of the computation steps; the improvement from HD to GCF is due to the second feature, i.e., tabulation of the partial results, which decreases the number of steps another 13%-23%; the improvement from GCF to GCO is due to the last feature, i.e., the agenda control mechanism, which decreases the number of steps another 4%-8%. In short, the efficiency is improved, maximally, about four times.", "cite_spans": [], "ref_spans": [ { "start": 475, "end": 482, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": null }, { "text": "We describe, here, some differences between our algorithm and Earley deduction presented by Pereira and Warren. First, as we mentioned before, our algorithm is mainly based on bottom-up (left-corner) derivation rather than top-down derivation, that Earley deduction is based on. Our experiments showed the superiority of this approach in our par-titular, though not farfetched, example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Earley Deduction", "sec_num": null }, { "text": "Second, our algorithm does not use subsumption-checking of edges, which causes a serious computation problem in Earley deduction. Our algorithm needs subsumption-checking only when a new edge is introduced by the combination procedure. In the parsing of augmented grammars, even when two edges have the same nonterminal symbol, they are different in the annotated structures associated with those edges, e.g., feature structures; in such a case, we cannot use one edge in place of another. Likewise, in our algorithm, edges are always annotated by the assumption sets, which, in most cases, prevent those edges from being reused. Therefore, in this case, subsumption-checking is not effective. In our algorithm, reuse of edges only becomes possible when a new edge is introduced by the introduction procedure. However, this is done only by adding a pointer to the edge to be reused, and, to invoke this operation, equality-checking of lexicons, not edges, is sufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Earley Deduction", "sec_num": null }, { "text": "Finally, our algorithm has a stronger connection with chart parsing than Earley deduction does. Pereira and Warren noted that the indexing of formulas is just an implementation technique to increase efficiency. However, indexing plays a considerable role in chart parsing, and how to index formulas in the case of proof procedures is not so obvious. In our algorithm, from the consideration of head-driven derivation, the index of a formula is determined to be the first argument of that formula. All formulas with the same index are derived the first time that index is introduced in the chart. Pointers among lexicons are also helpful in avoiding nonproductive attempts at applying the combination procedure. 
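As a small, hypothetical illustration of this indexing scheme (string-based and much simplified, and omitting the lexicon pointers themselves): edges are filed under the lexicon occurring in the first argument position of their formula, so that the combination procedure inspects only edges sharing that index.

```python
from collections import defaultdict

# Hypothetical index: every edge is filed under the lexicon occurring in the
# first argument of its formula, so combination only ever needs to look at
# edges sharing that lexicon.
edges_by_lexicon = defaultdict(list)

def first_argument(formula):
    """E.g. 'np(i,j,x)' -> 'i' (string handling is deliberately simplified)."""
    return formula[formula.index("(") + 1:].split(",")[0]

def add_edge(edge_label):
    edges_by_lexicon[first_argument(edge_label)].append(edge_label)

def candidates_for_combination(edge_label):
    # Only edges indexed under the same lexicon are inspected.
    return edges_by_lexicon[first_argument(edge_label)]
```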
All the devices that were originally used in chart parsers in a restricted way are included in the formalism, not in the implementation, of our algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Earley Deduction", "sec_num": null }, { "text": "In this paper, we provided a basic and practical solution to the computation problem of costbased abduction. We explained the basic concept of our algorithm and presented the details of the algorithm along with simple examples. We also showed how our algorithm improves computational efficiency on the basis of the results of the preliminary experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Remarks", "sec_num": null }, { "text": "We are now developing an abduction-based spoken language understanding system using our algorithm. The main problem is how to find a good search strategy that can be implemented with the agenda control mechanism. We are investigating this issue using both theoretical and empirical approaches. We hope to report good results along these lines in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Remarks", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank Prof. Yuji Matsumoto of Nara Institute of Science and Technology and Masahiko Haruno of NTT Communication Science Laboratories for their helpful discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "[?]sem ((P,S,x) t?]sem ((P,B,S>~,) ", "cite_spans": [ { "start": 7, "end": 15, "text": "((P,S,x)", "ref_id": null }, { "start": 23, "end": 34, "text": "((P,B,S>~,)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Eugene Charniak and Eugene Santos Jr. Dynamic MAP calculations for abduction", "authors": [ { "first": "Husain ; Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Saadia", "middle": [], "last": "Husain", "suffix": "" }, { "first": ";", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Haruno", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 12th IJCAI", "volume": "", "issue": "", "pages": "350--356", "other_ids": {}, "num": null, "urls": [], "raw_text": "and Husain, 1991] Eugene Charniak and Saadia Husain. A new admissible heuristic for minimal-cost proofs. Proceedings of the 12th IJCAI, pages 446-451, 1991. [Charniak and Santos Jr., 1992] Eugene Charniak and Eugene Santos Jr. Dynamic MAP calcu- lations for abduction. Proceedings of the lOth AAAI, pages 552-557, 1992. [Charniak and Shimony, 1990] Eugene Charniak and Solomon E. Shimony. Probabilistic seman- tics for cost based abduction. Proceedings of the 8th AAAI, pages 106-111, 1990. [Haruno et al., 1993] Masahiko Haruno, Yasuharu Den, Yuji Matsumoto, and Makoto Nagao. Bidi- rectional chart generation of natural language texts. 
Proceedings of the 11th AAAI, pages 350- 356, 1993.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Interpretation as abduction", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stickel", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Edwards", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 26th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "95--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hobbs et at., 1988] Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards. Interpreta- tion as abduction. Proceedings of the 26th An- nual Meeting of ACL, pages 95-103, 1988.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Algorithm schemata and data structures in syntactic processing", "authors": [ { "first": "Kay ; Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, 1980] Martin Kay. Algorithm schemata and data structures in syntactic processing. Technical Report CSL-80-12, XEROX Palo Alto Research Center, 1980.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parsing as deduction", "authors": [ { "first": ";", "middle": [], "last": "Warren", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "H", "middle": [ "D" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Warren", "suffix": "" } ], "year": 1983, "venue": "Proceedings of the 21st Annual Meeting of A CL", "volume": "", "issue": "", "pages": "137--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "and Warren, 1983] Fernando C.N. Pereira and David H.D. Warren. Parsing as deduction. Proceedings of the 21st Annual Meeting of A CL, pages 137-144, 1983.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gertjan van Noord. An overview of head-driven bottom-up generation. Current Research in Natural Language Generation", "authors": [ { "first": "M", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Robert", "middle": [ "C" ], "last": "Gertjan Van Noord", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 1989, "venue": "Gertjan van Noord. Head corner parsing for discontinuous constituency. Proceedings of the 29th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "114--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Shieber et at., 1989] Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. A semantic-head-driven generation al- gorithm for unification-based formalisms. Pro- ceedings of the 27th Annual Meeting of ACL, pages 7-17, 1989. [Sikkel and op den Akker, 1993] Klaas Sikkel and Rieks op den Akker. Predictive head-corner chart parsing. The 3rd International Workshop on Parsing Technologies, pages 267-276, 1993. [van Noord, 1990] Gertjan van Noord. An over- view of head-driven bottom-up generation. Cur- rent Research in Natural Language Generation, chapter 6, pages 141-165. Academic Press, 1990. [van Noord, 1991] Gertjan van Noord. Head cor- ner parsing for discontinuous constituency. 
Pro- ceedings of the 29th Annual Meeting of ACL, pages 114-121, 1991.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Concept of Head-driven Derivation unless the rules given to the provers have a certain property. Here, we describe under what conditions left-corner parsers can be used as proof procedures." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Chart Diagram for SSseki katta" }, "TABREF0": { "num": null, "content": "
Chart artwork for Figure 2 (Example of Generalized Charts); only fragmentary edge labels survive, e.g. m(<[A],[B]>,[A,B]) and h(<[A],[B]>,...), with some labels omitted.
", "type_str": "table", "html": null, "text": "Example of Generalized Charts (chart artwork; some labels omitted)" }, "TABREF3": { "num": null, "content": "
Chart artwork (the chart diagram for sentence (1)); only fragmentary lexicon and edge labels survive, e.g. [Soseki,katta], [katta], and edge numbers such as 17, 20, 24 and 25.
", "type_str": "table", "html": null, "text": "" }, "TABREF4": { "num": null, "content": "
Ns | TD | HD | GCF | GCO
1 | 215 | 112 (0.52) | 83 (0.39) | 75 (0.35)
2 | 432 | 218 (0.50) | 148 (0.34) | 113 (0.26)
3 | 654 | 330 (0.50) | 193 (0.30) | 160 (0.24)
4 | 876 | 442 (0.50) | 238 (0.27) | 203 (0.23)
", "type_str": "table", "html": null, "text": "Comp. among TD, HD, GCF, and GCO" } } } }