{ "paper_id": "P91-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:03:38.732432Z" }, "title": "PLAN-BASED MODEL OF DIALOGUE", "authors": [ { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Delaware Newark", "location": { "postCode": "19716", "settlement": "Delaware", "country": "USA" } }, "email": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Delaware Newark", "location": { "postCode": "19716", "settlement": "Delaware", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a tripartite model of dialogue in which three different kinds of actions are modeled: domain actions, problem-solving actions, and discourse or communicative actions. We contend that our process model provides a more finely differentiated representation of user intentions than previous models; enables the incremental recognition of communicative actions that cannot be recognized from a single utterance alone; and accounts for implicit acceptance of a communicated proposition.", "pdf_parse": { "paper_id": "P91-1007", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a tripartite model of dialogue in which three different kinds of actions are modeled: domain actions, problem-solving actions, and discourse or communicative actions. We contend that our process model provides a more finely differentiated representation of user intentions than previous models; enables the incremental recognition of communicative actions that cannot be recognized from a single utterance alone; and accounts for implicit acceptance of a communicated proposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper presents a tripartite model of dialogue in which intentions are modeled on three levels: the domain level (with domain goals such as traveling by train), the problem-solving level (with plan-construction goals such as instantiating a parameter in a plan), and the discourse level (with communicative goals such as ezpressing surprise). Our process model has three major advantages over previous approaches: 1) it provides a better representation of user intentions than previous models and allows the nuances of different kinds of goals and processing to be captured at each level; 27 it enables the incremental recognition of commumcatire goals that cannot be recognized from a single utterance alone; and 3) it differentiates between illocutionary effects and desired perlocutionary effects, and thus can account for the failure of an inform act to change a heater's beliefs[Per90] ~.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of researchers have contended that a coherent discourse consists of segments that are related to one another through some type of structuring relation [Gri75, MT83] or have used rhetorical relations to generate coherent text [Hov88, MP90] . In addition, some researchers have modeled discourse based on the semantic relationship of individual clauses [Po186a] or groups of clauses [Rei78] . But all of the above fail to capture the goal-oriented nature of discourse. 
Grosz and Sidner[GS86] argue that recognizing the structural relationships among the intentions underlying a discourse is necessary to identify discourse structure, but they do not provide the details of a computational mechanism for recognizing these relationships.", "cite_spans": [ { "start": 160, "end": 167, "text": "[Gri75,", "ref_id": null }, { "start": 168, "end": 173, "text": "MT83]", "ref_id": null }, { "start": 234, "end": 241, "text": "[Hov88,", "ref_id": null }, { "start": 242, "end": 247, "text": "MP90]", "ref_id": null }, { "start": 360, "end": 368, "text": "[Pol86a]", "ref_id": null }, { "start": 390, "end": 397, "text": "[Rei78]", "ref_id": null }, { "start": 476, "end": 498, "text": "Grosz and Sidner[GS86]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Limitations of Current Models of Discourse", "sec_num": "2" }, { "text": "To account for the goal-oriented nature of discourse, many researchers have adopted the planning/plan-recognition paradigm [AP80, PA80] in which utterances are viewed as part of a plan for accomplishing a goal and understanding consists of recognizing this plan. The most well-developed plan-based model of discourse is that of Litman and Allen[LA87]. However, their discourse plans conflate problem-solving actions and communicative actions. For example, their Correct-Plan has the flavor of a problem-solving plan that one would pursue in attempting to construct another plan, whereas their Identify-Parameter takes on some of the characteristics of a communicative plan that one would pursue when conveying information. More significantly, their model cannot capture the relationship among several utterances that are all part of the same higher-level discourse plan if that plan cannot be recognized and added to their plan stack based on analysis of the first utterance alone. Thus, if more than one utterance is necessary to recognize a discourse goal (as is often the case, for example, with warnings), Litman and Allen's model will not be able to identify the discourse goal pursued by the two utterances together or what role the first utterance plays with respect to the second. Consider, for example, the following pair of utterances:", "cite_spans": [ { "start": 123, "end": 129, "text": "[AP80,", "ref_id": null }, { "start": 130, "end": 135, "text": "PA80]", "ref_id": null }, { "start": 327, "end": 349, "text": "Litman and Allen[LA87]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Limitations of Current Models of Discourse", "sec_num": "2" }, { "text": "(1) The city of xxx is considering filing for bankruptcy. (2) One of your mutual funds owns xxx bonds. Although neither of the two utterances alone constitutes a warning, a natural language system must be able to recognize the warning from the set of two utterances together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of Current Models of Discourse", "sec_num": "2" }, { "text": "Our tripartite model of dialogue overcomes these limitations.
It differentiates among domain, problem-solving, and communicative actions yet models the relationships among them, and enables the recognition of communicative actions that take more than one utterance to achieve but which cannot be recognized from the first utterance alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of Current Models of Discourse", "sec_num": "2" }, { "text": "In the remainder of this paper, we will present our tripartite model, motivating why our model recognizes three different kinds of goals, describing our dialogue model and how it is built incrementally as a discourse proceeds, and illustrating this plan inference process with a sample dialogue. Finally, we will outline our current research on modeling negotiation dialogues and recognizing discourse acts such as expressing surprise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of Current Models of Discourse", "sec_num": "2" }, { "text": "A Tripartite Model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "Our plan recognition framework recognizes three different kinds of goals: domain, problem-solving, and discourse. In an information-seeking or expert-consultation dialogue, one participant is seeking information and advice about how to construct a plan for achieving some domain goal. A problem-solving goal is a metagoal that is pursued in order to construct a domain plan [Wil81, LA87, Ram89]. For example, if an agent has a goal of earning an undergraduate degree, the agent might have the problem-solving goal of selecting the instantiation of the degree parameter as BA or BS and then the problem-solving goal of building a subplan for satisfying the requirements for that degree. A number of researchers have demonstrated the importance of modeling domain and problem-solving goals [PA80, Wil81, LA87, vBC86, Car87, Ram89].", "cite_spans": [ { "start": 373, "end": 380, "text": "[Wil81,", "ref_id": null }, { "start": 381, "end": 386, "text": "LA87,", "ref_id": null }, { "start": 387, "end": 393, "text": "Ram89]", "ref_id": null }, { "start": 788, "end": 794, "text": "[PA80,", "ref_id": null }, { "start": 795, "end": 801, "text": "Wil81,", "ref_id": null }, { "start": 802, "end": 807, "text": "LA87,", "ref_id": null }, { "start": 808, "end": 814, "text": "vBC86,", "ref_id": null }, { "start": 815, "end": 821, "text": "Car87,", "ref_id": null }, { "start": 822, "end": 828, "text": "Ram89]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Kinds of Goals and Plans", "sec_num": "3.1" }, { "text": "Intuitively, a discourse goal is the communicative goal that a speaker has in making an utterance [Car89], such as obtaining information or expressing surprise. Recognition of discourse goals provides expectations for subsequent utterances and suggests how these utterances should be interpreted. For example, the first two utterances in the following exchange establish the expectation that S1 will either accept S2's response, or that S1 will pursue utterances directed toward understanding and accepting it [Car89]. Consequently, S1's second utterance should be recognized as expressing surprise at S2's statement. S1: When does CS400 meet? S2: CS400 meets on Monday from 7-9 p.m. S1: CS400 meets at night?
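To make the role of these expectations concrete, a minimal sketch follows (our illustration, not the paper's implementation; the predicate strings and classification labels are hypothetical). It classifies a follow-up that echoes the answered proposition, while conflicting with the speaker's prior beliefs, as expressing surprise:

def classify_followup(answer_prop, followup_prop, disbelieves):
    # No follow-up challenging the answer: implicit acceptance (see Section 3.3).
    if followup_prop is None:
        return "accept"
    # Echoing the answered proposition that the speaker doubted suggests surprise.
    if followup_prop == answer_prop and disbelieves(answer_prop):
        return "express-surprise"
    # Otherwise assume utterances directed toward understanding and accepting it.
    return "seek-clarification"

answer = "meets(CS400, Monday, 7-9pm)"
print(classify_followup(answer, answer, lambda p: "7-9pm" in p))   # express-surprise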
A robust natural language system must recognize discourse goals and the beliefs underlying them in order to respond appropriately.", "cite_spans": [ { "start": 98, "end": 105, "text": "[Car89]", "ref_id": null }, { "start": 512, "end": 519, "text": "[Car89]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Kinds of Goals and Plans", "sec_num": "3.1" }, { "text": "The plan library for our process model contains the system's knowledge of goals, actions, and plans. Although domain plans are not mutually known by the participants[Pol86b], how to communicate and how to solve problems are common skills that people use in a wide variety of contexts, so the system can assume that knowledge about discourse and problem-solving plans is shared knowledge. Our representation of a plan includes a header giving the name of the plan and the action it accomplishes, preconditions, applicability conditions, constraints, a body, effects, and goals. Applicability conditions represent conditions that must be satisfied for the plan to be reasonable to pursue in the given situation, whereas constraints limit the allowable instantiation of variables in each of the components of a plan [LA87, Car87]. Especially in the case of discourse plans, the goals and effects are likely to be different. This allows us to differentiate between illocutionary and perlocutionary effects and capture the notion that one can, for example, perform an inform act without the hearer adopting the communicated proposition. 3 Figure 1 presents three discourse plans and one problem-solving and one domain plan.", "cite_spans": [ { "start": 813, "end": 819, "text": "[LA87,", "ref_id": null }, { "start": 820, "end": 826, "text": "Car87]", "ref_id": null } ], "ref_spans": [ { "start": 1135, "end": 1143, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Kinds of Goals and Plans", "sec_num": "3.1" }, { "text": "Agents use utterances to perform communicative acts, such as informing or asking a question. These discourse actions can in turn be part of performing other discourse actions; for example, providing background data can be part of asking a question. Discourse actions can take more than one utterance to complete; asking for information requires that a speaker request the information and believe that the request is acceptable (i.e., that the speaker say enough to ensure that the request is understandable and justified and that the necessary background information is known by the respondent). Thus, actions at the discourse level form a tree structure in which each node represents a communicative action that a participant is performing and the children of a node represent communicative actions pursued in order to perform the parent action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure of the Model", "sec_num": "3.2" }, { "text": "Information needed for problem-solving actions is obtained through discourse actions, so discourse actions can be executed in order to perform problem-solving actions as well as being part of other discourse actions. Similarly, domain plans are constructed through problem-solving actions, so problem-solving actions can be executed in order to eventually perform domain actions as well as being part of plans for other problem-solving actions. Therefore, our Dialogue Model (DM) contains three levels of tree structures, 4 one for each kind of action (discourse, problem-solving, and domain) with links among the actions on different levels.
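For concreteness, the plan representation and the three linked tree levels can be rendered as follows (a minimal sketch of ours in Python; the class and field names are assumptions rather than the paper's notation). Keeping effects separate from goals is what lets an inform act succeed even when the hearer does not adopt the communicated proposition:

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Plan:
    header: str                            # action achieved, e.g. "Inform(_agent1, _agent2, _prop)"
    preconditions: List[str] = field(default_factory=list)
    applicability_conds: List[str] = field(default_factory=list)  # beliefs ascribed to the agent
    constraints: List[str] = field(default_factory=list)
    body: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)   # illocutionary effects
    goals: List[str] = field(default_factory=list)     # desired perlocutionary effects

@dataclass
class Node:
    action: str
    children: List["Node"] = field(default_factory=list)   # subactions performing this action
    link_up: Optional["Node"] = None                        # action served on the level above

@dataclass
class DialogueModel:
    domain: List[Node] = field(default_factory=list)
    problem_solving: List[Node] = field(default_factory=list)
    discourse: List[Node] = field(default_factory=list)
    focus: Dict[str, Optional[Node]] = field(default_factory=dict)  # focus of attention per level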
At the lowest level the discourse actions are represented; these actions may contribute to the problem-solving actions at the middle level, which, in turn, may contribute to the domain actions at the highest level. 3Consider, for example, someone saying \"I informed you of X but you wouldn't believe me.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure of the Model", "sec_num": "3.2" }, { "text": "4The DM is really a mental model of intentions [Pol86b] . The structures shown in our figures implicitly capture a number of intentions that are attributed to the participants, such as the intention that the hearer recognize that the speaker believes the applicability conditions for the just initiated discourse actions are satisfied and the intention that the participants follow through with the subactions that are part of plans for actions in the DM.", "cite_spans": [ { "start": 47, "end": 55, "text": "[Pol86b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Structure of the Model", "sec_num": "3.2" }, { "text": "Get-Minor(_agent, _subj) Prec:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Plan-D1: {_agent earns a minor in _subj} Action:", "sec_num": null }, { "text": "have-plan(_agent, Plan-D1, Get-Minor(_agent, _subj)) Body:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Plan-D1: {_agent earns a minor in _subj} Action:", "sec_num": null }, { "text": "1. Complete-Form(_agent, change-of-major-form, add-minor) 2. Take ... The planning agent is the agent of all actions at the domain level, since the plan being constructed is for his subsequent execution. Since we are assuming a cooperative dialogue in which the two participants are working together to construct a domain plan, both participants are joint agents of actions at the problem-solving level. Both participants make utterances and thus either participant may be the agent of an action at the discourse level. For example, a DM derived from two utterances is shown in Figure 3 ; its construction is described in Section 3.3. The DM in Figure 3 indicates that the inform and the request were both part of a plan for asking for information; the inform provided background data enabling the information request to be accepted by the hearer. Furthermore, the actions at the discourse level were pursued in order to perform a Build-Plan action at the problem-solving level, and this problem-solving action is being performed in order to eventually perform the domain action of getting a math minor. The current focus of attention on each level is marked with an asterisk.", "cite_spans": [], "ref_spans": [ { "start": 585, "end": 593, "text": "Figure 3", "ref_id": "FIGREF0" }, { "start": 652, "end": 660, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Domain Plan-D1: {_agent earns a minor in _subj} Action:", "sec_num": null }, { "text": "Our process model uses plan inference rules[AP80, Car87], constraint satisfaction[LA87], focusing heuristics[Car87], and features of the new utterance to identify the relationship between the utterance and the existing dialogue model.
The plan inference rules take as input a hypothesized action Ai and suggest other actions (either at the same level in the DM or at the immediately higher level) that might be the agent's motivation for Ai.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "The focusing heuristics order, according to coherence, the ways in which the DM might be expanded on each of the three levels to incorporate the actions motivating a new utterance. Our focusing heuristics at the discourse level are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "1. Expand the plan for an ancestor of the currently focused action in the existing DM so that it includes the new utterance, preferring to expand ancestors closest to the currently focused action. This accounts for new utterances that continue discourse acts already in the DM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "2. Enter a new discourse action whose plan can be expanded to include both the existing discourse level of the DM and the new utterance. This accounts for situations in which actions at the discourse level of the previous DM are part of a plan for another discourse act that had not yet been conveyed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "3. Begin a new tree structure at the discourse level. This accounts for initiation of new discourse plans unrelated to those already in the DM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "The focusing heuristics, however, are not identical for all three levels. Although it is not possible to expand the plan for the focused action on the discourse level, since it will always be a surface speech act, continuing the plan for the currently focused action or expanding it to include a new action are the most coherent expectations on the problem-solving and domain levels. This is because the agents are most expected to continue with the problem-solving and domain plans on which their attention is currently centered. In addition, since actions at the discourse and problem-solving levels are currently being executed, they cannot be returned to (although a similar action can be initiated anew and entered into the model). However, since actions at the domain level are part of a plan that is being constructed for future execution, a domain subplan already completely developed may be returned to for revision. Although such a shift in attention back to a previously considered subplan is not one of the strongest expectations, it is still possible at the domain level. Furthermore, new and unrelated discourse plans will often be pursued during the course of a conversation, whereas it is unlikely that several different domain plans (each representing a topic shift) will be investigated. Thus, on the domain level, a return to a previously considered domain subplan is preferred over a shift to a new domain plan that is unrelated to any already in the DM.
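Schematically, the discourse-level ordering can be sketched as follows (our rendering, not the implemented system; dm is assumed to expose the path from the root to the focused action and a plan library, and can_expand stands in for plan-library lookup plus constraint satisfaction):

def incorporate_discourse_action(dm, new_action, can_expand):
    # 1. Expand the plan of an ancestor of the focused action, nearest first.
    for ancestor in dm["path_to_focus"][::-1]:
        if can_expand(ancestor, new_action):
            return ("continue", ancestor)
    # 2. Posit a new discourse act whose plan covers both the existing tree and
    #    the new utterance (as Ask-Ref subsumes the Inform in Section 3.4).
    for candidate in dm["plan_library"]:
        if can_expand(candidate, dm["root"]) and can_expand(candidate, new_action):
            return ("subsume", candidate)
    # 3. Otherwise the utterance initiates an unrelated discourse plan.
    return ("new-tree", new_action)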
In addition to different focusing heuristics and different agents at each level, our tripartite model enables us to capture different rules regarding plan retention. A continually growing dialogue structure does not seem to reflect the information retained by humans. We contend that the domain plan that is incrementally fleshed out and built at the highest level should be maintained throughout the dialogue, since it provides knowledge about the agent's intended domain actions that will be useful in providing cooperative advice. However, problem-solving and discourse actions need not be retained indefinitely. If a problem-solving or discourse action has not yet completed execution, then its immediate children should be retained in the DM, since they indicate what has been done as part of performing that as yet uncompleted action; its other descendants can be discarded, since the parent actions that motivated them are finished. (For illustration purposes, all actions have been retained in Figure 3 .)", "cite_spans": [], "ref_spans": [ { "start": 2475, "end": 2484, "text": "Figure 3.", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "We have expanded on Litman and Allen's notion of constraint satisfaction[LA87] and Allen and Perrault's use of beliefs[AP80]. Our applicability conditions contain beliefs held by the agent of the plan, and our recognition algorithm requires that the system be able to plausibly ascribe these beliefs in recognizing the plan. The algorithm is given the semantic representation of an utterance. Then plan inference rules are used to infer actions that might motivate the utterance; the belief ascription process during constraint satisfaction determines whether it is reasonable to ascribe the requisite beliefs to the agent of the action and, if not, the inference is rejected. The focusing heuristics allow expectations derived from the existing dialogue context to guide the recognition process by preferring those inferences that can eventually lead to the most expected expansions of the existing dialogue model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "In [Car89] we claimed that a cooperative participant must explicitly or implicitly accept a response or pursue discourse goals directed toward being able to accept the response. Thus our model treats failure to initiate a negotiation dialogue as implicit acceptance of the proposition conveyed by the response. Consider, for example, the following dialogue:", "cite_spans": [ { "start": 3, "end": 10, "text": "[Car89]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Building the Dialogue Model", "sec_num": "3.3" }, { "text": "Who is teaching CS360 next semester? S2: Dr. Baker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S1:", "sec_num": null }, { "text": "What time does it meet? Since S1's second utterance cannot be interpreted as initiating a negotiation dialogue, S1 has implicitly accepted the proposition that Dr. Baker is teaching CS360 next semester as true.
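A minimal sketch of this treatment (ours, not the paper's code; the act labels are hypothetical, and the real model recognizes negotiation acts via plan inference rather than a fixed set):

NEGOTIATION_ACTS = {"Express-Surprise", "Express-Doubt", "Seek-Clarification"}

def update_acceptance(answer_prop, next_discourse_act, accepted_props):
    # Failure to initiate negotiation of the answer counts as implicit acceptance.
    if next_discourse_act not in NEGOTIATION_ACTS:
        accepted_props.add(answer_prop)
    return accepted_props

# "What time does it meet?" pursues new information rather than negotiation:
accepted = update_acceptance("teaches(Baker, CS360)", "Obtain-Info-Ref", set())
print(accepted)   # {'teaches(Baker, CS360)'}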
This notion of implicit acceptance is similar to a restricted form of Perrault's default reasoning about the effects of an inform act[Per90] and is explained further in [Lam91].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S1:", "sec_num": null }, { "text": "As an example of how our process model assimilates utterances and can incrementally recognize a discourse action that cannot be recognized from a single utterance, consider the following: S1: (a) I want a math minor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3.4" }, { "text": "(b) What should I do?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3.4" }, { "text": "A few of the plans needed to handle this example are shown in Figure 1 ; these plans assume a cooperative dialogue. From the surface inform, plan inference rules suggest that S1 is executing a Tell action and that this Tell is part of an Inform action (the applicability conditions for both actions can be plausibly ascribed to S1) and these are entered into the discourse level of the DM. No further inferences on this level are possible since the Inform can be part of several discourse plans and there is no existing dialogue context that suggests which of these S1 might be pursuing. The system infers that S1 wants the goal of the Inform action, namely know(S2, want(S1, Get-Minor(S1, Math))).", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "Since this proposition is a precondition for building a plan for getting a math minor, the system infers that S1 wants Build-Plan(S1, S2, Get-Minor(S1, Math)) and this Build-Plan action is entered into the problem-solving level of the DM. From this, the system infers that S1 wants the goal of that action; since this result is the precondition for getting a math minor, the system infers that S1 wants to get a math minor and this domain action is entered into the domain level of the DM. The resulting discourse model, with links between the actions at different levels and the current focus of attention on each level marked with an asterisk, is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 658, "end": 666, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "The semantic representation of (b) is Surface-Request(S1, S2, Informref(S2, S1, _action1, need-do(S1, _action1, _action2))). From this utterance we can infer that S1 is performing an Obtain-Info-Ref action and that S1 wants the goal of the Obtain-Info-Ref action (namely, that S1 know the subactions that he needs to do in order to perform _action2), which is in turn a precondition for building a plan. This produces the inference that S1 wants Build-Plan(S1, S2, _action2) which is an action at the problem-solving level. The focusing heuristics suggest that the most coherent expectation at the discourse level is that S1's discourse-level actions are part of a plan for performing the Tell action that is the parent of the action that was previously marked as the current focus of attention in the discourse model. However, no line of inference from the second utterance represents an expansion of this plan. (This means that the proposition was understood without any clarification. 5) Similarly, no expansion of the plan for the Inform action (the other ancestor of the focus of attention in the existing DM) succeeds in linking the new utterance to the DM. (This means that the communicated proposition was accepted without any squaring away of beliefs[Jos82].)
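The upward chaining used in this example, from wanting an action's precondition to wanting the action and hence its goal, can be caricatured as follows (a toy sketch of ours; the two-entry plan library and its proposition strings are assumptions):

PLAN_LIBRARY = {
    # action: (precondition, goal)
    "Build-Plan(S1,S2,Get-Minor(S1,Math))":
        ("know(S2,want(S1,Get-Minor(S1,Math)))", "have-plan(S1,Get-Minor(S1,Math))"),
    "Get-Minor(S1,Math)":
        ("have-plan(S1,Get-Minor(S1,Math))", "minor(S1,Math)"),
}

def chain_up(wanted_prop):
    chain = []
    changed = True
    while changed:
        changed = False
        for action, (prec, goal) in PLAN_LIBRARY.items():
            if prec == wanted_prop and action not in chain:
                chain.append(action)   # wanting a precondition suggests wanting the action
                wanted_prop = goal     # ...and hence wanting the action's goal
                changed = True
    return chain

print(chain_up("know(S2,want(S1,Get-Minor(S1,Math)))"))
# ['Build-Plan(S1,S2,Get-Minor(S1,Math))', 'Get-Minor(S1,Math)']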
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "Since the first focusing heuristic was unsuccessful in finding a relationship between the new utterance and the existing dialogue model, the second focusing heuristic is tried. It suggests that the new utterance and the actions at the discourse level in the existing DM might both be part of an expanded plan for some other discourse action. The inferences described above lead from (b) to the discourse action Ask-Ref, whose plan can be expanded as shown in Figure 3 to include, as background for the Ask-Ref, the Inform and the Tell actions that were entered into the DM from (a). 6 The focusing heuristics suggest that the most coherent continuation at the problem-solving level is that the new utterance is continuing the Build-Plan that was previously marked as the current focus of attention at that level. This is possible by instantiating _action2 with Get-Minor(S1, Math). Thus the DM is expanded as shown in Figure 3 with the new focus of attention on each level marked with an asterisk. Note that S1's overall goal of obtaining information was not conveyed by (a) alone; consequently, only after both utterances were coherently related could it be determined that (a) was part of an overall discourse plan to obtain information and that (a) was intended to provide background data for the request being made in (b). 7", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "5We are assuming that the hearer has an opportunity to intervene after an utterance. This is a simplification and must eventually be removed to capture a hearer's saving his requests for clarification and negotiation of beliefs until the end of the speaker's complete turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "6Note that the actions in the body of Ask-Ref are not ordered; an agent can provide clarification and background information before or after asking a question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "7An inform action could also be used for other purposes, including justifying a question and merely conveying information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "Further queries would lead to more elaborate tree structures on the problem-solving and domain levels. For example, suppose that S1 is told that Math 210 is a required course for a math minor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "Then a subsequent query such as Who is teaching Math 210 next semester? would be performing a discourse act of obtaining information in order to perform a problem-solving action of instantiating a parameter in a Learn-Material domain action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "Since learning the material from one of the teachers of a course is part of a domain plan for taking a course and since instantiating the parameters in actions in the body of domain plans is part of building the domain plan, further inferences would indicate that this Instantiate-Vars problem-solving action is being executed in order to perform the problem-solving action of building a plan for the domain action of taking Math 210 in order to build a plan to get a math minor. Consequently, the domain and problem-solving levels would be expanded so that each contained several plans, with appropriate links between the levels.
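The instantiation step described above amounts to unifying the hypothesized action with the focused one; a toy rendering follows (ours; real constraint satisfaction also performs the belief ascription check):

def unify_arg(pattern, ground, bindings):
    # Arguments prefixed with '_' are variables awaiting instantiation.
    if pattern.startswith("_"):
        if pattern in bindings and bindings[pattern] != ground:
            return None                 # conflicting instantiation: inference rejected
        bindings[pattern] = ground
        return bindings
    return bindings if pattern == ground else None

focused  = ("Build-Plan", "S1", "S2", "Get-Minor(S1, Math)")   # from utterance (a)
inferred = ("Build-Plan", "S1", "S2", "_action2")              # from utterance (b)
bindings = {}
for p, g in zip(inferred, focused):
    bindings = unify_arg(p, g, bindings)
    if bindings is None:
        break
print(bindings)   # {'_action2': 'Get-Minor(S1, Math)'}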
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b) What should I do?", "sec_num": null }, { "text": "We are currently examining the applications that this model has in modeling negotiation dialogues and discourse acts such as convince, warn, and express surprise. To extend our notion of implicit acceptance of a proposition to negotiation dialogues, we are exploring treating a discourse plan as having successfully achieved its goal if it is plausible that all of its subacts have achieved their goals and all of its applicability conditions (except those negated by the goal) are still true after the subacts have been executed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "Especially in negotiation dialogues, a system must account for the fact that a user may change his mind during a conversation. But often people only slightly modify their beliefs. For example, the system might inform the user of some proposition about which the user previously held no beliefs. In that case, if the user has no reason to disbelieve the proposition, the user may adopt that proposition as one of his own beliefs. However, if the user disbelieved the proposition before the system performed the inform, then the user might change from disbelief to neither belief nor disbelief; a robust model of understanding must be able to handle a response that expresses doubt or even disbelief at a previous utterance, especially in modeling arguments and negotiation dialogues. Thus, a system should be able to (1) represent levels of belief, (2) recognize how a speaker's utterance conveys these different levels of belief, (3) use these levels of belief in recognizing discourse plans, and (4) use previous context and a user's responses to model changing beliefs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "We are investigating the use of a multi-level belief model to represent the strength of an agent's beliefs and are studying how the form of an utterance and certain clue words contribute to conveying these beliefs. Consider, for example, the following two utterances:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "(1) Is Dr. Smith teaching CS310?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "(2) Isn't Dr. Smith teaching CS310?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "A simple yes-no question as in utterance (1) suggests only that the speaker doesn't know whether Dr. Smith teaches CS310, whereas the form of the question in utterance (2) suggests that the speaker has a relatively strong belief that Dr. Smith teaches CS310 but is uncertain of this. These beliefs conveyed by the surface speech act must be taken into account during the plan recognition process. Thus our plan recognition algorithm will first use the effects of the surface speech act to suggest augmentations to the belief model.
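As a first approximation (ours; the level names and form labels are assumptions, and the actual multi-level model distinguishes more cases, including clue words):

BELIEF_LEVELS = ["disbelief", "weak-disbelief", "no-belief", "weak-belief", "strong-belief"]

def conveyed_belief(surface_form):
    # Suggest a belief-model augmentation from the form of a yes-no question.
    if surface_form == "yes-no":            # "Is Dr. Smith teaching CS310?"
        return "no-belief"                  # speaker doesn't know either way
    if surface_form == "negative-yes-no":   # "Isn't Dr. Smith teaching CS310?"
        return "strong-belief"              # strong but uncertain belief in the proposition
    return "no-belief"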
These augmentations will then be taken into account in deciding whether requisite beliefs for potential discourse acts can be plausibly ascribed to the speaker and will enable us to identify such discourse actions as expressing surprise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "[Lam91] further discusses the use of a multi-level belief model and its contribution in modeling dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current and Future Work", "sec_num": "4" }, { "text": "Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Ramshaw[Ram91] has developed a model of discourse that contains a domain execution level, an exploration level, and a discourse level. In his model, discourse plans can refer either to the exploration level (corresponding to queries about possible ways of achieving a goal) or to the domain execution level (corresponding to queries after commitment has been made to achieve a goal in a particular way). In our tripartite model, discourse, problem-solving, and domain plans form a hierarchy with links between adjacent levels. Whereas Ramshaw's exploration level captures the consideration of alternative plans, our intermediate level captures the notion of problem-solving and plan-construction, whether or not there has been a commitment to a particular way of achieving a domain goal. Thus a query such as To whom do I make out the check? would be recognized as a query against the domain execution level in Ramshaw's model (since it is a query made after commitment to a plan such as opening a passbook savings account[Ram91]), but our model would treat it as a discourse plan that is executed to further the problem-solving plan of instantiating a parameter in an action in a domain plan --i.e., our model would view the agent as asking a question in order to further the construction of his partially constructed domain plan.", "cite_spans": [ { "start": 0, "end": 14, "text": "Ramshaw[Ram91]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Our tripartite model offers several advantages. Ramshaw's model assumes that the top-level domain plan is given at the outset of the dialogue and then his model expands that plan to accommodate user queries. Our model, on the other hand, builds the DM incrementally at each level as the dialogue progresses; it therefore can handle bottom-up dialogues [Car87] in which the user's overall top-level goal is not explicitly known at the outset and can recognize discourse actions that cannot be identified from a single utterance. In addition, our domain, problem-solving, and discourse plans are all recognized incrementally using basically the same plan recognition algorithm on each level[Wil81]. Consequently, we foresee being able to extend our model to include additional pairs of problem-solving and discourse levels whose domain level contains an existing problem-solving or discourse plan; this will enable us to handle utterances such as What should we work on next? (query trying to further construction of a problem-solving plan) and Do you have information about ... ? (query trying to further construction of a discourse plan to obtain information).", "cite_spans": [ { "start": 352, "end": 359, "text": "[Car87]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Ramshaw's plan exploration strategies, his differentiation between exploration and commitment, and his heuristics for recognizing adoption of a plan are very important. While our work has not yet addressed these issues, we believe that they are consistent with our model and are best addressed at our problem-solving level by adding new problem-solving metaplans.
Such an incorporation will have several advantages, including the ability to handle utterances such as: If I decide to get a BA degree, then I'll take French to meet the foreign language requirement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In the above case, the speaker is still exploring a plan for getting a BA degree, but has committed to taking French to satisfy the foreign language requirement should the plan for the BA degree be adopted. It does not appear that Ramshaw's model can handle such contingent commitment. This enrichment of our problem-solving level may necessitate changes to our focusing heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We have presented a tripartite model of dialogue that distinguishes between domain, problem-solving, and discourse or communicative actions. By modeling each of these three kinds of actions as separate tree structures, with links between the actions on adjacent levels, our process model enables the incremental recognition of discourse actions that cannot be identified from a single utterance alone. However, it is still able to capture the relationship between discourse, problem-solving, and domain actions. In addition, it provides a more finely differentiated representation of user intentions than previous models, allows the nuances of different kinds of processing (such as different focusing expectations and information retention) to be captured at each level, and accounts for implicit acceptance of a communicated proposition. Our current work involves using this model to handle negotiation dialogues in which a hearer does not automatically accept as valid the proposition communicated by an inform action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "This material is based upon work supported by the National Science Foundation under Grant No. IRI-8909332. The Government has certain rights in this material. 2We would like to thank Kathy McCoy for her comments on various drafts of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Analyzing intention in utterances", "authors": [ { "first": "James", "middle": [ "F" ], "last": "Allen", "suffix": "" }, { "first": "C", "middle": [ "Raymond" ], "last": "Perrault", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "15", "issue": "", "pages": "143--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "James F. Allen and C. Raymond Perrault. Analyzing intention in utterances.
Artificial Intelligence, 15:143-178, 1980.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pragmatic modeling: Toward a robust natural language interface", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1987, "venue": "Computational Intelligence", "volume": "3", "issue": "", "pages": "117--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra Carberry. Pragmatic modeling: Toward a robust natural language interface. Computational Intelligence, 3:117-136, 1987.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A pragmatics-based approach to ellipsis resolution", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1989, "venue": "Computational Linguistics", "volume": "15", "issue": "2", "pages": "75--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra Carberry. A pragmatics-based approach to ellipsis resolution. Computational Linguistics, 15(2):75-96, 1989.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Thread of Discourse", "authors": [ { "first": "Joseph", "middle": [ "E" ], "last": "Grimes", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph E. Grimes. The Thread of Discourse. Mouton, 1975.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Attention, intention, and the structure of discourse", "authors": [ { "first": "Barbara", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Grosz and Candace Sidner. Attention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Planning coherent multisentential text", "authors": [ { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "163--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard H. Hovy. Planning coherent multisentential text. Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 163-169, 1988.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mutual beliefs in question-answer systems", "authors": [ { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1982, "venue": "Mutual Beliefs", "volume": "", "issue": "", "pages": "181--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi. Mutual beliefs in question-answer systems. In N. Smith, editor, Mutual Beliefs, pages 181-197, New York, 1982. Academic Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A plan recognition model for subdialogues in conversation", "authors": [ { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1987, "venue": "Cognitive Science", "volume": "11", "issue": "", "pages": "163--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diane Litman and James Allen. A plan recognition model for subdialogues in conversation.
Cognitive Science, 11:163-200, 1987.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modifying beliefs in a plan-based discourse model", "authors": [ { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynn Lambert. Modifying beliefs in a plan-based discourse model. In Proceedings of the 29th Annual Meeting of the ACL, Berkeley, CA, June 1991.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Planning text for advisory dialogues", "authors": [ { "first": "Johanna", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Cecile", "middle": [], "last": "Paris", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "203--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johanna Moore and Cecile Paris. Planning text for advisory dialogues. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 203-211, Vancouver, Canada, 1990.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Relational propositions in discourse", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C. Mann and Sandra A. Thompson. Relational propositions in discourse. Technical Report ISI/RR-83-115, ISI/USC, November 1983.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A plan-based analysis of indirect speech acts", "authors": [ { "first": "Raymond", "middle": [], "last": "Perrault", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1980, "venue": "American Journal of Computational Linguistics", "volume": "6", "issue": "3-4", "pages": "167--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond Perrault and James Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182, 1980.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An application of default logic to speech act theory", "authors": [ { "first": "Raymond", "middle": [], "last": "Perrault", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "161--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond Perrault. An application of default logic to speech act theory. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication, pages 161-185. MIT Press, Cambridge, Massachusetts, 1990.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The linguistic discourse model: Towards a formal theory of discourse structure", "authors": [ { "first": "Livia", "middle": [], "last": "Polanyi", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livia Polanyi. The linguistic discourse model: Towards a formal theory of discourse structure.
Technical Report 6409, Bolt Beranek and Newman Laboratories Inc., Cambridge, Massachusetts, 1986.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Inferring Domain Plans in Question-Answering", "authors": [ { "first": "Martha", "middle": [], "last": "Pollack", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Pollack. Inferring Domain Plans in Question-Answering. PhD thesis, University of Pennsylvania, Philadelphia, Pennsylvania, 1986.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pragmatic Knowledge for Resolving Ill-Formedness", "authors": [ { "first": "Lance", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance A. Ramshaw. Pragmatic Knowledge for Resolving Ill-Formedness. PhD thesis, University of Delaware, Newark, Delaware, June 1989.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A three-level model for plan exploration", "authors": [ { "first": "Lance", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance A. Ramshaw. A three-level model for plan exploration. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, California, 1991.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Conversational coherency", "authors": [ { "first": "Rachel", "middle": [], "last": "Reichman", "suffix": "" } ], "year": 1978, "venue": "Cognitive Science", "volume": "2", "issue": "", "pages": "283--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Reichman. Conversational coherency. Cognitive Science, 2:283-327, 1978. Peter van Beek and Robin Cohen. Towards user specific explanations from expert systems. In Proceedings of the Sixth Canadian Conference on Artificial Intelligence, pages 194-198, Montreal, Canada, 1986.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding", "authors": [ { "first": "Robert", "middle": [], "last": "Wilensky", "suffix": "" } ], "year": 1981, "venue": "Cognitive Science", "volume": "5", "issue": "", "pages": "197--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Wilensky. Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, 5:197-233, 1981.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "
The inferences described above lead from (b) to the discourse action Ask-Ref whose plan can be expanded as shown in Figure 3 to include, as background for the Ask-Ref, the Inform and the Tell actions that", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "html": null, "text": "Request", "num": null, "content": "
Figure 2: Dialogue Model from the first utterance. (The figure was garbled in extraction; its recoverable structure: Domain Level: Get-Minor(S1, Math); an enable-arc links it down to the Problem-Solving Level: *Build-Plan(S1, S2, Get-Minor(S1, Math)); the Discourse Level contains Inform(S1, S2, want(S1, Get-Minor(S1, Math))) with a subaction-arc to Tell(S1, S2, want(S1, Get-Minor(S1, Math))) and a further subaction-arc to the surface inform; the current focus of attention on each level is marked with an asterisk.)
", "type_str": "table" } } } }