{ "paper_id": "J99-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:47:12.133747Z" }, "title": "A Process Model for Recognizing Communicative Acts and Modeling Negotiation Subdialogues", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Negotiation is an important part of task-oriented expert-consultation dialogues. This paper presents a plan-based model for understanding cooperative negotiation subdialogues. Our system infers both the communicative actions that people pursue when speaking and the beliefs underlying these actions. Beliefs, and the strength of these beliefs, are recognized from the surface form of utterances,from discourse acts, and from the explicit and implicit acceptance of previous utterances. Our algorithm for recognizing discourse actions combines linguistic, world, and contextual knowledge in a unified framework. By combining these different knowledge sources, we are able to recognize complex discourse acts such as expressing doubt, to identify the relationship of utterances to one another, and to model negotiation subdialogues. Since negotiation is an integral part of multiagent activity, our process model addresses an important aspect of cooperative interaction and thus is a step toward an intelligent and robust natural language consultation system.", "pdf_parse": { "paper_id": "J99-1001", "_pdf_hash": "", "abstract": [ { "text": "Negotiation is an important part of task-oriented expert-consultation dialogues. This paper presents a plan-based model for understanding cooperative negotiation subdialogues. Our system infers both the communicative actions that people pursue when speaking and the beliefs underlying these actions. Beliefs, and the strength of these beliefs, are recognized from the surface form of utterances,from discourse acts, and from the explicit and implicit acceptance of previous utterances. Our algorithm for recognizing discourse actions combines linguistic, world, and contextual knowledge in a unified framework. By combining these different knowledge sources, we are able to recognize complex discourse acts such as expressing doubt, to identify the relationship of utterances to one another, and to model negotiation subdialogues. Since negotiation is an integral part of multiagent activity, our process model addresses an important aspect of cooperative interaction and thus is a step toward an intelligent and robust natural language consultation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In a typical expert-consultation dialogue, one participant (hereafter referred to as the executing agent or EA) has a goal that he 1 wants to achieve and is working with the other participant (referred to as the consulting agent or CA) to construct a plan for achieving this goal. Although both the plan construction process and the conversation are collaborative activities, this does not mean that people always believe what they are told. In fact, part of the collaborative activity of conversation is negotiation of conflicting beliefs. 
This negotiation is particularly important in task-oriented expert-consultation dialogues, since the participants must resolve any conflicting beliefs in order to work together effectively to devise a plan that is both well-formed and addresses the executing agent's needs. Thus, a robust natural language consultation system must be able to handle negotiation subdialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Even though there is wide agreement that negotiation is an integral part of multiagent activity, previous natural language understanding systems have been unable to handle negotiation subdialogues such as the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) S1: Who is teaching CS360? (2) S2: Dr. Smith is teaching CS360. (3) S1: But isn't CS360 an undergraduate course? (4) S2: Yes. CS360 is an undergraduate course. (5) S2: Dr. Smith teaches both graduate and undergraduate courses. (6) S1: Who handles the CS360 lab?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For example, existing systems do not recognize when an agent is expressing doubt at a previous response as in utterance (3), when an agent is attempting to resolve a conflict suggested by the other participant as in utterances (4)-(5), or when an agent is implicitly conveying acceptance of a communicated proposition as in utterance (6). These shortcomings prevent existing natural language systems from being able to handle dialogues in which one agent initially does not accept the proposition conveyed by the other agent and initiates a negotiation subdialogue to resolve their differences in belief.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We have developed a plan-based model of dialogue that addresses these limitations. Our analysis of naturally occurring dialogue indicates that one way that people express doubt at a proposition Pdoubt is by contending that some other conflicting proposition Pi is true. Our process model includes an algorithm for recognizing such expressions of doubt, as well as other complex discourse acts. The algorithm uses a multistrength belief model and a combination of linguistic, world, and contextual knowledge. Our implemented system can recognize implicit as well as explicit acceptance of a communicated proposition, multiple expressions of doubt at the same proposition, expressions of doubt at both immediately preceding and earlier utterances, and negotiation subdialogues embedded within other negotiation subdialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the remainder of this paper, we describe our system and how this process is performed. Section 2 describes the kinds of expressions of doubt found in our corpus analysis, and Section 3 discusses the factors that must be taken into account in recognizing the kind of expression of doubt that we have been studying. 
Section 4 presents our process model for recognizing complex discourse acts (such as expressions of doubt) and assimilating them into the dialogue context. First it discusses why it is necessary to capture varying degrees of belief, describes the multistrength model of belief used in our system, and discusses how our description of actions avoids assuming that a speaker will automatically adopt a communicated proposition. Then it introduces the notion of an action that requires evidence for its recognition and presents our recognition algorithm that uses a combination of linguistic, world, and contextual knowledge. Section 5 steps through an extended example that illustrates our system's ability to recognize complex discourse acts and model negotiation subdialogues. Section 6 discusses the evaluation of our system and our plans for future work, and Section 7 discusses related research. The examples in this paper are taken from a university advisement domain, since this is the domain in which we have implemented our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To identify how speakers express doubt, we analyzed a corpus of naturally occurring dialogues in the domains of financial planning, university courses, real estate, pets, taxes, and travel. The real estate, pets, and financial planning (Harry Gross Transcripts 1982) dialogues were transcribed from radio talk shows, the taxes and travel (SRI Transcripts 1992) dialogues were transcribed from tapes of simulated interactions, and the university courses dialogues (Columbia University Transcripts, 1985) were transcribed from student advisement sessions. In the corpus we found instances in which a speaker expressed doubt at a proposition by contending that some other conflicting proposition was true. 2 In addition, we extracted other examples of such expressions of doubt from the dialogues in novels. These kinds of expressions of doubt can be realized as surface negative questions or tag questions and are often accompanied by the cue word but, as in the following example taken from the Harry Gross financial planning dialogues (Harry Gross Transcripts 1982) in which S2's last utterance expresses doubt at Sl's recommendation: I would like to see that into an individual retirement account rollover in a mutual fund group.", "cite_spans": [ { "start": 236, "end": 266, "text": "(Harry Gross Transcripts 1982)", "ref_id": "BIBREF42" }, { "start": 338, "end": 360, "text": "(SRI Transcripts 1992)", "ref_id": "BIBREF78" }, { "start": 463, "end": 502, "text": "(Columbia University Transcripts, 1985)", "ref_id": null }, { "start": 1035, "end": 1065, "text": "(Harry Gross Transcripts 1982)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation from Naturally Occurring Dialogues", "sec_num": "2." 
}, { "text": "Yes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "Uh, yeah but isn't there any risk?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "However, our corpus analysis also provided instances where surface negative and tag questions were used to seek verification, such as in the following excerpt from the set of financial planning dialogues: $1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "$2: $1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "And if you have more money left after you pay the taxes what difference does it make if you pay a few bucks more in taxes? I'm telling my wife but she won't listen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "Well maybe she'll listen to me. If you get 200 bucks--isn't it better to have 200 bucks and uh have 200 left than to have nothing at all?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "Our recognition algorithm has only been concerned with recognizing instances in which a speaker expresses doubt by contending that some other proposition is true. However, in our corpus, speakers also expressed doubt in the following ways, and our future work will include extending our system to handle these: 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "Drawing attention to an inconsistent feature or proposition: The speaker brings into focus a feature or proposition that is already part of the dialogue context but that is intended to discredit the proposition being doubted. These utterances were often realized as an elliptical fragment and included \"And you're how old?', \"Even though it's four more years?\", and \"At my age?\" Drawing attention to violated expectations: The speaker mentions an expectation that is inconsistent with the doubted proposition. An example of this from our corpus is \"You're kidding, what happened to the seventy-eight dollar fares or those sort of things ?\" 2 We found a very few instances in which the speaker asked the hearer if he was sure the conflicting proposition wasn't true; an example from our corpus is \"Are you sure he didn't name himself as attorney for the estate?\" taken from the Harry Gross financial planning dialogues (Harry Gross Transcripts 1982 ).", "cite_spans": [ { "start": 918, "end": 947, "text": "(Harry Gross Transcripts 1982", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "Our current system does not handle such expressions of doubt. 3 Walker (1996) analyzed the Harry Gross financial planning dialogues (Harry Gross Transcripts 1982) to identify features that distinguish acceptance from rejection. 
However, she did not consider expressions of doubt and some of her rejections would fall into our \"express doubt\" category.", "cite_spans": [ { "start": 64, "end": 77, "text": "Walker (1996)", "ref_id": "BIBREF85" }, { "start": 132, "end": 162, "text": "(Harry Gross Transcripts 1982)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "\u2022 Repetition: The speaker queries the doubted proposition; this usually took the form of a declarative followed by a question mark, as in \"You have 40 thou(sand) inn a ram fund?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "\u2022 Explicit statements and questions: The speaker explicitly doubts what has been said or asks for justification; examples from our corpus include \"I'm not so sure of that.\" and \"Who ever said that?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "\u2022 Cue words: The speaker uses discourse markers to convey his doubt. In addition to the cue word but that is often used to realize expressions of doubts in the other categories, cue words such as Really? and What? were used by themselves to express doubt and cue words such as even though were used to convey doubt in utterances such as \"Even though it's four more years ?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "At my age?", "sec_num": null }, { "text": "When a listener in a collaborative interaction does not accept the proposition that the speaker is trying to convey, a negotiation subdialogue ensues in which the participants attempt to \"square away\" (Joshi 1982 ) their disparate beliefs. Negotiation subdialogues often involve complex discourse acts that implicitly refer to some proposition that is part of the existing dialogue context. We have found that such complex discourse acts require evidence for their recognition. Since the motivation for our work is the recognition of communicative acts that occur in negotiation subdialogues, this section examines in detail how plausibility and evidence affect the recognition of one kind of complex discourse act, an expression of doubt. For a listener to recognize a discourse action, it must be plausible that the speaker holds the requisite beliefs for performing the action. (A belief is plausible if the available evidence does not refute it.) For example, in order for a listener to interpret an utterance as felicitously asking a question in order to obtain information, it must be plausible that the speaker does not already know the information and that the speaker believes that the listener may be able to provide it. Similarly, to interpret an utterance as expressing doubt at a proposition Pdoubt by contending Pi, it must be plausible that the speaker holds certain beliefs. Consider a university setting in which each course has only one instructor and a speaker uses the proposition Pi, that Dr. Brown teaches Architecture, to express doubt at the proposition Pdoubt, that Dr. Smith is teaching Architecture, as in utterance (9) of Figure 1 . In order to intend (9) as an expression of doubt in a collaborative dialogue, the speaker must believe 1.", "cite_spans": [ { "start": 201, "end": 212, "text": "(Joshi 1982", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 1650, "end": 1658, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Recognizing Expressions of Doubt", "sec_num": "3." 
}, { "text": "3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "that the hearer has some belief in Pdoubt that Pi is true that if Pi is true, then Pdoubt is not The hearer must be able to plausibly ascribe each of these beliefs to the speaker in recognizing the expression of doubt. First, it must be plausible that the speaker believes that the hearer has some belief in the proposition that is being doubted, since it is pointless in a collaborative dialogue to express doubt at something about which there is no disagreement. 4 It must also be plausible that the speaker has some belief in Pi, (7) EA: What is Dr. Smith teaching? (8) CA: Dr. Smith is teaching Architecture. (9) EA: Isn't Dr. Brown teaching Architecture?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "A dialogue with an expression of doubt. since otherwise the hearer could not be expected to believe that the speaker was using a conflict between Pi and Pdoubt to question the validity of Paoubt. Similarly, it must be plausible that the speaker believes that if Pi is true, then Pdoubt is not; otherwise the hearer could not be expected to think that the speaker believes that the truth of Pi raises doubts about Pdoubt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Although being able to ascribe beliefs as plausible is necessary for recognition of all discourse actions, some discourse actions, such as the expressions of doubt that we consider in this paper, require further evidence. This evidence is provided by linguistic, world, and contextual knowledge. These knowledge sources can either provide evidence for a generic discourse act (such as an expression of doubt) or evidence that the conditions are satisfied for performing a specific discourse act (such as expressing doubt that Dr. Smith is teaching Architecture). In addition, contextual knowledge can suggest a particular interpretation when equivalent evidence exists for several specific discourse acts. These knowledge sources are discussed in the next sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Act. 
A number of researchers (Reichman 1978 (Reichman , 1985 Grosz and Sidner 1986; Polanyi 1986; Cohen 1987; Hirschberg and Litman 1987; Litman and Allen 1987; Schiffrin 1987; Hinkelman 1989; Litman and Hirschberg 1990; Knott and Dale 1994; Knott and Mellish 1996; Marcu 1997) have investigated the use in discourse of special words and phrases such as but, anyway, and by the way.", "cite_spans": [ { "start": 29, "end": 43, "text": "(Reichman 1978", "ref_id": "BIBREF69" }, { "start": 44, "end": 60, "text": "(Reichman , 1985", "ref_id": "BIBREF71" }, { "start": 61, "end": 83, "text": "Grosz and Sidner 1986;", "ref_id": "BIBREF40" }, { "start": 84, "end": 97, "text": "Polanyi 1986;", "ref_id": "BIBREF65" }, { "start": 98, "end": 109, "text": "Cohen 1987;", "ref_id": "BIBREF29" }, { "start": 110, "end": 137, "text": "Hirschberg and Litman 1987;", "ref_id": "BIBREF45" }, { "start": 138, "end": 160, "text": "Litman and Allen 1987;", "ref_id": "BIBREF54" }, { "start": 161, "end": 176, "text": "Schiffrin 1987;", "ref_id": "BIBREF75" }, { "start": 177, "end": 192, "text": "Hinkelman 1989;", "ref_id": "BIBREF44" }, { "start": 193, "end": 220, "text": "Litman and Hirschberg 1990;", "ref_id": "BIBREF55" }, { "start": 221, "end": 241, "text": "Knott and Dale 1994;", "ref_id": "BIBREF48" }, { "start": 242, "end": 265, "text": "Knott and Mellish 1996;", "ref_id": "BIBREF49" }, { "start": 266, "end": 277, "text": "Marcu 1997)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "They found that these clue words, or discourse markers, have a number of different functions, including indicating the role of an utterance in the dialogue, conveying the relationship between utterances, suggesting shifts in focus of attention, conveying the structure of the discourse, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "Consider again the dialogue shown in Figure 1 . If EA had followed (7)-(8) with (9a) (9)a.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "EA: Isn't Architecture one of our required courses? then EA's utterance would not be interpreted as expressing doubt but would instead be understood as merely seeking information about the Architecture course. However, if this utterance is preceded by the clue word but, as in (9b) below,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "b. EA: But isn't Architecture one of our required courses? then the utterance is expressing doubt, though we have difficulty ascertaining the reason for this doubt--perhaps EA believes that Dr. Smith does not teach courses that students are required to take! Thus, clue words comprise one source of evidence in the recognition of discourse acts. In particular, a clue word can provide evidence for a generic discourse act, such as Express-Doubt, but it remains for other sources to resolve what is being doubted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "3.1.2 Evidence for a Specific Discourse Act. 
Expressions of doubt do not always include clue words, as illustrated by utterance (9) in Figure 1 . In the absence of a clue word, we need evidence that the speaker holds the three beliefs, listed earlier in Section 3, for performing a specific discourse act. Evidence for the second belief (that the speaker believes that Pi is true) is often provided by the surface form of the utterance, such as an utterance of the form \"Isn't Pi ?\"--for example, \"Isn't Dr. Brown teaching Architecture?\" in (9) . This surface form indicates a strong belief in the queried proposition while a simple yes-no question, such as \"Is Dr. Brown teaching Architecture?\", does not. Therefore, if EA were to follow (7)-(8) with \"Is Dr. Brown teaching Architecture?\", EA would seem to have a misconception that more than one person can teach a course or perhaps be seeking information in order to subsequently express doubt--but the utterance itself is not conveying doubt that Dr. Smith is teaching Architecture. Thus the surface form of the utterance is one source of evidence that the speaker holds the requisite beliefs for performing a specific discourse act.", "cite_spans": [ { "start": 523, "end": 544, "text": "Architecture?\" in (9)", "ref_id": null } ], "ref_spans": [ { "start": 135, "end": 143, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Knowledge 3.1.1 Evidence for a Generic Discourse", "sec_num": "3.1" }, { "text": "World knowledge in the form of stereotypical beliefs is another source of evidence that the speaker holds the requisite beliefs for a particular discourse act. For example, world knowledge can provide evidence for the third speaker belief, that if Pi is true, then Pdoubt is not. Suppose that it is stereotypically believed that prestigious fellowships are awarded for sabbaticals, that faculty on sabbatical do not teach, and that faculty only teach in their area of expertise. Consider the dialogue shown in Figure 2 . After (13), there are two propositions that have been conveyed by CA but not yet completely accepted by EA: the proposition that Dr. Smith is not on sabbatical and the proposition that Dr. Smith is teaching CS360, cormnunicated by utterances 13and 11, respectively. A subsequent utterance might express doubt at one of these propositions or might forego the opportunity to doubt them, perhaps by pursuing some discourse act unrelated to either of the propositions. Consider the following three possible continuations of the dialogue:", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 518, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.2" }, { "text": "(14)a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.2" }, { "text": "b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.2" }, { "text": "EA: Wasn't Dr. Smith awarded a Fulbright? EA: Isn't Dr. Smith a theory person? EA: Isn't Dr. Smith an excellent teacher?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.", "sec_num": null }, { "text": "While (14a) and (14b) seem to be expressing doubt, (14c) is simply seeking further information about Dr. Smith. 
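As a rough illustration of the world-knowledge check at work here (the next paragraph explains the difference in detail), the following minimal Python sketch, using hypothetical predicate and rule names rather than the system's actual representation, tests whether a queried proposition supplies stereotypical evidence against a proposition that is still open for rejection:

# Stereotypical beliefs, encoded as pairs: if the first proposition holds, it
# counts as evidence against the second (hypothetical names, not the system's).
STEREOTYPICAL_CONFLICTS = [
    ('awarded-fellowship(Smith)', 'not-on-sabbatical(Smith)'),   # cf. (14a)
    ('theory-person(Smith)',      'teaches(Smith, CS360)'),      # cf. (14b)
]

def evidence_against(pi, open_propositions):
    '''Return the open propositions that Pi gives stereotypical evidence against.'''
    return [p for (q, p) in STEREOTYPICAL_CONFLICTS
            if q == pi and p in open_propositions]

open_props = ['not-on-sabbatical(Smith)', 'teaches(Smith, CS360)']
print(evidence_against('awarded-fellowship(Smith)', open_props))  # doubt, as in (14a)
print(evidence_against('excellent-teacher(Smith)', open_props))   # no evidence, as in (14c)
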
The reason for this difference in interpretation is that in the case of (14a) and (14b), evidence from world knowledge suggests that EA believes that Pi (the proposition that EA contends is true) implies that one of the two open propositions is false, whereas no such evidence exists in the case of (14c). In the case of (14a), since it is stereotypically believed that prestigious fellowships are awarded for sabbaticals, EA's utterance should be interpreted as expressing doubt at the proposition that Dr. Smith is not on sabbatical. In the case of (14b), since Dr. Smith being a theory person is an alternative to Dr. Smith being a systems person and it is stereotypically believed that being a systems person is necessary for teaching CS360 (a systems course), EA's utterance would instead be interpreted as expressing doubt at the proposition that Dr. Smith is teaching CS360. Thus, world knowledge in the form of stereotypical beliefs is another source of evidence that the speaker holds the requisite beliefs for performing a particular discourse act. If EA had uttered (14c), EA's utterance would be interpreted as merely seeking new information since there is no domain knowledge suggesting that EA believes that Dr. Smith being an excellent teacher contributes to determining whether Dr. Smith is on sabbatical or to identifying the instructor of CS360. Note that (14c) demonstrates why plausibility alone is insufficient for recognition. Although there is no evidence that EA believes that Dr. Smith being an excellent teacher implies that Dr. Smith is on sabbatical or that Dr. Smith is not teaching CS360, there is also no evidence to the contrary, and thus it is plausible that EA believes that Dr. Smith being an excellent teacher indicates that he is on sabbatical or that he is not teaching CS360. This is not sufficient, however, to interpret (14c) as an expression of doubt. Figure 2: (10) EA: Who is teaching CS360 (a systems course)? (11) CA: Dr. Smith is teaching CS360. (12) EA: Isn't Dr. Smith on sabbatical? (13) CA: No, Dr. Smith is not on sabbatical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.", "sec_num": null }, { "text": "An agent can infer from a dialogue many of the beliefs of the other participant. These acquired beliefs about the other participant's beliefs form one kind of contextual knowledge that can be used as evidence for the beliefs listed above. In addition, contextual knowledge determines the salience (or degree of prominence) of propositions at the current point in the dialogue, and salience is a factor that constrains the interpretation of coherent discourse actions. Consider the first three utterances in the dialogue shown in Figure 2 . EA's acceptance of CA's telling of the proposition that Dr. Smith is teaching CS360 establishes the mutual belief that CA believes that Dr. Smith is teaching CS360 and thus provides evidence for the first belief; 5 in addition, the proposition that Dr. Smith is teaching CS360 becomes salient and is added to the dialogue context. Thus, while an utterance such as (12a)", "cite_spans": [], "ref_spans": [ { "start": 529, "end": 537, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "(12)a. EA: Doesn't Dr. Smith usually teach theory courses? might be used following (11) to express doubt at the statement that Dr. Smith is teaching CS360, it cannot be used following (11) to express doubt at the proposition that Dr. 
Smith teaches CS410 because 1) there is no reason for EA to believe that CA has any belief in the proposition that Dr. Smith teaches CS410, and 2) the proposition that Dr. Smith teaches CS410 is not salient at this point in the dialogue. In addition, contextual knowledge plays two other roles in the recognition of discourse acts. First, in the case of expressions of doubt, contextual knowledge distinguishes propositions that have not yet been accepted by the speaker and thus are open for rejection. Consider again the dialogue in Figure 2 . After (13), there are two propositions that have not yet been accepted by EA and are thus open for rejection by EA. If EA were to continue with (14b), repeated below, (14)b.", "cite_spans": [], "ref_spans": [ { "start": 762, "end": 770, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "EA: Isn't Dr. Smith a theory person?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "then EA would again be expressing doubt at the proposition that Dr. Smith is teaching CS360 and would have implicitly conveyed acceptance of the proposition that 5 Note that here EA is only accepting CA's felicitous telling of the proposition, but EA is not adopting the proposition as one of his own beliefs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "Dr. Smith is not on sabbatical. Thus, as the conversation continues, only one proposition would remain open for rejection: the proposition that Dr. Smith is teaching CS360. This claim is supported by a combination of 1) the stack paradigm (Polanyi 1986; Reichman 1978; Grosz and Sidner 1986; Litman and Allen 1987) , which treats topic structure as following a stack-like discipline; 2) focusing heuristics (McKeown 1983) that suggest that if a speaker has more to say about a topic, then he should do so before moving back to a topic deeper on the stack; and 3) the notion of implicit acceptance (discussed in Section 4.6) that argues that passing up the opportunity to reject an assertion in a collaborative dialogue communicates acceptance of it. Second, contextual knowledge orders propositions according to their relative salience in the current dialogue. This salience can be used to arbitrate among discourse acts for which there is equivalent evidence. Consider again the dialogue in Figure 2 and suppose that EA had continued with (14d).", "cite_spans": [ { "start": 239, "end": 253, "text": "(Polanyi 1986;", "ref_id": "BIBREF65" }, { "start": 254, "end": 268, "text": "Reichman 1978;", "ref_id": "BIBREF69" }, { "start": 269, "end": 291, "text": "Grosz and Sidner 1986;", "ref_id": "BIBREF40" }, { "start": 292, "end": 314, "text": "Litman and Allen 1987)", "ref_id": "BIBREF54" }, { "start": 407, "end": 421, "text": "(McKeown 1983)", "ref_id": "BIBREF61" } ], "ref_spans": [ { "start": 992, "end": 1000, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "(14)d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "EA: But isn't Dr. Smith an excellent teacher?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "Here we have a clue word suggesting an expression of doubt, but the speaker could be expressing doubt either that Dr. 
Smith is not on sabbatical or that Dr. Smith is teaching CS360. In both cases, we lack evidence for the third speaker belief. Contextual knowledge suggests that, all other things being equal, the proposition being doubted is the proposition that Dr. Smith is not on sabbatical, since it is the most salient proposition that is open for rejection at this point in the dialogue. Thus, contextual knowledge arbitrates when equivalent evidence is available for several specific discourse acts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Knowledge", "sec_num": "3.3" }, { "text": "In addition to the requisite speaker beliefs being plausible and the constraints on the discourse act being satisfied (such as the constraint that a proposition be salient at the current point in the dialogue), certain discourse acts require additional evidence for their recognition. Two kinds of evidence that may be used in recognizing discourse actions are 1) evidence (such as a clue word) for a generic discourse act, and 2) evidence that a speaker holds the requisite beliefs for performing a particular discourse act. Evidence for these beliefs can come from linguistic, world, or contextual knowledge. Although we have illustrated each of these knowledge sources by showing how they might provide evidence for one of the requisite beliefs for expressing doubt, it should be noted that each knowledge source might also be used as evidence for other beliefs required for expressing doubt or for beliefs for other discourse acts. For example, although it does not generally arise in the kind of interactive dialogues that we are studying, world knowledge in the form of stereotypical beliefs might be used as evidence that a speaker believes that a hearer has some belief in the doubted proposition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "3.4" }, { "text": "4. The Process Model Grosz and Sidner (1986) claim that a robust model of understanding must use multiple knowledge sources in order to recognize the complex relationships that utterances have to one another. We have developed an algorithm that combines linguistic, world, and contextual knowledge, such as that identified in Section 3, in order to recognize complex discourse acts, including one kind of expression of doubt. Linguistic knowledge consists of clue words and the surface form of the utterance; world knowledge includes a set of stereotypical beliefs that users generally hold and recipes for performing discourse acts; and contextual knowledge consists of a model of the user's beliefs acquired from the preceding dialogue, the current structure of the discourse, the existing focus of attention (that aspect of the task on which the participants' attention is currently centered), and the relative salience (degree of prominence) of propositions in the discourse. The remainder of this section presents the core ideas of our algorithm. Section 4.1 shows why it is necessary to capture varying degrees of belief in a proposition and presents the multistrength belief model used in our system. Section 4.2 describes our representation of recipes for actions and shows how our recipe for an Inform action refrains from assuming that the listener will adopt the communicated proposition as part of his own beliefs; it also presents a recipe for expressing doubt and shows how constraints on the speaker's beliefs are captured in the recipe's applicability conditions. Section 4.3 gives an overview of our dialogue model. 
Section 4.4 describes how chaining is used to hypothesize a sequence of higher-level discourse acts that a speaker may be performing; Section 4.5 introduces the notion of a discourse action that requires evidence for its recognition; Section 4.6 discusses how our model recognizes implicit acceptance of a discourse act; and Section 4.7 presents our recognition algorithm that uses a combination of linguistic, world, and contextual knowledge in recognizing discourse acts.", "cite_spans": [ { "start": 21, "end": 44, "text": "Grosz and Sidner (1986)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Pdoubt\"", "sec_num": null }, { "text": "As argued in Section 3, if a speaker is expressing doubt at a proposition Pdoubt by contending some other proposition Pi, then the speaker must have some belief in Pi. Evidence for this belief is often provided by the surface form of the speaker's utterance, such as an utterance of the form \"Isn't Pi?\" for example, \"Isn't Dr. Brown teaching Architecture?\" But if we treat such an utterance as conveying certain belief that Pi is true, then we cannot handle situations in which an utterance such as this is merely requesting verification since a speaker cannot felicitously seek verification of a proposition that he already knows is true. Therefore, since modeling only ignorance and certainty is inadequate for recognizing complex discourse acts, it is necessary to model the strength of an agent's beliefs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A MultiStrength Belief Model", "sec_num": "4.1" }, { "text": "We use a multistrength model of belief, which captures not only ignorance and certainty about the truth of a proposition but also several degrees of belief in between. Utterances of the form \"Isn't Pi ?\" are treated as conveying a strong (but uncertain) belief in Pi. In this way, our system is able to handle instances in which an utterance of the form \"Isn't Pi?\" is used to request verification (since the utterance is viewed as conveying uncertainty about Pi) as well as instances in which it is used to express doubt (since the utterance is viewed as conveying some belief in Pi).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "Our multistrength belief model maintains three degrees of belief: certain belief (a belief strength of C); strong but uncertain belief, as in \"Isn't Dr. Brown teaching Architecture?\" (a belief strength of S); and weak belief, as in \"I think that Dr. Cayne might be an education instructor\" (a belief strength of W). Three degrees of disbelief (indicated by attaching a subscript of N, such as SN to represent strong disbelief and WN to represent weak disbelief) are also maintained, and one degree indicating no belief about a proposition (a belief strength of 0). 
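As an illustration, these strength levels and the interval notation introduced below can be sketched as follows. This is a minimal Python sketch, not the authors' implementation; it simply treats the strengths as an ordered scale from certain disbelief (CN) to certain belief (C), with an interval bounding the strength another agent is thought to hold.

# Three degrees of disbelief, no belief (0), and three degrees of belief.
STRENGTHS = ['CN', 'SN', 'WN', '0', 'W', 'S', 'C']

def within(strength, low, high):
    '''True if the strength falls inside the interval [low:high].'''
    order = {s: i for i, s in enumerate(STRENGTHS)}
    return order[low] <= order[strength] <= order[high]

# An utterance of the form 'Isn't Pi?' is treated as conveying strength S in Pi,
# which is compatible with the interval [CN:S] used in applicability conditions.
print(within('S', 'CN', 'S'))   # True
print(within('C', 'CN', 'S'))   # False: the speaker is not yet certain
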
We adopted three degrees of positive and negative belief in our model because that was the minimum number of degrees required for modeling the beliefs communicated in the dialogues that we examined and in the negotiation subdialogues that our system is intended to handle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "Although an agent has some specific strength of belief in a proposition, the other agent may not always know precisely what that strength of belief is but may be able to bound it--for example, he may be able to say that the first agent has some belief in a proposition. Our belief model uses belief intervals to capture this, where a belief interval specifies the range of strengths within which an agent's beliefs are thought to fall. Allen and Perrault (1980) noted the need to represent an agent's wanting to know the referent of a term in a proposition, without having to specify what that referent was. For example, if EA asks CA \"Who is teaching CS360?', we cannot represent CA's belief that EA wants to know the teacher of CS360 as believe(CA, want(EA, believe(EA, Teaches(Dr.Smith, CS360)))) since this representation says that CA believes that EA wants to believe that the teacher is Dr. Smith (but EA, in asking the question, may not be predisposed to any such belief and may in fact reject \"Dr. Smith\" as the answer to the question). Allen and Perrault addressed this with knowref and knowif predicates, which represented an agent's knowing the referent of a term in a proposition and knowing whether a proposition is true. Thus CA's belief that EA wants to know the teacher of CS360 in the above example might be represented as believe(CA, want(EA, knowref(EA, _fac, Teaches(_fac, CS360))))", "cite_spans": [ { "start": 436, "end": 461, "text": "Allen and Perrault (1980)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "In our multistrength belief model, knowref is treated as being certain about the referent of the term in the specified proposition and knowif is treated as being certain about whether a proposition is true or false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "As the dialogue progresses, the belief model must capture the changing beliefs of the user. When a discourse act has been successful, its goals can be used to update the belief model. For example, if the user explicitly accepts the proposition conveyed by the utterance \"Dr. Smith is teaching Architecture\" (perhaps by saying \"Yes, I'I1 accept that\"), then the system can update its belief model to include the belief that the user himself believes that Dr. Smith is teaching Architecture. However, explicit acceptance is less common than implicit acceptance; Section 4.6 discusses implicit acceptance and its recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "Since we do not currently have a response generation component, our system processes the utterances of both participants, alternating between playing the role of CA and the role of EA. 
Note that this differs from playing the role of a third-party observer--when the system plays the role of EA, the system has access to EA's beliefs (including EA's beliefs about the current dialogue model and EA's beliefs about CA), and when the system plays the role of CA, it has access to CA's beliefs. However, whenever the system assumes the role of a participant and processes a new utterance, it is assumed that this participant has correctly interpreted previous utterances and has a correct model of the preceding dialogue. 4.1.2 Related Work on Modeling Belief. Young (1987) built a model in which the beliefs of the user are part of an explicit, missing, or stereotype module (he used the system's beliefs as a stereotype). Although this system provides needed differentiation among beliefs that the system knows the user holds, those that the system has attributed to the user, and those about which the system has no knowledge, this model still does not contain degrees of partial belief that are essential for modeling discourse acts such as expressing doubt. Ballim and Wilks (1991) developed a nested belief model that captures an agent's beliefs about other agents' beliefs. Their system combines belief ascription based on stereotypes with belief ascription based on perturbations of the system's own beliefs, but they do not represent how strongly a belief is held. Galliers (1991 Galliers ( , 1992 has specified a nonnumeric theory of belief revision that relates strength of belief to persistence of belief. She points out that a belief model for communication must contain a multistrength model of beliefs that can be modified as the conversation proceeds. She uses endorsements (Cohen 1985) in an assumptionbased truth maintenance system (ATMS [DeKleer 1986] ) to specify a system that orders beliefs according to how strongly they are held. This ordering is used to calculate which beliefs should be revised when beliefs are challenged in the course of conversation. Walker (1991 Walker ( , 1992 has examined dialogues in which people repeat what they already know either in question or statement form (e.g., \"I have four children.\" \"OK. Four children.\"). Walker claims that this repetition by the second speaker is given so that the first speaker realizes that her utterance was understood and believed. That is, cooperative listeners often provide some evidence to speakers to indicate that the listener believes the speaker's claims. Like Galliers, Walker has based the strength of belief on the amount and kind of evidence available for that belief. Cohen and Levesque (1991a) also found this kind of corroboration. Our work has not investigated the belief reasoning process or how much evidence is needed for an agent to have a particular amount of confidence in a belief. 
Instead, we have been concerned with taking into account different communicated strengths of belief and the impact that the different belief strengths have on the recognition of discourse acts.", "cite_spans": [ { "start": 757, "end": 769, "text": "Young (1987)", "ref_id": "BIBREF88" }, { "start": 1259, "end": 1282, "text": "Ballim and Wilks (1991)", "ref_id": null }, { "start": 1570, "end": 1584, "text": "Galliers (1991", "ref_id": "BIBREF37" }, { "start": 1585, "end": 1602, "text": "Galliers ( , 1992", "ref_id": "BIBREF38" }, { "start": 1886, "end": 1898, "text": "(Cohen 1985)", "ref_id": "BIBREF20" }, { "start": 1952, "end": 1966, "text": "[DeKleer 1986]", "ref_id": "BIBREF32" }, { "start": 2176, "end": 2188, "text": "Walker (1991", "ref_id": "BIBREF83" }, { "start": 2189, "end": 2204, "text": "Walker ( , 1992", "ref_id": "BIBREF84" }, { "start": 2763, "end": 2789, "text": "Cohen and Levesque (1991a)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "Our multistrength belief model is very simple and is only intended to meet our system's need for representing how strongly an agent holds a particular belief. We recently became aware of work by Driankov on a logic in which belief/disbelief pairs capture how strongly a proposition is believed (Driankov 1988; Bonarini, Cappelletti, and Corrao 1990) . 6 This work appears to be the only formally defined and well-developed logic that models strength of belief. With the exception that Driankov's logic does not inchide a state of weak belief, it appears to provide the representational and reasoning capability needed by our system and we intend to investigate it for future use.", "cite_spans": [ { "start": 294, "end": 309, "text": "(Driankov 1988;", "ref_id": "BIBREF33" }, { "start": 310, "end": 349, "text": "Bonarini, Cappelletti, and Corrao 1990)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Varying Degrees of Belief.", "sec_num": "4.1.1" }, { "text": "In previous work, we noted the need to differentiate among domain, problem-solving, and discourse actions (Lambert and Carberry 1991; Elzer 1995) . In task-oriented consultation dialogues, the participants are constructing a plan for achieving some domain goal, such as owning a home, and the resultant plan will consist of domain actions such as applying for a mortgage. In order to construct the domain plan, the participants pursue problem-solving actions such as evaluating alternative domain actions or correcting an action in the partially constructed domain plan. 
Domain and problemsolving actions have been investigated by many researchers (Allen and Perrault 1980; Perrault and Allen 1980; Wilensky 1981; Litman and Allen 1987; van Beek and Cohen 1986; Ramshaw 1989; Carberry 1990) .", "cite_spans": [ { "start": 106, "end": 133, "text": "(Lambert and Carberry 1991;", "ref_id": "BIBREF51" }, { "start": 134, "end": 145, "text": "Elzer 1995)", "ref_id": "BIBREF34" }, { "start": 648, "end": 673, "text": "(Allen and Perrault 1980;", "ref_id": "BIBREF1" }, { "start": 674, "end": 698, "text": "Perrault and Allen 1980;", "ref_id": "BIBREF64" }, { "start": 699, "end": 713, "text": "Wilensky 1981;", "ref_id": "BIBREF87" }, { "start": 714, "end": 736, "text": "Litman and Allen 1987;", "ref_id": "BIBREF54" }, { "start": 737, "end": 761, "text": "van Beek and Cohen 1986;", "ref_id": "BIBREF82" }, { "start": 762, "end": 775, "text": "Ramshaw 1989;", "ref_id": "BIBREF68" }, { "start": 776, "end": 790, "text": "Carberry 1990)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipes", "sec_num": "4.2" }, { "text": "Discourse actions are communicative actions that are executed by the dialogue participants in order to obtain or convey the information needed to pursue the problemsolving actions necessary for constructing the domain plan. Examples of very different discourse actions include answering a question, informing, and expressing doubt. Although our system models domain, problem-solving, and discourse actions, this paper is only concerned with recognizing discourse acts, particularly complex discourse acts such as expressing doubt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipes", "sec_num": "4.2" }, { "text": "Our system's knowledge about how to perform actions is contained in a library 6 We would like to thank one of the anonymous reviewers and Ingrid Zukerrnan for bringing this work to our attention.", "cite_spans": [ { "start": 78, "end": 79, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipes", "sec_num": "4.2" }, { "text": "of discourse, problem-solving, and domain recipes (Pollack 1990) . Our representation of a recipe includes a header giving the action defined by the recipe, the recipe type, preconditions, applicability conditions, constraints, a body, effects, and a goal. The recipe type is primitive, specialization, or decomposition. If the recipe type is primitive, then the body of the recipe is empty and the header action corresponds with a primitive action in the domain. In a specialization recipe, the body gives a set of alternative ways of performing the header action (Pollack 1990; Kautz 1990) . For example, one might earn credit in a course either by taking the course for credit or getting credit by exam. In a decomposition recipe, the body gives a set of subactions that constitute performing the header action. A # preceding a subaction in the body of a decomposition recipe indicates that the subaction can be performed any number of times (including zero). 7 Constraints limit the allowable instantiation of variables in each component of a recipe (Litman and Allen 1987) . For example, a variable might have a constraint requiring that it be instantiated with a proposition that is salient at the current point in the discourse. Which instantiations of variables will satisfy the constraints is part of the shared knowledge of the participants. 
8 Applicability conditions (Carberry 1987) are conditions that must be satisfied in order for a recipe to be reasonable to pursue in a given situation. The applicability conditions of our discourse recipes capture attitudes (beliefs and wants) that the agent of the action must hold in order for it to be felicitous (Searle 1970) . Applicability conditions differ from preconditions in that one can plan to satisfy preconditions but it is generally anomalous to try to satisfy applicability conditions. For example, in order for _agent1 to inform _agent2 of _proposition, _agent1 must believe that _proposition is true and must not believe that _agent2 already believes _proposition. It would be anomalous for _agent1 to try to adopt the proposition as one of his beliefs solely for the sake of being able to inform someone else of it, and similarly it would be anomalous for _agent1 to get _agent2 to disbelieve a proposition so that _agent1 can subsequently inform him of it. 9", "cite_spans": [ { "start": 50, "end": 64, "text": "(Pollack 1990)", "ref_id": "BIBREF66" }, { "start": 565, "end": 579, "text": "(Pollack 1990;", "ref_id": "BIBREF66" }, { "start": 580, "end": 591, "text": "Kautz 1990)", "ref_id": "BIBREF47" }, { "start": 1054, "end": 1077, "text": "(Litman and Allen 1987)", "ref_id": "BIBREF54" }, { "start": 1379, "end": 1394, "text": "(Carberry 1987)", "ref_id": "BIBREF7" }, { "start": 1668, "end": 1681, "text": "(Searle 1970)", "ref_id": "BIBREF76" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipes", "sec_num": "4.2" }, { "text": "Belief intervals are used in the applicability conditions to specify the range of strengths within which a belief must fall; the requirement imposed by believe(_agent2, _proposition, [C:C]), for example, is that _agent2 be certain that _proposition is true. On the other hand, believe(_agent2, _proposition, [CN:S]) means that _agent2 is not convinced that _proposition is true (i.e., _agent2 could have any belief ranging from being certain that _proposition is false to having a strong belief that it is true). Thus the applicability condition believe(_agent1, believe(_agent2, _proposition, [CN:S]), [0:C]) of the Inform act in Figure 3 means that _agent1 must either be ignorant about _agent2's belief in _proposition or have some belief (possibly certain) that _agent2 is not already certain that _proposition is true, i.e., _agent1 does not believe that _agent2 is already convinced that _proposition is true. 7 In this work, we are interested in understanding, not generating, responses. However, a generation system would pursue an action preceded by # if its applicability conditions are satisfied and the system does not believe that the action's goal will be satisfied if the action is omitted. The belief reasoning techniques described in Cawsey et al. (1992) can be used in modeling this. 8 In this research, we have assumed that the participants have equivalent knowledge of language and maintain equivalent discourse models, and we have not addressed the problem of recognizing miscommunication. 9 How applicability conditions on discourse acts are checked during planning is an interesting question that requires further research. Consider an agent who wants to determine whether a proposition is true; the agent might accomplish this by asking another agent about the proposition. An applicability condition on asking another agent is that the speaker wants to know the other agent's belief about the proposition. But when did this want come into existence? It certainly must be satisfied at the time the question is asked, but instead of being part of the initial state, it appears to result from the speaker's decision about how to obtain the desired information.", "cite_spans": [ { "start": 888, "end": 908, "text": "Cawsey et al. (1992)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 519, "end": 527, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Discourse Recipes", "sec_num": "4.2" }, { "text": "Figure 3: Recipes for Inform and Tell discourse acts. Body: Surface-Say-Prop(_agent1, _agent2, _proposition); #Address-Understanding(_agent1, _agent2, _proposition). Effects: told-about(_agent1, _agent2, _proposition). Goal: believe(_agent2, believe(_agent1, _proposition, [C:C]), [C:C]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure S", "sec_num": null }, { "text": "In determining the belief strengths specified in the applicability conditions, we examined the beliefs that an agent must hold and tried to identify the minimum and maximum strength of belief that would make the discourse act reasonable to pursue. We have divided the effects of an action into two subclasses: 1) the results of correctly performing the action, which are labeled effects, and 2) the desired effects of the action (over which the agent may lack control), which are labeled goals. For example, in the case of domain actions, the effect of applying for graduate study is that one has applied, while the goal is that one be accepted for graduate study. This distinction between effects and goals is particularly important in the case of discourse actions, where the agent cannot be assured that an action will have its intended result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure S", "sec_num": null }, { "text": "Variables in recipes are represented as lowercase strings preceded by an underscore, with the string reflecting the variable's type; for example, _course1 and _course2 refer to variables of type course.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure S", "sec_num": null }, { "text": "In Allen's seminal model of plan recognition (Allen 1979) , the bodies of operators could contain either goals to be achieved or action names with parameters. In our system, preconditions are represented as goals to be achieved, while the bodies of recipes specify actions. Since the recipe for each action in our recipe library contains a single goal in its goal field, this suffices--the goal makes clear the purpose of the action in a plan. However, in a richer domain where an action could be used to achieve several different goals (for example, one might read a book to gain knowledge or to entertain oneself), it would be necessary to specify the intended goal in the recipe body and chain from it to the desired action in order to capture the motivation for performing the action. During plan recognition, our system matches goals against preconditions of other actions, and it matches actions against the subactions in the bodies of the recipes for other actions. 
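To make the recipe representation concrete, the Inform recipe discussed above can be rendered schematically as follows. This is a minimal illustrative Python sketch rather than the authors' notation: the subaction name Address-Believability, the [C:C] encoding of the condition that _agent1 believe _proposition, and the goal shown (that _agent2 come to believe _proposition) are our reading of the surrounding discussion, and the preconditions, constraints, and effects fields are left empty because their contents are not spelled out here.

# Schematic rendering of a decomposition recipe, modeled on the Inform recipe
# of Figure 3; belief formulas are kept as plain strings.
INFORM_RECIPE = {
    'header': 'Inform(_agent1, _agent2, _proposition)',
    'type': 'decomposition',
    'preconditions': [],
    'applicability_conditions': [
        'believe(_agent1, _proposition, [C:C])',
        'believe(_agent1, believe(_agent2, _proposition, [CN:S]), [0:C])',
    ],
    'constraints': [],
    'body': [
        'Tell(_agent1, _agent2, _proposition)',
        '#Address-Believability(_agent1, _agent2, _proposition)',  # '#': may occur any number of times
    ],
    'effects': [],
    'goal': 'believe(_agent2, _proposition, [C:C])',
}

print(INFORM_RECIPE['body'])  # the subactions that chaining would match against
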
The Effects field is not used for chaining during plan recognition; however, as discussed in Section 4.6.2, the effects and goals in discourse recipes are used for updating a model of the user's beliefs.", "cite_spans": [ { "start": 45, "end": 57, "text": "(Allen 1979)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Figure S", "sec_num": null }, { "text": "The bodies of our discourse recipes are based on work by other researchers (Allen and Perrault 1980; Searle 1970; Cohen and Perrault 1979) , dialogues in which we have participated, the naturally occurring dialogues that we examined, and our hypotheses about how our system might be expanded in the believe that _agent2 knows the information, are based on criteria identified in Searle 1970, Cohen and Perrault (1979) , and Allen and Perrault (1980) . The body of the Ask-Ref action consists of making the request itself and making the request acceptable; this is because in our own interactions we have encountered situations in which an agent will make a request and then justify it to the listener. The applicability conditions of Ask-Ref refer to _agent2's beliefs about the proposition; for example, one applicability condition is that _agent1 wants to know the term that _agent2 believes will satisfy a proposition. We contend that this captures what a speaker wants in asking a listener about a proposition (i.e., the speaker wants to know the listener's beliefs about the proposition), and this formulation also allows the Ask-Ref to be used as a subaction of a Test-Knowledge act in a tutoring system. 11 Note that the fact that a speaker who is seeking information really wants to know correct information about a proposition was captured in the applicability conditions of the Obtain-Info-Ref discourse act.", "cite_spans": [ { "start": 75, "end": 100, "text": "(Allen and Perrault 1980;", "ref_id": "BIBREF1" }, { "start": 101, "end": 113, "text": "Searle 1970;", "ref_id": "BIBREF76" }, { "start": 114, "end": 138, "text": "Cohen and Perrault 1979)", "ref_id": "BIBREF28" }, { "start": 392, "end": 417, "text": "Cohen and Perrault (1979)", "ref_id": "BIBREF28" }, { "start": 424, "end": 449, "text": "Allen and Perrault (1980)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Formulation of Discourse Recipes.", "sec_num": "4.2.1" }, { "text": "Some of our discourse recipes, such as Obtain-Info-Ref, include subactions by both the initiating agent and the other participant. This captures the intention of the initiating agent to perform his required subactions as well as the intention that the other agent follow through on her role in the plan for this action. Thus, once the second agent recognizes that the initiating agent wants to obtain information, the second agent will also recognize that the initiating agent intends for her to play the role of providing that information. While an agent can construct a plan that includes acts by another agent, the planning agent cannot guarantee the other agent's behavior and thus such discourse plans can fail. (See Chu-Carroll and Carberry [1994] for research on dialogues in which agents do not always follow through on their intended role yet still fulfill their collaborative responsibilities.) A different approach would be to maintain such knowledge about adjacency pairs (Schegloff and Sachs 1973) and expected continuations in a transition network separate from the discourse recipes, as was done by Reithinger and Maier (1995) . 
The advantage of this approach is that fewer discourse recipes are needed and continuations are generalized. Such a representation would enable us to remove the actions that address acceptance from our discourse recipes but still capture the expectations for them in the transition network. However, the disadvantage of this approach is that the higherlevel discourse act would no longer constrain the possible continuations. Grosz and Sidner (1990) , the assumption that one participant will slavishly respond to the wishes of the other participant does not reflect collaborative interaction. In Cohen and Perrault's formulation of speech act operators (Cohen and Perrault 1979) , the effect of an Inform was that the hearer believed that the speaker believed the proposition. He postulated a Convince act that would cause the hearer to believe the proposition, but this act was left undeveloped and its definition did not allow for the participants to negotiate their beliefs. The effect of an Inform act in Allen and Perrault's system (Allen and Perrault 1980) is that the hearer believes the communicated proposition--this definition would seem to say that the hearer always accepts the information provided by the speaker. 12 Although Allen and Perrault's model was only concerned with recognizing the intention to perform an Inform act, using his formulation to model negotiation dialogues (where Inform actions may not automatically accomplish their purpose) is problematic. In Perrault's (1990) persistence model of belief, the hearer adopts a communicated proposition unless he has evidence to the contrary, in which case his original belief persists. Thus, models such as Allen and Perrault's cannot account for a hearer who does not accept a communicated proposition, and Perrault's model cannot account for a hearer who changes his beliefs about a proposition after negotiation.", "cite_spans": [ { "start": 738, "end": 753, "text": "Carberry [1994]", "ref_id": "BIBREF13" }, { "start": 984, "end": 1010, "text": "(Schegloff and Sachs 1973)", "ref_id": "BIBREF74" }, { "start": 1114, "end": 1141, "text": "Reithinger and Maier (1995)", "ref_id": "BIBREF72" }, { "start": 1570, "end": 1593, "text": "Grosz and Sidner (1990)", "ref_id": "BIBREF41" }, { "start": 1798, "end": 1823, "text": "(Cohen and Perrault 1979)", "ref_id": "BIBREF28" }, { "start": 2182, "end": 2207, "text": "(Allen and Perrault 1980)", "ref_id": "BIBREF1" }, { "start": 2629, "end": 2646, "text": "Perrault's (1990)", "ref_id": "BIBREF63" } ], "ref_spans": [], "eq_spans": [], "section": "Formulation of Discourse Recipes.", "sec_num": "4.2.1" }, { "text": "We want to overcome these limitations and be able to handle negotiation subdialogues in which participants attempt to come to some agreement about their disparate beliefs. Thus, the body of the recipe for our Inform act ( Figure 3 ) contains two subactions: one in which the speaker tells the hearer a proposition and a second in which the participants address the believability of the communicated proposition and try to come to agreement. In addition, as discussed in the preceding section, the effects of our discourse recipes are often different from the goals. Although this does not solve the problem of recognizing perlocutionary effects, it does allow us to capture the notion that one can, for example, perform an Inform act without the hearer adopting the communicated proposition. 
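To illustrate, the Inform recipe of Figure 3 can be transcribed approximately as follows (a sketch in our own notation, with belief intervals as in Section 4.2; the exact strengths on the effect are our reading of the figure). The body separates telling from negotiating acceptance, and the effect is deliberately weaker than the goal.

```python
# Hedged transcription of the Inform recipe (cf. Figure 3); illustrative only.
inform_recipe = {
    "action": "Inform(_agent1, _agent2, _proposition)",
    "applicability": [
        "believe(_agent1, _proposition, [C:C])",
        # _agent1 does not believe that _agent2 is already convinced:
        "believe(_agent1, believe(_agent2, _proposition, [CN:S]), [0:C])",
    ],
    "body": [
        "Tell(_agent1, _agent2, _proposition)",
        "Address-Believability(_agent1, _agent2, _proposition)",
    ],
    # Effect (illocutionary): the hearer learns what the speaker wants him to believe.
    "effects": [
        "believe(_agent2, want(_agent1, believe(_agent2, _proposition, [C:C])), [C:C])",
    ],
    # Goal (perlocutionary): the hearer actually adopts the proposition.
    "goal": "believe(_agent2, _proposition, [C:C])",
}
```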
Thus, the goal of a discourse recipe is a desired perlocutionary effect (an effect that the speaker wishes the action to have, e.g., believe (hearer, P, [C:C] ) in the case of an Inform action), and the effects of a discourse recipe are the illocutionary effects (that is, the effect that the speech act has when it is performed and recognized by the hearer, e.g., believe (hearer, want (speaker, believe (hearer, P, Figure 4 presents our discourse recipe for expressing doubt. Note that its applicability conditions capture the requisite beliefs listed in Section 3. The second applicability condition in Figure 4 excludes certain belief in _proposition2; this is because the body of the recipe is an action of conveying uncertain belief in _proposition2 and represents instances where an expression of doubt is realized as a surface negative question or a tag question. Note also that one of the effects of the Express-Doubt discourse act is that the listener believes that the speaker wants to resolve the conflict between the two propositions and that the goal of the A recipe for an Express-Doubt discourse act.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 230, "text": "Figure 3", "ref_id": null }, { "start": 1209, "end": 1217, "text": "Figure 4", "ref_id": null }, { "start": 1398, "end": 1406, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Formulation of Discourse Recipes.", "sec_num": "4.2.1" }, { "text": "Express-Doubt discourse act is that the listener also wants to resolve the conflict. The mutual desire for conflict resolution resulting from a successful Express-Doubt discourse act leads to a negotiation subdialogue (initiated by the Express-Doubt action).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Express-Doubt Discourse Recipe.", "sec_num": "4.2.3" }, { "text": "We maintain a structure called a dialogue model that captures the system's beliefs about the existing dialogue context. The discourse level of the dialogue model contains a tree structure called the discourse tree. Each node of the discourse tree represents a discourse or communicative act that has been initiated by one of the dialogue participants, and the children of a node represent discourse acts that are being pursued in order to perform the parent action. Figure 5 . The lowest uncompleted action in the discourse tree is marked as the focus of attention; 13 it represents the first expectation for subsequent utterances. In Figure 5 , the focus of attention is the Tell action. The active path consists of the sequence of actions along the path from the action that is the focus of attention to the root node. The actions on the active path provide successive expectations about the role of the next utterance in the dialogue; actions closer to the current focus of attention are regarded as more salient than those further back on the active path. 14 For example, in the discourse tree of Figure 5 , the first expectation is that if EA does not understand CA's previous statement, then EA will now choose to address her understanding of it and thereby contribute to the Tell act that is the existing focus of attention. The next expectation is that if EA has any doubt about the proposition conveyed by CA, then EA will choose to address its believability, thereby contributing to the Inform discourse act that is the next action on the active path. 
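The discourse tree, focus of attention, and active path can be sketched as follows; the tree shown is roughly the configuration of Figure 5, built after an answer to a question, and the code is our illustration rather than the implementation.

```python
# Sketch of the discourse level of the dialogue model: a tree of discourse acts,
# a focus of attention (the lowest act not yet known to be completed), and the
# active path from that focus up to the root.
class Node:
    def __init__(self, act, parent=None):
        self.act, self.parent, self.children, self.completed = act, parent, [], False
        if parent:
            parent.children.append(self)

def active_path(focus):
    """Actions from the focus of attention up to the root, most salient first."""
    path, node = [], focus
    while node:
        path.append(node.act)
        node = node.parent
    return path

# Approximately the tree of Figure 5, after CA answers EA's question:
root   = Node("Obtain-Info-Ref(EA, CA, _agent, Teaches(_agent, CS360))")
answer = Node("Answer-Ref(CA, EA, Smith, Teaches(_agent, CS360))", root)
inform = Node("Inform(CA, EA, Teaches(Smith, CS360))", answer)
tell   = Node("Tell(CA, EA, Teaches(Smith, CS360))", inform)   # current focus of attention

print(active_path(tell))
# Expectations: address understanding (Tell), then believability (Inform), then
# the adequacy of the answer (Answer-Ref), before anything unrelated.
```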
(As shown in Figure 3 , Address-Understanding is a subaction in the recipe for the Tell discourse act, and Address-Believability is a subaction in the recipe for the Inform discourse act.)", "cite_spans": [], "ref_spans": [ { "start": 466, "end": 474, "text": "Figure 5", "ref_id": null }, { "start": 635, "end": 643, "text": "Figure 5", "ref_id": null }, { "start": 1101, "end": 1109, "text": "Figure 5", "ref_id": null }, { "start": 1575, "end": 1583, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Dialogue Model", "sec_num": "4.3" }, { "text": "13 By uncompleted, we mean not as yet known to be completed. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Dialogue Model", "sec_num": "4.3" }, { "text": "Our process model starts with the semantic representation of a new utterance and uses plan inference rules (Allen and Perrault 1980; Carberry 1988 ) along with constraint satisfaction (Litman and Allen 1987) to hypothesize chains of actions A1,A2 ..... An that the speaker might be intending to perform with the utterance. In such a chain, action Ai contributes to the performance of its successor action Ai+l. For example, the semantic representation of an utterance such as \"Dr. Smith is teaching Architecture\" is Surface-Say-Prop(_agentl, _agent2, Teaches(Dr.Smith, Architecture))", "cite_spans": [ { "start": 107, "end": 132, "text": "(Allen and Perrault 1980;", "ref_id": "BIBREF1" }, { "start": 133, "end": 146, "text": "Carberry 1988", "ref_id": "BIBREF8" }, { "start": 184, "end": 207, "text": "(Litman and Allen 1987)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Hypothesizing Discourse Acts by Chaining", "sec_num": "4.4" }, { "text": "A Surface-Say-Prop is a subaction in the recipe for a Tell discourse act, which in turn is a subaction in the recipe for an Inform discourse act. Thus chaining from the surface utterance produces a sequence of hypothesized discourse acts, each of which plays a role in the performance of its successor on the chain. We have expanded on Litman and Allen's (1987) notion of constraint satisfaction and Allen and Perrault's (1980) use of beliefs. As described earlier, many of the applicability conditions in our discourse recipes are beliefs that the agent of the action must hold in order for the action to be felicitous. Our recognition algorithm requires that the system be able to plausibly ascribe these beliefs in hypothesizing a new action; if the belief ascription is implausible or if the constraints of the discourse recipe are not satisfied, the inference is rejected.", "cite_spans": [ { "start": 336, "end": 361, "text": "Litman and Allen's (1987)", "ref_id": "BIBREF54" }, { "start": 400, "end": 427, "text": "Allen and Perrault's (1980)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Hypothesizing Discourse Acts by Chaining", "sec_num": "4.4" }, { "text": "As we claimed in Section 3, actions such as the expressions of doubt in utterances (14a) and (14b) (repeated in Figure 6 ) require evidence for their recognition. Let us further examine why this is the case. Figure 7 illustrates a situation in which we have several discourse acts (with different degrees of salience) that the agent might be expected to pursue. 
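The upward chaining step of Section 4.4 can be sketched as below. The real system also unifies parameters, checks constraints, and tests whether the required beliefs can plausibly be ascribed at every step; all of that is omitted here, and the table of recipe bodies is a simplification of the recipes named in the text.

```python
# Sketch of upward plan chaining: starting from the surface act, repeatedly look
# for recipes whose bodies contain the current action.
RECIPE_BODIES = {  # parent act -> subactions appearing in its recipe body (simplified)
    "Tell":       ["Surface-Say-Prop", "Address-Understanding"],
    "Inform":     ["Tell", "Address-Believability"],
    "Answer-Ref": ["Inform", "Address-Answer-Acceptability"],
}

def chain_up(act):
    """Return one chain of hypothesized acts reachable from `act`."""
    chain = [act]
    while True:
        parents = [p for p, body in RECIPE_BODIES.items() if chain[-1] in body]
        if not parents:
            return chain
        chain.append(parents[0])   # the real algorithm keeps all alternatives

print(chain_up("Surface-Say-Prop"))
# ['Surface-Say-Prop', 'Tell', 'Inform', 'Answer-Ref']
```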
The solid boxes show some of the actions that are part of the ", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 208, "end": 216, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Evidence Actions", "sec_num": "4.5" }, { "text": "Relating an inference path to the existing dialogue context. existing dialogue context 15 and the dashed boxes show a chain of actions inferred from the new utterance. As depicted in the figure, the process model can infer a chain of actions starting with some surface speech act surface-action(EA, CA, PROPD), up to some other action, action (EA, CA, PROPD), up to some other action, e-action (EA, CA, _propl, PROPD). The action e-action contains two propositions. One of these, PROPD, is instantiated by chaining from the earlier action, act ion (EA, CA, PROPD). However, the other proposition, _propl, cannot be instantiated by plan chaining; it must be instantiated by unification with a proposition from the existing dialogue context. For example, if e-action is identified as contributing to action-3 in the existing dialogue context, then _propl might be instantiated as PROPC. On the other hand, if e-action is identified as contributing to action-i, then _propl might be instantiated as PROPA. Chaining might suggest three possibilities (e-action could contribute to action-3, or to action-I, or to neither of them), and the relative salience of the propositions in the existing dialogue context is not sufficient to identify this relationship. As a concrete example, consider again the dialogue segment in Figure 6 consisting of utterances (10)-(13) followed by one of utterances (14a)-(14c). After utterance (13), the actions on the active path of the dialogue model include (among others): action-l: Inform(CA, EA, Teaches(Smith,CS360)) action-2: Address-Unacceptance(EA,CA, Teaches(Smith,CS360), On-Sabbatical(Smith)) action-3: Inform(CA, EA, ~On-Sabbatical(Smith)) If EA utters (14a), (14b), or (14c), three inference paths can be constructed: one that links up to action-I, one that links up to action-3, and one that does not link up to any action on the active path. If CA utters (14a), then CA's action should be identified as contributing to action-3 above and thus the proposition being doubted should be recognized as ~On-Sabbatical(Smith), even though this proposition did not appear explicitly in EA's utterance. However, if EA utters (14b), then EA's action should be identified as contributing only to action-1 and the proposition being doubted therefore should be recognized as Teaches(Smith, CS360); in this case, we are rejecting the inference path that links up to action-3 even though action-3 is more salient at this point in the dialogue. On the other hand, if EA utters (14c), then EA's action should be recognized as not contributing to any of the actions on the active path and interpreted as merely seeking information about Dr. Smith. Since chaining and salience alone are insufficient to identify the correct interpretation, we need some additional mechanism.", "cite_spans": [], "ref_spans": [ { "start": 1316, "end": 1324, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "We define an evidence-action (abbreviated e-action) to be an action that introduces a new parameter that cannot be directly instantiated by chaining from the utterance. We contend that such actions require evidence for their recognition. 
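A small sketch makes the definition concrete. Express-Doubt is an e-action because its parameter _proposition1 never appears in the subactions of its body, so it can only be bound by unifying with a salient proposition from the active path. The code is illustrative, and the subaction name Convey-Uncertain-Belief is assumed here, not taken from the recipe figures.

```python
# Sketch: detecting an e-action and collecting candidate bindings for its
# context-bound parameter from propositions salient on the active path.
import re

def parameters(expr):
    """Variables (underscore-prefixed names) occurring in a recipe expression."""
    return set(re.findall(r"_\w+", expr))

def is_e_action(header, body):
    """An e-action introduces a parameter that no subaction in its body mentions."""
    in_body = set().union(*(parameters(b) for b in body)) if body else set()
    return bool(parameters(header) - in_body)

express_doubt_header = "Express-Doubt(_agent1, _agent2, _proposition1, _proposition2)"
express_doubt_body   = ["Convey-Uncertain-Belief(_agent1, _agent2, _proposition2)"]  # assumed name

assert is_e_action(express_doubt_header, express_doubt_body)   # _proposition1 needs the context

# Candidate bindings for _proposition1 come from the dialogue context, most
# salient first (cf. the propositions of action-3 and action-1 above); each
# candidate must still satisfy the last applicability condition and the
# evidence requirements discussed next.
salient = ["~On-Sabbatical(Smith)", "Teaches(Smith, CS360)"]
candidate_bindings = [{"_proposition1": p} for p in salient]
```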
In our model, the relationship between _prop1 (the proposition whose instantiation must be inferred from the existing dialogue context) and PROPD (a proposition instantiated by chaining from the current utterance) is modeled in the applicability conditions of a recipe for e-action. For example, Express-Doubt, whose recipe was given in Figure 4 , is an example of an e-action. The parameter _proposition1 cannot be instantiated from plan chaining from the surface utterance because _proposition1 does not appear in the body of the Express-Doubt recipe; therefore, Express-Doubt is an e-action because it contains a parameter (_proposition1) that must be instantiated by unification with a proposition extracted from the existing dialogue context. The relationship that _proposition1 has to the proposition contained in the utterance that is expressing doubt is modeled in the last applicability condition of the Express-Doubt recipe (see Figure 4 ). This applicability condition states that the agent of the Express-Doubt action believes that _propesition2 implies that _proposition1 does not hold. As we've shown earlier, plan chaining and plausibility are insufficient for recognizing an Express-Doubt discourse act; evidence is required. 4.5.1 Types of Evidence. Our recognition algorithm captures the kinds of evidence identified in Section 3: 1) evidence provided by world knowledge, contextual knowledge, and the surface form of the utterance indicating that the applicability conditions for an e-action are satisfied, and 2) linguistic evidence from clue words suggesting a generic discourse action. Grosz and Sidner (1986) claim that when evidence is available from one source, less evidence should be required from others. Thus, if there is evidence indicating that the applicability conditions for a discourse act hold, then less linguistic evidence suggesting the discourse act should be required. This is the case for interpreting (9) (repeated below) as an expression of doubt. 7EA: What is Dr. Smith teaching?", "cite_spans": [ { "start": 1846, "end": 1869, "text": "Grosz and Sidner (1986)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 575, "end": 583, "text": "Figure 4", "ref_id": null }, { "start": 1177, "end": 1185, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "CA: Dr. Smith is teaching Architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "(9) EA: Isn't Dr\u00b0 Brown teaching Architecture?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "Even though there is no linguistic clue word suggesting an Express-Doubt discourse act, there is enough evidence from the surface form of the utterance and from world and contextual knowledge to correctly interpret (9) as an expression of doubt at the proposition conveyed by (8). Let us examine this evidence in more detail. The applicability conditions of the Express-Doubt discourse act (see Figure 4) Belief (c) models how the proposition in utterance (9) (that Dr. Brown is teaching Architecture) relates to the proposition in the existing dialogue context (that Dr. Smith is teaching Architecture). Therefore, evidence that EA holds belief (c) (that Dr. Brown teaching Architecture is an indication that Dr. 
Smith is not teaching Architecture) is particularly significant since it shows how the utterance relates to the preceding discourse.", "cite_spans": [], "ref_spans": [ { "start": 395, "end": 404, "text": "Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "The system has evidence \"for all three applicability conditions. The system's evidence that EA holds belief (a) is provided by beliefs derived from the goal of the Tell discourse act. In utterance (8), CA initiates a Tell discourse act as part of an Inform discourse act; thus immediately after (8), both the Inform and the Tell are part of the existing dialogue context. If (9) is indeed an expression of doubt, then it contributes to the Inform act by addressing the believability of the communicated proposition. In this case, EA will have implicitly conveyed that EA understood CA's previous utterance, i.e., EA will have passed up the opportunity to contribute to the Tell discourse act that is a child of the Inform act, and will have thereby implicitly conveyed that the Tell act was successful. This notion of implicit acceptance is discussed further in Section 4.6. Since the goal of CA's Tell act is that EA believe that CA believes that Dr. Smith teaches Architecture, the hypothesis that the Tell act has completed successfully (and therefore that its goal has been achieved) provides evidence that (a) is a belief held by EA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "The surface form of (9) provides evidence that EA believes (b), since it conveys an uncertain but still strong belief that Dr. Brown is teaching Architecture. Finally, if the system's model of a stereotypical user indicates that users typically believe that each course has only one instructor, then this world knowledge provides evidence that EA believes (c). Thus, the system has evidence for all three of the applicability conditions. In addition, contextual knowledge indicates that the single constraint on the Express-Doubt discourse act is satisfied--namely, that proposition Pdoubt be salient at this point in the dialogue. Thus, the system would recognize (9) as an expression of doubt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "However, \"Isn't Dr. Smith an excellent teacher?\" would not be recognized as an expression of doubt because the system would have no evidence that EA believes that being an excellent teacher suggests that Dr. Smith is not teaching Architecture. Thus, world knowledge helps the system to correctly differentiate between utterances that are expressions of doubt and those that are not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "On the other hand, if there is sufficient linguistic knowledge suggesting a particular discourse action, then the applicability conditions should be attributed to the speaker as long as they are plausible. So, if the clue word but is used, then a nonacceptance discourse action such as expressing doubt should be easier to recognize (i.e., should require less evidence that the applicability conditions hold) than if the clue word is not present. Thus, if EA said \"But isn't Dr. 
Smith an excellent teacher?\", then even though there is no world knowledge indicating that all of the applicability conditions hold, the linguistic clue word is sufficient evidence to interpret this utterance as an expression of doubt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "The concept of accommodation in conversation (Lewis 1979; Thomason 1990 ) (removing obstacles to the speaker's goals) suggests that a listener might recognize a surface negative question as an expression of doubt by accommodating a belief about some incompatibility between the proposition conveyed by the surface negative question and a proposition that might be doubted. But in the extreme case, this means that any surface negative question could be recognized as an expression of doubt. We contend that there should be evidence for such recognition. This is similar to Pollack's model of plan recognition (Pollack 1990 ) that can account for user misconceptions; instead of inferring a relationship between every query and the speaker's goal, Pollack requires that the system apply only well-motivated rules that hypothesize principled variations of the system's own beliefs and that the system treat as incoherent any queries that cannot be explained via these rules. (Pollack's example of incoherence is the query \"I want to talk to Kathy, so I need to find out how to stand on my head.\") In our model we look for evidence of incompatibility, and in our implemented system this evidence takes the form of stereotypical befiefs about the domain. While our implementation does not include other means of deducing an incompatibility, they are not precluded by our theory but are left for future work. Moreover, it should be noted that if the speaker intends for the hearer to recognize the expression of doubt from the incompatibility between the doubted proposition and the proposition that the speaker is contending is true, then the speaker must believe that the belief about the incompatibility is a mutual belief. Our stereotypical befiefs fall into this category. In other cases, the clue word but causes the listener to accommodate a belief about incompatibility that he might not otherwise have done and thereby recognize the expression of doubt.", "cite_spans": [ { "start": 45, "end": 57, "text": "(Lewis 1979;", "ref_id": "BIBREF53" }, { "start": 58, "end": 71, "text": "Thomason 1990", "ref_id": "BIBREF79" }, { "start": 609, "end": 622, "text": "(Pollack 1990", "ref_id": "BIBREF66" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "So, in the case of complex discourse acts such as expressing doubt, the system should require evidence for recognizing the discourse act and should prefer to recognize discourse acts for which there is multiple evidence: both linguistic clue words suggesting the generic discourse act and evidence suggesting that the applicability conditions for a particular discourse act are satisfied. However, the system should be willing to accept just one kind of evidence when that is all that is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "In a collaborative task-oriented dialogue, the participants are working together to construct a plan for accomplishing a task. If the collaboration is to be successful, the participants must agree on the plan being constructed and the actions being taken to construct it. 
Thus, since a communicated proposition is presumed to be relevant to this plan construction process, the dialogue participants are obligated to communicate as soon as possible any discrepancies in belief about such propositions (Walker and Whittaker 1990; Chu-Carroll and Carberry 1995b) and to enter into a negotiation subdialogue in which they attempt to \"square away\" (Joshi 1982 ) their disparate beliefs.", "cite_spans": [ { "start": 500, "end": 527, "text": "(Walker and Whittaker 1990;", "ref_id": "BIBREF86" }, { "start": 528, "end": 559, "text": "Chu-Carroll and Carberry 1995b)", "ref_id": "BIBREF15" }, { "start": 643, "end": 654, "text": "(Joshi 1982", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Implicit Acceptance", "sec_num": "4.6" }, { "text": "In our earlier work (Carberry 1985 (Carberry , 1989 , we claimed that a cooperative participant must accept a response or pursue discourse goals directed toward being able to accept the response. As we noted there, this acceptance need not be explicitly communicated to the other participant; for example, failure to initiate a negotiation subdialogue con-veys implicit acceptance of the proposition communicated by an Inform action. This notion of implicit acceptance is similar to an expanded form of Perrault's default reasoning about the effects of an inform act (Perrault 1990) . Our model captures this by recognizing implicit acceptance when an agent foregoes the opportunity to address acceptance of an action and moves on to pursue other discourse actions. 4.6.1 Acceptance Actions. If a statement is intended to answer a question, the listener in a collaborative dialogue must note when he believes that the statement does not suffice as a complete answer. However, doing so implies that the listener believes the statement, since it is inefficient to address a proposition's completeness as an answer if one does not accept the proposition. Similarly, if a listener does not believe a communicated proposition, he must convey this disagreement as soon as possible (Walker and Whittaker 1990) . But by questioning the validity of a proposition, the listener conveys that he believes that he understood the utterance. As Clark and Schaefer (1989) note, by passing up the opportunity to ask for a repair, a listener conveys that he has understood an utterance. Thus we hypothesize that listeners convey their acceptance (or lack of acceptance) in a multistage acceptance phase: 1) understanding, 2) believability, 3) completeness. 16", "cite_spans": [ { "start": 20, "end": 34, "text": "(Carberry 1985", "ref_id": "BIBREF6" }, { "start": 35, "end": 51, "text": "(Carberry , 1989", "ref_id": "BIBREF9" }, { "start": 567, "end": 582, "text": "(Perrault 1990)", "ref_id": "BIBREF63" }, { "start": 1275, "end": 1302, "text": "(Walker and Whittaker 1990)", "ref_id": "BIBREF86" }, { "start": 1430, "end": 1455, "text": "Clark and Schaefer (1989)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Implicit Acceptance", "sec_num": "4.6" }, { "text": "Acceptance can be communicated explicitly or implicitly. We include actions that address acceptance in the body of six of our discourse recipes. These recipes were selected because they allow us to capture acceptance of a question (the recipes for In this research, we have been primarily concerned with one aspect of acceptance: believing the proposition communicated by an Inform action. 
For example, the actions in the body of the Inform recipe (see Figure 3) are: 1) the speaker (_agent1) tells the listener (_agent2) the proposition that the speaker wants the listener to believe; and 2) the speaker and listener address believability by discussing whatever is necessary in order for the listener and speaker to come to an agreement about this proposition. 17 This second action, and the subactions executed as part of performing it, account for subdialogues that address the believability of the proposition conveyed in the Inform action. Other actions related to acceptance are captured in other discourse recipes. For example, the Tell action has a body containing a Surface-Say-Prop action and an Address-Understanding action; the latter enables both participants to ensure that the utterance has been understood. Similarly, the Answer-Ref action contains an Inform action and an Address-Answer-Acceptability action that ensures that the Inform action is sufficient to answer the question. Further research is needed to model the full range of actions that address acceptance and to recognize utterances resulting from them.", "cite_spans": [], "ref_spans": [ { "start": 453, "end": 462, "text": "Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "Implicit Acceptance", "sec_num": "4.6" }, { "text": "The discourse tree reflects the order of acceptance actions. As discussed above, lack of understanding should be addressed before believability. This is reflected in the discourse tree that results from a statement, such as the one in Figure 5 , where the Tell action (whose recipe contains an Address-Understanding action) is a descendant of the Inform action (whose recipe contains an Address-Believability action); in addition, since the statement in Figure 5 is intended to answer a question, the Inform act is a descen- ", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 243, "text": "Figure 5", "ref_id": null }, { "start": 454, "end": 462, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Implicit Acceptance", "sec_num": "4.6" }, { "text": "Dialogues conveying different implicit acceptance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "dant of the Answer-Ref action (whose recipe contains an Address-Answer-Acceptability action). Since the Tell is the current focus of attention, it must be completed before other actions are pursued. Thus, if the listener believes that the telling has not been successful (i.e., the listener does not fully understand the utterance), then the listener will pursue discourse acts that contribute to its Address-Understanding subaction. Once the Tell has been successfully completed, then attention reverts back to the Inform act. The Inform must be successfully completed before other higher-level acts are pursued further. Thus if the Inform has not been successful (i.e., the listener does not accept the communicated proposition), then the listener will pursue discourse acts that contribute to its Address-Believability subaction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "We have concentrated primarily on recognizing the acceptance and nonacceptance of propositions communicated by Inform actions: i.e., modeling negotiation subdialogues in which participants do not automatically believe everything that they are told. 
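The ordering of acceptance subactions described above can be summarized in a short sketch (ours, not the implementation): each level of the discourse tree carries the acceptance question appropriate to it, and a lower level must be resolved before attention reverts to its parent.

```python
# Sketch of the multistage acceptance phase and where each stage is addressed
# (cf. the Tell, Inform, and Answer-Ref recipes).  Illustrative only.
ACCEPTANCE_STAGE = {
    "Tell":       "understanding",    # via Address-Understanding in its body
    "Inform":     "believability",    # via Address-Believability in its body
    "Answer-Ref": "completeness",     # via Address-Answer-Acceptability in its body
}

def expected_acceptance_issues(active_path_acts):
    """Given the active path (focus of attention first), list the acceptance
    issues the listener is expected to address, lowest level first."""
    return [(a, ACCEPTANCE_STAGE[a]) for a in active_path_acts if a in ACCEPTANCE_STAGE]

for act, stage in expected_acceptance_issues(["Tell", "Inform", "Answer-Ref", "Obtain-Info-Ref"]):
    print(f"address {stage} as part of the {act} act")
```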
Others, Allen and Schubert (1991) , Clark and Schaefer (1989) , Traum and Hinkelman (1992) , and Traum (1994) have investigated how understanding and lack of understanding are communicated and can be recognized. 4.6.2 Modeling Implicit Acceptance. Our system models implicit acceptance in collaborative dialogue as passing up the opportunity to express lack of acceptance. For example, consider the two dialogue variations shown in Figure 8 . Figure 5 depicts the discourse tree constructed from utterances (15) and (16) in Figure 8 , with the current focus of attention, the Tell action, marked with an asterisk. In attempting to assimilate (17a) into this discourse tree, the system's first expectation is that (17a) will address the understanding of (16) if EA does not understand it (i.e., as part of the Tell action that is the current focus of attention in Figure 5 ). The next expectation is that (17a) will relate to the Inform action in Figure 5 , by addressing the believability of the proposition conveyed by (16). The system finds that the best interpretation of (17a) is that of expressing doubt at the proposition that Dr. Smith is teaching CS360, thus confirming the secondary expectation that (17a) is addressing the believability of the proposition conveyed by (16). This recognition of (17a) as part of the Inform action in Figure 5 indicates that EA has implicitly indicated understanding, by passing up the opportunity to address understanding in the Tell action that appears at a lower level in the discourse tree and by moving instead to a relevant higher-level action; (17a) is thus (implicitly) conveying that the Tell action has been successful.", "cite_spans": [ { "start": 257, "end": 282, "text": "Allen and Schubert (1991)", "ref_id": "BIBREF2" }, { "start": 285, "end": 310, "text": "Clark and Schaefer (1989)", "ref_id": "BIBREF17" }, { "start": 313, "end": 339, "text": "Traum and Hinkelman (1992)", "ref_id": "BIBREF81" }, { "start": 346, "end": 358, "text": "Traum (1994)", "ref_id": "BIBREF80" } ], "ref_spans": [ { "start": 681, "end": 689, "text": "Figure 8", "ref_id": null }, { "start": 692, "end": 700, "text": "Figure 5", "ref_id": null }, { "start": 773, "end": 781, "text": "Figure 8", "ref_id": null }, { "start": 1112, "end": 1120, "text": "Figure 5", "ref_id": null }, { "start": 1195, "end": 1203, "text": "Figure 5", "ref_id": null }, { "start": 1591, "end": 1599, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Thus, when an utterance contributes to an ancestor of an action Ai and all of Ai's applicability conditions, except those negated by the goal, are still satisfied, then Ai is assumed to have completed successfully; if that were not true, the dialogue participants would have been required to address those actions. '8 When an action 18 By requiring that all applicability conditions, except those negated by the goal, still be satisfied in order for the action to be viewed as successful, we eliminate situations in which the agent of the Inform act is recognized as successful, the system updates its model of the user's beliefs with the effects and goals of the completed action. 
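The completion rule just stated can be sketched as follows: when a new utterance attaches to an ancestor act, every intervening act on the active path is marked successful, provided its applicability conditions (other than those negated by its own goal) still hold, and its effects and goals are added to the model of the user's beliefs. The data layout and helper predicate below are our assumptions.

```python
# Sketch (not the implementation) of recognizing implicit acceptance: attaching a
# new utterance to an ancestor act implicitly closes every act below it.
def mark_implicitly_completed(active_path, attach_index, still_applicable, user_model):
    """active_path[0] is the focus of attention; attach_index is the position of
    the ancestor act the new utterance contributes to.  Each act below that point
    is assumed to have completed successfully if its applicability conditions
    (except any negated by its own goal) still hold; its effects and goal then
    update the model of the user's beliefs."""
    for act in active_path[:attach_index]:
        if still_applicable(act):
            act["completed"] = True
            user_model.extend(act["effects"])
            user_model.append(act["goal"])

# E.g., interpreting (17a) as Express-Doubt attaches it to the Inform act, so the
# Tell below it is closed and its goal -- believe(EA, believe(CA,
# Teaches(Smith, CS360), [C:C]), [C:C]) -- is (tentatively) attributed to EA.
```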
For example, in determining whether (17a) in Figure 8 is expressing doubt at (16) (thereby implicitly indicating that (16) has been understood and that the Tell action has therefore been successful), the system tentatively hypothesizes that the effects and goals of the Tell action hold, resulting in the tentative belief that EA believes that CA believes that Dr. Smith is teaching CS360. If the system determines that this Express-Doubt action is the most coherent interpretation of (17a), it attributes the hypothesized beliefs to EA. Now consider a dialogue in which utterances (15) and (16) in Figure 8 are instead followed by utterance (17b). In this case, the system finds that the best interpretation of (17b) does not contribute to any of the actions in the existing discourse tree; instead (17b) is identified as initiating an entirely new Obtain-Info-Ref action at the discourse level, resulting in a new discourse tree. Since EA has gone on to pursue some other discourse action unrelated to any of the acts that were part of the previous discourse tree, the system recognizes not only EA's understanding of (16) but also EA's implicit acceptance of the proposition conveyed by (16). That is, because the system interprets EA's utterance as foregoing the opportunity to initiate a negotiation subdialogue to address the acceptance of the proposition communicated by the Inform action, the system recognizes that the Inform action has been successful and that EA has implicitly conveyed acceptance of the proposition that Dr. Smith is teaching CS360.", "cite_spans": [], "ref_spans": [ { "start": 727, "end": 735, "text": "Figure 8", "ref_id": null }, { "start": 1281, "end": 1289, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Our recognition algorithm, outlined in Figure 9 , assimilates a new utterance into the existing dialogue context and identifies discourse acts that the speaker is pursuing. It proceeds as follows: Start with the semantic representation of the utterance and extract from it two kinds of linguistic information: 1) clue words that might suggest a generic discourse act, and 2) beliefs that are conveyed by the surface form of the utterance. In our implemented system, possible clue words are explicitly noted in the semantic representation of the utterance, and beliefs conveyed by the surface form of an utterance are extracted from the applicability conditions of the recipe for the surface speech act. For example, the surface form of an utterance such as \"Isn't Dr. Smith on sabbatical?\" conveys that the speaker has a strong but uncertain belief in the queried proposition; this is captured in the applicability conditions of the recipe for a Surface-Neg-YN-Question discourse act (see the appendix).", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "The Recognition Algorithm", "sec_num": "4.7" }, { "text": "Next, use plan inference rules to hypothesize sequences of actions A1,Ai2 ..... 
Ai~ i (inference paths) such that A1 is the surface action directly associated with the speaker's utterance and Aidi is an action on the active path in the existing dialogue context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Recognition Algorithm", "sec_num": "4.7" }, { "text": "By requiring that an inference path link up with an action that is already part of the existing dialogue context, we are capturing the expectation that the new utterance will contribute to an action that has already been initiated. This corresponds to a focusing heuristic that captures expectations for new utterances in an ongoing dialogue (Carberry 1990 ). For any inference path, if Ai~ is not the focus of attention in the existing dialogue context, then Aid~ must be an ancestor of the action that is the focus of attention; tentatively hypothesize that each of the actions on the active path between the existing focus of attention (the focus of attention immediately prior to the new utterance) and Aidl have completed successfully and use this hypothesis in reasoning has become convinced by the other participant that the proposition he was trying to convey is really false. In such cases, the applicability condition believe(_agentl, _proposition) of the Inform will no longer be true and thus the Inform act will not be viewed as completing successfully. We have not addressed situations in which the participants cannot resolve their disagreements and agree to disagree. ", "cite_spans": [ { "start": 342, "end": 356, "text": "(Carberry 1990", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Recognition Algorithm", "sec_num": "4.7" }, { "text": "Pseudocode outlining our recognition algorithm. about the actions on the inference path. 19 If the applicability conditions for any of the actions on an inference path are implausible or if the constraints are not satisfied, reject the inference path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "For actions that are e-actions, determine how much evidence is available for the action. Reject any inference paths containing an e-action for which there is neither linguistic evidence suggesting the generic discourse act (such as the clue word but suggesting an Express-Doubt action) 2\u00b0 nor evidence from the surface form of the utterance, world knowledge, and contextual knowledge indicating that the applicability conditions for the particular discourse action are satisfied. If there is an e-action for which both kinds of evidence exist (both linguistic evidence for the generic discourse act and evidence that the applicability conditions are satisfied), then consider only inference paths containing an e-action for which there is such multiple evidence and select the inference path A1,Ai2,...,Ai~ for which Ai~ i is closest to the focus of attention in the existing dialogue context. Otherwise, if there is an inference path containing an e-action for which one kind of evidence exists, then select the inference path A1,Ai2 ..... Ai~i for which Ai~i is closest to the focus of attention in the existing dialogue context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "If a satisfactory inference path containing an e-action cannot be found, then consider inference paths that contain no e-actions. 
21 If there is more than one such inference path, select the one that links up to an action that is closest to the focus of attention on the discourse level. If there is no inference path linking up to an action on the existing discourse level, then select the inference path that links up to an action that is closest to the focus of attention on the problem-solving and domain levels. (Our dialogue model actually contains three levels: domain, problem-solving, and discourse. This paper is primarily concerned with recognizing actions on the discourse level; we will briefly discuss the domain and problem-solving levels in Section 5.1.) This latter case corresponds to initiating a new discourse segment, and thus a new discourse tree is constructed at the discourse level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "Our algorithm identifies a best interpretation of the speaker's utterance. However, since the algorithm uses heuristics, its interpretation can be incorrect and miscommunication can result. Our current system does not include mechanisms for detecting and recovering from such errors. Clark and Schaeffer (1989) discuss second, third, and fourth turn repairs in discourse, and McRoy and Hirst (1995) provide an excellent formal model of repair in dialogue.", "cite_spans": [ { "start": 284, "end": 310, "text": "Clark and Schaeffer (1989)", "ref_id": null }, { "start": 376, "end": 398, "text": "McRoy and Hirst (1995)", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "The preceding sections have provided the key mechanisms necessary for modeling negotiation subdialogues. Our recipes differentiate between the effects and the goals of a discourse act. Thus, instead of assuming that a communicated proposition will automatically be accepted by the listener, the effect of our Inform action is only that 19 The action at the focus of attention and some of its ancestor actions may have completed successfully, which becomes evident when the participants choose not to address them further. For example, as a result of providing an answer to a question, the active path may include the discourse actions the listener believes that the speaker wants the listener to believe the communicated proposition, while its goal is that the listener will actually adopt the proposition as one of his own beliefs. In addition, the body of the Inform discourse recipe contains not only an action capturing the telling of the proposition but also an action capturing the participants' addressing the believability of the communicated proposition. Our algorithm for recognizing discourse actions and assimilating them into the dialogue model can recognize when an agent is expressing doubt at a communicated proposition by contending that some other proposition is true. Our ability to recognize implicit as well as explicit acceptance of a communicated proposition enables us to identify when an agent has adopted a communicated proposition as part of his beliefs. 
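Before turning to the implementation, the recognition procedure of Section 4.7 (Figure 9) can be summarized in the sketch below. The evidence scoring and tie-breaking are simplified, and the helper predicates (clue_words, hypothesize_inference_paths, plausible, is_e_action, linguistic_evidence, applicability_evidence, distance_to_focus) are placeholders for the system's linguistic, world, and contextual reasoning rather than real functions of the system.

```python
# High-level sketch of the recognition algorithm of Section 4.7 (cf. Figure 9).
def interpret(utterance, dialogue_model, helpers):
    h = helpers
    clue = h.clue_words(utterance)                        # e.g. "but"
    paths = h.hypothesize_inference_paths(utterance, dialogue_model)
    paths = [p for p in paths if h.plausible(p)]          # applicability conditions + constraints

    def evidence(path):
        e_acts = [a for a in path if h.is_e_action(a)]
        if not e_acts:
            return None                                    # no e-action on this path
        ling = any(h.linguistic_evidence(a, clue) for a in e_acts)
        cond = any(h.applicability_evidence(a, utterance, dialogue_model) for a in e_acts)
        return (ling, cond)

    # Prefer paths whose e-actions have both kinds of evidence, then one kind,
    # then e-action-free paths; e-action paths with no evidence at all are rejected.
    both   = [p for p in paths if evidence(p) == (True, True)]
    single = [p for p in paths if evidence(p) in ((True, False), (False, True))]
    plain  = [p for p in paths if evidence(p) is None]
    for group in (both, single, plain):
        if group:
            # Among the surviving paths, attach as close to the focus of attention
            # as possible; a new discourse tree is begun only as a last resort.
            return min(group, key=lambda p: h.distance_to_focus(p, dialogue_model))
    return None
```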
This section describes our implementation and demonstrates our system's capability with two extended negotiation subdialogues that illustrate 1) the role of linguistic, contextual, and world knowledge in resolving expressions of doubt; 2) expressions of doubt at both immediately preceding and earlier utterances; 3) multiple expressions of doubt at the same proposition; 4) negotiation subdialogues embedded within other negotiation subdialogues; and 5) explicit and implicit acceptance. The recipes for the discourse acts used in these examples can be found in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Negotiation Subdialogues", "sec_num": "5." }, { "text": "Our system for recognizing complex discourse acts and handling negotiation subdialogues has been integrated into the tripartite dialogue model presented in Lambert and Carberry (1991) . This dialogue model contains three levels of tree structures, one for each kind of action discussed in Section 4.2 (domain, problem-solving, and discourse) with links among the actions on different levels. At the lowest level, the discourse actions are represented; these actions may contribute to the problem-solving actions at the middle level which, in turn, may contribute to the domain actions at the highest level. Figure 10 illustrates the tripartite dialogue model for a situation in which CA has previously answered a question about the cost of registering for CS180, and then EA asks \"When does CS180 meet?\" Note that the discourse level in Figure 10 only reflects the current query about when CS180 meets, since previous queries have already achieved their discourse goals. Since this paper is concerned almost exclusively with the discourse level of the dialogue model, we will not discuss the overall tripartite model further, except to note that the construction of a new discourse tree requires that the system identify its relationship to existing or new actions at the problem-solving and domain levels (Lambert and Carberry 1991) .", "cite_spans": [ { "start": 156, "end": 183, "text": "Lambert and Carberry (1991)", "ref_id": "BIBREF51" }, { "start": 1306, "end": 1333, "text": "(Lambert and Carberry 1991)", "ref_id": "BIBREF51" } ], "ref_spans": [ { "start": 607, "end": 616, "text": "Figure 10", "ref_id": "FIGREF10" }, { "start": 837, "end": 846, "text": "Figure 10", "ref_id": "FIGREF10" } ], "eq_spans": [], "section": "Implementation", "sec_num": "5.1" }, { "text": "Our system has been implemented in Common Lisp on a Sun Sparcstation and tested in a university advisement domain. Figure 11 lists some of the beliefs included in the system's model of a stereotypical user. In our current implementation, only the clue word but is recognized as linguistic evidence for an Express-Doubt discourse act.", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 124, "text": "Figure 11", "ref_id": null } ], "eq_spans": [], "section": "Implementation", "sec_num": "5.1" }, { "text": "In future work, we will expand the clue words taken into account by our system. Figure 12 contains an extended negotiation dialogue (portions of this dialogue have been given earlier). This dialogue illustrates a number of features that our system can handle. Utterances (18) and (19) establish the initial context in which CA has pro-Domain Level", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 89, "text": "Figure 12", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Implementation", "sec_num": "5.1" }, { "text": ". ............. 
,,, ,. ...................................................... ................... ................ ,, .......... ....... A simple tripartite dialogue model. Dr. Brown is not teaching Architecture. The rest of this section works through the details of how our system processes these utterances, recognizes the discourse acts they are pursuing, and incrementally builds the discourse tree of the dialogue model. In our examples, the system will switch between playing the role of EA and the role of CA. However, when processing an utterance, the system will have access only to the beliefs of the participant whose identity it has assumed (namely, the listener), along with the correct dialogue model at the time the utterance is made. As each of the above actions is inferred, the system checks that its constraints are satisfied and that its applicability conditions are plausible. Since this is the only chain of actions suggested by plan inferencing on the discourse level, the system recognizes these discourse actions; it then infers problem-solving actions from the discourse actions and, eventually, domain actions from the problem-solving actions. As actions are recognized, the system updates its model of EA's beliefs, wants, and knowledge from the actions' applicability conditions. Figure 13 shows the initial tripartite dialogue model that is produced. Since this paper is primarily concerned with the recognition of actions on the discourse level, the remainder of our figures will only display the discourse level and will omit the problem-solving and domain level actions. (19) : Answering the Question. The system is now playing the role of EA (listener) and must understand CA's utterance of (19). The semantic representation of (19) is:", "cite_spans": [ { "start": 2, "end": 136, "text": "............. ,,, ,. ...................................................... ................... ................ ,, .......... .......", "ref_id": null } ], "ref_spans": [ { "start": 1309, "end": 1318, "text": "Figure 13", "ref_id": "FIGREF15" }, { "start": 1604, "end": 1608, "text": "(19)", "ref_id": null } ], "eq_spans": [], "section": "An Extended Example", "sec_num": "5.2" }, { "text": "Surface-Say-Prop(CA, EA, Teaches(Dr. Smith, Arch))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Chaining suggests that the surface speech act might be part of a Tell action, which might be part of an Inform action since the surface speech act and the Tell act are part of the body of the Tell and Inform acts, respectively. The applicability conditions for all of these actions are plausible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The system tries to extend the inference chain from the Inform action. An Inform can be part of the recipes for several discourse actions, including Give-Background and Answer-Ref. However, these actions are e-actions and, with the exception of Answer-Ref, inference of these e-actions is rejected. For example, Give-Background is an e-action because it relates the proposition in the current utterance to some other proposition, the proposition about which background is being given. The recipe for Give-Background contains a constraint that there be a particular relationship between the proposition in the Inform action in its body and some other proposition conveyed by CA. 
Since CA has made no previous utterances, there is no other proposition conveyed by CA and thus this constraint cannot be satisfied. Consequently, Give-Background is rejected. 23 A full discussion of the Give-Background action and its recipe can be found in (Lambert 1993 ).", "cite_spans": [ { "start": 936, "end": 949, "text": "(Lambert 1993", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "On the other hand, Answer-Kef(CA, EA, _term, _proposition) can be inferred from Inform(CA, EA, Teaches (Dr. Smith, Arch) ) and the system has evidence for its recognition. Answer-Re/is an e-action since the parameters _term and _proposition cannot be instantiated from the Inform action that precedes it on the inference chain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "23 Although CA can provide background information prior to conveying the proposition about which the background is being given, the Give-Background action in these instances will be recognized in assimilating CA's second utterance (the utterance about which the background is being given) (Lambert and Carberry 1991) . Tripartite dialogue model for utterance (18).", "cite_spans": [ { "start": 289, "end": 316, "text": "(Lambert and Carberry 1991)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "As discussed in Section 4.5.1, evidence for e-actions may take one of two forms: 1) evidence from world and contextual knowledge and the surface form of the utterance indicating that the applicability conditions for a particular e-action are satisfied, and 2) linguistic evidence from clue words suggesting a generic discourse action. In this case, there are no clue words, so any evidence must be from world and contextual knowledge or the surface form of the utterance. Answer-Re/(CA, EA, _term, _proposition) can be a subaction in the body of 0btain-Info-Ref (EA, CA, _term, _proposition) ; unifying with the Obtain-In/o-Re/ action that is part of the existing discourse tree causes the parameters _proposition and _term to be instantiated as Teaches (Dr . Smith, _course) and _course, respectively, in both the Obtain-In/o-Re/ and Answer-Re/actions.", "cite_spans": [ { "start": 562, "end": 591, "text": "(EA, CA, _term, _proposition)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "World and contextual knowledge provide evidence that the applicability conditions of Answer-Re/are satisfied with these instantiations. The third applicability condition in the Answer-Re/recipe captures the required relationship between the new parameter _proposition and the parameter _propanswer that appears in the Inform discourse act. It indicates that CA must believe that _propanswer (where _propanswer is instantiated from the Inform act as Teaches (Dr. Smith, Arch)) is an instance of the queried proposition, _proposition, with the queried term _term instantiated. Since the system (playing the role of EA in this case) believes that the participants have equivalent knowledge about language and how terms can be instantiated 24 and since the system believes that the two propositions unify, the system has evidence that the third applicability condition is satisfied. In addition, there is evidence that the other two applicability conditions are satisfied. 
With the above instantiations, these applicability conditions become:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Applicability conditions for Answer-Ref:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "believe(CA, want(EA, knowref(EA, _course, believe(EA, Teaches(Dr. Smith, _course)))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The semantic representation of (20) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Surface-Neg-YN-Question(EA, CA, Teaches(Dr.Brown, Arch))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The surface form of (20) suggests that EA thinks that Dr. Brown is teaching Architecture, but is not certain of it. This belief is captured in the applicability condition of the recipe for a Surface-Neg-YN-Question. 24 Since we assume a noise-free medium and well-formed utterances, surface speech acts always execute successfully and are correctly recognized. Thus, the beliefs captured in the applicability conditions of the surface speech act are immediately entered into the system's model of EA's beliefs. The most salient interpretation of (20), that it is addressing the understanding of (19) and thus contributing to the Tell discourse act that is the current focus of attention in the dialogue, is rejected. 25", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "24 This does not mean that the instantiation will result in a true proposition, only that it is a legal instantiation of the term. For example, CS180 is a legal instantiation of the _course term in the proposition Teaches(Jones, _course) although Teaches(Jones, CS180) may be false. We have not addressed the problem of misconceptions about class membership.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The system can construct an inference path suggesting that the utterance contributes to the Inform discourse act that is the parent of the Tell act in the existing discourse tree. In particular, by chaining from subactions to parent actions (actions whose recipes contain the subaction), the system constructs an inference path containing the following chain of actions: a Convey-Uncertain-Belief action, which can be part of an Express-Doubt action, which can be part of an Address-Unacceptance action, which can be part of an Address-Believability action, which can be part of Inform(CA, EA, _propositionl). If the last action on this inference path is unified with the Inform act in the existing discourse tree, then _propositionl in the recipe for Address-Believability will be instantiated as Teaches(Dr. Smith, Arch), indicating that EA uttered (20) in order to express doubt at the proposition that Dr. Smith is teaching Architecture and thereby contribute to addressing the believability of that proposition. This interpretation would indicate that EA had passed up the opportunity to contribute to the Tell discourse act that is the focus of attention in the existing discourse tree. Thus when the system considers this interpretation, it hypothesizes that the Tell act has been successful and that its goal has been achieved, and it tentatively adds believe(EA, believe(CA, Teaches(Dr. Smith, Arch), [C:C]), [C:C]) to the belief model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "As we have seen previously, Express-Doubt is an e-action since it is the action on the inference path at which the parameter _propositionl is first introduced. 
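The tentative belief update mentioned above can be pictured with a small sketch; the strength tags follow the paper's notation, while the class and method names are hypothetical, not the implemented system's:

```python
# Illustrative sketch (ours): beliefs held at a given strength, with "tentative" entries
# that can be withdrawn if the hypothesized interpretation they depend on is rejected.
# The strength tags ([C:C], [S:C], [W:S]) follow the paper; everything else is invented.

class BeliefModel:
    def __init__(self):
        self.strength = {}       # belief (a string) -> strength tag
        self.tentative = set()   # beliefs added while testing an interpretation

    def add(self, belief, strength, tentative=False):
        self.strength[belief] = strength
        if tentative:
            self.tentative.add(belief)

    def commit(self):
        """The hypothesized interpretation was adopted: keep the tentative updates."""
        self.tentative.clear()

    def retract(self):
        """The hypothesized interpretation was rejected: undo the tentative updates."""
        for belief in self.tentative:
            del self.strength[belief]
        self.tentative.clear()

model_of_EA = BeliefModel()
# Hypothesizing that the Tell act of (19) succeeded while testing the Express-Doubt
# reading of (20): EA is taken to believe that CA believes Dr. Smith teaches Architecture.
model_of_EA.add("believe(EA, believe(CA, Teaches(Dr. Smith, Arch), [C:C]), [C:C])",
                "[C:C]", tentative=True)
# Once the Express-Doubt reading is confirmed, the update becomes permanent:
model_of_EA.commit()
```

Keeping hypothesized effects separate in this way makes it cheap to withdraw them if the interpretation they depend on turns out to lack evidence.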
Therefore, although the applicability conditions for each of the actions on the above inference path are plausible, we need evidence for the Express-Doubt act. There is no linguistic clue suggesting that (20) is an Express-Doubt action. The system then checks to see if it has evidence that the applicability conditions for the Express-Doubt action are satisfied. The system's belief model provides evidence for the first applicability condition, that EA believes that CA believes that Dr. Smith teaches Architecture, since it has been tentatively updated to include the goal of the Tell discourse act, as noted above. The belief model also provides evidence for the second applicability condition, since it has been updated to include the beliefs captured in the applicability conditions of the recipe for the surface speech act. The system's model of a stereotypical user contains the beliefs given in Figure 11, including the belief that there is only one professor per course. This stereotypical belief provides evidence for the final applicability condition (that EA believes that Dr. Brown teaching Architecture implies that Dr. Smith is not teaching Architecture). Since users typically believe that only one teacher is used per course, perhaps EA does also. If EA believes that there is only one professor per course and that Dr. Brown is teaching Architecture, then EA would believe that Dr. Smith would not be teaching Architecture. So the system has evidence for all three of the applicability conditions in the Express-Doubt recipe. In addition, the constraint of the Express-Doubt action is satisfied since the proposition that Dr. Smith teaches Architecture is a parameter of an action on the active path and thus is salient.", "cite_spans": [], "ref_spans": [ { "start": 1047, "end": 1056, "text": "Figure 11", "ref_id": null } ], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Since there is evidence from world and contextual knowledge and the surface form of the utterance that the applicability conditions hold for interpreting (20) as an expression of doubt and since there is no evidence for any other e-action, the system infers that this is the correct interpretation and stops. Thus, (20) is interpreted as an Express-Doubt action, as shown in Figure 15. In assimilating CA's utterance of (21), the system (now playing the role of EA, the listener) hypothesizes that the discourse actions performed by EA in (20) have been successful and tentatively updates its belief model to reflect the effects and goals of these actions. In particular, the following two beliefs (among others) are tentatively added to the system's model of CA's beliefs: the belief that EA believes that Dr. Brown's teaching Architecture implies that Dr. Smith is not teaching Architecture, and the belief that EA has an uncertain belief in the proposition that Dr. Brown is teaching Architecture. Chaining from the semantic representation of (21) suggests a Tell action and then an Inform action, which can be part of a Resolve-Conflict action, which in turn can be part of an Address-Unacceptance action. Resolve-Conflict (see the appendix) is an e-action since it introduces two new propositions (the propositions about which there is conflict) that cannot be instantiated by chaining from the Inform action in its body, and the system must be able to determine what conflict the utterance is trying to resolve. If the Address-Unacceptance action is unified with the Address-Unacceptance that is part of the existing discourse tree, then the conflicting propositions, _propositionl and _proposition2, are instantiated as Teaches(Dr. Smith, Arch) and Teaches(Dr.Brown, Arch), respectively, in both Address-Unacceptance and Resolve-Conflict. The system has evidence for the Resolve-Conflict action with these instantiations. The constraints that _propositionl and _proposition2 be salient and that _proposition2 and _proposition3 be the opposite of one another are satisfied. First, Teaches(Dr. 
Smith, Arch) and Teaches(Dr.Brown, Arch) are the propositions instantiating _propositionl and _proposition2, and they are salient since they are part of an action on the active path of the existing discourse tree. Second, the proposition instantiating _proposition2 is the opposite of the proposition conveyed by CA's current utterance. Evidence for the first two applicability conditions, 1) that CA believes that EA believes that Dr. Brown's teaching Architecture implies that Dr. Smith is not teaching Architecture and 2) that CA believes that EA has an uncertain belief in the proposition that Dr. Brown teaches Architecture, is provided by the system's tentatively updated model of CA's beliefs. Evidence for the final applicability condition, that CA believes that Dr. Smith is teaching Architecture, is also provided by the system's model of CA's beliefs. When CA's Inform action in (19) was recognized, the system updated its model of CA's beliefs to include the beliefs contained in the applicability conditions for the Inform act; thus the belief model indicates that CA believes that Dr. Smith is teaching Architecture. Since the system has evidence for the e-action on the inference path (and since there are no other inference paths containing e-actions), the system recognizes this chain of actions and interprets (21) as informing EA that Dr. Brown is not teaching Architecture as part of attempting to resolve the conflict suggested by EA. Thus the Resolve-Conflict action is recognized as contributing to the Address-Unacceptance action that was begun in (20).", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 384, "text": "Figure 15", "ref_id": null } ], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The semantic representation of (22) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Surface-Say-Prop(CA, EA, on-sabbatical(Dr.Brown))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "The Surface-Say-Prop is part of Tell(CA, EA, on-sabbatical(Dr.Brown)), which is part of Inform(CA, EA, on-sabbatical(Dr.Brown)). Further chaining suggests that the Inform action could be part of several other actions. We will discuss two of these possibilities, Address-Acceptance and Explain-Claim. In the Address-Acceptance case, CA might be uttering (22) to support the statement that she made in (21); in the Explain-Claim case, CA might be uttering (22) to explain why the supposedly conflicting propositions are not really in conflict.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "Let us examine the Address-Acceptance case first. Address-Acceptance(CA, EA, _propositionl) is part of a recipe for Address-Believability(CA, EA, _propositionl), which in turn is part of a recipe for Inform(CA, EA, _propositionl).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": "If this is CA's immediately preceding Inform act, then unifying with this Inform act will cause _propositionl to be instantiated as ~Teaches(Dr.Brown, Arch) in the Inform, Address-Believability, and Address-Acceptance actions. 
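Schematically, the choice between the Address-Acceptance and Explain-Claim readings follows the evidence-based selection described in Section 4.5.1: a candidate inference path is kept only if there is clue-word evidence or world/contextual evidence for its e-action, and a unique surviving candidate is adopted. A rough sketch of that decision procedure, with invented condition labels standing in for the recipes' applicability conditions (this is our own illustration, not the system's code):

```python
# Illustrative sketch (ours): choosing among candidate inference paths for an utterance.
# A candidate is kept only if its e-action has linguistic evidence (a clue word suggesting
# a generic act) or world/contextual evidence that its applicability conditions hold.

def select_interpretation(candidates, has_clue_evidence, conditions_supported):
    """candidates: list of (e_action, applicability_conditions) pairs."""
    viable = []
    for e_action, conditions in candidates:
        linguistic = has_clue_evidence(e_action)
        contextual = all(conditions_supported(c) for c in conditions)
        if linguistic or contextual:
            viable.append(e_action)
    return viable[0] if len(viable) == 1 else None   # unique candidate: adopt it

# The two readings considered for utterance (22); the condition labels are invented
# stand-ins for the recipes' actual applicability conditions.
candidates = [
    ("Address-Acceptance", ["on-sabbatical implies not teaching",
                            "EA may disbelieve the claim of (21)"]),
    ("Explain-Claim",      ["being on sabbatical reconciles the two claims"]),
]
supported = {"on-sabbatical implies not teaching",
             "EA may disbelieve the claim of (21)"}
print(select_interpretation(candidates,
                            has_clue_evidence=lambda act: False,   # (22) has no clue word
                            conditions_supported=lambda c: c in supported))
# -> Address-Acceptance
```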
Address-Acceptance is an e-action, since it is the action on the inference path at which a new proposition is first introduced. If _proposition3 in the recipe for Address-Acceptance (see the appendix) is instantiated with Teaches (Dr. Brown, Arch), then the constraints are obviously satisfied. (Note that _proposition2 in the recipe for Address-Acceptance is instantiated with on-sabbatical (Dr. Brown) as a result of chaining from the surface speech act to the Inform act in the body of the Address-Acceptance action.) The system has evidence that the applicability conditions are satisfied with these instantiafions. Evidence for the first applicability condition is provided by the system's model of a stereotypical user, which indicates that it is generally believed that professors on sabbatical do not teach. Evidence for the second applicability condition is provided by the system's model of CA's beliefs, which was updated to contain the effect of utterance 20 Resolve-Conflict and Explain-Claim actions to be instantiated with Teaches (Dr. Smith, Arch) and Teaches (Dr . Brown , Arch), respectively. In the recipe for Explain-Claim, _proposition3 has been instantiated with on-sabbatical (Dr. Brown) by chaining from the surface speech act to the Explain-Claim action. Explain-Claim is an e-action. However, the system lacks evidence for its second applicability condition, that CA believes that Dr. Brown being on sabbatical implies that Dr. Brown teaching Architecture and Dr. Smith teaching Architecture are not in conflict with one another. Thus this potential interpretation is rejected. Since the inference path containing the Address-Acceptance discourse act is the only one whose e-action has evidence supporting its recognition, the system recognizes (22) as addressing the acceptance of the proposition conveyed by (21)--namely, that Dr. Brown is not teaching Architecture. Thus, CA's response in (21) and (22) indicates that CA is trying to resolve EA's and CA's conflicting beliefs. The structure of the discourse tree after these utterances is shown in Figure 16 , above the numbers (18)-(22). 26", "cite_spans": [], "ref_spans": [ { "start": 2305, "end": 2314, "text": "Figure 16", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Utterance", "sec_num": "5.2.2" }, { "text": ": Embedded Negotiation Subdialogue. The system is now playing the role of CA (listener) and must assimilate EA's utterance of (23). The semantic representation of (23) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "Surface-Neg-YN-Question(EA, CA, on-campus(Dr.Brown, Yesterday)) Clueword(But)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "The Surface-Neg-YN-Question in utterance (23) is one way to Convey-Uncertain-Belief. salient open proposition at this point in the dialogue and thus the most expected candidate. Plan chaining suggests that the Convey-Uncertain-Belief could be part of an Express-Doubt action, which in turn could be part of an Address-Unacceptance action, which could be part of an Address-Believability action, which could be part of the Inform action in (22). As in utterance 20, there is evidence that the applicability conditions for the e-action (the Express-Doubt action) hold: for example, world knowledge indicates that a typical user believes that professors who are on campus are not on sabbatical, providing evidence for the third applicability condition. 
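The world knowledge appealed to here can be thought of as a small set of default rules. The following sketch (our own encoding, not the system's) shows how such rules could supply evidence that a contended proposition conflicts with the proposition being doubted; the two rules are the ones mentioned in the text, while the representation itself is hypothetical:

```python
# Illustrative sketch (ours): stereotypical world knowledge as default rules that supply
# evidence that two propositions conflict.

STEREOTYPICAL_RULES = [
    # "professors who are on campus are not on sabbatical"
    (("on-campus", "?x"), ("not", ("on-sabbatical", "?x"))),
    # "professors who are on sabbatical do not teach"
    (("on-sabbatical", "?x"), ("not", ("teaches", "?x", "?course"))),
]

def suggests_conflict(p_i, p_doubt, rules):
    """Rough check: does some default rule whose antecedent matches p_i have a consequent
    that negates p_doubt?  ('?'-variables match anything; bindings are not checked.)"""
    def match(pattern, term):
        if isinstance(pattern, str) and pattern.startswith("?"):
            return True
        if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
            return all(match(p, t) for p, t in zip(pattern, term))
        return pattern == term
    return any(match(ante, p_i) and cons[0] == "not" and match(cons[1], p_doubt)
               for ante, cons in rules)

# Utterance (23) contends that Dr. Brown was on campus, casting doubt on (22)'s claim
# that he is on sabbatical:
print(suggests_conflict(("on-campus", "Dr. Brown"),
                        ("on-sabbatical", "Dr. Brown"),
                        STEREOTYPICAL_RULES))    # -> True
```

Because these are default rules, an exception (such as returning to campus to give a colloquium) can later be used to explain away the apparent conflict, as utterance (25) does.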
Thus, there is both linguistic evidence for a generic nonacceptance discourse act and evidence from world and contextual knowledge and the surface form of the utterance that the applicability conditions and constraints are satisfied for the specific action of expressing doubt at the proposition that Dr. Brown is on sabbatical. Since no other e-action has both kinds of evidence, (23) is interpreted as expressing doubt at the proposition conveyed by (22).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "The system now reverts to playing the role of EA (listener) and must assimilate the next two utterances in which CA resolves the doubt that EA has expressed in (23), by agreeing that Dr. Brown was on campus yesterday but explaining the purpose of his visit (one that is an exception to the rule that people on sabbatical are not on campus). Plan inferencing for utterance (24) is identical to that of utterance (21) and will not be described further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "From the Surface-Say-Prop in (25) , plan inference rules suggest that the Surface-Say-Prop is part of a Tell action that is part of an Inform action. As was the case for utterance (22), the Inform action can be part of several different higher-level actions, including Address-Acceptance and Explain-Claim. Since Address-Acceptance is a subaction in a recipe for Address-Believability, and Address-Believability is a subaction in a recipe for Inform, CA might be trying to offer support for the Inform act of (24), Inform(CA, EA, on-campus(Dr.Brown, Yesterday)). However, this time the applicability conditions for the Address-Acceptance action are implausible. In particular, as a result of the effect of the Convey-Uncertain-Belief action in (23), the system's model of CA's beliefs indicates that CA believes that EA has some belief in the proposition that Dr. Brown was on campus yesterday. The second applicability condition of the recipe for addressing the acceptance of the proposition conveyed by (24), believe(CA, believe(EA, -~on-campus(Dr.Brown, Yesterday), [W:S]), [W:C]), conflicts with this belief--i.e., Address-Acceptance is reasonable to pursue only when an agent has some reason to believe that the listener disbelieves the proposition in question. Since the second applicability condition is implausible, the inference path containing the Address-Acceptance action is rejected. However, the system does have evidence for interpreting (25) as an Explain-Claim.", "cite_spans": [ { "start": 29, "end": 33, "text": "(25)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "Explain-Claim(CA, EA, _propositionl, _proposition2) is part of the recipe for Resolve-Conflict(CA, EA, _propositionl, _proposition2). If this is the Resolve-Conyqict action that is closest to the focus of attention in the existing discourse tree, then unification will cause _propositionl and _proposition2 in Resolve-Conflict and Explain-Claim to be instantiated respectively with on-sabbatical(Dr.Brown) and on-campus(Dr. Brown, Yesterday). In the recipe for Explain-Claim, _proposition3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "was instantiated with Give (Dr. Brown, University-Colloquiura) during chaining from the surface speech act. 
The system has evidence for the e-action Explain-Claim because it has evidence that its applicability conditions hold--namely, that CA believes that EA believes that Dr. Brown's being on campus implies that he is not on sabbatical from the effect of the Express-Doubt action; that CA believes that Dr. Brown's giving a University colloquium implies that being on campus is not in conflict with being on sabbatical, from the model of stereotypical beliefs; and that CA believes that EA believes that Dr. Brown was on campus yesterday, from the effect of the Convey-Uncertain-Belief discourse act accomplished by (23). Since this is the only inference path containing an e-action for which the system has evidence, utterance (25) is interpreted as contributing to resolving the conflict suggested in (23) by explaining the claim that the propositions do not really conflict in this instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "The system now reverts to playing the role of CA (listener) and must assimilate EA's utterances. In (26), EA indicates explicit acceptance of the most salient Inform action, so the system is able to determine that EA has accepted CA's response in (25). Other inform actions remain open for rejection and must still be implicitly or explicitly accepted. In this dialogue, the Inform actions in (22) and (21) are implicitly accepted in utterance (27). Althougl~ utterance (27) might cause one to hypothesize that (26) was indicating explicit acceptance of all of the propositions conveyed by utterances (21)-(25), it is not possible to decide with certainty from a simple \"ok\" exactly how many", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "Inform actions EA is accepting. Thus our system assumes that the speaker accepts as little as possible, which is the most salient Inform action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "Utterances (23)-(26) illustrate our model's handling of negotiation subdialogues embedded within other negotiation subdialogues. The subtree contained within the dashed lines in Figure 16 shows the structure of this embedded negotiation subdialogue. The linguistic clue but in (27) again suggests nonacceptance. Since (25) has been explicitly accepted, the propositions open for rejection are those conveyed in (22), (21), and (19). Once again, chaining from the surface speech act can produce a chain of actions containing an Express-Doubt action and terminating with one of the Inform actions that is on the active path of the existing discourse tree. If the Inform action is Inform(Ca, EA, Teaches (Dr. Smith, Arch)), then the Express-Doubt action will be instantiated as Expres s-Doubt (EA, CA, Teaches (Dr. Smith, Arch), Specialty (Dr. Smith, Theory) ). The system has evidence that this action's applicability conditions are satisfied. The evidence for the first two applicability conditions is similar to the evidence for interpreting utterance (20) as expressing doubt. World knowledge provides evidence for the third applicability condition. The system's model of stereotypical user beliefs indicates that it is typically believed that faculty only teach courses in their field. Other system knowledge states that Architecture and Theory are different fields. So in this case, the system's world knowledge provides evidence that Dr. 
Smith's being a theory person is an indication to the user that Dr. Smith does not teach Architecture. Thus the system has two kinds of evidence for interpreting (27) as expressing doubt at the proposition conveyed by (19): linguistic evidence for a generic Express-Doubt discourse act and evidence that the applicability conditions are satisfied for the particular discourse act of expressing doubt at the proposition that Dr. Smith is teaching Architecture. Since the system does not have multiple evidence for any of the other interpretations, the system recognizes (27) as again expressing doubt about the proposition conveyed by (19). Thus, the system is able to recognize and assimilate a second expression of doubt at the proposition conveyed in (19), even after intervening dialogue. The discourse tree for the entire dialogue is given in Figure 16 . A second negotiation subdialogue.", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 187, "text": "Figure 16", "ref_id": "FIGREF6" }, { "start": 2289, "end": 2298, "text": "Figure 16", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Utterances (23)-(26)", "sec_num": "5.2.5" }, { "text": "Since EA's utterance reverts back to addressing the acceptance of the proposition conveyed by (19), EA has foregone the opportunity to challenge the claims made in utterances (22) and (21). Since the befief model indicates that the applicability conditions of the Inform actions are still satisfied (except those negated by achievement of the goal), the system infers that EA has implicitly accepted the statements in (22) and (21), that Dr. Brown is on sabbatical and that Dr. Brown is not teaching Architecture, and the system updates its model of EA's beliefs. Figure 17 contains a second negotiation dialogue. Due to space limitations, we will only discuss two interesting features of the dialogue and its processing by our system. Utterance (30) illustrates the use of a linguistic clue word to suggest an expression of doubt. In interpreting utterance (30), the system constructs an inference path containing the action:", "cite_spans": [], "ref_spans": [ { "start": 564, "end": 573, "text": "Figure 17", "ref_id": null } ], "eq_spans": [], "section": "Utterance (27): Multiple", "sec_num": "5.2.6" }, { "text": "Express-Doubt(EA, CA, Meets(CS510, MonYPM), Teaches(Dr. Jones, CS510))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Second Example", "sec_num": "5.3" }, { "text": "Although the system does not have evidence that all of the applicability conditions for this Express-Doubt action are satisfied, the linguistic clue but does provide evidence for the generic Express-Doubt act. Since this is the only inference path containing an e-action for which there is evidence, the system recognizes (30) as expressing doubt at the proposition that CS510 meets on Monday at 7PM by contending that Dr. Jones is teaching CS510. In this case, the system lacked evidence for the third applicability condition in the recipe for Express-Doubt. But having recognized that EA is expressing doubt, it attributes to EA the beliefs captured in the applicability conditions. In particular, the system attributes to EA the belief that Dr. Jones teaching CS510 implies that CS510 would not meet on Monday at 7PM, though it has no idea why EA believes that this implication holds--perhaps EA befieves that Dr. 
Jones has to be home to take care of his children at night.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Second Example", "sec_num": "5.3" }, { "text": "When utterance (33) occurs, there are three propositions that have not yet been accepted by EA, and the system considers the possibility that EA is performing one of three express doubt actions, namely In all three cases, the system lacks evidence that the third applicability condition in the Express-Doubt recipe is satisfied. However, the linguistic clue word but provides evidence for a generic Express-Doubt action. Since the system has equivalent evidence for all three of the Express-Doubt acts, contextual knowledge is used to choose among them. Since the proposition Teaches (Dr. Hart, CS510) is closest to the existing focus of attention in the discourse tree, it is the most salient of the three propositions that are open for rejection. Utterance (33) is therefore interpreted as expressing doubt at the proposition that Dr. Hart is teaching CS510 by contending that it is a graduate-level course. Thus contextual knowledge arbitrates when equivalent evidence is available for several specific discourse acts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Second Example", "sec_num": "5.3" }, { "text": "We undertook an evaluation of our prototype system both to assess whether it derived appropriate interpretations of utterances and to identify areas for further research. We obtained eight human volunteers, six of whom are not engaged in NLP research and two of whom are involved in unrelated NLP projects. The subjects were given a set of world knowledge stereotypically believed in the domain, such as that faculty on sabbatical do not normally teach. The subjects were presented with a set of dialogues and asked to analyze several utterances from each dialogue. The selected utterances did not include simple questions initiating the dialogue or straightforward answers to questions, since it seemed likely that the subjects would agree with the system's interpretation and thus the results would be biased in favor of the system. The selected utterances did include surface negative questions (both with and without a clue word but), statements interpreted by our system as support for a previous assertion or as explanations about why a proposition was not in conflict with a previous claim, and examples of implicit acceptance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "For each utterance selected for analysis, the subjects were given a suggested interpretation, and asked whether the suggested interpretation was reasonable and whether they could identify a better interpretation. 27 For 15 of 20 utterances, the subjects unanimously believed that the system's interpretation was best. It should be noted there was unanimous agreement that utterance (42) below should be interpreted as an expression of doubt but that utterance (39) should not. There were two categories of utterances where the subjects disagreed. In the case of surface negative questions that did not express doubt, such as utterance (39) above, 27 The subjects were not told that the suggested interpretation was the one produced by our system but only that we were trying to determine how utterances in a discourse should be interpreted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." 
}, { "text": "the suggested interpretation given to the subjects was that the speaker was seeking information about whether the queried proposition was true. When the subjects did not interpret the utterance as an expression of doubt (see below), five of them contended that a better interpretation would be that EA was seeking verification of the queried proposition. Since our system already recognizes from the surface negative question that the speaker has a strong (but uncertain) belief in the queried proposition, it is easy to extend our system so that it can explicitly identify a Seek-Veri~cation discourse act. The other category for which there was disagreement was surface negative questions where a clue word was not present and the stereotypical domain knowledge did not provide a conflict. In two of five instances, some subjects used their own experience to identify a mutual belief that might suggest a conflict, such as the belief that sometimes certain faculty are not allowed to teach graduate level courses. While this knowledge cannot be captured as a default rule, it does represent a kind of shared experiential knowledge that would provide weak evidence for a potential conflict. However, it should be noted that our subjects were split on how these problematic cases should be interpreted, agreeing with the system's interpretation slightly more than half the time. There was also another such surface negative question where one subject viewed the system's interpretation as reasonable but argued that an expression of doubt would be a better interpretation. In order to derive this interpretation, the subject posited an attribute for the speaker that was neither evident from the dialogue nor stereotypically true. (The other subjects agreed that the system's interpretation was best.) These examples bear on the issue of accommodation mentioned in Section 4.5.1, since one could argue that the subjects who interpreted the utterances as expressions of doubt were trying to accommodate an incompatibility. This is particularly true in the last instance where the subject found it necessary to resort to nonshared knowledge in making the interpretation. However, it is unclear whether a speaker would expect a listener to recognize such utterances as expressions of doubt without additional clues. As noted below, our future research will consider other forms of evidence (gestural and intonational) in order to resolve such ambiguous utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "After they had finished analyzing the dialogues, we asked the subjects to construct three dialogues containing an expression of doubt and to explain why the expression of doubt should be interpreted as such. While these dialogues provided no contradictions to our approach, they did provide a couple of interesting examples, such as the following dialogue, that suggest areas for future work. 43EA: We have basil, parsley, and oregano, but we need marjoram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "(44) CA: Isn't marjoram the same as oregano?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "Clearly (44) is expressing doubt at the claim conveyed by (43), but it relies on shared world knowledge that if a list contains X items, the X items are presumed to be different. 
Our system does not currently include such knowledge. Our subjects commented that intonation and facial gesture might alter their interpretation of the utterances in the dialogues; we are beginning research that will take these kinds of evidence into account (Carberry, Chu-Carroll, and Lambert 1996) . In addition, we will be expanding the kinds of world knowledge incorporated into our system, and will be considering both the strength of different pieces of evidence and how several pieces of weak evidence affect interpretation. We would also like to extend our use of linguistic clues to include a wide variety of clue words and phrases and to recognize the functions that these words can play. In addition, we are developing a plan-based response generation component (Chu-Carroll and Carberry 1994) .", "cite_spans": [ { "start": 438, "end": 479, "text": "(Carberry, Chu-Carroll, and Lambert 1996)", "ref_id": "BIBREF11" }, { "start": 953, "end": 984, "text": "(Chu-Carroll and Carberry 1994)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "Initial work on this component includes a subsystem that can identify what evidence to present to a user when conflicts arise Carberry 1995b, 1998) and what information to request when the system cannot rationally decide whether to accept a proposition conveyed by the user Carberry 1995a, 1998) .", "cite_spans": [ { "start": 126, "end": 147, "text": "Carberry 1995b, 1998)", "ref_id": null }, { "start": 274, "end": 295, "text": "Carberry 1995a, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "We will also be investigating the scale-up of our system as we extend its coverage. Part of the motivation for the content of the current discourse recipes was their future extension to other domains, such as tutoring. For example, as discussed in Section 4.2.1, the formulation of our Ask-Ref recipe allows it to be used as a subaction of a future Test-Knowledge discourse act since the recipe does not presume that the speaker is ignorant about the correct value of the requested term. This should aid in extending the kinds of discourse acts that can be handled. Although transporting our system to another domain will require encoding new domain knowledge and new domain recipes, the recipes for discourse and problem-solving acts are domain-independent and thus will remain unchanged. Moreover, the knowledge captured in our recipes is communicative knowledge shared by dialogue participants; we believe that such communicative knowledge (such as how to express doubt) is finite although the possible intentions (such as the intention of expressing doubt at Dr. Smith teaching CS360) are infinite. Grosz and Sidner (1986) postulated a theory of discourse structure that included linguistic, intentional, and attentional components, and they argued that the dominance and satisfaction-precedes relationships between discourse segments must be identified in order to determine discourse structure. They also noted three kinds of information that contribute to determining the purposes of discourse segments and their relationship to one another: linguistic markers, utterance-level intentions, and general knowledge about actions and objects. 
Subsequently Lochbaum (1994) developed an algorithm based on Grosz and Sidner's SharedPlan model (Grosz and Sidner 1990) that recognizes discourse segment purposes and discourse structure.", "cite_spans": [ { "start": 1103, "end": 1126, "text": "Grosz and Sidner (1986)", "ref_id": "BIBREF40" }, { "start": 1659, "end": 1674, "text": "Lochbaum (1994)", "ref_id": "BIBREF57" }, { "start": 1743, "end": 1766, "text": "(Grosz and Sidner 1990)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Future Work", "sec_num": "6." }, { "text": "We contend that, in order to understand utterances and respond appropriately, it is necessary not only to determine the structure of the discourse but also to identify the communicative acts that an agent intends to perform with an utterance. 2s For example, if a listener does not recognize when an utterance such as \"Wasn't Dr. Smith on campus yesterday?\" is expressing doubt, then the listener's response might fail to address the reasons for this doubt. Our research provides a computational algorithm that uses multiple knowledge sources to recognize complex discourse acts, including expressions of doubt, and to identify their relationship to one another. This algorithm and our strategy for recognizing implicit acceptance enable us to model negotiation subdialogues, something that previous systems have been unable to handle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grosz and Sidner's Theory of Discourse Processing", "sec_num": "7.1" }, { "text": "Several researchers have built argument understanding systems, but none has addressed participants coming to an agreement or mutual belief about a particular situation, either because the researchers investigated monologues only (Cohen 1987; Cohen and Young 1991) , or because they assumed that dialogue participants do not change their minds (Flowers, McGuire, and Birnbaum 1982; Quilici 1991) . Cohen (1987) developed an argument understanding system that used clue words and an evidence oracle to build a discourse structure for arguments based on which utterances served as support for other utterances. Cohen's model, however, handles only monologues, so responses to arguments are not modeled in her system. Birnbaum, Flowers, Dyer, and McGuire (Flowers and Dyer 1984; McGuire, Birnbaum, and Flowers 1981; Birnbaum, Flowers, and McGuire 1980) developed a system that finds flaws in arguments and determines how to respond. Quilici (1991) created a system in which agents respond to each other's arguments based on a justification pattern that will support the agent's position. Both Quilici and Birnbaum et al., however , assume that all participants in an argument will retain their opinion throughout the course of the argument, and concentrate mainly on how to find flaws in arguments and construct responses based on those findings; they do not address actually winning arguments. Reichman (1981) modeled informal debates by using her idea of context spaces and expectations to determine who should respond and what possible topics might be addressed. 
However, she does not provide a detailed computational mechanism for recognizing the role of each utterance in a debate.", "cite_spans": [ { "start": 229, "end": 241, "text": "(Cohen 1987;", "ref_id": "BIBREF29" }, { "start": 242, "end": 263, "text": "Cohen and Young 1991)", "ref_id": "BIBREF30" }, { "start": 343, "end": 380, "text": "(Flowers, McGuire, and Birnbaum 1982;", "ref_id": "BIBREF36" }, { "start": 381, "end": 394, "text": "Quilici 1991)", "ref_id": "BIBREF67" }, { "start": 397, "end": 409, "text": "Cohen (1987)", "ref_id": "BIBREF29" }, { "start": 714, "end": 774, "text": "Birnbaum, Flowers, Dyer, and McGuire (Flowers and Dyer 1984;", "ref_id": null }, { "start": 775, "end": 811, "text": "McGuire, Birnbaum, and Flowers 1981;", "ref_id": "BIBREF60" }, { "start": 812, "end": 848, "text": "Birnbaum, Flowers, and McGuire 1980)", "ref_id": "BIBREF4" }, { "start": 929, "end": 943, "text": "Quilici (1991)", "ref_id": "BIBREF67" }, { "start": 1089, "end": 1125, "text": "Quilici and Birnbaum et al., however", "ref_id": null }, { "start": 1391, "end": 1406, "text": "Reichman (1981)", "ref_id": "BIBREF70" } ], "ref_spans": [], "eq_spans": [], "section": "Argument Understanding Systems", "sec_num": "7.2" }, { "text": "Several models of discourse have recently been built which view conversation as a kind of collaborative behavior in which speakers try to make themselves understood and listeners work with speakers to help speakers attain this goal. Clark and Schaefer (1989) contend that utterances must be \"grounded,\" or understood, by both parties, but they do not address conflicts in belief, only lack of understanding. Walker (1992) has found many occasions of redundancy in collaborative dialogues, and explains these by claiming that people repeat themselves in order to ensure that each utterance has been understood. 29 Clark and Wilkes-Gibbs (1990) propose a collaborative model of dialogue in which referring is viewed as a collaborative process and each conversation unit is viewed as a contribution, which consists of 1) an utterance that performs a referring action, and 2) the utterances required to understand the referent described in the utterance. Heeman (1991) implemented this model in a plan-based collaborative model of dialogue that is able to plan and recognize referring expressions and their corrections.", "cite_spans": [ { "start": 233, "end": 258, "text": "Clark and Schaefer (1989)", "ref_id": "BIBREF17" }, { "start": 408, "end": 421, "text": "Walker (1992)", "ref_id": "BIBREF84" }, { "start": 613, "end": 642, "text": "Clark and Wilkes-Gibbs (1990)", "ref_id": "BIBREF18" }, { "start": 951, "end": 964, "text": "Heeman (1991)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Models of Collaborative Behavior", "sec_num": "7.3" }, { "text": "Other collaborative models assume that two participants are working together to achieve a common goal (Cohen and Levesque 1990a , 1991a , 1991b Lochbaum, Grosz, and Sidner 1990; Lochbaum 1991; Grosz and Sidner 1990; Searle 1990 ). Searle (1990) proposes a model in which the two agents working together have a joint intention, a \"we intention,\" instead of individual intentions. Cohen and Levesque (1990a , 1990b , 1990c , 1991a , 1991b have developed a formal theory in which agents are jointly committed to accomplishing a goal, so both parties have individual intentions to accomplish the goal as part of their joint commitment. 
Grosz, Lochbaum, and Sidner (Grosz and Sidner 1990; Lochbaum, Grosz, and Sidner 1990; Lochbaum 1991 ) have specified a system in which two agents are working to accomplish some common goal by building a \"shared plan\" in which each agent holds certain beliefs and intentions. These beliefs and intentions indicate that the agents intend to perform some joint action, and that they believe they can perform this action. All of these models indicate the need for modeling collaborative dialogue, but none suggests a system that can handle the kind of negotiation subdialogues that people often engage in when trying to negotiate their conflicts in belief, even when they are both working towards the same goal.", "cite_spans": [ { "start": 102, "end": 127, "text": "(Cohen and Levesque 1990a", "ref_id": null }, { "start": 128, "end": 135, "text": ", 1991a", "ref_id": "BIBREF26" }, { "start": 136, "end": 143, "text": ", 1991b", "ref_id": "BIBREF27" }, { "start": 144, "end": 177, "text": "Lochbaum, Grosz, and Sidner 1990;", "ref_id": "BIBREF58" }, { "start": 178, "end": 192, "text": "Lochbaum 1991;", "ref_id": "BIBREF56" }, { "start": 193, "end": 215, "text": "Grosz and Sidner 1990;", "ref_id": "BIBREF41" }, { "start": 216, "end": 227, "text": "Searle 1990", "ref_id": "BIBREF77" }, { "start": 231, "end": 244, "text": "Searle (1990)", "ref_id": "BIBREF77" }, { "start": 379, "end": 404, "text": "Cohen and Levesque (1990a", "ref_id": null }, { "start": 405, "end": 412, "text": ", 1990b", "ref_id": "BIBREF23" }, { "start": 413, "end": 420, "text": ", 1990c", "ref_id": "BIBREF25" }, { "start": 421, "end": 428, "text": ", 1991a", "ref_id": "BIBREF26" }, { "start": 429, "end": 436, "text": ", 1991b", "ref_id": "BIBREF27" }, { "start": 632, "end": 683, "text": "Grosz, Lochbaum, and Sidner (Grosz and Sidner 1990;", "ref_id": "BIBREF58" }, { "start": 684, "end": 717, "text": "Lochbaum, Grosz, and Sidner 1990;", "ref_id": "BIBREF58" }, { "start": 718, "end": 731, "text": "Lochbaum 1991", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Models of Collaborative Behavior", "sec_num": "7.3" }, { "text": "We have presented a plan-based model for handling cooperative negotiation subdialogues. Our system infers both the communicative actions that people pursue when speaking and the beliefs underlying these actions. Beliefs, and the strength of these beliefs, are recognized from the surface form of utterances and from the explicit and implicit acceptance of previous utterances. Our algorithm for recognizing discourse actions combines linguistic, contextual, and world knowledge in a unified framework. By combining these different knowledge sources, we are able to recognize complex discourse acts such as expressing doubt, to identify the relationship of utterances to one another, and to model negotiation subdialogues. Since negotiation is an integral part of multiagent activity, our process model addresses an important aspect of cooperative interaction and thus is a step toward an intelligent and robust natural language consultation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." 
}, { "text": "Address-Unacceptance (_agentl, _agent2, _proposition1, _proposition2) {By noting a conflicting _proposition2, _agent1 initiates negotiation of his unacceptance of_proposition1} Recipe-Type: Appl Cond: Body:", "cite_spans": [ { "start": 21, "end": 69, "text": "(_agentl, _agent2, _proposition1, _proposition2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipe Action:", "sec_num": null }, { "text": "We are using \"express doubt\" in the sense of challenging the truth of a proposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Searle (1970) notes that there are two kinds of questions, ones whose objective is to obtain knowledge and ones whose objective is to test another's knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In personal communication, Allen has said that the effect of his Inform action was intended to capture the agent's goal in performing the action. InAllen (1979) he mentions the need for a Decide-to-Believe act, but nothing further is done with it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These are actions on the active path of the dialogue model; the actions that are deepest on the active path are closer to the current focus of attention and are therefore regarded as more salient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Questions must also be accepted and assimilated into a dialogue. Our model has recently been expanded to address the acceptance of questions(Bartelt 1996), but we are concentrating on statements in this paper. 17 Since our system does not generate responses, we do not model what the speakers need to discuss; however, if a speaker expresses doubt at a proposition by contending that a second proposition is true, then the speaker is introducing this second proposition and its relationship to the first proposition as items that need to be discussed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our system currently maintains a list of clue words and discourse acts that each clue word might suggest. 21 Due to length restrictions, we have omitted a part of the algorithm that deals with focusing heuristics that are not needed for the kinds of utterances addressed in this paper; an example of utterances needing the full algorithm is given inLambert and Carberry (1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "~ is usually implies rather than strict implication. The semantics of this predicate is that there may be a small number of cases where the antecedent is true and the consequent is not. This is similar to a default rule. For example, the On-Campus rule might be viewed as Vy: on-campus(y) A faculty(y) A M ~on-sabbatical(y) ~ ~on-sabbatical(y). However, as with any default rule, there could be exceptions; for example, one might be on sabbatical but have returned to campus to give a colloquium.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our implemented system, it is rejected because there is no recipe for the Address-Understanding action that is part of the body of the recipe for the Tell discourse act, and thus it is not possible to construct an inference path from the utterance to the Address-Understanding act. 
In the future, our expanded system will include such recipes, and the interpretation will be rejected because of lack of evidence for the e-action on the inference path or because its constraints are not satisfied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In a dialogue, Grosz and Sidner's discourse segment purpose is intended to capture the purpose of a segment consisting of a series of utterances by both participants, not the communicative intentions underlying each participant's discourse actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Another reason for repetition, she claims, is for centering(Grosz, Joshi, and Weinstein 1995), but she concentrates on repetitions that give evidence of understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the National Science Foundation under Grant No. IRI-9122026. The Government has certain rights in this material. We would like to thank Rachel Sacher for her help in our corpus analysis and the anonymous reviewers for their helpful comments on the manuscript.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Address-Acceptance(_agentl, _agent2, _proposition1) {_agent1 tries to make _proposition1 believable to _agent2} Recipe-Type: Decomposition Appl Cond: believe(_agentl, _proposition2 --~ ~_proposition3, [S:C]) believe (_agentl, believe(_agent2, _proposition3, [W:S] ", "cite_spans": [ { "start": 201, "end": 207, "text": "[S:C])", "ref_id": null }, { "start": 216, "end": 263, "text": "(_agentl, believe(_agent2, _proposition3, [W:S]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Recipe Action:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Plan-Based Approach to Speech Act Recognition", "authors": [ { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, James. 1979. A Plan-Based Approach to Speech Act Recognition. Ph.D. thesis, University of Toronto, Toronto, Ontario, Canada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyzing intention in utterances", "authors": [ { "first": "James", "middle": [], "last": "Allen", "suffix": "" }, { "first": "C. Raymond", "middle": [], "last": "Perrault", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "15", "issue": "", "pages": "143--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, James and C. Raymond Perrault. 1980. Analyzing intention in utterances. Artificial Intelligence, 15:143-178.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Beliefs, stereotypes, and dynamic agent modeling", "authors": [ { "first": "James", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Lenhart", "middle": [], "last": "Schubert", "suffix": "" } ], "year": 1991, "venue": "User Modeling and User-Adapted Interaction", "volume": "1", "issue": "1", "pages": "33--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, James and Lenhart Schubert. 1991. The trains project. Technical Report 91-1, Department of Computer Science, University of Rochester, Rochester, NY. Ballim, Afzal and Yorick Wilks. 1991. Beliefs, stereotypes, and dynamic agent modeling. 
User Modeling and User-Adapted Interaction, 1(1):33-66.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A computer program that recognizes rejected questions computationally", "authors": [ { "first": "Margaret", "middle": [], "last": "Bartelt", "suffix": "" } ], "year": 1996, "venue": "Proceedings of NCUR-IO", "volume": "", "issue": "", "pages": "989--993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bartelt, Margaret. 1996. A computer program that recognizes rejected questions computationally. In Proceedings of NCUR-IO, pages 989-993.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards an AI model of argumentation", "authors": [ { "first": "Lawrence", "middle": [], "last": "Birnbaum", "suffix": "" }, { "first": "Margot", "middle": [], "last": "Flowers", "suffix": "" }, { "first": "Rod", "middle": [], "last": "Mcguire", "suffix": "" } ], "year": 1980, "venue": "Proceedings of the First National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "306--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birnbaum, Lawrence, Margot Flowers, and Rod McGuire. 1980. Towards an AI model of argumentation. In Proceedings of the First National Conference on Artificial Intelligence, pages 306-309.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Network-based management of subjective judgements: A proposal accepting cyclic dependencies", "authors": [ { "first": "Andrea", "middle": [], "last": "Bonarini", "suffix": "" }, { "first": "Ernesto", "middle": [], "last": "Cappelletti", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Corrao", "suffix": "" } ], "year": 1990, "venue": "Politecnico di Milano", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonarini, Andrea, Ernesto Cappelletti, and Antonio Corrao. 1990. Network-based management of subjective judgements: A proposal accepting cyclic dependencies. Technical Report 90-067, Dipartimento di Elettronica, Politecnico di Milano, Milano, Italy.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A pragmatics based approach to understanding intersentential ellipsis", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1985, "venue": "Proceedings of the 23rd Annual Meeting", "volume": "", "issue": "", "pages": "188--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra. 1985. A pragmatics based approach to understanding intersentential ellipsis. In Proceedings of the 23rd Annual Meeting, pages 188-197. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Pragmatic modeling: Toward a robust natural language interface", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1987, "venue": "Computational Intelligence", "volume": "3", "issue": "", "pages": "117--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra. 1987. Pragmatic modeling: Toward a robust natural language interface. Computational Intelligence, 3:117-136.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modeling the user's plans and goals", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "14", "issue": "3", "pages": "23--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra. 1988. Modeling the user's plans and goals. 
Computational Linguistics, 14(3):23-37.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Pragmatics-Based Approach to Ellipsis Resolution", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1989, "venue": "Computational Linguistics", "volume": "15", "issue": "2", "pages": "75--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra. 1989. A Pragmatics-Based Approach to Ellipsis Resolution. Computational Linguistics, 15(2):75-96.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Plan Recognition in Natural Language Dialogue", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1990, "venue": "ACL-MIT Press Series on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra. 1990. Plan Recognition in Natural Language Dialogue. ACL-MIT Press Series on Natural Language Processing. MIT Press, Cambridge, MA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Modeling intention: Issues for spoken language dialogue systems", "authors": [ { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the International Symposium on Spoken Dialogue", "volume": "", "issue": "", "pages": "13--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carberry, Sandra, Jennifer Chu-Carroll, and Lynn Lambert. 1996. Modeling intention: Issues for spoken language dialogue systems. In Proceedings of the International Symposium on Spoken Dialogue, pages 13-24.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automating the librarian: A fundamental approach using belief revision", "authors": [ { "first": "Alison", "middle": [], "last": "Cawsey", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Galiiers", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Reece", "suffix": "" }, { "first": "Karen", "middle": [ "Sparck" ], "last": "Jones", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cawsey, Alison, Julia Galiiers, Steven Reece, and Karen Sparck Jones. 1992. Automating the librarian: A fundamental approach using belief revision. Technical Report 243, University of Cambridge Computer Laboratory, Cambridge, England.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A plan-based model for response generation in collaborative task-oriented dialogues", "authors": [ { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Twelfth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "799--805", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu-Carroll, Jennifer and Sandra Carberry. 1994. A plan-based model for response generation in collaborative task-oriented dialogues. 
In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 799-805.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Generating information-sharing subdialogues in expert-user consultation", "authors": [ { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu-Carroll, Jennifer and Sandra Carberry. 1995a. Generating information-sharing subdialogues in expert-user consultation. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 1,243-1,250.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Response generation in collaborative negotiation", "authors": [ { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu-Carroll, Jennifer and Sandra Carberry. 1995b. Response generation in collaborative negotiation. In Proceedings of the 33rd Annual Meeting, pages 136-143.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Collaborative response generation in planning dialogues", "authors": [ { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "3", "pages": "355--400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu-Carroll, Jennifer and Sandra Carberry. 1998. Collaborative response generation in planning dialogues. Computational Linguistics, 24(3):355-400.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Contributing to discourse", "authors": [ { "first": "Herbert", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Schaefer", "suffix": "" } ], "year": 1989, "venue": "Cognitive Science", "volume": "", "issue": "", "pages": "259--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Herbert and Edward Schaefer. 1989. Contributing to discourse. Cognitive Science, pages 259-294.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Referring as a collaborative process", "authors": [ { "first": "Herbert", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Deanna", "middle": [], "last": "Wilkes-Gibbs", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Herbert and Deanna Wilkes-Gibbs. 1990. Referring as a collaborative process. In Philip Cohen, Jerry Morgan, and", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Intentions in Communication", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "463--493", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Pollack, editors, Intentions in Communication. 
MIT Press, Cambridge, MA, pages 463-493.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach", "authors": [ { "first": "Paul", "middle": [ "R" ], "last": "Cohen", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Paul R. 1985. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Pitman Publishing Company.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Persistence, intention, and commitment", "authors": [ { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Hector", "middle": [], "last": "Levesque", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Philip and Hector Levesque. 1990b. Persistence, intention, and commitment. In Philip Cohen, Jerry Morgan, and", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Intentions in Communication", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "33--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 33-70.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Rational interaction as the basis for communication", "authors": [ { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Hector", "middle": [], "last": "Levesque", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "221--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Philip and Hector Levesque. 1990c. Rational interaction as the basis for communication. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 221-256.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Confirmations and joint action", "authors": [ { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Hector", "middle": [], "last": "Levesque", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "951--957", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Philip and Hector Levesque. 1991a. Confirmations and joint action. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 951-957.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "SRI International", "authors": [ { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Hector", "middle": [], "last": "Levesque", "suffix": "" } ], "year": 1991, "venue": "", "volume": "504", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Philip and Hector Levesque. 1991b. Teamwork. Technical Report 504, SRI International, Menlo Park, California.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Elements of a plan-based theory of speech acts", "authors": [ { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "C. Raymond", "middle": [], "last": "Perrault", "suffix": "" } ], "year": 1979, "venue": "Cognitive Science", "volume": "3", "issue": "", "pages": "177--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Philip and C. Raymond Perrault. 1979. 
Elements of a plan-based theory of speech acts. Cognitive Science, 3:177-212.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Analyzing the structure of argumentative discourse", "authors": [ { "first": "Robin", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1987, "venue": "Computational Linguistics", "volume": "13", "issue": "1-2", "pages": "11--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Robin. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Determining intended evidence relations in natural language arguments", "authors": [ { "first": "Robin", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Mark", "middle": [ "Anthony" ], "last": "Young", "suffix": "" } ], "year": 1991, "venue": "Computational Intelligence", "volume": "7", "issue": "", "pages": "110--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Robin and Mark Anthony Young. 1991. Determining intended evidence relations in natural language arguments. Computational Intelligence, 7:110-118.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Transcripts derived from audiotape conversations made at Columbia University", "authors": [], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Columbia University Transcripts. 1985. Transcripts derived from audiotape conversations made at Columbia University, New York, NY. Provided by Kathleen McKeown.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "An assumption-based TMS", "authors": [ { "first": "Johan", "middle": [], "last": "Dekleer", "suffix": "" } ], "year": 1986, "venue": "Arti~cial Intelligence", "volume": "28", "issue": "", "pages": "269--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "DeKleer, Johan. 1986. An assumption-based TMS. Arti~cial Intelligence, 28:269-301.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Towards a Many-Valued Logic of Quanti~ed Belief", "authors": [ { "first": "Dimiter", "middle": [], "last": "Driankov", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Driankov, Dimiter. 1988. Towards a Many-Valued Logic of Quanti~ed Belief. Ph.D. thesis, Linkoping University, Department of Computer and Information Science, Linkoping, Sweden.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The role of user preferences and problem-solving knowledge in plan recognition for expert consultation systems", "authors": [ { "first": "Stephanie", "middle": [], "last": "Elzer", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the IJCAI Workshop on the Next Generation of Plan Recognition Systems", "volume": "", "issue": "", "pages": "37--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elzer, Stephanie. 1995. The role of user preferences and problem-solving knowledge in plan recognition for expert consultation systems. 
In Proceedings of the IJCAI Workshop on the Next Generation of Plan Recognition Systems, pages 37-41.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Really arguing with your computer", "authors": [ { "first": "Margot", "middle": [], "last": "Flowers", "suffix": "" }, { "first": "Michael", "middle": [ "E" ], "last": "Dyer", "suffix": "" } ], "year": 1984, "venue": "Proceedings of the National Computer Conference", "volume": "", "issue": "", "pages": "653--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flowers, Margot and Michael E. Dyer. 1984. Really arguing with your computer. In Proceedings of the National Computer Conference, pages 653-659.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Adversary arguments and the logic of personal attack", "authors": [ { "first": "Margot", "middle": [], "last": "Flowers", "suffix": "" }, { "first": "Rod", "middle": [], "last": "Mcguire", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Birnbaum", "suffix": "" } ], "year": 1982, "venue": "Strategies for Natural Language Processing", "volume": "", "issue": "", "pages": "275--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flowers, Margot, Rod McGuire, and Lawrence Birnbaum. 1982. Adversary arguments and the logic of personal attack. In W. Lehnert and M. Ringle, editors, Strategies for Natural Language Processing. Lawrence Erlbaum Associates, Hillsdale, NJ, pages 275-294.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Belief revision and a theory of con~nunication", "authors": [ { "first": "Julia", "middle": [], "last": "Galliers", "suffix": "" }, { "first": "", "middle": [], "last": "Rose", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galliers, Julia Rose. 1991. Belief revision and a theory of con~nunication. Technical Report 193, University of Cambridge, Cambridge, England.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Belief Revision, Cambridge tracts in theoretical computer science", "authors": [ { "first": "Julia", "middle": [], "last": "Galliers", "suffix": "" }, { "first": "", "middle": [], "last": "Rose", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galliers, Julia Rose. 1992. Autonomous belief revision and communication. In P. Gardenfors, editor, Belief Revision, Cambridge tracts in theoretical computer science. Cambridge University Press, Cambridge, England.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Centering: A framework for modeling the local coherence of discourse", "authors": [ { "first": "", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Barbara", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Weinstein", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "2", "pages": "203--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. 
Computational Linguistics, 21(2):203-225.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Attention, intentions, and the structure of discourse", "authors": [ { "first": "Barbara", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara and Candace Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Plans for discourse", "authors": [ { "first": "Barbara", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [], "last": "Sidner", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "417--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara and Candace Sidner. 1990. Plans for discourse. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 417-444.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Transcripts derived from tapes of the radio talk show Harry Gross: Speaking of your money. Provided by the Dept", "authors": [ { "first": "Harry", "middle": [], "last": "Gross Transcripts", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harry Gross Transcripts. 1982. Transcripts derived from tapes of the radio talk show Harry Gross: Speaking of your money. Provided by the Dept. of Computer Science at the University of Pennsylvania.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A computational model of collaboration on referring expressions", "authors": [ { "first": "Peter", "middle": [], "last": "Heeman", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeman, Peter. 1991. A computational model of collaboration on referring expressions. Master's thesis, University of Toronto, September. Also Technical Report CSRI-251.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Two constraints on speech act ambiguity", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Hinkelman", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the 27th Annual Meeting", "volume": "", "issue": "", "pages": "212--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinkelman, Elizabeth. 1989. Two constraints on speech act ambiguity. In Proceedings of the 27th Annual Meeting, pages 212-219. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Now let's talk about now", "authors": [ { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 25th Annual Meeting", "volume": "", "issue": "", "pages": "163--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirschberg, Julia and Diane Litman. 1987. Now let's talk about now. 
In Proceedings of the 25th Annual Meeting, pages 163-171.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Mutual beliefs in question-answer systems", "authors": [ { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "181--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, Aravind K. 1982. Mutual beliefs in question-answer systems. In N. Smith, editor, Mutual Beliefs. Academic Press, NY, pages 181-197.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "A circumscriptive theory of plan recognition", "authors": [ { "first": "Henry", "middle": [], "last": "Kautz", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "105--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kautz, Henry. 1990. A circumscriptive theory of plan recognition. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 105-133.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Using linguistic phenomena to motivate a set of coherence relations", "authors": [ { "first": "Alistair", "middle": [], "last": "Knott", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 1994, "venue": "Discourse Processes", "volume": "18", "issue": "", "pages": "35--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knott, Alistair and Robert Dale. 1994. Using linguistic phenomena to motivate a set of coherence relations. Discourse Processes, 18(1):35-62.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "A feature-based account of the relations signalled by sentence and clause connectives", "authors": [ { "first": "Alistair", "middle": [], "last": "Knott", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Mellish", "suffix": "" } ], "year": 1996, "venue": "Language and Speech", "volume": "39", "issue": "2-3", "pages": "143--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knott, Alistair and Chris Mellish. 1996. A feature-based account of the relations signalled by sentence and clause connectives. Language and Speech, 39(2-3):143-183.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Recognizing Complex Discourse Acts: A Tripartite Plan-Based Model of Dialogue", "authors": [ { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lambert, Lynn. 1993. Recognizing Complex Discourse Acts: A Tripartite Plan-Based Model of Dialogue. Ph.D. thesis, University of Delaware, June.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A tripartite plan-based model of dialogue", "authors": [ { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting", "volume": "", "issue": "", "pages": "47--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lambert, Lynn and Sandra Carberry. 1991. A tripartite plan-based model of dialogue. In Proceedings of the 29th Annual Meeting, pages 47-54. 
Association for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Modeling negotiation subdialogues", "authors": [ { "first": "Lynn", "middle": [], "last": "Lambert", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Carberry", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Annual Meeting", "volume": "", "issue": "", "pages": "193--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lambert, Lynn and Sandra Carberry. 1992. Modeling negotiation subdialogues. In Proceedings of the 30th Annual Meeting, pages 193-200. Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Scorekeeping in a language game", "authors": [ { "first": "D", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 1979, "venue": "Journal of Philosophical Logic", "volume": "8", "issue": "", "pages": "339--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis, D. 1979. Scorekeeping in a language game. Journal of Philosophical Logic, 8:339-359.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "A plan recognition model for subdialogues in conversation", "authors": [ { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1987, "venue": "Cognitive Science", "volume": "11", "issue": "", "pages": "163--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Litman, Diane and James Allen. 1987. A plan recognition model for subdialogues in conversation. Cognitive Science, 11:163-200.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Disambiguating cue phrases in text and speech", "authors": [ { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "251--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Litman, Diane and Julia Hirschberg. 1990. Disambiguating cue phrases in text and speech. In Proceedings of the 13th International Conference on Computational Linguistics, pages 251-256.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "An algorithm for plan recognition in collaborative discourse", "authors": [ { "first": "Karen", "middle": [], "last": "Lochbaum", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting", "volume": "", "issue": "", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lochbaum, Karen. 1991. An algorithm for plan recognition in collaborative discourse. In Proceedings of the 29th Annual Meeting, pages 33-38. Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Using Collaborative Plans to Model the Intentional Structure of Discourse", "authors": [ { "first": "Karen", "middle": [], "last": "Lochbaum", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lochbaum, Karen. 1994. Using Collaborative Plans to Model the Intentional Structure of Discourse. Ph.D. thesis, Harvard University, Cambridge, MA. 
Technical Report: TR-25-94.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Models of plans to support communication: An initial report", "authors": [ { "first": "Karen", "middle": [], "last": "Lochbaum", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [], "last": "Sidner", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the Eighth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "485--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lochbaum, Karen, Barbara Grosz, and Candace Sidner. 1990. Models of plans to support communication: An initial report. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 485-490.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "The rhetorical parsing of natural language text", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting", "volume": "", "issue": "", "pages": "96--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, Daniel. 1997. The rhetorical parsing of natural language text. In Proceedings of the 35th Annual Meeting, pages 96-103. Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Opportunistic processing in arguments", "authors": [ { "first": "Rod", "middle": [], "last": "Mcguire", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Birnbaum", "suffix": "" }, { "first": "Margot", "middle": [], "last": "Flowers", "suffix": "" } ], "year": 1981, "venue": "Proceedings of the 1981 International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "58--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "McGuire, Rod, Lawrence Birnbaum, and Margot Flowers. 1981. Opportunistic processing in arguments. In Proceedings of the 1981 International Joint Conference on Artificial Intelligence, pages 58-60.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Focus constraints on language generation", "authors": [ { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 1983, "venue": "Proceedings of the Third National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "582--587", "other_ids": {}, "num": null, "urls": [], "raw_text": "McKeown, Kathleen R. 1983. Focus constraints on language generation. In Proceedings of the Third National Conference on Artificial Intelligence, pages 582-587.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "The repair of speech act misunderstandings by abductive inference", "authors": [ { "first": "Susan", "middle": [], "last": "Mcroy", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "435--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "McRoy, Susan and Graeme Hirst. 1995. The repair of speech act misunderstandings by abductive inference. 
Computational Linguistics, 21(4):435-478.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "An application of default logic to speech act theory", "authors": [ { "first": "Raymond", "middle": [], "last": "Perrault", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "161--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perrault, Raymond. 1990. An application of default logic to speech act theory. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 161-185.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "A plan-based analysis of indirect speech acts", "authors": [ { "first": "Raymond", "middle": [], "last": "Perrault", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1980, "venue": "American Journal of Computational Linguistics", "volume": "6", "issue": "3-4", "pages": "167--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perrault, Raymond and James Allen. 1980. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "The linguistic discourse model: Towards a formal theory of discourse structure", "authors": [ { "first": "Livia", "middle": [], "last": "Polanyi", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Polanyi, Livia. 1986. The linguistic discourse model: Towards a formal theory of discourse structure. Technical Report 6409, Bolt Beranek and Newman Laboratories Inc., Cambridge, MA.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Plans as complex mental attitudes", "authors": [ { "first": "Martha", "middle": [], "last": "Pollack", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "77--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pollack, Martha. 1990. Plans as complex mental attitudes. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 77-104.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "The Correction Machine: A Computer Model of Recognizing and Producing Belief Justifications in Argumentative Dialogs", "authors": [ { "first": "Alexander", "middle": [], "last": "Quilici", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quilici, Alexander. 1991. The Correction Machine: A Computer Model of Recognizing and Producing Belief Justifications in Argumentative Dialogs. Ph.D. thesis, Department of Computer Science, University of California at Los Angeles, Los Angeles, CA.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "A Metaplan model for problem-solving discourse", "authors": [ { "first": "Lance", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "35--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramshaw, Lance A. 1989. A Metaplan model for problem-solving discourse. 
In Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, pages 35-42.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Conversational coherency. Cognitive Science", "authors": [ { "first": "Rachel", "middle": [], "last": "Reichman", "suffix": "" } ], "year": 1978, "venue": "", "volume": "2", "issue": "", "pages": "283--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reichman, Rachel. 1978. Conversational coherency. Cognitive Science, 2:283-327.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Modeling informal debates", "authors": [ { "first": "Rachel", "middle": [], "last": "Reichman", "suffix": "" } ], "year": 1981, "venue": "Proceedings of the 1981 International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reichman, Rachel. 1981. Modeling informal debates. In Proceedings of the 1981 International Joint Conference on Artificial Intelligence, pages 19-24.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Getting Computers to Talk Like You and Me", "authors": [ { "first": "Rachel", "middle": [], "last": "Reichman", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reichman, Rachel. 1985. Getting Computers to Talk Like You and Me. MIT Press, Cambridge, MA.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Utilizing statistical dialogue act processing in verbmobil", "authors": [ { "first": "Norbert", "middle": [], "last": "Reithinger", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Maier", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting", "volume": "", "issue": "", "pages": "116--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reithinger, Norbert and Elisabeth Maier. 1995. Utilizing statistical dialogue act processing in verbmobil. In Proceedings of the 33rd Annual Meeting, pages 116--121. Association for Computational Linguistics.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Discourse processing of dialogues with multiple threads", "authors": [ { "first": "Barbara", "middle": [ "Di" ], "last": "Rosg Carolyn Penstein", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Eugenio", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Levin", "suffix": "" }, { "first": "", "middle": [], "last": "Van Ess-Dykema", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting", "volume": "", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "RosG Carolyn Penstein, Barbara Di Eugenio, Lori Levin, and Carol Van Ess-Dykema. 1995. Discourse processing of dialogues with multiple threads. In Proceedings of the 33rd Annual Meeting, pages 31-38. Association for Computational Linguistics.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Opening up closings", "authors": [ { "first": "Emanuel", "middle": [], "last": "Schegloff", "suffix": "" }, { "first": "Harvey", "middle": [], "last": "Sachs", "suffix": "" } ], "year": 1973, "venue": "Semiotica", "volume": "8", "issue": "", "pages": "289--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schegloff, Emanuel and Harvey Sachs. 1973. Opening up closings. 
Semiotica, 8:289-327.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Discourse Markers", "authors": [ { "first": "Deborah", "middle": [], "last": "Schiffrin", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schiffrin, Deborah. 1987. Discourse Markers. Cambridge University Press, Cambridge, England.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Speech Acts: An Essay in the Philosophy of Language", "authors": [ { "first": "John", "middle": [], "last": "Searle", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Searle, John. 1970. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, London, England.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Collective Intentions and Actions", "authors": [ { "first": "John", "middle": [], "last": "Searle", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "401--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Searle, John. 1990. Collective Intentions and Actions. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 401-416.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Transcripts derived from audiotape conversations made at SRI International", "authors": [ { "first": "", "middle": [], "last": "Sri Transcripts", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRI Transcripts. 1992. Transcripts derived from audiotape conversations made at SRI International, Menlo Park, CA. Prepared by Jacqueline Kowtko under the direction of Patti Price.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Accommodation, meaning, and implicature: Interdisciplinary foundations for pragmatics", "authors": [ { "first": "Richmond", "middle": [], "last": "Thomason", "suffix": "" } ], "year": 1990, "venue": "Intentions in Communication", "volume": "", "issue": "", "pages": "325--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomason, Richmond. 1990. Accommodation, meaning, and implicature: Interdisciplinary foundations for pragmatics. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA, pages 325-363.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "A Computational Theory of Grounding in Natural Language Conversation", "authors": [ { "first": "David", "middle": [], "last": "Traum", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Traum, David. 1994. A Computational Theory of Grounding in Natural Language Conversation. Ph.D. thesis, University of Rochester.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Conversation acts in task-oriented spoken dialogue", "authors": [ { "first": "David", "middle": [], "last": "Traum", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Hinkelman", "suffix": "" } ], "year": 1992, "venue": "Computational Intelligence", "volume": "8", "issue": "3", "pages": "575--599", "other_ids": {}, "num": null, "urls": [], "raw_text": "Traum, David and Elizabeth Hinkelman. 1992. Conversation acts in task-oriented spoken dialogue. 
Computational Intelligence, 8(3):575-599.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Towards user specific explanations from expert systems", "authors": [ { "first": "", "middle": [], "last": "Van Beek", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1986, "venue": "Proceedings of the Sixth Canadian Conference on Arti~'cial Intelligence", "volume": "", "issue": "", "pages": "194--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "van Beek, Peter and Robin Cohen. 1986. Towards user specific explanations from expert systems. In Proceedings of the Sixth Canadian Conference on Arti~'cial Intelligence, pages 194-198.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Redundancy in collaborative dialogue", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 1991, "venue": "Working Notes for the AAAI Fall Symposium: Discourse Structure in Natural Language Understanding and Generation", "volume": "", "issue": "", "pages": "124--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, Marilyn. 1991. Redundancy in collaborative dialogue. In Working Notes for the AAAI Fall Symposium: Discourse Structure in Natural Language Understanding and Generation, pages 124-129.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Redundancy in collaborative dialogue", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fifteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "345--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, Marilyn. 1992. Redundancy in collaborative dialogue. In Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 345-351.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Inferring acceptance and rejection in dialog by default rules of inference", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 1996, "venue": "Language and Speech", "volume": "39", "issue": "2-3", "pages": "265--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, Marilyn. 1996. Inferring acceptance and rejection in dialog by default rules of inference. Language and Speech, 39(2-3):265-304.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Mixed initiative in dialogue: An investigation into discourse segmentation", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Whittaker", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 28th Annual Meeting", "volume": "", "issue": "", "pages": "70--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, Marilyn and Steve Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting, pages 70-78. Association for Computational Linguistics.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Meta-Planning: Representing and using knowledge about planning in problem solving and natural language understanding", "authors": [ { "first": "Robert", "middle": [], "last": "Wilensky", "suffix": "" } ], "year": 1981, "venue": "Cognitive Science", "volume": "5", "issue": "", "pages": "197--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilensky, Robert. 
1981. Meta-Planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, 5:197-233.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "The design and implementation of an evidence oracle for the understanding of arguments", "authors": [ { "first": "Mark", "middle": [], "last": "Young", "suffix": "" }, { "first": "", "middle": [], "last": "Anthony", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young, Mark Anthony. 1987. The design and implementation of an evidence oracle for the understanding of arguments. Technical report, University of Waterloo, Waterloo, Ontario, Canada.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "dialogue with two open propositions.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "strengths that an agent's behefs may assume. Intervals such as [bi:bj] specify a strength of belief between bi and bj inclusive. For example, Figure 3 displays the recipes for the Inform and Tell discourse acts. The goal of the Inform action, believe (_agent2, _proposition, [C : C] )", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "future. For example, consider the discourse recipes for Obtain-Info-Ref, Ask-Ref, and Answer-Ref shown in the appendix. To obtain information about a proposition via dialogue, _agent1 must ask another agent about the proposition (Ask-ReJ) and the second agent must provide the requested information (Answer-ReJ); this is typical of naturally occurring dialogue and is captured in the body of our Obtain-Info-Ref discourse act. The applicability conditions of the Obtain-Info-Ref act, such as the condition that _agent1", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "[C : C] ) ), [C : C] ) in the case of an Inform action).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "For example, asking and answering a question (Ask-Ref and Answer-ReJ) are part of obtaining information (Obtain-Info-ReJ) in the discourse tree in", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "sample dialogue with surface negative questions.l action-l(EA, CA, PROPA) I l action-2(EA~ CAt PROPB) I \"\" \u00a2....._ m action-3(EA, CA, PROPC) I \"~.. '',, \"''.., e-action(EA, CA,_propl, PROPD) j I surface-action(EA, CA, PROPD) r I ................ ._J", "num": null, "uris": null, "type_str": "figure" }, "FIGREF7": { "text": "Ask-Ref and Ask-IJ), acceptance of the answer to a question (the recipes for Answer-Ref and Answer-IJ), and acceptance of a statement (the recipes for Inform and Tell).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF8": { "text": "A1 = surface action associated with the speaker's utterance LE = clue words extracted from semantic representation of utterance D = dialogue model B --listener's beliefs A d = action at current focus of attention in D;;Construct inference paths that link up to active path of dialogue model S*--{Pi = A1,Ai2 ..... 
Aiei ] on-active-path(Ai~i, D) APi is an inference path constructed from A1 } ;;Eliminate inference paths with unsatisfied constraints or implausible applicability conditionsFor each Pi C S Do Begin Bi~---B If A~, \u00a2 A iThen Bi ~ BiU {beliefs that A d and all actions on the active path between Ai and Aid ihave completed successfully} If (3Aj)(3Ck) Aj C Pi A is-constraint(Ck, Aj) A -~Ck Then S ~ S-Pi Else If (3Aj)(3ACk) Aj E Pi A is-app-cond(ACk,Aj) A-~plausible(ACk,Bi) Then S ~ S -Pi End ;;Determine how much evidence is available for each e-action So~ O, $1 ~ O, $2 ~ 0 For each Pi E S Do If (3Aj) Aj C Pi A e-action(Aj) Then Begin If ling-evid(Aj,LE) A app-cond-evid(Aj,Bi) Then $2 ~ S2U{Pi} Else If ling-evid(Aj,LE) V app-cond-evid(Aj,Bi) Then $1 *--S1U{Pi}End Else So ~--SoU{Pi} ;;So contains inference paths with no e-actions ;;Select inference paths containing actions with the most evidence If $2 ~ 0 Then S ~-$2 ;;S contains inference paths with multiple evidence Else If $1 # 0 Then S ~ $1 ;;S contains inference paths with evidence Else S ~ So ;;S contains inference paths with no e-actions ;;Select inference path containing the action closest to current focus of attention If S#0 Then P ,--Pi ] Pi E S A Pi = A1, A 6 .... , Aidi A-~(3Pj) Pj E S APj = A1,Aj2 ..... Aid j A closer-to-curr-discourse-focus(Aja j,Aiei,D) Else Begin B ~-BU {beliefs that all actions on active path have completed successfully} S ~ {Pi = A1,A6 ..... Ain ] Pi is an inference path constructed from A1 A no-elts-on-active-path(Pi,D) A (VAj)(VCk)(Aj ff Pi A is-constraint(Ck, Aj) ~ Ck) A (VAj)(VACk)(Aj E Pi A is-app-cond(ACk, Aj) --~ plausible(ACk, B)) P ~--Pi [ Pi E S A -~(3Pj) Pj E S A links-closer-to-ps-dom-focus(Pj,Pi,D) End ;; Assimilate utterance into dialogue model Add P = A1, Ap2 .... , Apk to D Mark Ap2 as new focus of attention in D", "num": null, "uris": null, "type_str": "figure" }, "FIGREF9": { "text": "....................... . .................................. | .............................................. ....... . ....................... (EA, CA, Leam-Material(EA, CSl80,_fae)) [ ............. Instantiate-Vars(EA, CA, Attend-Class(EA, _place, _time), Learn-Matefial(EA, CS 180, _fac)) ] Instantiate-Single-Var(EA, CA, _time, Attend-Class(EA, _place, _time), Learn-Material(EA, CS180, _fac)-Ref(EA, CA, _time, Meets(CS 180, _time)) ] WH-Question(EA, CA, _time, Meets(CS 180, _time)) ] ........................ ........................................................................... , ....... , ............. , ........................................ + EA: When does CS180 meet?", "num": null, "uris": null, "type_str": "figure" }, "FIGREF10": { "text": "Figure 10", "num": null, "uris": null, "type_str": "figure" }, "FIGREF11": { "text": "Sabbatical rule: Teachers on sabbatical usually do not teach. (Vx Vy course(y) A faculty(x) A on-sabbatical(x) ~ -~teaches(x, y) ) On Campus rule: Teachers on campus usually are not on sabbatical. (Vy faculty(y) A on-campus(y) ~ -~on-sabbatical(y) ) One Course rule: Teachers usually teach only one course a semester. (Vx Vy Vz # y faculty(x) A course(y) A course(z) A teaches(x, y) ~ -~teaches(x, z) ) One Professor rule: Each course usually has only one instructor. (Vx Vy Vz # y course(x) A faculty(y) A faculty(z) A teaches(y, x) ~ -~teaches(z, x) ) Expertise rule: Teachers usually do not teach courses outside their area of expertise. 
(Vx Vy Vz faculty(x) A course(y) A area(z) A specialty(x, z) A -~in-area(y, z) -~teaches(x, y) )", "num": null, "uris": null, "type_str": "figure" }, "FIGREF12": { "text": "Extended", "num": null, "uris": null, "type_str": "figure" }, "FIGREF13": { "text": "5.2.1 Utterance (18):Establishing the Initial Context. The system first plays the role of CA (the listener) and must understand EA's utterance of (18). The semantic representation of (18) is:Surface-WH-question(EA, CA, _course, Teaches(Dr. Smith, _course)) The Surface-WH-Question is a subaction in the body of a recipe for a Ref-Request discourse act; the Ref-Request is a subaction in the recipe for an Ask-Ref discourse act; and the Ask-Ref is a subaction in the recipe for an Obtain-Info-Ref discourse act. Therefore the following chain of actions is hypothesized: 0btain-Info-Ref (EA, CA, _course, Teaches(Dr.Smith, _course)) T Ask-Ref(EA, CA, _course, Teaches (Dr. Smith, _course)) T Ref-Request(EA, CA, _course, Teaches(Dr. Smith, _course)) T Surface-WH-Question(EA, CA, _course, Teaches(Dr. Smith, _course))", "num": null, "uris": null, "type_str": "figure" }, "FIGREF14": { "text": "Vars(EA, CA, Learn-Material(EA, _course, Dr. Smith), Take-Course(EA, _course)) I * I lnstantiate-Single-Var(EA, CA,_course, Learn-Material(EA, course, Dr. Smith), Take-Coursel EA,_course)Info-Ref(EA, CA, _course, Teaches(Dr. Smith, _course)) ] $ I Ask-Ref(EA, CA, course, Teaches(Dr. Smith,_course)) ] e Ref-Request(EA, CA, _course, Teaches(Dr.Smith,_course)) i Surface-WH-Question(EA,CA,_c'ourse,Teaches(Dr.Smith,_course)) ] .............................................................", "num": null, "uris": null, "type_str": "figure" }, "FIGREF15": { "text": "Figure 13", "num": null, "uris": null, "type_str": "figure" }, "FIGREF16": { "text": "[c:c] ))), [w:c] ) believe(CA, ~knowref(EA,_course,believe(CA,Teaches(Dr. Smith,_course), [C:C])), [W:C])", "num": null, "uris": null, "type_str": "figure" }, "FIGREF17": { "text": ", _course, believe(CA, Teaches(Dr. Smith, _course), [C:C])) This is equivalent to tentatively hypothesizing that CA has recognized the intentions communicated by utterance (18) and inferred the discourse level of the dialogue model depicted in Figure 13. Thus the system's model of CA's beliefs (resulting from CA's recognition of the Ask-ReJ) provides evidence that the applicability conditions of the Answer-Ref discourse act are satisfied. Since this inference chain is the only one containing an e-action for which there is evidence, the system recognizes CA's utterance as providing Architecture as the answer to EA's question about what Dr. Smith is teaching and thereby contributing to the Obtain-Info-Ref action initiated by EA. The updated discourse tree is shown in Figure 14, with the new focus of attention marked with an asterisk. 5.2.3 Utterance (20): Initial Expression of Doubt. The system is again playing the role of CA (listener) and must understand EA's utterance of (20). The semantic representation of (20) is:", "num": null, "uris": null, "type_str": "figure" }, "FIGREF18": { "text": "Discourse tree for first two utterances in Figure 12. Inform(CA, EA, Teaches(Dr.Smith,Arch)) I \u2022 [Te,,(CA, EA, Teac os OrSmith Arch)) ] [ Surface-Say-Prop(CA, EA, Teaches(Dr. Smith, Arch)) ] (19) CA: Dr. 
Smith is teaching Architecture.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF19": { "text": "Address-Believability(CA, EA, _propositionl)T Address-Unacceptance(EA, CA, _propositionl, Teaches(Dr.Brown, Arch)) T Express-Doubt(EA, CA, _propositionl, Teaches(Dr.Brown, Arch)) T Convey-Uncertain-Belief (EA, CA, Teaches(Dr.Brown, Arch)) T Surface-Neg-YN-Question(EA, CA, Teaches (Dr.Brown, Arch))", "num": null, "uris": null, "type_str": "figure" }, "FIGREF20": { "text": "action hold. The applicability conditions are: believe(EA, believe(CA, Teaches (Dr. Smith, Arch), [S:C]), [S:C]) believe(EA, Teaches(Dr. Brown, Arch), [W:S]) believe(EA, Teaches(Dr. Brown, Arch) --~ ~Teaches(Dr. Smith, Arch), [S:C])", "num": null, "uris": null, "type_str": "figure" }, "FIGREF21": { "text": ". 5.2.4 Utterances (21)-(22): Attempted Resolution of Conflict. The system is now playing the role of EA (listener) and must assimilate CA's utterances (21)-(22). The semantic representation of (21) is: Surface-Say-Prop(CA, EA, ~Teaches(Dr.Brown, Architecture)) Plan chaining indicates that the Surface-Say-Prop may be part of Tell(CA, EA, ~Teaches(Dr.Brown, Architecture)), which might be part of Inform(CA, EA, -~Teaches(Dr. Brown, Architecture)), which might be part of Resolve-, EA, _propositionl, _proposition2), which might in turn be part of Address-Unacceptance(EA, CA, _propositionl, _proposition2). If this is the Address-Unacceptance action that is part of the existing discourse tree in Figure 15, then the Express-Doubt and Convey-Uncertain-Belief actions in Figure 15 have completed successfully. Thus, in considering this interpretation, the system hypothesizes that these", "num": null, "uris": null, "type_str": "figure" }, "FIGREF22": { "text": "(CA, believe (EA, Teaches (Dr. Brown, Arch)-~Teaches (Dr. Smith, Arch), IS:C]), [s:c]) believe(CA, believe(EA,Teaches(Dr.Brown,Arch), [W:S]), [S:C])", "num": null, "uris": null, "type_str": "figure" }, "FIGREF23": { "text": "Figure 16 Discourse tree for dialogue in Figure 12.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF24": { "text": "Expressions of Doubt and Implicit Acceptance. The system is still playing the role of CA (listener). The semantic representation of EA's next utterance is Surface-Neg-YN-Question(EA, CA, Specialty(Dr. Smith, Theory)) Clueword(But)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF25": { "text": "Express-Doubt (EA, Express-Doubt (EA, Expre s s-Doubt (EA, CA, Meets(CS510, MonTPM), Graduate-Course(CS510)) CA, ~Teaches(Dr. Jones, CS510), Graduate-Course(CS510)) CA, Teaehes(Dr.Hart, CS510), Graduate-Course(CS510))", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "html": null, "type_str": "table", "text": "For example, an Evaluate-Answer discourse act is an expected follow-up to an Answer-Ref when they are part of a higher-level Test-Knowledge discourse act but not when the Answer-Ref is part of an Obtain-Info-Ref discourse act. Further research is needed to identify the best mechanism for capturing the requisite discourse knowledge. 4.2.2 The Inform Discourse Recipe. As noted by", "content": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "text": "Dr. Smith is teaching CS360. EA: Isn't Dr. Smith on sabbatical? CA: No, Dr. Smith is not on sabbatical. Wasn't Dr. Smith awarded a Fulbright? EA: Isn't Dr. Smith a theory person? EA: Isn't Dr. Smith an excellent teacher?", "content": "
(10) EA: Who is teaching CS360 (a systems course)?
(11) CA: Dr. Smith is teaching CS360.
(12) EA: Isn't Dr. Smith on sabbatical?
(13) CA: No, Dr. Smith is not on sabbatical.
(14)a. EA: Wasn't Dr. Smith awarded a Fulbright?
(14)b. EA: Isn't Dr. Smith a theory person?
(14)c. EA: Isn't Dr. Smith an excellent teacher?
", "num": null }, "TABREF5": { "html": null, "type_str": "table", "text": "CA believes that Dr. Smith is teaching Architecture; that Dr. Brown is teaching Architecture; and that Dr. Brown teaching Architecture is an indication that Dr. Smith is not teaching Architecture.", "content": "
specify that EA must have some belief in each of the following:
a.
b.
", "num": null }, "TABREF7": { "html": null, "type_str": "table", "text": "Answer-Ref, Inform, and Tell with the Tell discourse act being the current focus of attention; if the other participant then performs a discourse act that is a subaction of the Answer-Ref but not of the Inform, then he has accepted the proposition conveyed by the Inform and it has completed successfully.", "content": "", "num": null }, "TABREF10": { "html": null, "type_str": "table", "text": "If utterance 19 is in fact an Answer-Ref that contributes to the Obtain-Info-Ref that is part of the existing dialogue context, then CA has recognized and is responding to the Ask-Ref that is a child of the Obtain-Info-Ref in the existing dialogue context. Consequently, in considering the Answer-Ref interpretation, the system can hypothesize that CA has recognized the Ask-Ref and can tentatively attribute to CA the belief that the Ask-Refs applicability conditions were satisfied, as shown below.", "content": "
Beliefs attributed to CA (by virtue of hypothesis that CA has recognized the Ask-Ref):
", "num": null }, "TABREF11": { "html": null, "type_str": "table", "text": "Ref(EA, CA. _course, Teaches(Dr. Smith, _course)) I I Answer-Ref(CA. EA, _course, Teaches(Dr.Smith,_course)) [", "content": "
Obtain-Info-Ref(EA, CA, _course, Teaches(Dr. Smith, _course))
  Ask-Ref(EA, CA, _course, Teaches(Dr. Smith, _course))
    Ref-Request(EA, CA, _course, Teaches(Dr. Smith, _course))
      Surface-WH-Question(EA, CA, _course, Teaches(Dr. Smith, _course))
(18) EA: What is Dr. Smith teaching?
Key:
* Current focus of attention
", "num": null }, "TABREF14": { "html": null, "type_str": "table", "text": "CS510 meets on Monday night at 7PM. EA: But isn't Dr. Jones teaching CS510? CA: No, Dr. Jones is not teaching CS510.Dr. Hart is teaching CS510.", "content": "
(28) EA: When does CS510 meet?
(29) CA: CS510 meets on Monday night at 7PM.
(30) EA: But isn't Dr. Jones teaching CS510?
(31) CA: No, Dr. Jones is not teaching CS510.
(32)     Dr. Hart is teaching CS510.
(33) EA: But isn't CS510 a graduate course?
(34) CA: Yes, CS510 is a graduate course.
(35)     Dr. Hart teaches both graduate and undergraduate courses.
(36) EA: What courses are prerequisites for CS510?
Figure 17
", "num": null } } } }