{ "paper_id": "J97-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:43:47.104249Z" }, "title": "Developing and Empirically Evaluating Robust Explanation Generators: The KNIGHT Experiments", "authors": [ { "first": "James", "middle": [ "C" ], "last": "Lester", "suffix": "", "affiliation": {}, "email": "lester@adm.csc.ncsu.edu" }, { "first": "Bruce", "middle": [ "W" ], "last": "Porter", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carolina State University", "location": { "postBox": "Box 8206", "postCode": "27695-8206", "settlement": "Raleigh", "region": "North, NC" } }, "email": "porter@cs.utexas.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. In particular, it describes KNIGHT, a robust explanation system that constructs multisentential and multiparagraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We introduce the Two-Panel evaluation methodology and describe how KNIGHT'S performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, KNIGHT scored within \"half a grade\" of domain experts, and its performance exceeded that of one of the domain experts.", "pdf_parse": { "paper_id": "J97-1004", "_pdf_hash": "", "abstract": [ { "text": "To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. In particular, it describes KNIGHT, a robust explanation system that constructs multisentential and multiparagraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We introduce the Two-Panel evaluation methodology and describe how KNIGHT'S performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, KNIGHT scored within \"half a grade\" of domain experts, and its performance exceeded that of one of the domain experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the course of their daily affairs, scientists explain complex phenomena--both to one another and to lay people--in a manner that facilitates clear communication. Similarly, physicians, lawyers, and teachers are equally facile at generating explanations in their respective areas of expertise. 
In an effort to computationalize this critical ability, research in natural language generation has addressed a broad range of issues in automatically constructing text from formal representations of domain knowledge. Research on text planning (Hovy 1993; Maybury 1992; McCoy 1989, 1990; McKeown 1985; Paris 1988) has developed techniques for determining the content and organization of many genres, and explanation generation (Cawsey 1992; McKeown, Wish, and Matthews 1985; Moore 1995) in particular has been the subject of intense investigation. In addition to exploring a panorama of application domains, the explanation community has begun to assemble these myriad designs into a coherent framework. As a result, we have begun to see a crystallization of the major components, as well as detailed analyses of their roles in explanation (Suthers 1991).", "cite_spans": [ { "start": 540, "end": 551, "text": "(Hovy 1993;", "ref_id": "BIBREF15" }, { "start": 552, "end": 565, "text": "Maybury 1992;", "ref_id": "BIBREF25" }, { "start": 566, "end": 576, "text": "McCoy 1989", "ref_id": "BIBREF26" }, { "start": 577, "end": 589, "text": "McCoy , 1990", "ref_id": null }, { "start": 590, "end": 603, "text": "McKeown 1985;", "ref_id": "BIBREF27" }, { "start": 604, "end": 615, "text": "Paris 1988)", "ref_id": "BIBREF35" }, { "start": 729, "end": 742, "text": "(Cawsey 1992;", "ref_id": "BIBREF5" }, { "start": 743, "end": 776, "text": "McKeown, Wish, and Matthews 1985;", "ref_id": "BIBREF29" }, { "start": 777, "end": 788, "text": "Moore 1995)", "ref_id": "BIBREF33" }, { "start": 1141, "end": 1155, "text": "(Suthers 1991)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Despite this success, empirical results in explanation generation are limited. Although techniques for developing and evaluating robust explanation generators should yield results that are more conclusive than those produced by prototype, \"proof-of-concept\" systems, most work, with only a few notable exceptions (Cawsey 1992; Hovy 1990; Kukich 1983; Mittal 1993; Robin 1994), has adopted a research methodology in which a proof-of-concept system is constructed and its operation is analyzed on a few examples. While isolating one or a small number of problems enables researchers to consider particular issues in detail, it is difficult to gauge the scalability and robustness of a proposed approach.", "cite_spans": [ { "start": 302, "end": 315, "text": "(Cawsey 1992;", "ref_id": "BIBREF5" }, { "start": 316, "end": 326, "text": "Hovy 1990;", "ref_id": null }, { "start": 327, "end": 339, "text": "Kukich 1983;", "ref_id": "BIBREF19" }, { "start": 340, "end": 352, "text": "Mittal 1993;", "ref_id": "BIBREF32" }, { "start": 353, "end": 364, "text": "Robin 1994)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A critical factor contributing to the dearth of empirical results is the absence of semantically rich, large-scale knowledge bases (KBs). Knowledge bases housing tens of thousands of different concepts and hundreds of different relations could furnish ample raw materials for empirical study, but no work in explanation generation has been conducted or empirically evaluated in the context of these knowledge bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "To empirically study explanation generation from semantically rich, large-scale knowledge bases, we undertook a seven-year experiment. First, our domain experts (one employed full-time) constructed the Biology Knowledge Base (Porter et al. 1988 ), a very large structure representing more than 180,000 facts about botanical anatomy, physiology, and development. Second, we designed, implemented, and empirically evaluated KNIGHT (Lester 1994) , a robust explanation system that extracts information from the Biology Knowledge Base, organizes it, and realizes it in multisentential and multiparagraph expository explanations of complex biological phenomena. Third, we developed a novel evaluation methodology for gauging the effectiveness of explanation systems and employed this methodology to evaluate KNIGHT. This paper describes the lessons learned during the course of the \"KNIGHT experiments.\" In the spirit of EDGE (Cawsey 1992) and PAULINE (Hovy 1990) , which synthesize work in interactive explanation systems and generational pragmatics, respectively, KNIGHT addresses a broad range of issues, all in the context of semantically rich, large-scale knowledge bases:", "cite_spans": [ { "start": 225, "end": 244, "text": "(Porter et al. 1988", "ref_id": "BIBREF36" }, { "start": 429, "end": 442, "text": "(Lester 1994)", "ref_id": "BIBREF20" }, { "start": 803, "end": 810, "text": "KNIGHT.", "ref_id": null }, { "start": 921, "end": 934, "text": "(Cawsey 1992)", "ref_id": "BIBREF5" }, { "start": 947, "end": 958, "text": "(Hovy 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Robust Knowledge-Base Access: KNIGHT exploits a library of robust knowledge-base access methods that insulate discourse planners from the idiosyncracies and errors in knowledge bases. These \"view construction\" methods selectively extract coherent packets of propositions about the structure and function of objects, the changes made to objects by processes, and the temporal attributes and temporal decompositions of processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Discourse-Knowledge Engineering: Discourse-knowledge engineers, i.e., knowledge engineers who encode discourse knowledge, should be able to inspect and easily modify discourse-planning specifications for rapid iterative refinement. The Explanation Design Package (EDP) formalism is a convenient, schema-like (McKeown 1985; Paris 1988 ) programming language for text planning. Because the EDP formalism is a hybrid of the declarative and procedural paradigms, discourse-knowledge engineers can easily understand EDPs, modify them, and use them to represent new discourse knowledge. EDPS have been used by KNIGHT to generate hundreds of expository explanations of biological objects and processes.", "cite_spans": [ { "start": 310, "end": 324, "text": "(McKeown 1985;", "ref_id": "BIBREF27" }, { "start": 325, "end": 335, "text": "Paris 1988", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Explanation Planning: KNIGHT employs a robust explanation planner that selects EDPS and applies them to invoke knowledge-base accessors. 
The explanation planner considers the desired length of explanations and the relative importance of subtopics as it constructs explanation plans encoding content and organization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Functional Realization: KNIGHT's functional realization system (Callaway and Lester 1995) is built on top of a unification-based surface generator with a large systemic grammar (Elhadad 1992).", "cite_spans": [ { "start": 65, "end": 91, "text": "(Callaway and Lester 1995)", "ref_id": "BIBREF4" }, { "start": 179, "end": 193, "text": "(Elhadad 1992)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To assess KNIGHT's performance, we developed the Two-Panel evaluation methodology for natural language generation and employed it in the most extensive and rigorous empirical evaluation ever conducted on an explanation system. In this study, KNIGHT constructed explanations on randomly chosen topics from the Biology Knowledge Base. A panel of domain experts was instructed to produce explanations on these same topics, and both KNIGHT's explanations and the explanations produced by this panel were submitted to a second panel of domain experts. The second panel then graded all of the explanations on several dimensions with an A-F scale. KNIGHT scored within approximately half a grade of the domain experts, and its performance exceeded that of one of the domain experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is structured as follows: The task of explanation generation is characterized and the Biology Knowledge Base is described. A brief description of KNIGHT's knowledge-base access methods is followed by (1) a description of the EDP language, (2) KNIGHT's explanation planner, and (3) an overview of the realization techniques. The empirical evaluation is then discussed in some detail. The paper concludes with discussions of related work and future research directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Explanation generation is the task of extracting information from a formal representation of knowledge, imposing an organization on it, and realizing the information in text. An explanation system must be able to map from a formal representation of domain knowledge (i.e., one which can be used for automated reasoning, such as the predicate calculus) to a textual representation of domain knowledge. Because of the significant differences between formal and textual representational schemes, successfully bridging the gap between them is one of the major challenges faced by an explanation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Task of Explanation Generation", "sec_num": "2." }, { "text": "To communicate complex ideas, an explanation system should be able to produce extended explanations such as those in Figure 1, which shows several explanations produced by KNIGHT from the domain of botanical anatomy, physiology, and development. Note that each of these is a multisentential explanation; the first is a multiparagraph explanation.
These explanations are expository: in contrast to causal explanations produced by automated reasoning systems, expository explanations describe domain phenomena, such as anatomical structures and physiological processes.", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 125, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Task of Explanation Generation", "sec_num": "2." }, { "text": "Explanation generation is typically decomposed into two subtasks, explanation planning and realization. Explanation planning itself has two subtasks: content determination, in which knowledge structures are extracted from a knowledge base, and organization, in which the selected knowledge structures are arranged in a manner appropriate for communication in natural language. To perform these tasks successfully, an explanation planner must have access to discourse knowledge, which informs its decisions about the content and organization of textual explanations. The organizational aspect of discourse knowledge plays a particularly important role in the construction of extended explanations. Our work focuses on content determination and organization and de-emphasizes issues in realization. However, to more cleanly evaluate the explanation planning work, we have developed a robust realization system that is built on FUF (Elhadad 1992), a unification-based implementation of a large systemic grammar.", "cite_spans": [ { "start": 930, "end": 943, "text": "(Elhadad 1992", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Task of Explanation Generation", "sec_num": "2." }, { "text": "Response: Embryo sac formation is a kind of female gametophyte formation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during embryo sac formation?", "sec_num": null }, { "text": "During embryo sac formation, the embryo sac is formed from the megaspore mother cell. Embryo sac formation occurs in the ovule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during embryo sac formation?", "sec_num": null }, { "text": "Embryo sac formation is a step of angiosperm sexual reproduction. It consists of megasporogenesis and embryo sac generation. During megasporogenesis, the megaspore mother cell divides in the nucellus to form 4 megaspores. During embryo sac generation, the embryo sac is generated from the megaspore.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during embryo sac formation?", "sec_num": null }, { "text": "Response: Pollen tube growth is a step of angiosperm sexual reproduction. It consists of sperm cell generation and sperm cell transport. During sperm cell generation, the pollen generative cell divides to form 2 angiosperm sperm cells. During sperm cell transport, 2 angiosperm sperm cells are transported from the pollen tube to the embryo sac.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during pollen tube growth?", "sec_num": null }, { "text": "Question: What is a spore?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during pollen tube growth?", "sec_num": null }, { "text": "Response: The spore is a kind of haploid cell. 4 spores are produced from the spore mother cell during sporogenesis. The spore divides to form 2 plant gametes during gametogenesis.
Gametogenesis is a step of gametophyte development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What happens during pollen tube growth?", "sec_num": null }, { "text": "Response: The root system is part of the plant and is connected to the mainstem. It is below the hypocotyl and is surrounded by the rhizosphere. The subregions of the root system include the meristem, which is where root system growth occurs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question: What is a root system?", "sec_num": null }, { "text": "Explanations produced by KNIGHT from the Biology Knowledge Base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Evaluating the performance of explanation systems is a critical and nontrivial problem. Although gauging their performance is inherently difficult, five evaluation criteria can be applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "\u2022 Coherence: A global assessment of the overall quality of the explanations generated by a system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "\u2022 Content: The extent to which the information is adequate and focused.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "\u2022 Organization: The extent to which the information is well organized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "\u2022 Writing style: The quality of the prose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "\u2022 Correctness: For scientific explanations, the extent to which the explanations are in accord with the established scientific record.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "In addition to performing well on the evaluation criteria, if explanation systems are to make the difficult transition from research laboratories to field applications, we want them to exhibit two important properties, both of which significantly affect scalability. First, these systems' representation of discourse knowledge should be easily inspected and modified. To develop explanation systems for a broad range of domains, tasks, and question types, discourse-knowledge engineers must be able to create and efficiently debug the discourse knowledge that drives the systems' behavior. The second property that explanation systems should exhibit is robustness. Despite the complex and possibly malformed representational structures that an explanation system may encounter in its knowledge base, it should be able to cope with these structures and construct reasonably well-formed explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria and Desiderata", "sec_num": "2.1" }, { "text": "Given the state of the art in explanation generation, the field is now well positioned to explore what may pose its greatest challenge and at the same time may result in its highest payoff: generating explanations from semantically rich, large-scale knowledge bases.
Large-scale knowledge bases encode information about domains that cannot be reduced to a small set of principles or axioms. For example, the field of human anatomy and physiology encompasses a body of knowledge so immense that many years of study are required to assimilate only one of its subfields, such as immunology. Large-scale knowledge bases are currently being constructed for many applications, and the ability to generate explanations from these knowledge bases for a broad range of tasks such as education, design, and diagnosis is critical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "Large-scale knowledge bases whose representations are semantically rich are particularly intriguing. These knowledge bases consist of highly interconnected networks of (at least) tens of thousands of facts. Hence, they represent information not only about a large number of concepts but also about a large number of relationships that hold between the concepts. One such knowledge base is the Biology Knowledge Base (Porter et al. 1988), an immense structure encoding information about botanical anatomy, physiology, and development. One of the largest knowledge bases in existence, it is encoded in the KM frame-based knowledge representation language. KM provides the basic functionalities of other frame-based representation languages and is accompanied by a graphical user interface, KNED, for entering, viewing, and editing frame-based structures (Eilerts 1994).", "cite_spans": [ { "start": 416, "end": 436, "text": "(Porter et al. 1988)", "ref_id": "BIBREF36" }, { "start": 855, "end": 869, "text": "(Eilerts 1994)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "The backbone of the Biology Knowledge Base is its taxonomy, which is a large hierarchical structure of biological objects and biological processes. In addition to the objects and processes, the taxonomy includes the hierarchy of relations that may appear on concepts. The relation taxonomy provides a useful organizing structure for encoding information about \"second order\" relations, i.e., relations among all of the first order relations. Figure 2 depicts the Biology Knowledge Base's representation of embryo sac formation. This is a typical fragment of its semantic network. Each of the nodes in this network is a concept, e.g., megaspore mother cell, which we refer to as a unit or a frame. (A detailed description of the semantics of the representation language may be found in Chapter 2 of Acker [1992].)", "cite_spans": [ { "start": 799, "end": 811, "text": "(Acker 1992)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 442, "end": 450, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "A representation of embryo sac formation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Each of the arcs is a relation in the knowledge base. For example, the location for embryo sac formation is the concept ovule. We refer to these relations as slots or attributes and to the units that fill these slots, e.g., ovule, as values. In addition, we call a structure of the form (Unit Slot Value) a triple.
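To make the (Unit Slot Value) structure concrete, the following minimal sketch renders a few of the embryo sac formation facts as triples in Lisp, the implementation language of this work; the particular slot names and the lookup helper are illustrative assumptions, not the KM implementation.

```lisp
;; A few triples from the embryo sac formation fragment, written as
;; (Unit Slot Value) lists. The slot names are illustrative; the
;; Biology Knowledge Base uses its own relation vocabulary.
(defparameter *triples*
  '((embryo-sac-formation location     ovule)
    (embryo-sac-formation raw-material megaspore-mother-cell)
    (embryo-sac-formation product      embryo-sac)
    (embryo-sac-formation subevent     megasporogenesis)
    (embryo-sac-formation subevent     embryo-sac-generation)
    (embryo-sac-formation step-of      angiosperm-sexual-reproduction)))

;; Collect every value of a slot on a unit, e.g.
;;   (slot-values 'embryo-sac-formation 'subevent)
;;   => (MEGASPOROGENESIS EMBRYO-SAC-GENERATION)
(defun slot-values (unit slot)
  (loop for (u s v) in *triples*
        when (and (eq u unit) (eq s slot))
          collect v))
```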
The Biology Knowledge Base currently contains more than 180,000 explicitly represented triples, and its deductive closure is significantly larger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "We chose biology as a domain for three reasons. First, it required us to grapple with difficult representational problems. Unlike a domain such as introductory geometry, biology cannot be characterized by a small set of axioms. Second, biology is not a \"single-task\" subject. Unlike the knowledge bases of conventional expert systems, e.g., MYCIN (Buchanan and Shortliffe 1984), the Biology Knowledge Base is not committed to any particular task or problem-solving method. Rather, it encodes general knowledge that can support diverse tasks and methods such as tutoring students, performing diagnosis, and organizing reference materials. For example, in addition to its use in explanation generation, it has been used as the basis for an automated qualitative model builder (Rickel and Porter 1994) for qualitative reasoning. Finally, we chose biology because of the availability of local domain experts at the University of Texas at Austin.", "cite_spans": [ { "start": 347, "end": 377, "text": "(Buchanan and Shortliffe 1984)", "ref_id": "BIBREF3" }, { "start": 775, "end": 799, "text": "(Rickel and Porter 1994)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "It is important to note that the authors and the domain experts entered into a \"contractual agreement\" with regard to representational structures in the Biology Knowledge Base. To eliminate all requests for representational modifications that would skew the knowledge base to the task of explanation generation, the authors entered into this agreement: they could request representational changes only if knowledge was inconsistent or missing. This facilitated a unique experiment in which the representational structures were not tailored to the task of explanation generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "2.2" }, { "text": "To perform well, an explanation system must select from a knowledge base precisely that information needed to answer users' questions with coherent and complete explanations. Given the centrality of content determination for explanation generation, it is instructive to distinguish two types of content determination, both of which play key roles in an explanation system's behavior: Local content determination is the selection of relatively small knowledge structures, each of which will be used to generate one or two sentences; global content determination is the process of deciding which of these structures to include in an explanation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "There are two benefits of interposing a knowledge-base-accessing system between an explanation planner, which performs global content determination, and a knowledge base. First, it keeps the explanation planner at arm's length from the representation of domain knowledge, thereby making the planner less dependent on the particular representational conventions of the knowledge base and more robust in the face of errors.
In addition, it can help build explanations that are coherent. Studies of coherence have focused on one aspect of coherence, cohesion, which is determined by the overall organization and realization of the explanation (Grimes 1975; Halliday and Hassan 1976; Hobbs 1985; Joshi and Weinstein 1981). However, the question \"To ensure coherence, how should the content of individual portions of an explanation be selected?\" is equally important. Halliday and Hassan (1976) term this aspect of coherence semantic unity. There are at least two approaches to achieving semantic unity: either \"packets\" of propositions must be directly represented in the domain knowledge, or a knowledge-base-accessing system must be able to extract them at runtime.", "cite_spans": [ { "start": 640, "end": 653, "text": "(Grimes 1975;", "ref_id": "BIBREF10" }, { "start": 654, "end": 679, "text": "Halliday and Hassan 1976;", "ref_id": null }, { "start": 680, "end": 691, "text": "Hobbs 1985;", "ref_id": "BIBREF14" }, { "start": 692, "end": 717, "text": "Joshi and Weinstein 1981)", "ref_id": "BIBREF16" }, { "start": 864, "end": 890, "text": "Halliday and Hassan (1976)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "One type of coherent knowledge packet is a view. For example, the concept photosynthesis can be viewed as either a production process or an energy transduction process. Viewed as production, it would be described in terms of its raw materials and products: \"During photosynthesis, a chloroplast uses water and carbon dioxide to make oxygen and glucose.\" Viewed as energy transduction, it would be described in terms of input energy forms and output energy forms: \"During photosynthesis, a chloroplast converts light energy to chemical bond energy.\" The view that is taken of a concept has a significant effect on the content that is selected for its description. If an explanation system could (a) invoke a knowledge-base-accessing system to select views, and (b) translate the views to natural language (Figure 3), it would be well on its way to producing coherent explanations.", "cite_spans": [], "ref_spans": [ { "start": 804, "end": 813, "text": "(Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "As a building block for the KNIGHT explanation system, we designed and implemented a robust KB-accessing system that extracts views (Acker 1992; McCoy 1989; McKeown, Wish, and Matthews 1985; Souther et al. 1989; Swartout 1983; Suthers 1988, 1993) of concepts represented in a knowledge base. Each view is a coherent subgraph of the knowledge base describing the structure and function of objects, the changes made to objects by processes, and the temporal attributes and temporal decompositions of processes. Each of the nine accessors in our library (Table 1) can be applied to a given concept (the concept of interest) to retrieve a view of that concept.
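For concreteness, the sketch below caricatures how such a view might be retrieved; the function name anticipates the Make-Participants-View accessor mentioned in Section 4, but the simple alist lookup merely stands in for the real accessor's traversal of the semantic network.

```lisp
;; Two views of photosynthesis, keyed by (concept reference-process).
;; The propositions are the ones cited in the text.
(defparameter *participant-views*
  '(((photosynthesis production)
     (producer chloroplast)
     (raw-materials water carbon-dioxide)
     (products oxygen glucose))
    ((photosynthesis energy-transduction)
     (transducer chlorophyll)
     (energy-provider photon)
     (input-energy-form light)
     (output-energy-form chemical-bond-energy))))

;; (make-participants-view 'photosynthesis 'production)
;; => ((PRODUCER CHLOROPLAST) (RAW-MATERIALS WATER CARBON-DIOXIDE)
;;     (PRODUCTS OXYGEN GLUCOSE))
(defun make-participants-view (concept reference-process)
  (rest (assoc (list concept reference-process)
               *participant-views* :test #'equal)))
```

Selecting a different reference process thus yields a different, internally coherent packet of propositions about the same concept.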
There are three classes of accessors: those that are applicable to all concepts (As-Kind-Of and Functional), those that are applicable to objects (Partonomic-Connection and Substructural), and those that are applicable to processes (Auxiliary-Process--which includes Causal, Modulatory, Temporal, and Locational subtypes--Participants, Core-Connection, Subevent, and Temporal-Step). In addition to these \"top level\" accessors, the library also provides a collection of some 20 \"utility\" accessors that extract particular aspects of views previously constructed by the system.", "cite_spans": [ { "start": 132, "end": 144, "text": "(Acker 1992;", "ref_id": "BIBREF0" }, { "start": 145, "end": 156, "text": "McCoy 1989;", "ref_id": "BIBREF26" }, { "start": 157, "end": 190, "text": "McKeown, Wish, and Matthews 1985;", "ref_id": "BIBREF29" }, { "start": 191, "end": 212, "text": "Souther et al., 1989;", "ref_id": null }, { "start": 213, "end": 227, "text": "Swartout 1983;", "ref_id": null }, { "start": 228, "end": 240, "text": "Suthers 1988", "ref_id": "BIBREF42" }, { "start": 241, "end": 255, "text": "Suthers , 1993", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 559, "end": 568, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "Accessing and translating a view of photosynthesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "To illustrate, the Participants accessor extracts information about the \"actors\" of the given process. For example, some of the actors in the photosynthesis process are chloroplasts, light, chlorophyll, carbon dioxide, and glucose. By specifying a reference process--the second argument of the Participants accessor--the external agent can request a view of the process from the perspective of the reference process. For example, if the system applies the Participants accessor with photosynthesis as the concept of interest and production as the reference process, then the accessor will extract information about the producer (chloroplast), the raw materials (water and carbon dioxide), and the products (oxygen and glucose). In contrast, if the system applies the Participants accessor with photosynthesis as the concept of interest but with energy transduction as the reference process, then it would extract information about the transducer (chlorophyll), the energy provider (a photon), the input energy form (light), and the output energy form (chemical bond energy). By selecting different reference concepts, different information about a particular process will be returned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "Table 1: Library of knowledge-base accessors. As-Kind-Of (concept, reference): finds a view of concept as a kind of the reference concept. Auxiliary-Process (process, view-type): finds temporal, causal, or locational information about process as specified by view-type. Participants (process, reference): finds an \"actor-oriented\" view of process from the perspective of the reference process. Core-Connection (process): finds the connection between process and a \"core\" process. Functional (object, process): finds a functional view of object with respect to process. Partonomic-Connection (object): finds the connection from object to a \"superpart\" of the object in the \"partonomy.\" Subevent (process): finds a view of the \"steps\" of process. Substructural (object): finds a structural view of the parts of object. Temporal-Step (process): finds a view of process with respect to another process of which process is a \"step.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "In addition to coherence, robustness is an important design criterion. We define robustness as the ability to gracefully cope with the complex representational structures encountered in large-scale knowledge bases without failing (halting execution). The KB accessors achieve robust performance in four ways:", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "\u2022 Omission Toleration: They do not assume that essential information will actually appear on a given concept in the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "\u2022 Type Checking: They employ a type-checking system that exploits the knowledge base's taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "\u2022 Error Handling: When they detect an irregularity, they return appropriate error codes to the explanation planner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "\u2022 Term Accommodation: They tolerate specialized (and possibly unanticipated) representational vocabulary by exploiting the relation taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "The following four techniques operate in tandem to achieve robustness.
First, to cope with knowledge structures that contain additional, unexpected information, the KB accessors were designed to behave as \"masks.\" When they are applied to particular structures in a knowledge base, the accessors mask out all attributes that they were not designed to seek. Hence, they are unaffected by inappropriate attributes that were installed on a concept erroneously. Second, sometimes a domain-knowledge engineer installs inappropriate values on legal attributes. When the accessors encounter attributes with inappropriate values, they prevent fatal errors from occurring by employing a rigorous type-checking system. For example, suppose a domain-knowledge engineer had erroneously installed an object as one of the subevents of a process. The type-checking system detects the problem. Third, when problems are detected, the nature of the error is noted and reported to the explanation planner. Because the planner can reason about the types of problems, it can properly attend to them by excising the offending content from the explanation it is constructing. The KB accessor library currently uses more than 25 different error codes to report error conditions. For example, it will report no superevent available if the \"parent\" event of a process has not been included. Fourth, the KB accessors exhibit immunity to modifications of the representational vocabulary by the domain-knowledge engineer. For example, given an object, the Substructural accessor inspects the object to determine its parts. Rather than merely examining the attribute parts on the given object, the Substructural accessor examines all known attributes that bear the parts relation to other objects. These attributes include has basic unit, layers, fused parts, and protective components. The Substructural accessor recognizes that each of these attributes is a partonomic relation by exploiting the knowledge base's relation taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "By using these techniques together, we have developed a KB-accessing system that has constructed several thousand views without failing. Moreover, the view types on which the accessors are based performed well in a preliminary empirical study (Acker and Porter 1994), and evaluations of the KB accessors' ability to construct coherent views, as measured by domain experts' ratings of KNIGHT's explanations (Section 8), are encouraging.", "cite_spans": [ { "start": 243, "end": 266, "text": "(Acker and Porter 1994)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Accessing Semantically Rich, Large-Scale Knowledge Bases", "sec_num": "3." }, { "text": "Since the time of Aristotle, a central tenet of rhetoric has been that a rich structure underlies text. This structure shapes a text's meaning and assists its readers in deciphering that meaning. For almost two decades, computational linguists have studied the problem of automatically inducing this structure from a given text. Research in explanation planning addresses the inverse problem: automatically creating this structure by selecting facts from a knowledge base and subsequently using these facts to produce text.
To automatically construct explanation plans (trees that encode the hierarchical structure of texts, as well as their content [Grosz and Sidner 1986; Mann and Thompson 1987]), an explanation system must possess discourse knowledge (knowledge about what characterizes a clear explanation). This discourse knowledge enables it to make decisions about what information to include in its explanations and how to organize the information.", "cite_spans": [ { "start": 650, "end": 673, "text": "[Grosz and Sidner 1986;", "ref_id": "BIBREF11" }, { "start": 674, "end": 697, "text": "Mann and Thompson 1987]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "A Programming Language for Discourse Knowledge", "sec_num": "4." }, { "text": "It is important to emphasize the following distinction between discourse knowledge and explanation plans: discourse knowledge specifies the content and organization for a class of explanations, e.g., explanations of processes, whereas explanation plans specify the content and organization for a specific explanation, e.g., an explanation of how photosynthesis produces sugar. Discourse-knowledge engineers build representations of discourse knowledge, and this discourse knowledge is then used by a computational module to automatically construct explanation plans, which are then interpreted by a realization system to produce natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Programming Language for Discourse Knowledge", "sec_num": "4." }, { "text": "The KB-accessing system described above possesses discourse knowledge in the form of KB accessors. Applying this discourse knowledge, the system retrieves views from the knowledge base. Although this ability to perform local content determination is essential, it is insufficient; given a query posed by a user, the generator must be able to choose multiple KB accessors, provide the appropriate arguments to these accessors, and organize the resulting views. Hence, in addition to discourse knowledge about local content determination, an explanation system that produces multiparagraph explanations must also possess knowledge about how to perform global content determination and organization. This section sets forth two design requirements for a representation of discourse knowledge, describes the Explanation Design Package formalism, which was designed to satisfy these requirements, and discusses how EDPs can be used to encode discourse knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Programming Language for Discourse Knowledge", "sec_num": "4." }, { "text": "Our goal is to develop a representation of discourse knowledge that satisfies two requirements: It should be expressive, and it should facilitate efficient representation of discourse knowledge by discourse-knowledge engineers. Each of these considerations is discussed in turn, followed by a representation that satisfies these criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "Expressiveness.
A representation of discourse knowledge must permit discourse-knowledge engineers to state how an explanation planner should:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "\u2022 select propositions from a knowledge base by extracting views,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "\u2022 control the amount of detail in an explanation, i.e., if a user requests that terse explanations be generated, the explanation planner should select only the most important propositions,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "\u2022 consider contextual conditions when determining which propositions to include,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "\u2022 order the propositions, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "\u2022 group the propositions into appropriate segments, e.g., paragraphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "The first three aspects of expressiveness are concerned with content determination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "To effectively express what content should be included in explanations, a representation of discourse knowledge should enable discourse-knowledge engineers to encode specifications about how to choose propositions about particular topics, the importance of those topics, and under what conditions the propositions associated with the topics should be included. These \"inclusion conditions\" govern the circumstances under which the explanation planner will select particular classes of propositions from the knowledge base when constructing an explanation. For example, a discourse-knowledge engineer might express the rule: \"The system should communicate the location of a process if and only if the user of the system is familiar with the object where the process occurs.\" As the explanation planner uses this knowledge to construct a response, it can determine if the antecedent of the rule (\"the user of the system is familiar with the object where the process occurs\") is satisfied by the current context; if the antecedent is satisfied, then the explanation planner can include in the explanation the subtopics associated with the rule's consequent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "The final two aspects of expressiveness (ordering and grouping of propositions) are concerned with organization. To encode organizational knowledge, a representation of discourse knowledge should permit discourse-knowledge engineers to encode topic/subtopic relationships.
For example, the subtopics of a process description might include (1) a categorical description of the process (describing taxonomically what kind of process it is), (2) how the actors of the process interact, and (3) the location of the process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "A representation should be sufficiently expressive that it can be used to encode the kinds of discourse knowledge discussed above, and it should be applicable to representing discourse knowledge for a broad range of discourse genres and domains. However, discourse knowledge does not specify what syntactic structure to impose on a sentence, nor does it lend any assistance in making decisions about matters such as pronominalization, ellipsis, or lexical choice. These decisions are delegated to the realization system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "Discourse-Knowledge Engineering. For a given query type, domain, and task, a discourse-knowledge engineer must be able to represent the discourse knowledge needed by an explanation system for responding to questions of that type in that domain about that task. Pragmatically, to represent discourse knowledge for a broad range of queries, domains, and tasks, a formalism must facilitate efficient representation of discourse knowledge. Kittredge, Korelsky, and Rambow (1991) have observed that representing new domain-dependent discourse knowledge--they term it domain communication knowledge--is required to create advanced discourse generators, e.g., those for special purpose report planning. Therefore, ease of creation, modification, and reuse are important goals for the design of a discourse formalism. For example, to build an explanation system for the domain of physics, a discourse-knowledge engineer could either build an explanation system de novo or modify an existing system. On the face of it, the second alternative involves less work and is preferable, but designing explanation systems that can be easily modified is a nontrivial task. In the case of physics, a discourse-knowledge engineer may need to modify an existing explanation system so that it can produce mathematically oriented explanations. To do so, the discourse-knowledge engineer would ideally take an off-the-shelf explanation generator and add discourse knowledge about how to explain mathematical interpretations of the behavior of physical systems. Because of the central role played by discourse-knowledge engineers, a representation of discourse knowledge should be designed to minimize the effort required to understand, modify, and represent new discourse knowledge.", "cite_spans": [ { "start": 435, "end": 474, "text": "Kittredge, Korelsky, and Rambow (1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Requirements for a Discourse-Knowledge Representation", "sec_num": "4.1" }, { "text": "Explanation Design Packages emerged from an effort to accelerate the representation of discourse knowledge without sacrificing expressiveness. Our previous explanation generators employed a representation of discourse knowledge that was coded directly in Lisp (Lester and Porter 1991a, 1991b). Although this approach worked well for small prototype explanation systems, it proved unsatisfactory for building fully functioning explanation systems.
In particular, it was very difficult to maintain and extend discourse knowledge expressed directly in code.", "cite_spans": [ { "start": 260, "end": 271, "text": "(Lester and", "ref_id": "BIBREF20" }, { "start": 272, "end": 292, "text": "Porter 1991a, 1991b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Although EDPs are more schema-like than plan-based approaches and consequently do not permit an explanation system to reason about the goals fulfilled by particular text segments (see Section 9 for a discussion of this disadvantage), they have proven enormously successful for discourse-knowledge engineering. EDPs give discourse-knowledge engineers an appropriate set of abstractions for specifying the content and organization of explanations. They combine a frame-based representation language with embedded procedural constructs. To mirror the structure of expository texts, an EDP contains a hierarchy of nodes, which provides the global organization of explanations. EDPs are schema-like (McKeown 1985; Paris 1988) structures that include constructs found in traditional programming languages. Just as prototypical programming languages offer conditionals, iterative control structures, and procedural abstraction, EDPs offer discourse-knowledge engineers counterparts of these constructs. (Indeed, EDPs are Turing-equivalent.) Because EDPs are frame-based, they can be easily viewed and edited by knowledge engineers using the graphical tools commonly associated with frame-based languages. The EDP formalism has been implemented in the KM frame-based knowledge representation language, which is the same representational language used in the Biology Knowledge Base. Because KM is accompanied by a graphical user interface, discourse-knowledge engineers are provided with a development environment that facilitates EDP construction. This has proven to be very useful for addressing a critical problem in scaling up explanation generation: maintaining a knowledge base of discourse knowledge that can be easily constructed, viewed, and navigated by discourse-knowledge engineers.", "cite_spans": [ { "start": 641, "end": 655, "text": "(McKeown 1985;", "ref_id": "BIBREF27" }, { "start": 656, "end": 667, "text": "Paris 1988)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "EDPs have several types of nodes, where each type provides a particular set of attributes to the discourse-knowledge engineer (Table 2). Note that content specification nodes may have elaboration nodes as their children, which in turn may have their own content specification nodes. This recursive appearance of content specification nodes permits a discourse-knowledge engineer to construct arbitrarily deep trees. In general, a node of a particular type in an EDP is used by the explanation planner to construct a corresponding node in an explanation plan. We discuss the salient aspects of each type of node below. (Representational details of EDPs are discussed in Lester 1994.)
", "cite_spans": [ { "start": 876, "end": 888, "text": "(Lester 1994)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Exposition Nodes. An exposition node is the top-level unit in the hierarchical structure and constitutes the highest-level grouping of content. For example, the exposition node of the Explain-Process EDP has four children, Process Overview, Output-Actor-Fates, Temporal Info, and Process Details, each of which is a topic node. Both the order and grouping of the topic nodes named in an exposition node are significant. The order specifies the linear left-to-right organization of the topics, and the grouping specifies the paragraph boundaries. The content associated with topic nodes that are grouped together will appear in a single paragraph in an explanation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Topic Nodes. Topic nodes are subtopics of exposition nodes, and each topic node includes a representation of the conditions under which its content should be added to an explanation. Topic nodes have the atomic inclusion property, which enables an explanation planner to make an \"atomic\" decision about whether to include--or exclude--all of the content associated with a topic node. Atomicity permits discourse-knowledge engineers to achieve coherence by demanding that the explanation planner either include or exclude all of a topic's content. At runtime, if the explanation planner determines that inclusion conditions are not satisfied or if a topic is not sufficiently important given space limitations (see below), it can comprehensively eliminate all content associated with the topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "An important aspect of discourse knowledge is the relative importance of subtopics with respect to one another. If an explanation's length must be limited--such as when a user has employed the verbosity preference parameter to request terse explanations--an explanation planner should be able to decide at runtime which propositions to include. EDPs permit discourse-knowledge engineers to specify the relative importance of each topic by assigning a qualitative value (Low, Medium, or High) to its centrality attribute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Another important aspect of representing discourse knowledge is the ability to encode the conditions under which a group of propositions should be included in an explanation. Discourse-knowledge engineers can express these inclusion conditions as predicates on the knowledge base and on a user model (if one is employed). For example, a discourse-knowledge engineer should be able to express the condition that the content associated with the Output-Actor-Fates topic should be included only if the process being discussed is a conversion process. Inclusion conditions are expressed as Boolean expressions that may contain both built-in user modeling predicates and user-defined functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Content Specification Nodes. Content specification nodes house the high-level specifications for extracting content from the knowledge base. To fulfill this function, they provide constructs known as content specification expressions.
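As a rough illustration (the concrete syntax here is invented; the actual formalism is realized as KM frame structures), a content specification node pairing a local variable with a dispatched accessor call might be written as follows.

```lisp
;; Sketch of a content specification node. The attribute names and the
;; ?process variable convention are illustrative; Make-Participants-View
;; is one of the accessors in the KB accessor library.
(defparameter *process-participants-description*
  '(content-specification
    ;; local variable, analogous to the binding list of a LET
    (local-variables
     ((reference-process (find-reference-process ?process)))) ; hypothetical helper
    ;; expression whose named accessor is dispatched at runtime
    (specification-expression
     (make-participants-view ?process reference-process))))
```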
These expressions are instantiated at runtime by the explanation planner, which then dispatches the knowledge base accessors named in the expressions to extract propositions from the knowledge base. Content specification expressions reside in content specification nodes, as in Figure 4. When creating content specification expressions, the discourse-knowledge engineer may name any knowledge base accessor in the KB accessor library. For example, the Super-Structural Connection content specification in Figure 4 names a KB accessor called Find-Partonomic-Connection, and the Process Participants Description content specification names the Make-Participants-View accessor.", "cite_spans": [], "ref_spans": [ { "start": 512, "end": 520, "text": "Figure 4", "ref_id": null }, { "start": 740, "end": 748, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Although the discourse-knowledge engineer may write arbitrarily complex specification expressions in which function invocations are deeply nested, these expressions can become difficult to understand, debug, and maintain. Just as other programming languages provide local variables, e.g., the binding list of a let statement in Lisp, so do content specification nodes. Each time a discourse-knowledge engineer creates a local variable, he or she creates an expression for computing the value of the local variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "Example content specification nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Elaboration Nodes. Elaboration nodes specify optional content that may be included in explanations. They are structurally and functionally identical to topic nodes, i.e., they have exactly the same attributes, and the children of elaboration nodes are content specifications. The distinction between elaboration nodes and topic nodes is maintained only as a conceptual aid to discourse-knowledge engineers: it stands as a reminder that topic nodes are used to specify the primary content of explanations, and elaboration nodes are used to specify supplementary content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Design Packages", "sec_num": "4.2" }, { "text": "A discourse-knowledge engineer can use EDPs to encode discourse knowledge for his or her application. In our work, we focused on two types of texts that occur in many domains: process descriptions and object descriptions. For example, in biology, one encounters many process-oriented descriptions of physiological and reproductive mechanisms, as well as many object-oriented descriptions of anatomy. In the course of our research, we informally reviewed numerous (on the order of one hundred) passages in several biology textbooks. These passages focused on explanations of the anatomy, physiology, and reproduction of plants. Some explanations were very terse (e.g., those that occurred in glossaries), whereas some were more verbose (e.g., multipage explanations of physiological processes). Most of the texts also contained information about other aspects of botany, such as experimental methods and historical developments; these were omitted from the analysis. We manually \"parsed\" each passage into an informal language of structure, function, and process, which is commonly found in the discourse literature; see Mann and Thompson (1987), McKeown (1985), Paris (1988), Souther et al. (1989), and Suthers (1988), for example.
Our final step was to generalize the most commonly occurring patterns into abstractions that covered as many aspects of the passages as possible, which we then encoded in two Explanation Design Packages. While this work was essential for gaining insights about biological texts, it was a sketchy and preliminary effort to informally characterize their content and organization. A promising line of future work is to construct a large corpus of parsed discourse through a formal analysis. This will enable the natural language generation community to begin making inroads into producing discourse in the same manner that corpus-based techniques have aided discourse understanding efforts. The EDPs resulting from the analysis, Explain-Process and Explain-Object, can be used by an explanation planner to generate explanations about the processes and objects of physical systems. While these EDPs enable an explanation planner to generate quality explanations, we conjecture that employing a large library of specialized EDPs would produce explanations of higher quality. For the same reason that Kittredge, Korelsky, and Rambow (1991) note that domain-dependent discourse knowledge is critical for special-purpose discourse generation, it appears that including EDPs specific to describing particular classes of biological processes (e.g., development and reproduction) would yield explanations whose content and organization better mirror that of explanations produced by domain experts. 8", "cite_spans": [ { "start": 1119, "end": 1127, "text": "Mann and", "ref_id": "BIBREF23" }, { "start": 1128, "end": 1159, "text": "Thompson (1987), McKeown (1985)", "ref_id": null }, { "start": 1162, "end": 1174, "text": "Paris (1988)", "ref_id": "BIBREF35" }, { "start": 1177, "end": 1198, "text": "Souther et al. (1989)", "ref_id": null }, { "start": 1205, "end": 1219, "text": "Suthers (1988)", "ref_id": "BIBREF42" }, { "start": 2330, "end": 2368, "text": "Kittredge, Korelsky, and Rambow (1991)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "Although we will not discuss the details of the EDPs here, it is instructive to examine their structure and function. The Explain-Process EDP (Figure 5) can be used by the explanation planner to generate explanations about the processes that physical objects engage in. For example, given a query about how a biological process such as embryo sac formation is carried out, the explanation planner can apply the Explain-Process EDP to construct an explanation plan that houses the content and organization of the explanation.
The Explain-Process EDP has four primary topics:", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "\u2022 Process Overview: Explains how a process fits into a taxonomy, discusses the role played by its actors, and discusses where it occurs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "\u2022 Process Details: Explains the steps of a process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "\u2022 Temporal Attributes: Explains how a process is related temporally to other processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "\u2022 Output-Actor-Fates: Discusses how the \"products\" of a process are used by other processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "As computational linguists have known for many years, formally characterizing texts is a very difficult, time-consuming, and error-prone process. Because any initial discourse representation effort must, by necessity, be considered only a beginning, the next step was to incrementally revise the EDPs. The EDPs were used to automatically construct hundreds of explanations: the explanation planner used the EDPs to construct explanation plans, and the realization system translated these plans to natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "The resulting explanations were presented to our domain expert, who critiqued both their content and organization, and we used these critiques to incrementally revise the EDPs. The majority of revisions involved the reorganization and removal of nodes in the EDPs. For example, the domain expert consistently preferred a different global organization than the one encoded in the original Explain-Process EDP. He also preferred explanations produced by a version of the Explain-Process EDP in which the information that had previously been associated with a Process Significance topic was associated with the Temporal Attributes topic. Moreover, he found that an Actor Elaborations node produced information that was \"intrusive.\" Some revisions involved modifications to particular attributes of the nodes. For example, the inclusion condition on the original Output-Actor-Fates topic was TRUE. Instead, the domain expert preferred for explanations to include the content associated with this topic only when the process being described was a \"conversion\" process. After approximately twenty passes through the critiquing and revision phases, EDPs were devised that produced clear explanations meeting with the domain expert's approval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developing Task-Specific EDPs", "sec_num": "4.3" }, { "text": "The final version of the Explain-Process explanation design.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Explanation planning is the task of determining the content and organization of explanations.
We have designed an architecture for explanation generation and implemented a full-scale explanation generator, KNIGHT, 9 based upon this architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Planning Explanations", "sec_num": "5." }, { "text": "Explanation generation begins when the user poses a query, which includes a verbosity specification that comes in the form of a qualitative rating expressing the desired length of the explanation (Figure 6). The query interpreter--whose capabilities have been addressed only minimally in our work--translates the query to a canonical form, which is passed, along with the verbosity specification, to the explanation planner. Explanation planning is a synthetic task in which multiple resources are consulted to assemble data structures that specify the content and organization of explanations. KNIGHT's explanation planner uses the following resources: the Biology Knowledge Base, Explanation Design Packages, the KB-accessing system, and an overlay user model. 10", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 205, "text": "(Figure 6", "ref_id": null } ], "eq_spans": [], "section": "An Architecture for Explanation Generation", "sec_num": "5.1" }, { "text": "The explanation planner invokes the EDP Selector, which chooses an Explanation Design Package from the EDP library. The explanation planner then applies the EDP by traversing its hierarchical structure. For each node in the EDP, the planner determines if it should construct a counterpart node in the explanation plan it is building. (Recall that the topic nodes and elaboration nodes of an EDP are instantiated only when their conditions are satisfied.) As the plan is constructed, the explanation planner updates the user model to reflect the contextual changes that will result from explaining the views in the explanation plan, attends to the verbosity specification, and invokes KB accessors to extract information from the knowledge base. Recall that the accessors return views, which are subgraphs of the knowledge base. The planner attaches the views to the explanation plan; they become the plan's leaves. Planning is complete when the explanation planner has traversed the entire EDP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Architecture for Explanation Generation", "sec_num": "5.1" }, { "text": "The planner passes the resulting explanation plan to the realization component (Section 6) for translation to natural language. The views in the explanation plan are grouped into paragraph clusters. After some \"semantic polishing\" to improve the content for linguistic purposes, the realization component translates the views in the explanation plan to sentences. The realization system collects into a paragraph all of the sentences produced by the views in a particular paragraph cluster.
Explanation generation terminates when the realization component has translated all of the views in the explanation plan to natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Architecture for Explanation Generation", "sec_num": "5.1" }, { "text": "The EXPLAIN algorithm (Figure 7) is supplied with a query type (e.g., Describe-Process), a primary concept (e.g., embryo sac formation), and a verbosity specification (e.g., High).", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 31, "text": "(Figure 7", "ref_id": null } ], "eq_spans": [], "section": "The Explanation-Planning Algorithms", "sec_num": "5.2" }, { "text": "Its first step is to select an appropriate EDP. The EDP library has an indexing structure that maps a query type to the EDP that can be used to generate explanations for queries of that type. This indexing structure permits EDP selection to be reduced to a simple look-up operation. For example, given the query type Describe-Process, the EDP Selector will return the Explain-Process Explanation Design Package. The planner is now in a position to apply the selected EDP to the knowledge base. The APPLY EDP algorithm takes four arguments: the exposition node of the EDP that will be applied, a newly created exposition node, which will become the root of the explanation plan that will be constructed, the verbosity specification, and the loop variable bindings. 11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Explanation-Planning Algorithms", "sec_num": "5.2" }, { "text": "The planner first locates the root of the selected EDP, which is an exposition node. Next, it creates the corresponding exposition node for the soon-to-be-constructed explanation plan. It then invokes the APPLY EDP algorithm, which is given the exposition node of the EDP to be applied, the newly created exposition node that will become the root of the explanation plan, the verbosity, and a list of the loop variable bindings. 12", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Explanation-Planning Algorithms", "sec_num": "5.2" }, { "text": "An architecture for explanation generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "EXPLAIN (Query-Type, Concept, Verbosity)
  if legal-query (Query-Type, Concept, Verbosity) then
    EDP ← select-edp (Query-Type)
    EDP-Exposition-Node ← get-root (EDP)
    New-Exposition-Node ← construct-node (EDP-Exposition-Node)
    Explanation-Plan ← apply-edp (EDP-Exposition-Node, New-Exposition-Node, Verbosity, nil)
    Explanation-Leaves ← linearize (Explanation-Plan)
    realize (Explanation-Leaves)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Explanation-Planning Algorithms", "sec_num": "5.2" }, { "text": "The EXPLAIN algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "The APPLY EDP algorithm (Figure 8) and the algorithms it invokes traverse the hierarchical structure of the EDP to build an explanation plan. Its first action is to obtain the children of the EDP's exposition node; these are the topic nodes of the EDP. For each topic node, the EDP Applier constructs a new (corresponding) topic node for the evolving explanation plan.
The Applier must then weigh several factors in its decision about whether to include the topic in the explanation: inclusion, which is the inclusion condition associated with the topic; centrality, which is the centrality rating that the discourse-knowledge engineer has assigned to the topic; and verbosity, which is the verbosity specification supplied by the user.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 209, "text": "(Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "If the inclusion condition evaluates to FALSE, the topic should be excluded regardless of the other two factors. Otherwise, the COMPUTE INCLUSION algorithm must consider the topic's importance and the amount of detail requested and will include the topic in the following circumstances: the verbosity is High; the verbosity is Low but the topic's centrality has been rated as High by the discourse-knowledge engineer; or the verbosity is Medium and the topic's centrality has been rated as Medium or High.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "When the COMPUTE INCLUSION algorithm returns TRUE, the Applier obtains the children of the EDP's topic. These are its content specification nodes. For each of the topic's content specification nodes, the Applier invokes the DETERMINE CONTENT algorithm, which itself invokes KB accessors named in the EDP's content specification nodes. This action extracts views from the knowledge base and attaches them to the explanation plan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "To determine the content of the information associated with elaboration nodes, DETERMINE CONTENT invokes the APPLY EDP algorithm. Because it was the APPLY EDP algorithm that invoked DETERMINE CONTENT, this is a recursive call. In this invocation of APPLY EDP--as opposed to the \"top-level\" invocation by the EXPLAIN algorithm--APPLY EDP is given an elaboration node instead of a topic node. By recursively invoking APPLY EDP, DETERMINE CONTENT causes the planner to traverse the elaboration branches of a content node. The recursion bottoms out when the system encounters the leaves of the EDP, i.e., content specification nodes in the EDP that do not have elaborations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null },
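The inclusion decision just described reduces to a small lookup. A minimal Common Lisp sketch follows; it is our rendering of the COMPUTE INCLUSION logic, with keywords standing in for the qualitative values, not KNIGHT's source code.

    ;; Include a topic only if its inclusion condition holds and its
    ;; centrality is high enough for the requested verbosity.
    (defun compute-inclusion (condition-holds-p centrality verbosity)
      (and condition-holds-p
           (ecase verbosity
             (:high t)                                     ; include every topic
             (:medium (member centrality '(:medium :high)))
             (:low (eq centrality :high)))))               ; only the most central topics

    ;; Examples:
    ;;   (compute-inclusion t :high :low)    => T
    ;;   (compute-inclusion t :medium :low)  => NIL
    ;;   (compute-inclusion nil :high :high) => NIL

Note that (member ...) returns a true value rather than T itself, which suffices for a Boolean test in Lisp.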
{ "text": "Rather than merely returning a flat list of views, the EXPLAIN algorithm examines the paragraph specifications in the nodes of the EDP it applied. The paragraph specifications of a given node organize the children of that node into paragraph clusters. The order of the paragraph clusters controls the global structure of the final textual explanation; the order of the views in each paragraph cluster determines the order of sentences in the final text. 13 Finally, the EXPLAIN algorithm passes the paragraph clusters to the REALIZE algorithm, which translates them to natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Explanation-Planning Algorithms", "sec_num": "5.2" }, { "text": "APPLY-EDP (EDP-Exposition-Node, New-Exposition-Node, Verbosity, Loop-Var-Bindings) ... Children ... (EDP-Content-Specification-Node, New-Topic-Node, Verbosity, Loop-Var-Bindings)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "The EDP application algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "The explanation planner should be viewed as an automatic specification writer: its task is to write specifications for the realization component, which interprets the specifications to produce natural language. Although our work focuses on the design, construction, and evaluation of explanation planners, by constructing a full-scale natural language generator, it becomes possible to conduct a \"pure\" empirical evaluation of explanation planners. Without a realization component, the plans produced by an explanation planner would need to be manually translated to natural language, which would raise questions about the purity of the experiments. We therefore designed and implemented a full-scale realization component. 14 13 The realization algorithm treats these groupings as suggestions that may be overridden in extenuating circumstances. 14 During the past few years, we have developed a series of realization systems. The first realizer, which was designed and implemented by the first author, was a template-based generator. The second realizer, which was designed by Kathy Mitchell and the authors (Mitchell 1992), used the Penman (Mann 1983) surface generator. The third realizer (Callaway and Lester 1995) is described briefly in this section; it was developed by the first author and Charles Callaway. The first line contains the keyword cat, which indicates that what follows will be some type of verbal phrase, in this case a sentence. The second line contains the keyword proc, which denotes that everything in its scope will describe the structure of the entire verbal phrase. The next structure comes under the heading partic; this is where the thematic roles of the clause are specified. In this instance, one thematic role exists in the main sentence, the agent (or subject), which is further defined by its lexical entry and a modifying prepositional phrase indicated by the keyword qualifier. The structure beginning with circum creates the subordinate infinitival purpose clause. It has two thematic roles, subject and object. The subject has a pointer to identify itself with the subject of the main clause while the object contains a typical noun phrase. The feature set for the circum clause indicates the wide range of possibilities for placement of the clause as well as for introducing additional phrasal substructures into the purpose clause.", "cite_spans": [ { "start": 1110, "end": 1125, "text": "(Mitchell 1992)", "ref_id": "BIBREF30" }, { "start": 1144, "end": 1154, "text": "(Mann 1983", "ref_id": "BIBREF23" }, { "start": 1195, "end": 1221, "text": "(Callaway and Lester 1995)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "To construct functional descriptions from views extracted from a knowledge base, KNIGHT employs a functional realization system (Callaway and Lester 1995). Given a view, the functional realizer uses its knowledge of case mappings, syntax, and lexical information to construct a functional description, which it then passes to the FUF surface generator.
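A schematic rendering of the kind of functional description just described may be helpful: a main clause whose agent carries a prepositional qualifier, plus an infinitival purpose clause whose subject points back to the main-clause agent. This is our simplified, SURGE-style illustration; KNIGHT's actual functional descriptions and SURGE's exact feature inventory differ in detail.

    ((cat clause)
     (proc ((type material) (lex "divide")))
     (partic ((agent ((cat common)
                      (lex "megaspore mother cell")
                      (qualifier ((cat pp)
                                  (prep ((lex "in")))
                                  (np ((cat common) (lex "ovule")))))))))
     (circum ((purpose ((cat clause)
                        (proc ((type material) (lex "form")))
                        (partic ((agent {partic agent})   ; path back to the main-clause agent
                                 (affected ((cat common)
                                            (number plural)
                                            (lex "megaspore"))))))))))

A functional description of roughly this shape would be realized as "The megaspore mother cell in the ovule divides to form the megaspores."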
The functional realizer consists of five principal components:", "cite_spans": [ { "start": 128, "end": 154, "text": "(Callaway and Lester 1995)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "\u2022 Lexicon: Physically distributed throughout the knowledge base; each concept frame has access to all of the lexical information relevant to its own realization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "\u2022 Functional Description Skeleton Library: Contains a large number of Functional Description (FD) Skeletons, each of which encodes the associated syntactic, semantic, and role assignments for interpreting a specific type of message specification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "\u2022 Functional Description Skeleton Retriever: Charged with the task of selecting the correct Functional Description Skeleton from the skeleton library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "\u2022 Noun Phrase Generator: Responsible for drawing lexical information from the lexicon to create a self-contained functional description representing each noun phrase required by the FD-Skeleton processor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "\u2022 Functional Description Skeleton Processor: Gathers all of the available information from the FD-Skeleton, the lexicon, and the noun phrase generator; produces the final functional description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." }, { "text": "When the functional realizer is given a view, its first task is to determine the appropriate FD-Skeleton to use. Once this is accomplished, the FD-Skeleton is passed along with the message specification to the FD-Skeleton processor. The FD-Skeleton processor first determines if each of the essential descriptors is present; if any of these tests fail, it will note the deficiency and abort. If the message is well-formed, the FD-Skeleton processor passes each realizable concept unit found in the message specification to the noun phrase generator, which uses the lexicon to create a functional description representing each concept unit. The noun phrase generator then returns each functional description to the FD-Skeleton processor, which assigns case roles to the (sub)functional descriptions. The resulting functional description, which encodes the functional structure for the entire content of the message specification, is then passed to the surface realizer. Surface realization is accomplished by FUF (Elhadad 1992). Developed by Elhadad and his colleagues at Columbia, FUF is accompanied by an extensive, portable English grammar, which is \"the result of five years of intensive experimentation in grammar writing\" (p. 121) and is currently the largest \"generation grammar\" in existence (Elhadad 1992). Given a set of functional descriptions, FUF constructs the final text.", "cite_spans": [ { "start": 1012, "end": 1026, "text": "(Elhadad 1992)", "ref_id": "BIBREF9" }, { "start": 1300, "end": 1314, "text": "(Elhadad 1992)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Realization", "sec_num": "6." },
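The control flow among these five components can be summarized in a short sketch. Every helper named below is an illustrative stand-in for one of the components, and uni-string is assumed to be FUF's string-returning entry point; this is not KNIGHT's actual code.

    ;; Sketch of the functional realizer's control loop.
    (defun realize-view (view)
      (let ((skeleton (retrieve-fd-skeleton view)))            ; FD-Skeleton Retriever
        (if (not (essential-descriptors-present-p skeleton view))
            (warn "Ill-formed message specification: ~S" view) ; note the deficiency and abort
            (let* ((nps (mapcar #'generate-noun-phrase         ; Noun Phrase Generator
                                (realizable-units view)))
                   (fd (fill-case-roles skeleton nps)))        ; FD-Skeleton Processor
              (uni-string fd)))))                              ; FUF surface realization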
{ "text": "To illustrate the behavior of the system, consider the concept of embryo sac formation. The semantic network in the Biology Knowledge Base that represents information about embryo sac formation was shown in Figure 2. When KNIGHT is given the task of explaining this concept, 16 it applies the Explain-Process EDP as illustrated in Figure 5.", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 2", "ref_id": null }, { "start": 332, "end": 340, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "KNIGHT first finds the topics of the Explain-Process exposition node, which are Process Overview, Output-Actor-Fates, Temporal Information, and Process Details. During its traversal of this tree, it begins with Process Overview, which has a High centrality rating and an inclusion condition of TRUE. KNIGHT executes the COMPUTE INCLUSION algorithm with the given verbosity of High, which returns TRUE, i.e., the information associated with the topic should be included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "Hence, it now begins to traverse the children of this topic node, which are the As-Kind-Of Process Description, Process Participants Description, and Location Description content specification nodes. For the As-Kind-Of Process Description, it computes a value for the local variable ?Reference-Concept, which returns the value female gametophyte formation. It then instantiates the content specification template on As-Kind-Of Process Description, which it then evaluates. This results in a call to the As-Kind-Of KB accessor, which produces a view. The view produced in this execution will eventually be translated to the sentence, \"Embryo sac formation is a kind of female gametophyte formation.\" Similarly, KNIGHT instantiates the content specification expressions of Process Participants Description and Location Description, which also cause KB accessors to be invoked; these also return views. The first of these views will be used to produce the sentence, \"During embryo sac formation, the embryo sac is formed from the megaspore mother cell,\" and the second will produce the sentence, \"Embryo sac formation occurs in the ovule.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "Next, KNIGHT visits the Location Partonomic-Connection node, which is an elaboration of Location Description. However, because its inclusion condition is not satisfied, this branch of the traversal halts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "Next, KNIGHT visits each of the other topics of the Explain-Process exposition node: Output-Actor-Fates, Temporal Information, and Process Details. When it visits the Output-Actor-Fates topic, the inclusion condition is not satisfied. Because it was given a High verbosity specification and the inclusion conditions are satisfied, both Temporal Information and Process Details are used to determine additional content. The view constructed from Temporal Information will produce the sentence, \"Embryo sac formation is a step of angiosperm sexual reproduction,\" and the Process Details will result in the generation of descriptions of the steps of embryo sac formation, namely, megasporogenesis and embryo sac generation.
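The template instantiation at the heart of this walkthrough fits in a few lines. The sketch below assumes content specification templates are s-expressions whose variables begin with "?"; this rendering, and the helper name, are ours rather than KNIGHT's exact syntax.

    ;; Replace each bound variable in a template with its value, leaving
    ;; everything else (such as the KB accessor name) untouched.
    (defun instantiate-template (template bindings)
      (let ((binding (and (symbolp template) (assoc template bindings))))
        (cond (binding (cdr binding))
              ((consp template)
               (mapcar (lambda (x) (instantiate-template x bindings)) template))
              (t template))))

    ;; The As-Kind-Of Process Description step from the walkthrough:
    (instantiate-template
     '(as-kind-of ?self ?reference-concept)
     '((?self . embryo-sac-formation)
       (?reference-concept . female-gametophyte-formation)))
    ;; => (AS-KIND-OF EMBRYO-SAC-FORMATION FEMALE-GAMETOPHYTE-FORMATION)

Evaluating the instantiated expression then dispatches the named KB accessor, which returns a view.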
When the views in the resulting explanation plan (Figure 10) are translated to text by the realization system, KNIGHT produces the explanation shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 771, "end": 780, "text": "Figure 10", "ref_id": null }, { "start": 874, "end": 882, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "These algorithms have been used to generate explanations about hundreds of different concepts in the Biology Knowledge Base. For example, Section 2 shows other explanations generated by KNIGHT. The explanation of pollen tube growth was produced by applying the Explain-Process EDP, and the explanations of spore and root system were produced by applying the Explain-Object EDP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Behavior", "sec_num": "7." }, { "text": "Traditionally, research projects in explanation generation have not included empirical evaluations. Conducting a formal study with a generator has posed difficulties for at least three reasons: the absence of large-scale knowledge bases; the problem of robustness; and the subjective nature of the task. First, the field of explanation generation has experienced a dearth of \"raw materials.\" The task of an explanation generator is three-fold: to extract information from a knowledge base, to organize this information, and to translate it to natural language. Unless an explanation generator has access to a sufficiently large knowledge base, the first step--and hence the second and third--cannot be carried out enough times to evaluate the system empirically. Unfortunately, because of the tremendous cost of construction, large-scale knowledge bases are scarce.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "8." }, { "text": "Second, even if large-scale knowledge bases were more plentiful, an explanation generator cannot be evaluated unless it is sufficiently robust to produce many explanations. In very practical terms, a generator is likely to halt abruptly when it encounters unusual and unexpected knowledge structures; if this happens frequently, the system will generate too few explanations to enable a meaningful evaluation. We conjecture that most implemented explanation generators would meet with serious difficulties when applied to a large-scale knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "8." }, { "text": "An explanation plan for embryo sac formation: High verbosity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "Third, explanation generation is an ill-defined task. It stands in contrast to a machine learning task such as rule induction from examples. Although one can easily count the number of examples that an induction program classifies correctly, there is no corresponding objective metric for an explanation generator. Ideally, we would like to \"measure\" the coherence of explanations.
Although it is clear that coherence is of paramount importance for explanation generation, there is no litmus test for it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "Given these difficulties, how can one evaluate the architectures, algorithms, and knowledge structures that form the basis for an explanation generator? The traditional approach has been to conduct an analytical evaluation of a system's architecture and demonstrate that it can produce well-formed explanations on a few examples. While this evaluation technique is important, it is not sufficient. Three steps can be taken to promote better evaluation. First, we can construct large-scale knowledge bases, such as the Biology Knowledge Base. Second, we can design and implement robust explanation systems that employ a representation of discourse knowledge that is easily manipulable by discourse-knowledge engineers. Third, to ensure that a knowledge base is not tailored to the purposes of explanation generation, we can enter into a contractual agreement with knowledge engineers; this eliminates all requests for representational modifications that would skew the representation to the task of explanation generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "The Two-Panel evaluation methodology can be used to empirically evaluate natural language generation work. We developed this methodology, which involves two panels of domain experts, to combat the inherent subjectivity of NLG: although multiple judges will rarely reach a consensus, their collective opinion provides persuasive evidence about the quality of explanations. To ensure the integrity of the evaluation results, a central stipulation of the methodology is that the following condition be maintained throughout the study:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "8.1" }, { "text": "Computer Blindness: None of the participants can be aware that some texts are machine-generated or, for that matter, that a computer is in any way involved in the study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "8.1" }, { "text": "The methodology involves four steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "8.1" }, { "text": "Generation of explanations by computer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Formation of two panels of domain experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Generation of explanations by one panel of domain experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Evaluation of all explanations by the second panel of domain experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Each of these is discussed in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Explanation Generation: KNIGHT. Because KNIGHT's operation is initiated when a user poses a question, the first task was to select the questions it would be asked. To this end, we combed the Biology Knowledge Base for concepts that could furnish topics for questions.
Although the knowledge base focuses on botanical anatomy, physiology, and development, it also contains a substantial amount of information about biological taxons. Because this latter area is significantly less developed, we ruled out concepts about taxons. In addition, we ruled out concepts that were too abstract (e.g., Object).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "We then requested KNIGHT to generate explanations about the 388 concepts that passed through these filters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "To thoroughly exercise KNIGHT's organizational abilities, we were most interested in observing its performance on longer explanations. Hence, we eliminated explanations of concepts that were sparsely represented in the knowledge base. To this end, we passed the 388 explanations through a \"length filter\": explanations that consisted of at least 3 sentences were retained; shorter explanations were disposed of. 17 This produced 87 explanations, of which 48 described objects and 39 described processes. Finally, to test an equal number of objects and processes, we randomly chose 30 objects and 30 processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "To address the difficult problem of subjectivity, we assembled 12 domain experts, all of whom were Ph.D. students or post-doctoral scientists in biology. Because we wanted to gauge KNIGHT's performance relative to humans, we assigned each of the experts to one of two panels: the Writing Panel and the Judging Panel. By securing the services of such a large number of domain experts, we were able to form relatively large panels of 4 writers and 8 judges (Figure 11). To promote high-quality human-generated explanations, we assigned the 4 most experienced experts to the Writing Panel. The remaining 8 experts were assigned to the Judging Panel to evaluate explanations. To minimize the effect of factors that might make it difficult for judges to compare KNIGHT's explanations with those of domain experts, we took three precautions. First, we attempted to control for the length of explanations. Although we could not impose hard constraints, we made suggestions about how long a typical explanation might be. Second, to make the \"level\" of the explanations comparable, we asked writers to compose explanations for a particular audience, freshman biology students. Third, so that the general topics of discussion would be comparable, we asked the writers to explain the same concepts that were explained by KNIGHT.

Node Type | Attributes | Attribute Value(s)
Exposition | Children | ⟨Topics⟩
Topic | Children | ⟨Content Specifications⟩
 | Centrality | {Low, Medium, High}
 | Inclusion Condition | ⟨Variable Boolean Expression⟩
 | Local Variables | (⟨Var⟩, ⟨Variable Expr.⟩) Pairs
Content Specification | Children | {⟨Content Spec's⟩, ⟨Elaborations⟩}
 | Content Specification Template | ⟨Variable Expression with KB Accessor⟩
 | Iteration Type | {Non-Iter., Iter., Conditional-Iter.}
 | Iterate-Over Template | ⟨Variable Expression⟩
 | Loop Variable | ⟨Var⟩
 | Iteration Condition | ⟨Variable Boolean Expression⟩
 | Local Variables | (⟨Var⟩, ⟨Variable Expr.⟩) Pairs
Elaboration | Children | ⟨Content Specifications⟩
 | Centrality | {Low, Medium, High}
 | Inclusion Condition | ⟨Variable Boolean Expression⟩
 | Local Variables | (⟨Var⟩, ⟨Variable Expr.⟩) Pairs
", "type_str": "table" }, "TABREF7": { "text": "Comprehensive analysis.", "html": null, "num": null, "content": "
Generator | Overall | Content | Organization | Writing | Correctness
KNIGHT | 2.37±0.13 | 2.65±0.13 | 2.45±0.16 | 2.40±0.13 | 3.07±0.15
Human | 2.85±0.15 | 2.95±0.16 | 3.07±0.16 | 2.93±0.16 | 3.16±0.15

Table 4
Differences and significance.

 | Overall | Content | Organization | Writing | Correctness
Difference | 0.48 | 0.30 | 0.62 | 0.53 | 0.09
t statistic | -2.36 | -1.47 | -2.73 | -2.54 | -0.42
Significance | 0.02 | 0.14 | 0.07 | 0.01 | 0.67
Significant? | Yes | No | No | Yes | No
", "type_str": "table" }, "TABREF8": { "text": "Explanation of objects.", "html": null, "num": null, "content": "
Generator | Grade
KNIGHT | 2.65±0.19
Human | 2.93±0.19
Difference | 0.28
t statistic | -1.05
Significance | 0.30
Significant? | No
", "type_str": "table" }, "TABREF9": { "text": "", "html": null, "num": null, "content": "", "type_str": "table" }, "TABREF11": { "text": "KNIGHT vs. individual writers.", "html": null, "num": null, "content": "
Generator | vs. Writer 1 | vs. Writer 2 | vs. Writer 3 | vs. Writer 4
KNIGHT | 1.93±0.29 | 2.73±0.23 | 2.73±0.27 | 2.07±0.23
Human | 3.60±0.16 | 3.40±0.23 | 2.80±0.28 | 1.60±0.23
Difference | 1.67 | 0.67 | 0.07 | 0.47
t statistic | -5.16 | -2.03 | -0.17 | 1.42
Significance | 0.00 | 0.05 | 0.86 | 0.16
Significant? | Yes | No | No | No
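As a rough check on the statistics in these tables, the t values can be approximately reconstructed from the tabled scores, assuming each entry is a mean with its standard error and that an unpaired comparison was used (our reconstruction; the paper's exact procedure may differ). For the Overall dimension of the comprehensive analysis:

$$ t \approx \frac{\bar{x}_{\mathrm{KNIGHT}} - \bar{x}_{\mathrm{Human}}}{\sqrt{\mathrm{SE}_{\mathrm{KNIGHT}}^{2} + \mathrm{SE}_{\mathrm{Human}}^{2}}} = \frac{2.37 - 2.85}{\sqrt{0.13^{2} + 0.15^{2}}} \approx -2.42, $$

which is close to the reported -2.36; the residual discrepancy is consistent with the two-decimal rounding of the tabled means and standard errors.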
9. Related Work
", "type_str": "table" } } } }