{ "paper_id": "P05-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:38:19.502458Z" }, "title": "Experiments with Interactive Question-Answering", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Computer Corporation Richardson", "location": { "settlement": "Texas", "country": "USA" } }, "email": "" }, { "first": "Andrew", "middle": [], "last": "Hickl", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Computer Corporation Richardson", "location": { "settlement": "Texas", "country": "USA" } }, "email": "" }, { "first": "John", "middle": [], "last": "Lehmann", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Computer Corporation Richardson", "location": { "settlement": "Texas", "country": "USA" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Computer Corporation Richardson", "location": { "settlement": "Texas", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a novel framework for interactive question-answering (Q/A) based on predictive questioning. Generated off-line from topic representations of complex scenarios, predictive questions represent requests for information that capture the most salient (and diverse) aspects of a topic. We present experimental results from large user studies (featuring a fully-implemented interactive Q/A system named FERRET) that demonstrates that surprising performance is achieved by integrating predictive questions into the context of a Q/A dialogue.", "pdf_parse": { "paper_id": "P05-1026", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a novel framework for interactive question-answering (Q/A) based on predictive questioning. Generated off-line from topic representations of complex scenarios, predictive questions represent requests for information that capture the most salient (and diverse) aspects of a topic. We present experimental results from large user studies (featuring a fully-implemented interactive Q/A system named FERRET) that demonstrates that surprising performance is achieved by integrating predictive questions into the context of a Q/A dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we propose a new architecture for interactive question-answering based on predictive questioning. We present experimental results from a currently-implemented interactive Q/A system, named FERRET, that demonstrates that surprising performance is achieved by integrating sources of topic information into the context of a Q/A dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In interactive Q/A, professional users engage in extended dialogues with automatic Q/A systems in order to obtain information relevant to a complex scenario. Unlike Q/A in isolation, where the performance of a system is evaluated in terms of how well answers returned by a system meet the specific information requirements of a single question, the performance of interactive Q/A systems have traditionally been evaluated by analyzing aspects of the dialogue as a whole. 
Q/A dialogues have been evaluated in terms of (1) efficiency, defined as the number of questions that the user must pose to find particular information, (2) effectiveness, defined by the relevance of the answers returned, (3) user satisfaction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to maximize performance in these three areas, interactive Q/A systems need a predictive dialogue architecture that enables them to propose related questions about the relevant information that could be returned to a user, given a domain of interest. We argue that interactive Q/A systems depend on three factors: (1) the effective representation of the topic of a dialogue, (2) the dynamic recognition of the structure of the dialogue, and (3) the ability to return relevant answers to a particular question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe results from experiments we conducted with our own interactive Q/A system, FERRET, under the auspices of the ARDA AQUAINT 1 program, involving 8 different dialogue scenarios and more than 30 users. The results presented here illustrate the role of predictive questioning in enhancing the performance of Q/A interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the remainder of this paper, we describe a new architecture for interactive Q/A. Section 2 presents the functionality of several of FERRET's modules and describes the NLP techniques it relies upon. In Section 3, we present one of the dialogue scenarios and the topic representations we have employed. Section 4 highlights the management of the interaction between the user and FERRET, while Section 5 presents the results of evaluating our proposed model, and Section 6 summarizes the conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We have found that the quality of interactions produced by an interactive Q/A system can be greatly enhanced by predicting the range of questions that a user might ask in the context of a given topic. If a large database of topic-relevant questions were available for a wide variety of topics, the accuracy of a state-of-the-art Q/A system such as could be enhanced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive Question-Answering", "sec_num": "2" }, { "text": "In FERRET, our interactive Q/A system, we store such \"predicted\" pairs of questions and answers in a database known as the Question Answer Database (or QUAB). FERRET uses this large set of topicrelevant question-and-answer pairs to improve the interaction with the user by suggesting new questions. For example, when a user asks a question like (Q1) (as illustrated in Table 1 ), FERRET returns an answer to the question (A1) and proposes (Q2), (Q3), and (Q4) as suggestions of possible continuations of the dialogue. Users then choose how to continue the interaction by either (1) ignoring the suggestions made by the system and proposing a different question, or by (2) selecting one of the proposed questions and examining its answer. Figure 1 illustrates the architecture of FERRET. 
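As a minimal sketch of this interaction pattern, the fragment below shows how a user question could be paired with the most similar QUAB entries; the function names and the toy word-overlap similarity are illustrative placeholders (stand-ins for the metrics of Section 4), not FERRET's actual API.

```python
# Sketch only: pair a user question with the k most similar QUAB entries.
# word_overlap is a toy stand-in for the similarity metrics of Section 4;
# none of these names correspond to FERRET's real implementation.
from typing import Callable, List, Tuple

def word_overlap(q1: str, q2: str) -> float:
    w1, w2 = set(q1.lower().split()), set(q2.lower().split())
    return len(w1 & w2) / max(len(w1), 1)

def suggest_continuations(user_q: str,
                          quab: List[Tuple[str, str]],
                          sim: Callable[[str, str], float] = word_overlap,
                          k: int = 3) -> List[Tuple[str, str]]:
    """Return the k QUAB (question, answer) pairs most similar to user_q."""
    return sorted(quab, key=lambda qa: sim(user_q, qa[0]), reverse=True)[:k]

# Example: suggest follow-ups for a new question against a tiny QUAB.
quab = [("Does Egypt have a BW stockpile?", "..."),
        ("When did Egypt admit to having BW stockpiles?", "..."),
        ("What CW does Iran produce?", "...")]
print(suggest_continuations("Does Egypt have biological weapons?", quab, k=2))
```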
The interactions are managed by a dialogue shell, which processes questions by transforming them into their corresponding predicate-argument structures 2 .", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 376, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 738, "end": 746, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Interactive Question-Answering", "sec_num": "2" }, { "text": "The data collection used in our experiments was Modules from the FERRET's dialogue shell interact with modules from the predictive dialogue block. Central to the predictive dialogue is the topic representation for each scenario, which enables the population of a Predictive Dialogue Network (PDN). The PDN consists of a large set of questions that were asked or predicted for each topic. It is a network because questions are related by \"similarity\" links, which are computed by the Question Similarity module. The topic representation enables an Information Extraction module based on (Surdeanu and Harabagiu, 2002) to find topic-relevant information in the document collection and to use it as answers for the QUABs. The questions associated with each predicted answer are generated from patterns that are related to the extraction patterns used for identifying topic relevant information. The quality of the dialog between the user and FERRET depends on the quality of the topic representations and the coverage of the QUABs.", "cite_spans": [ { "start": 586, "end": 616, "text": "(Surdeanu and Harabagiu, 2002)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Interactive Question-Answering", "sec_num": "2" }, { "text": "3) Military Operations: Army, Navy, Air Force, Leaders, Capabilities, Intentions 4) Allies/Partners: Coalition Forces 5) Weapons: Chemical, Biological, Materials, Stockpiles, Facilities, Access, Research Efforts, Scientists 6) Citizens: Population, Growth Rate, Education 8) Economics: Growth Domestic Product, Growth Rate, Imports 9) Threat Perception: Border and Surrounding States, International, Terrorist Groups 10) Behaviour: Threats, Invasions, Sponsorship and Harboring of Bad Actors 13) Leadership: 7) Industrial: Major Industrires, Exports, Power Sources 14) Behaviour: Threats to use WMDs, Actual Usage, Sophistication of Attack, Anectodal or Simultaneous Serving as a background to the scenarios, the following list contains subject areas that may be relevant to the scenarios under examination, and it is provided to assist the analyst in generating questions. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1) Country Profile", "sec_num": null }, { "text": "Our experiments in interactive Q/A were based on several scenarios that were presented to us as part of the ARDA Metrics Challenge Dialogue Workshop. Figure 2 illustrates one of these scenarios. It is to be noted that the general background consists of a list of subject areas, whereas the scenario is a narration in which several sub-topics are identified (e.g. production of toxins or exportation of materials). The creation of scenarios for interactive Q/A requires several different types of domain-specific knowledge and a level of operational expertise not available to most system developers. 
In addition to identifying a particular domain of interest, scenarios must specify the set of relevant actors, outcomes, and related topics that are expected to operate within the domain of interest, the salient associations that may exist between entities and events in the scenario, and the specific timeframe and location that bound the scenario in space and time. In addition, real-world scenarios also need to identify certain operational parameters as well, such as the identity of the scenario's sponsor (i.e. the organization sponsoring the research) and audience (i.e. the organization receiving the information), as well as a series of evidence conditions which specify how much verification information must be subject to before it can be accepted as fact. We assume the set of sub-topics mentioned in the general background and the scenario can be used together to define a topic structure that will govern future interactions with the Q/A system. In order to model this structure, the topic representation that we create considers separate topic signatures for each sub-topic.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 158, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Modeling the Dialogue Topic", "sec_num": "3" }, { "text": "The notion of topic signatures was first introduced in (Lin and Hovy, 2000) . For each subtopic in a scenario, given (a) documents relevant to the sub-topic and (b) documents not relevant to the subtopic, a statistical method based on the likelihood ratio is used to discover a weighted list of the most topic-specific concepts, known as the topic signature. Later work by (Harabagiu, 2004) demonstrated that topic signatures can be further enhanced by discovering the most relevant relations that exist between pairs of concepts. However, both of these types of topic representations are limited by the fact that they require the identification of topic-relevant documents prior to the discovery of the topic signatures. In our experiments, we were only presented with a set of documents relevant to a particular scenario; no further relevance information was provided for individual subject areas or sub-topics.", "cite_spans": [ { "start": 55, "end": 75, "text": "(Lin and Hovy, 2000)", "ref_id": "BIBREF5" }, { "start": 373, "end": 390, "text": "(Harabagiu, 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling the Dialogue Topic", "sec_num": "3" }, { "text": "In order to solve the problem of finding relevant documents for each subtopic, we considered four different approaches:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling the Dialogue Topic", "sec_num": "3" }, { "text": "Approach 1: All documents in the CNS collection were initially clustered using K-Nearest Neighbor (KNN) clustering (Dudani, 1976) . Each cluster that contained at least one keyword that described the sub-topic was deemed relevant to the topic.", "cite_spans": [ { "start": 115, "end": 129, "text": "(Dudani, 1976)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling the Dialogue Topic", "sec_num": "3" }, { "text": "Since individual documents may contain discourse segments pertaining to different sub-topics, we first used TextTiling (Hearst, 1994) to automatically segment all of the documents in the CNS collection into individual text tiles. 
These individual discourse segments then served as input to the KNN clustering algorithm described in Approach 1.", "cite_spans": [ { "start": 119, "end": 133, "text": "(Hearst, 1994)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "Approach 3: In this approach, relevant documents were discovered simultaneously with the discovery of topic signatures. First, we associated a binary seed relation \u00a2 \u00a1 for each each sub-topic \u00a3 \u00a1 . (Seed relations were created both by hand and using the method presented in (Harabagiu, 2004) .) Since seed relations are by definition relevant to a particular subtopic, they can be used to determine a binary partition of the document collection \u00a4 into (1) a relevant set of documents \u00a5 \u00a6 \u00a1 (that is, the documents relevant to relation \u00a1 ) and (2) a set of non-relevant documents \u00a4 -\u00a5 \u00a6 \u00a1 . Inspired by the method presented in (Yangarber et al., 2000) , a topic signature (as calculated by (Harabagiu, 2004) ) is then produced for the set of documents in \u00a5 \u00a7 \u00a1 . For each subtopic \u00a3 \u00a1 defined as part of the dialogue scenario, documents relevant to a corresponding seed relation \u00a1 are added to \u00a5 iff the relation \u00a1 meets the density criterion (as defined in (Yangarber et al., 2000) ). If \u00a9 represents the set of documents where \u00a2 \u00a1 is recognized, then the density criterion can be defined as:", "cite_spans": [ { "start": 274, "end": 291, "text": "(Harabagiu, 2004)", "ref_id": "BIBREF2" }, { "start": 626, "end": 650, "text": "(Yangarber et al., 2000)", "ref_id": "BIBREF10" }, { "start": 689, "end": 706, "text": "(Harabagiu, 2004)", "ref_id": "BIBREF2" }, { "start": 957, "end": 981, "text": "(Yangarber et al., 2000)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": ". Once \u00a9 is added to \u00a5 ! \u00a1 , then a new topic signature is calculated for \u00a5 . Relations extracted from the new topic signature can then be used to determine a new document partition by re-iterating the discovery of the topic signature and of the documents relevant to each subtopic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "Approach 4: Approach 4 implements the technique described in Approach 3, but operates at the level of discourse segments (or texttiles) rather than at the level of full documents. As with Approach 2, segments were produced using the TextTiling algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "In modeling the dialogue scenarios, we considered three types of topic-relevant relations: (1) structural relations, which represent hypernymy or meronymy relations between topic-relevant concepts, (2) definition relations, which uncover the characteristic properties of a concept, and (3) extraction relations, which model the most relevant events or states associated with a sub-topic. Al-though structural relations and definition relations are discovered reliably using patterns available from our Q/A system , we found only extraction relations to be useful in determining the set of documents relevant to a subtopic. Structural relations were available from concept ontologies implemented in the Q/A system. 
The definition relations were identified by patterns used for processing definition questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "Extraction relations are discovered by processing documents in order to identify three types of relations, including: (1) syntactic attachment relations (including subject-verb, object-verb, and verb-PP relations), (2) predicate-argument relations, and 3salience-based relations that can be used to encode long-distance dependencies between topic-relevant concepts. (Salience-based relations are discovered using a technique first reported in (Harabagiu, 2004) which approximates a Centering Theory-style approach (Kameyama, 1997) We made the extraction relations associated with each topic signature more general (a) by replacing words with their (morphological) root form (e.g. wounded with wound, weapons with weapon), (b) by replacing lexemes with their subsuming category from an ontology of 100,000 words (e.g. truck is replaced by VEHICLE, ARTIFACT, or OBJECT), and (c) by replacing each name with its name class (Egypt with COUNTRY). Once extraction relations were obtained for a particular set of documents, the resulting set of relations were ranked according to a method proposed in (Yangarber, 2003) . Under this approach, the score associated with each relation is given by:", "cite_spans": [ { "start": 443, "end": 460, "text": "(Harabagiu, 2004)", "ref_id": "BIBREF2" }, { "start": 514, "end": 530, "text": "(Kameyama, 1997)", "ref_id": "BIBREF4" }, { "start": 1094, "end": 1111, "text": "(Yangarber, 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9 \" ! $ # & % ( ' ) \u00a9 0 \u00a3 2 1 \u00a6 3 \u00a7 \u00a6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": ", where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "4\u00a9 5 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "represents the cardinality of the documents where the relation is identified, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a3 2 1 \u00a9 3 \u00a7 \u00a9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "represents support associated with the relation .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a3 2 1 \u00a9 3 \u00a7 \u00a9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "is defined as the sum of the relevance of each document in \u00a9 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a3 2 1 \u00a9 3 \u00a7 \u00a9 6 7 9 8 A @ \u00a5 B \u00a5 D C \u00a7 F \u00cb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": ". 
The relevance of a document that contains a topic-significant relation can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a5 B \u00a5 G C \u00a7 F \u00cb H P I R Q T S @ D U \u00a7 I R Q V \u00a6 \u00a5 \u00a1 \u00a7 \u00a9 \u1e84", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": ", where X \u00a3 represents the topic signature of the subtopic 4 . The accuracy of the relation, then, is given by: . We use a different learner for each subtopic in order to train simultaneously on each iteration. (The calculation of topic signatures continues to iterate until there are no more relations that can be added to the overall topic signature.) When the precision of a relation to a subtopic \u00a3 \u00a1 is computed, it takes into account the negative evidence of its relevance to any other subtopic", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "V \u00a6 \u00a5 \u00a1 \u00a7 \u00a9 Y \u00a7 7 9 8 A @ \u00a5 B \u00a5 G C b a d c \u00a7 F \u00cb e Q 7 g f i h p \u00a1 \u00a5 B \u00a5 G C b a r q \u00a7 F \u00cb \u1e84 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a3 \u00a1 t s \u00a3 f . If V \u00a5 \u00a1 \u00a7 \u00a6 t u w v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": ", the relation is not included in the topic signature, where relations are ranked by the score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "\u00a3 \u00a1 A \u00a3 \u00a6 \u00a5 \u00a7 \u00a9 x V \u00a6 \u00a5 \u00a1 \u00a7 \u00a9 # C \u00a3 \u00a2 y \u00a7 \u00a3 2 1 \u00a9 3 \u00a7 \u00a9 \u1e84 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "Representing topics in terms of relevant concepts and relations is important for the processing of questions asked within the context of a given topic. For interactive Q/A, however, the ideal topic-structured representation would be in the form of questionanswer pairs (QUABs) that model the individual segments of the scenario. We have currently created two sets of QUABs: a handcrafted set and an automatically-generated set. For the manuallycreated set of QUABs, 4 linguists manually generated 3210 question-answer pairs for each of the 8 dialogue scenarios considered in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "In a separate effort, we devised a process for automatically populating the QUAB for each scenario. In order to generate question-answer pairs for each subtopic, we first identified relevant text passages in the document collection to serve as \"answers\" and then generated individual questions that could be ancontains only the seed relation. Additional relations can be added with each iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "swered by each answer passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach 2:", "sec_num": null }, { "text": "We defined an answer passage as a contiguous sequence of sentences with a positive answer rank and a passage price of u 4. 
To select answer passages for each subtopic \u00a3 \u00a1 , we calculate an answer rank,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "\u00a7 7 c \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": ", that sums across the scores of each relation from the topic signature that is identified in the same text window. Initially, the text window is set to one sentence. (If the sentence is part of a quote, however, the text window is immediately expanded to encompass the entire sentence that contains the quote.) Each passage with \u00a7 v is then considered to be a candidate answer passage. The text window of each candidate answer passage is then expanded to include the following sentence. If the answer rank does not increase with the addition of the succeeding sentence, then the price 3of the candidate answer passage is incremented by 1, otherwise it is decremented by 1. : \"in the early 1970s\"; Category: TIME E2: \"Egyptian President Anwar Sadat\"; Category: PERSON E3: \"Egypt\"; Category: COUNTRY E4: \"BW stockpile\"; Category: UNKNOWN 4 entities 2 predicates: P1=\"validate\"; P2=\"has\" PROCESSING Reference 1 (definitional) Figure 4 : Associating Questions with Answers.", "cite_spans": [], "ref_spans": [ { "start": 924, "end": 932, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "Question Generation: In order to automatically generate questions from answer passages, we considered the following two problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "Problem 1: Every word in an answer passage can refer to an entity, a relation, or an event. In order for question generation be successful, we must determine whether a particular reference is \"interesting\" enough to the scenario such that it deserves to be mentioned in a topic-relevant question. For example, Figure 4 illustrates an answer that includes two predicates and four entities. In this case, four types of reference are used to associate these linguistic objects with other related objects: (a) definitional reference, used to link entity (E1) \"Anwar Sadat\" to a corresponding attribute \"Egyptian President\", (b) metonymic reference, since (E1) can be coerced into (E2), (c) part-whole reference, since \"BW stockpiles\"(E4) necessarily imply the existence of a \"BW program\"(E5), and ( enemies would never use BW because they are aware that the Predicates: P'1=state; P'2 = never use; P3 = be aware;", "cite_spans": [], "ref_spans": [ { "start": 310, "end": 318, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "Causality: P'2(BW) = NON\u2212NEGATIVE RESULT(P5); P'5 = \"obstacle\" Reference: P'1 P'6 = view QUESTIONS Does Egypt view the possesion of BW as an obstacle? Does Egypt view the possesion of BW as a deterrent? P'4 = have P\"4 = \"the possesion\" P\"4 = \"the possesion\" = nominalization(P'4) = EFFECT(P'2(BW)) PROCESSING specialization Pattern: Does Egypt P'6 P\"4(BW) as a P'5? 
Figure 5 : Questions for Implied Causal Relations.", "cite_spans": [], "ref_spans": [ { "start": 366, "end": 374, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "Problem 2: We have found that the identification of the association between a candidate answer and a question depends on (a) the recognition of predicates and entities based on both the output of a named entity recognizer and a semantic parser (Surdeanu et al., 2003) and their structuring into predicate-argument frames, (b) the resolution of reference (addressed in Problem 1), (c) the recognition of implicit relations between predications stated in the answer. Some of these implicit relations are referential, as is the relation between predicates ) and (2) the result, which eliminates the semantic effect of the negative polarity item never by implying the predicate 3 \u00a9 , obstacle. The questions that are generated are based on question patterns associated with causal relations and therefore allow different degrees for the specificity of the resultative, i.e obstacle or deterrent.", "cite_spans": [ { "start": 244, "end": 267, "text": "(Surdeanu et al., 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "We generated several questions for each answer passage. Questions were generated based on patterns that were acquired to model interrogations using relations between predicates and their arguments. Such interrogations are based on (1) associations between the answer type (e.g. DATE) and the question stem (e.g. \"when\" and (2) the relation between predicates, question stem and the words that determine the answer type (Narayanan and Harabagiu, 2004) . In order to obtain these predicate-argument patterns, we used 30% (approximately 1500 questions) of the handcrafted questionanswer pairs, selected at random from each of the 8 dialogue scenarios. As Figures 4 and 5 illustrate, we used patterns based on (a) embedded predicates and (b) causal or counterfactual predicates.", "cite_spans": [ { "start": 419, "end": 450, "text": "(Narayanan and Harabagiu, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 652, "end": 667, "text": "Figures 4 and 5", "ref_id": null } ], "eq_spans": [], "section": "Answer Identification:", "sec_num": null }, { "text": "As illustrated in Figure 1 , the main idea of managing dialogues in which interactions with the Q/A system occur is based on the notion of predictions, i.e. by proposing to the user a small set of questions that tackle the same subject as her question (as illustrated in Table 1 ). The advantage is that the user can follow-up with one of the pre-processed questions, that has a correct answer and resides in one of the QUABs. This enhances the effectiveness of the dialogue. It also may impact on the efficiency, i.e. the number of questions being asked if the QUABs have good coverage of the subject areas of the scenario. 
Moreover, complex questions, that generally are not processed with high accuracy by current state-ofthe-art Q/A systems, are associated with predictive questions that represent decompositions based on similarities between predicates and arguments of the original question and the predicted questions.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 271, "end": 278, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "The selection of the questions from the QUABs that are proposed for each user question is based on a similarity-metric that ranks the QUAB questions. To compute the similarity metric, we have experimented with seven different metrics. The first four metrics were introduced in (Lytinen and Tomuro, 2002) .", "cite_spans": [ { "start": 277, "end": 303, "text": "(Lytinen and Tomuro, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "Similarity Metric 1 is based on two processing steps: (a) the content words of the questions are weighted using the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "\u00a2 \u00a1 \u00a4 \u00a3 E \u00a1 measure used in In- formation Retrieval \u00a5 \u00a1 \u00a5 \u00a7 \u00a1 \u00a7 I \u00a7 \u00a6 % ( ' ) \u00a7 \u00a1 \u00a2 \u00a1 \u1e84 \u00a9 \u00a2 8 c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": ", where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "is the number of questions in the QUAB,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "E \u00a1 \u00a1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "is the number of questions containing \u00a1 and \u00a2 \u00a1 \u00a2 \u00a1 is the number of times \u00a1 appears in the question. This allows the user question and any QUAB question to be transformed into two vectors,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "! \u00a5 # \" % $ \u00a5 & $ % ' ' ' $ \u00a5 ) ( 0 and # 1 ! \u00a5 2 1 \" % $ \u00a5 2 1 & ) $ % ' ' ' $ \u00a5 2 1 4 3 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "; (b) the term vector similarity is used to compute the similarity between the user question and any question from the QUAB:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "5 ' 7 6 \u00a7 $ # 1 t \u00a7 7 \u00a1 \u00a5 c \u00a5 2 1 c \u00a2 8 \u00a7 W \u00a7 7 \u00a1 \u00a5 0 c \" & @ 9 \u00a7 7 \u00a1 \u00a5 0 1 c \" &", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "Similarity Metric 2 is based on the percent of user question terms that appear in the QUAB question. 
It is obtained by finding the intersection of the terms in the term vectors of the two questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "Similarity Metric 3 is based on semantic information available from WordNet. It involves: (a) finding the minimum path between Word-Net concepts. Given two terms :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "H \u00a7 $ 0 I Q P S R a c @ 7 \" U T q @ & \u00a9 \u00a7 \u00a1 $ f , where \u00a9 \u00a7 \u00a1 $ f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "is the path length between \u00a1 and f . (b) the semantic similarity between the user question", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "X V 1 $ 1 0 $ % ' ' ' $ 1 C 0 and the QUAB question X W 1 Y X $ X 0 $ % ' ' ' $ X G 0 to be defined as \u00a5 D A \u00a7 X a $ X b 1 G c U d T U e ! g f c U e T U d ! U d f U e , where h \u00a7 X \u00a4 i $ X b p x 7 i @ D U r qf \u00a4 s u t v x w \u1ef3 w \u00a4 i T p !", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "Similarity Metric 4 is based on the question type similarity. Instead of using the question class, determined by its stem, whenever we could recognize the answer type expected by the question, we used it for matching. As backoff only, we used a question type similarity based on a matrix akin to the one reported in (Lytinen and Tomuro, 2002) Similarity Metric 5 is based on question concepts rather than question terms. In order to translate question terms into concepts, we replaced (a) question stems (i.e. a WH-word + NP construction) with expected answer types (taken from the answer type hierarchy employed by FERRET's Q/A system) and (b) named entities with corresponding their corresponding classes. Remaining nouns and verbs were also replaced with their WordNet semantic classes, as well. Each concept was then associated with a weight: concepts derived from named entities classes were weighted heavier than concepts from answer types, which were in turn weighted heavier than concepts taken from WordNet clases. Similarity was then computed across \"matching\" concepts. 5 The resultant similarity score was based on three variables:", "cite_spans": [ { "start": 316, "end": 342, "text": "(Lytinen and Tomuro, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "= sum of the weights of all concepts matched between a user query ( QUAB questions are clustered based on their mapping to a vector of important concepts in the QUAB.The clustering was done using the K-Nearest Neighbor (KNN) method (Dudani, 1976) . Instead of measuring the similarity between the user question and each question in the QUAB, similarities are computed only between the user question and the centroid of each cluster.", "cite_spans": [ { "start": 232, "end": 246, "text": "(Dudani, 1976)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "Similarity Metric 7 was derived from the results of Similarity Metrics 5 and 6 above. 
In this case, if the QUAB question ( ) that was deemed to be most similar to a user question ( ) under Similarity Metric 5 is contained in the cluster of QUAB questions deemed to be most similar to under Similarity Metric 6, then receives a cluster adjustment score in order to boost its ranking within its QUAB cluster. We calculate the cluster adjustment score as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "\u00a2 \u00a1 A \u00a3 \u00a5 \u00a18f \u00a7 \u00a7 \u00a3 \u00c4 # \u00a7 I Q \u00a4 \u1e84 W \u00a6 \u00a7 \u00a3 A \u00a3 \u00a2 # \u00a4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": ", where \u00a4 represents the difference in rank between the centroid of the cluster and the previous rank of the QUAB question .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "In the currently-implemented version of FERRET, we used Similarity Metric 5 to automatically identify the set of 10 QUAB questions that were most similar to a user's question. These question-andanswer pairs were then returned to the user -along with answers from FERRET's automatic Q/A system -as potential continuations of the Q/A dialogue. We used the remaining 6 similarity metrics described in this section to manually assess the impact of similarity on a Q/A dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Managing Interactive Q/A Dialogues", "sec_num": "4" }, { "text": "To date, we have used FERRET to produce over 90 Q/A dialogues with human users. Figure 6 illustrates three turns from a real dialogue from a human user investigating Iran's chemical weapons prorgram. As it can be seen coherence can be established between the user's questions and the system's answers (e.g. Q3 is related to both A1 and A3) as well as between the QUABs and the user's follow-up questions (e.g. QUAB (1b) is more related to Q2 than either Q1 or A1). Coherence alone is not sufficient to analyze the quality of interactions, however. In order to better understand interactive Q/A dialogues, we have conducted three sets of experiments with human users of FERRET. In these experiments, users were allotted two hours to interact with Ferret to gather information requested by a dialogue scenario similar to the one presented in Figure 2 . In Experiment 1 (E1), 8 U.S. Navy Reserve (USNR) intelligence analysts used FERRET to research 8 different scenarios related to chemical and biological weapons. Experiment 2 and Experiment 3 considered several of the same scenarios addressed in E1: E2 included 24 mixed teams of analysts and novice users working with 2 scenarios, while E3 featured 4 USNR analysts working with 6 of the original 8 scenarios. (Details for each experiment are provided in Table 2 .) Users were also given a task to focus their research; in E1 and E3, users prepared a short report detailing their findings; in E2, users were given a list of \"challenge\" questions to answer. In E1 and E2, users had access to a total of 3210 QUAB questions that had been hand-created by developers for each the 8 dialogue scenarios. (Table 3 provides totals for each scenario.) In E3, users performed research with a version of FERRET that included no QUABs at all. 
We have evaluated FERRET by measuring efficiency, effectiveness, and user satisfaction:", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 88, "text": "Figure 6", "ref_id": "FIGREF7" }, { "start": 840, "end": 848, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 1305, "end": 1312, "text": "Table 2", "ref_id": "TABREF9" }, { "start": 1648, "end": 1656, "text": "(Table 3", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": "Efficiency FERRET's QUAB collection enabled users in our experiments to find more relevant information by asking fewer questions. When manuallycreated QUABs were available (E1 and E2), users submitted an average of 12.25 questions each session. When no QUABs were available (E3), users entered a total of 44.5 questions per session. Table 4 lists the number of QUAB question-answer pairs selected by users and the number of user questions entered by users during the 8 scenarios considered in E1. In E2, freed from the task of writing a research report, users asked significantly (p 0.05) fewer questions and selected fewer QUABs than they did in E1. (See Table 5 ).", "cite_spans": [], "ref_spans": [ { "start": 333, "end": 340, "text": "Table 4", "ref_id": "TABREF13" }, { "start": 656, "end": 663, "text": "Table 5", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": "Effectiveness QUAB question-answer pairs also improved the overall accuracy of the answers returned by FERRET. To measure the effectiveness of a Q/A dialogue, human annotators were used to perform a post-hoc analysis of how relevant the QUAB pairs returned by FERRET were to each question entered by a user: each QUAB pair returned was graded as \"relevant\" or \"irrelevant\" to a user question in a forced-choice task. Aggregate relevance scores were used to calculate (1) the percentage of relevant QUAB pairs returned and (2) the mean reciprocal rank (MRR) for each user question. MRR is defined as`C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": "7 \u00a1 p c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": ", whree \u00a1 is the lowest rank of any relevant answer for the \u00a3 \u00a1 \u00a3 \u00a2 user query 7 . Table 6 describes the performance of FERRET when each of the 7 similarity measures presented in Section 4 are used to return QUAB pairs in response to a query. When only answers from FERRET's automatic Q/A system were available to users, only 15.7% of system responses were deemed to be relevant to a user's query. In contrast, when manually-generated QUAB pairs were introduced, as high as 84% of the system's responses were deemed to be relevant. The results listed in Table 6 show that the best metric is Similarity Metric 5. Thse results suggest that the selection of relevant questions depends on sophisticated similarity measures that rely on conceptual hierarchies and semantic recognizers.", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 6", "ref_id": "TABREF16" }, { "start": 554, "end": 561, "text": "Table 6", "ref_id": "TABREF16" } ], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": "We evaluated the quality of each of the four sets of automatically-generated QUABs in a similar fashion. 
For each question submitted by a user in E1, E2, and E3, we collected the top 5 QUAB question-answer pairs (as determined by Similarity Metric 5) that FERRET returned. As with the manually-generated QUABs, the automatically- User Satisfaction Users were consistently satisfied with their interactions with FERRET. In all three experiments, respondents claimed that they found that FERRET (1) gave meaningful answers, (2) provided useful suggestions, (3) helped answer specific questions, and (4) promoted their general understanding of the issues considered in the scenario. Complete results of this study are presented in Table 8 ", "cite_spans": [], "ref_spans": [ { "start": 728, "end": 735, "text": "Table 8", "ref_id": "TABREF19" } ], "eq_spans": [], "section": "Experiments with Interactive Q/A Dialogues", "sec_num": "5" }, { "text": "We believe that the quality of Q/A interactions depends on the modeling of scenario topics. An ideal model is provided by question-answer databases (QUABs) that are created off-line and then used to 8 Evaluation scale: 1-does not describe the system, 5completely describes the system make suggestions to a user of potential relevant continuations of a discourse. In this paper, we have presented FERRET, an interactive Q/A system which makes use of a novel Q/A architecture that integrates QUAB question-answer pairs into the processing of questions. Experiments with FERRET have shown that, in addition to being rapidly adopted by users as valid suggestions, the incorporation of QUABs into Q/A can greatly improve the overall accuracy of an interactive Q/A dialogue.", "cite_spans": [ { "start": 199, "end": 200, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "AQUAINT is an acronym for Advanced QUestion Answering for INTelligence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We have employed the same representation of predicateargument structures as those encoded in PropBank. We use a semantic parser (described in(Surdeanu et al., 2003)) that recognizes predicate-argument structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Center for Non-Proliferation Studies at the Monterrey Institute of International Studies distributes collections of print and online documents on weapons of mass destruction. 
More information at: http://cns.miis.edu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Initially, 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the case of ambiguous nouns and verbs associated with multiple WordNet classes, all possible classes for a term were considered in matching.6 We set d = 0.4 and # = 0.1 in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We chose MRR as our scoring metric because it reflects the fact that a user is most likely to examine the first few answers from any system, but that all correct answers returned by the system have some value because users will sometimes examine a very large list of query results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The distance-weighted k-nearest-neighbour rule", "authors": [ { "first": "S", "middle": [], "last": "Dudani", "suffix": "" } ], "year": 1976, "venue": "IEEE Transactions on Systems, Man, and Cybernetics, SMC", "volume": "6", "issue": "4", "pages": "325--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Dudani. 1976. The distance-weighted k-nearest-neighbour rule. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(4):325-327.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Answer Mining by Combining Extraction Techniques with Abductive Reasoning", "authors": [ { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "C", "middle": [], "last": "Clark", "suffix": "" }, { "first": "M", "middle": [], "last": "Bowden", "suffix": "" }, { "first": "J", "middle": [], "last": "Williams", "suffix": "" }, { "first": "J", "middle": [], "last": "Bensley", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Twelfth Text Retrieval Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, J. Williams, and J. Bensley. 2003. Answer Mining by Combining Ex- traction Techniques with Abductive Reasoning. In Proceed- ings of the Twelfth Text Retrieval Conference (TREC 2003).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Incremental Topic Representations", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th COLING Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu. 2004. Incremental Topic Representations. In Proceedings of the 20th COLING Conference, Geneva, Switzerland.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multi-Paragraph Segmentation of Expository Text", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti Hearst. 1994. Multi-Paragraph Segmentation of Exposi- tory Text. 
In Proceedings of the 32nd Meeting of the Associ- ation for Computational Linguistics, pages 9-16.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Recognizing Referential Links: An Information Extraction Perspective", "authors": [ { "first": "Megumi", "middle": [], "last": "Kameyama", "suffix": "" } ], "year": 1997, "venue": "Workshop of Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, (ACL-97/EACL-97)", "volume": "", "issue": "", "pages": "46--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Megumi Kameyama. 1997. Recognizing Referential Links: An Information Extraction Perspective. In Workshop of Opera- tional Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, (ACL-97/EACL-97), pages 46-53.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Automated Acquisition of Topic Signatures for Text Summarization", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th COLING Conference", "volume": "", "issue": "", "pages": "495--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Eduard Hovy. 2000. The Automated Acqui- sition of Topic Signatures for Text Summarization. In Pro- ceedings of the 18th COLING Conference, pages 495-501.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Use of Question Types to Match Questions in FAQFinder", "authors": [ { "first": "S", "middle": [], "last": "Lytinen", "suffix": "" }, { "first": "N", "middle": [], "last": "Tomuro", "suffix": "" } ], "year": 2002, "venue": "Papers from the 2002 AAAI Spring Symposium on Mining Answers from Texts and Knowledge Bases", "volume": "", "issue": "", "pages": "46--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Lytinen and N. Tomuro. 2002. The Use of Question Types to Match Questions in FAQFinder. In Papers from the 2002 AAAI Spring Symposium on Mining Answers from Texts and Knowledge Bases, pages 46-53.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Question Answering Based on Semantic Structures", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th COLING Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srini Narayanan and Sanda Harabagiu. 2004. Question An- swering Based on Semantic Structures. In Proceedings of the 20th COLING Conference, Geneva, Switzerland.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Infratructure for open-domanin information extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Sanda", "middle": [ "M" ], "last": "Harabagiu", "suffix": "" } ], "year": 2002, "venue": "Conference for Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu and Sanda M. Harabagiu. 2002. Infratructure for open-domanin information extraction. 
In Conference for Human Language Technology (HLT-2002).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using predicate-argument structures for information extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Sanda", "middle": [ "M" ], "last": "Harabagiu", "suffix": "" }, { "first": "John", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Aarseth", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Sanda M. Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for in- formation extraction. In ACL, pages 8-15.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic Acquisition of Domain Knowledge for Information Extraction", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Pasi", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "Silja", "middle": [], "last": "Huttunen", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th COLING Conference", "volume": "", "issue": "", "pages": "940--946", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic Acquisition of Domain Knowl- edge for Information Extraction. In Proceedings of the 18th COLING Conference, pages 940-946.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Counter-Training in Discovery of Semantic Patterns", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41th Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "343--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber. 2003. Counter-Training in Discovery of Semantic Patterns. In Proceedings of the 41th Meeting of the Association for Computational Linguistics, pages 343-350.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "FERRET -A Predictive Interactive Question-Answering Architecture.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Example of a Dialogue Scenario.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Example of two topic signatures acquired for the scenario illustrated inFigure 2.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "Figure 3illustrates the topic signatures resulting for the scenario illustrated inFigure 2.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "4. A special case of implicit relations are the causal relations.Figure5 illustrates an answer where a causal relation exists and is marked by the cue phrase because. Predicates -like those inFigure 5can be phrasal (like are the ones that ultimately de-termine the selection of the answer. The predi", "uris": null, "num": null, "type_str": "figure" }, "FIGREF7": { "text": "A sample interactive Q/A dialogue.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "num": null, "text": "", "html": null, "type_str": "table", "content": "" }, "TABREF2": { "num": null, "text": "terrorist Activity in Egypt increases, the Commander of the United States Army believes a better understanding of Egypt's Military capabilities is needed. 
Egypt's biological weapons database needs to be updated to correspond with the Commander's request. Focus your investigation on Egypt's access to old technology, assistance received from the Soviet Union for development of their pharmaceutical infrastructure, production of toxins and BW agents, stockpiles, exportation of these materials and development technology to Middle Eastern countries, and the effect that this information will have on the United States and Coalition Forces in the Middle East.", "html": null, "type_str": "table", "content": "
2) Government: Type of, Leadership, Relations
11) Transportation Infrastructure: Kilometers of Road, Rail, Air Runways, Harbors and Ports, Rivers
12) Beliefs: Ideology, Goals, Intentions
15) Weapons: Chemical, Biological, Materials, Stockpiles, Facilities, Access
Please incorporate any other related information to your report.
" }, "TABREF5": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
The text window of each candidate answer passage continues to expand in this way until the price limit is reached. Before the ranked list of candidate answers can be considered by the Question Generation module, answer passages with a positive price are stripped of their trailing sentences (one for each unit of price).
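As a rough sketch (under the assumptions stated in the comments, not FERRET's actual implementation), the expansion procedure described above can be written as:

```python
# Illustrative sketch of the answer-passage expansion described above.
# rank_of is assumed to sum the topic-signature relation scores found in
# the window, and the price limit of 4 follows the text; the names and
# the tie-breaking details are assumptions, not FERRET's implementation.
from typing import Callable, List

def select_passage(sentences: List[str], start: int,
                   rank_of: Callable[[List[str]], float],
                   price_limit: int = 4) -> List[str]:
    window = [sentences[start]]          # initial window: one sentence
    rank, price = rank_of(window), 0
    end = start + 1
    while price < price_limit and end < len(sentences):
        candidate = window + [sentences[end]]
        new_rank = rank_of(candidate)
        # a sentence that does not raise the answer rank raises the price;
        # a sentence that does raise it lowers the price again
        price += 1 if new_rank <= rank else -1
        window, rank, end = candidate, new_rank, end + 1
    # passages with a positive price lose that many trailing sentences
    return window[:-price] if price > 0 else window
```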
ANSWER
In the early 1970s, Egyptian President Anwar Sadat
validates that Egypt has a BW stockpile.
E1
Predicate\u2212Argument Structures
P1: validate
arguments: A0 = E2; Answer Type: Definition (Egyptian President X)
A1 = P2: have
arguments: A0 = E3
A1 = E4; E5: BW program
ArgM-TMP: E1; Answer Type: Time
Reference 4 (relational)
P3: admit
QUESTIONS
Definition Pattern: Who is X?
Q1: Who is Anwar Sadat?
Pattern: When did E3 P1 to P2 E4?
Q2: When did Egypt validate to having BW stockpiles?
Pattern: When did E3 P3 to P2 E4?
Q3: When did Egypt admit to having BW stockpiles?
Pattern: When did E3 P3 to P2 E5?
Q4: When did Egypt admit to having a BW program?
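As a toy illustration of how such patterns could be instantiated (the slot names follow the figure, but the direct string substitution and hand-supplied word forms are simplifications, not FERRET's generator):

```python
# Toy sketch: fill a question pattern with recognized entities/predicates.
# Slot values (including the inflected form "having") are supplied by hand
# here; the real generator would handle morphology and answer types.
def fill_pattern(pattern: str, slots: dict) -> str:
    question = pattern
    for slot, text in slots.items():
        question = question.replace(slot, text)
    return question

slots = {"E3": "Egypt", "E4": "BW stockpiles", "E5": "a BW program",
         "P2": "having", "P3": "admit"}
print(fill_pattern("When did E3 P3 to P2 E4?", slots))
# -> When did Egypt admit to having BW stockpiles?
```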
" }, "TABREF7": { "num": null, "text": "(1b) Has the plant at Qazvin been linked to CW production? (1c) What CW does Iran produce? (1a) How did Iran start its CW program? Although Iran is making a concerted effort to attain an independent production capability for all aspects of chemical weapons program, it remains dependent on foreign sources for chemical warfare\u2212related technologies.", "html": null, "type_str": "table", "content": "
QUABs:
Answer (A1):
QUABs:
(2a) What factories in Iran could produce CW?
(2b) Where are Iran's stockpiles of CW?
(2c) Where has Iran bought equipment to produce CW?
Answer (A2):
According to several sources, Iran's primary suspected chemical weapons production facility is located in the city of Damghan.
Q3: What is Iran's goal for its CW program?
QUABs:
(3a) What motivated Iran to expand its chemical weapons program?
(3b) How do CW figure into Iran's long-term strategic plan?
(3c) What are Iran's future CW plans?
Answer (A3):
In their pursuit of regional hegemony, Iran and Iraq probably regard CW weapons and missiles as necessary to support their political and military objectives. Possession of chemical weapons would likely lead to increased intimidation of their Gulf neighbors, as well as increased willingness to confront the United States.
" }, "TABREF9": { "num": null, "text": "Experiment details", "html": null, "type_str": "table", "content": "" }, "TABREF11": { "num": null, "text": "QUAB distribution over scenarios", "html": null, "type_str": "table", "content": "
" }, "TABREF13": { "num": null, "text": "Efficiency of Dialogues in Experiment 1", "html": null, "type_str": "table", "content": "
Country | n | QUAB (avg.) | User Q (avg.) | Total (avg.)
Russia | 24 | 8.2 | 5.5 | 13.7
Egypt | 24 | 10.8 | 7.6 | 18.4
TOTAL (E2) | 48 | 9.50 | 6.55 | 16.05
" }, "TABREF14": { "num": null, "text": "Efficiency of Dialogues in Experiment 2", "html": null, "type_str": "table", "content": "" }, "TABREF16": { "num": null, "text": "Effectiveness of dialogs generated pairs were submitted to human assessors who annotated each as \"relevant\" or irrelevant to the user's query. Aggregate scores are presented in Table 7.", "html": null, "type_str": "table", "content": "
Approach | Egypt: % of Top 5 Responses Rel. to User Q | Egypt: MRR | Russia: % of Top 5 Responses Rel. to User Q | Russia: MRR
Approach 1 | 40.01% | 0.295 | 60.25% | 0.310
Approach 2 | 36.00% | 0.243 | 72.00% | 0.475
Approach 3 | 44.62% | 0.271 | 60.00% | 0.297
Approach 4 | 68.05% | 0.510 | 68.00% | 0.406
" }, "TABREF17": { "num": null, "text": "Quality of QUABs acquired automatically", "html": null, "type_str": "table", "content": "" }, "TABREF19": { "num": null, "text": "User Satisfaction Survey Results", "html": null, "type_str": "table", "content": "
" } } } }