{ "paper_id": "N03-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:07:11.119143Z" }, "title": "An Analysis of Clarification Dialogue for Question Answering", "authors": [ { "first": "Marco", "middle": [], "last": "De Boni", "suffix": "", "affiliation": {}, "email": "mdeboni@cs.york.ac.uk" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "", "affiliation": {}, "email": "suresh@cs.york.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We examine clarification dialogue, a mechanism for refining user questions with follow-up questions, in the context of open domain Question Answering systems. We develop an algorithm for clarification dialogue recognition through the analysis of collected data on clarification dialogues and examine the importance of clarification dialogue recognition for question answering. The algorithm is evaluated and shown to successfully recognize the occurrence of clarification dialogue in the majority of cases and to simplify the task of answer retrieval.", "pdf_parse": { "paper_id": "N03-1007", "_pdf_hash": "", "abstract": [ { "text": "We examine clarification dialogue, a mechanism for refining user questions with follow-up questions, in the context of open domain Question Answering systems. We develop an algorithm for clarification dialogue recognition through the analysis of collected data on clarification dialogues and examine the importance of clarification dialogue recognition for question answering. The algorithm is evaluated and shown to successfully recognize the occurrence of clarification dialogue in the majority of cases and to simplify the task of answer retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question Answering Systems aim to determine an answer to a question by searching for a response in a collection of documents (see Voorhees 2002 for an overview of current systems). In order to achieve this (see for example Harabagiu et al. 2002) , systems narrow down the search by using information retrieval techniques to select a subset of documents, or paragraphs within documents, containing keywords from the question and a concept which corresponds to the correct question type (e.g. a question starting with the word \"Who?\" would require an answer containing a person). The exact answer sentence is then sought by either attempting to unify the answer semantically with the question, through some kind of logical transformation (e.g. Moldovan and Rus 2001) or by some form of pattern matching (e.g. Soubbotin 2002; Harabagiu et al. 1999) .", "cite_spans": [ { "start": 130, "end": 143, "text": "Voorhees 2002", "ref_id": "BIBREF21" }, { "start": 223, "end": 245, "text": "Harabagiu et al. 2002)", "ref_id": "BIBREF10" }, { "start": 742, "end": 764, "text": "Moldovan and Rus 2001)", "ref_id": "BIBREF17" }, { "start": 807, "end": 822, "text": "Soubbotin 2002;", "ref_id": "BIBREF19" }, { "start": 823, "end": 845, "text": "Harabagiu et al. 1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Clarification dialogues in Question Answering", "sec_num": "1" }, { "text": "Often, though, a single question is not enough to meet user's goals and an elaboration or clarification dialogue is required, i.e. 
a dialogue with the user which would enable the answering system to refine its understanding of the questioner's needs (for reasons of space we shall not investigate here the difference between elaboration dialogues, clarification dialogues and coherent topical subdialogues and we shall hence refer to this type of dialogue simply as \"clarification dialogue\", noting that this may not be entirely satisfactory from a theoretical linguistic point of view). While a number of researchers have looked at clarification dialogue from a theoretical point of view (e.g. Ginzburg 1998; Ginzburg and Sag 2000; van Beek at al. 1993) , or from the point of view of task oriented dialogue within a narrow domain (e.g. Ardissono and Sestero 1996) , we are not aware of any work on clarification dialogue for open domain question answering systems such as the ones presented at the TREC workshops, apart from the experiments carried out for the (subsequently abandoned) \"context\" task in the TREC-10 QA workshop (Voorhees 2002; Harabagiu et al. 2002 ). Here we seek to partially address this problem by looking at some particular aspect of clarification dialogues in the context of open domain question answering. In particular, we examine the problem of recognizing that a clarification dialogue is occurring, i.e. how to recognize that the current question under consideration is part of a previous series (i.e. clarifying previous questions) or the start of a new series; we then show how the recognition that a clarification dialogue is occurring can simplify the problem of answer retrieval.", "cite_spans": [ { "start": 695, "end": 709, "text": "Ginzburg 1998;", "ref_id": "BIBREF6" }, { "start": 710, "end": 732, "text": "Ginzburg and Sag 2000;", "ref_id": "BIBREF7" }, { "start": 733, "end": 754, "text": "van Beek at al. 1993)", "ref_id": "BIBREF20" }, { "start": 838, "end": 865, "text": "Ardissono and Sestero 1996)", "ref_id": "BIBREF0" }, { "start": 1130, "end": 1145, "text": "(Voorhees 2002;", "ref_id": "BIBREF21" }, { "start": 1146, "end": 1167, "text": "Harabagiu et al. 2002", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Clarification dialogues in Question Answering", "sec_num": "1" }, { "text": "The TREC-2001 QA track included a \"context\" task which aimed at testing systems' ability to track context through a series of questions (Voorhees 2002) . In other words, systems were required to respond correctly to a kind of clarification dialogue in which a full understanding of questions depended on an understanding of previous questions. In order to test the ability to answer such questions correctly, a total of 42 questions were prepared by NIST staff, divided into 10 series of related question sentences which therefore constituted a type of clarification dialogue; the sentences varied in length between 3 and 8 questions, with an average of 4 questions per dialogue. These clarification dialogues were however presented to the question answering systems already classified and hence systems did not need to recognize that clarification was actually taking place. 
Consequently systems that simply looked for an answer in the subset of documents retrieved for the first question in a series performed well without any understanding of the fact that the questions constituted a coherent series.", "cite_spans": [ { "start": 136, "end": 151, "text": "(Voorhees 2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The TREC Context Experiments", "sec_num": "2" }, { "text": "In a more realistic approach, systems would not be informed in advance of the start and end of a series of clarification questions and would not be able to use this information to limit the subset of documents in which an answer is to be sought.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TREC Context Experiments", "sec_num": "2" }, { "text": "We manually analysed the TREC context question collection in order to determine what features could be used to determine the start and end of a question series, with the following conclusions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of the TREC context questions", "sec_num": "3" }, { "text": "\u2022 Pronouns and possessive adjectives: questions such as \"When was it born?\", which followed \"What was the first transgenic mammal?\", were referring to some previously mentioned object through a pronoun (\"it\"). The use of personal pronouns (\"he\", \"it\", \u2026) and possessive adjectives (\"his\", \"her\",\u2026) which did not have any referent in the question under consideration was therefore considered an indication of a clarification question.. \u2022 Absence of verbs: questions such as \"On what body of water?\" clearly referred to some previous question or answer. \u2022 Repetition of proper nouns: the question series starting with \"What type of vessel was the modern Varyag?\" had a follow-up question \"How long was the Varyag?\", where the repetition of the proper noun indicates that the same subject matter is under investigation. \u2022 Importance of semantic relations: the first question series started with the question \"Which museum in Florence was damaged by a major bomb explosion?\"; follow-up questions included \"How many people were killed?\" and \"How much explosive was used?\", where there is a clear semantic relation between the \"explosion\" of the initial question and the \"killing\" and \"explosive\" of the following questions. Questions belonging to a series were \"about\" the same subject, and this aboutness could be seen in the use of semantically related words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of the TREC context questions", "sec_num": "3" }, { "text": "It was therefore speculated that an algorithm which made use of these features would successfully recognize the occurrence of clarification dialogue. Given that the only available data was the collection of \"context\" questions used in TREC-10, it was felt necessary to collect further data in order to test our algorithm rigorously. This was necessary both because of the small number of questions in the TREC data and the fact that there was no guarantee that an algorithm built for this dataset would perform well on \"real\" user questions. A collection of 253 questions was therefore put together by asking potential users to seek information on a particular topic by asking a prototype question answering system a series of questions, with \"cue\" questions derived from the TREC question collection given as starting points for the dialogues. 
These questions made up 24 clarification dialogues, varying in length from 3 questions to 23, with an average length of 12 questions (the data is available from the main author upon request). The differences between the TREC \"context\" collection and the new collection are summarized as follows: the TREC \"context\" collection comprises 42 questions in 10 series of between 3 and 8 questions (an average of 4 questions per series), while the new collection comprises 253 questions in 24 dialogues of between 3 and 23 questions (an average of 12 questions per dialogue). The questions were recorded and manually tagged to recognize the occurrence of clarification dialogue. The questions thus collected were then fed into a system implementing the algorithm, with no indication as to where a clarification dialogue occurred. The system then attempted to recognize the occurrence of a clarification dialogue. Finally, the results given by the system were compared to the manually recognized clarification dialogue tags. In particular, the algorithm was evaluated for its capacity to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments in Clarification Dialogue Recognition", "sec_num": "4" }, { "text": "\u2022 recognize a new series of questions (i.e. to tell that the current question is not a clarification of any previous question) (indicated by New in the results table)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments in Clarification Dialogue Recognition", "sec_num": "4" }, { "text": "\u2022 recognize that the current question is clarifying a previous question (indicated by Clarification in the table)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments in Clarification Dialogue Recognition", "sec_num": "4" }, { "text": "Our approach to clarification dialogue recognition looks at certain features of the question currently under consideration (e.g. pronouns and proper nouns) and compares the meaning of the current question with the meanings of previous questions to determine whether they are \"about\" the same matter. Given a question q 0 and n previously asked questions q -1 ..q -n we have a function Clarification_Question which is true if a question is considered a clarification of a previously asked question. In the light of empirical work such as (Ginzburg 1998) , which indicates that questioners do not usually refer back to questions which are very distant, we only considered the 10 most recently asked questions.", "cite_spans": [ { "start": 537, "end": 552, "text": "(Ginzburg 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Clarification Recognition Algorithm", "sec_num": "5" }, { "text": "A question is deemed to be a clarification of a previous question if: 1. There are direct references to nouns mentioned in the previous n questions through the use of pronouns (he, she, it, \u2026) or possessive adjectives (his, her, its\u2026) which have no referent in the current question. 2. The question does not contain any verbs. 3. There are explicit references to proper and common nouns mentioned in the previous n questions, i.e. repetitions which refer to an identical object; or there is a strong sentence similarity between the current question and the previously asked questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification Recognition Algorithm", "sec_num": "5" }, { "text": "In other words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification Recognition Algorithm", "sec_num": "5" }, { "text": "Clarification_Question (q 0 ,q -1 ..q -n ) is true if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification Recognition Algorithm", "sec_num": "5" }, { "text": "1. 
q 0 has pronoun and possessive adjective references to q -1 ..q -n 2. q 0 does not contain any verbs 3. q 0 has repetition of common or proper nouns in q -1 ..q -n or q 0 has a strong semantic similarity to some q \u2208 q -1 ..q -n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification Recognition Algorithm", "sec_num": "5" }, { "text": "A major part of our clarification dialogue recognition algorithm is the sentence similarity metric which looks at the similarity in meaning between the current question and previous questions. WordNet (Miller 1999; Fellbaum 1998 ), a lexical database which organizes words into synsets, sets of synonymous words, and specifies a number of relationships such as hypernym, synonym, meronym which can exist between the synsets in the lexicon, has been shown to be fruitful in the calculation of semantic similarity. One approach has been to determine similarity by calculating the length of the path or relations connecting the words which constitute sentences (see for example Green 1997 and St-Onge 1998) ; different approaches have been proposed (for an evaluation see (Budanitsky and Hirst 2001) ), either using all WordNet relations (Budanitsky and Hirst 2001) or only is-a relations (Resnik 1995; Jiang and Conrath 1997; Mihalcea and Moldovan 1999) . Miller (1999) , Harabagiu et al. (2002) and De Boni and Manandhar (2002) found WordNet glosses, considered as micro-contexts, to be useful in determining conceptual similarity. (Lee et al. 2002) have applied conceptual similarity to the Question Answering task, giving an answer A a score dependent on the number of matching terms in A and the question. Our sentence similarity measure built on these ideas, adding to the use of WordNet relations, part-of-speech information, compound noun and word frequency information. In particular, sentence similarity was considered as a function which took as arguments a sentence s 1 and a second sentence s 2 and returned a value representing the semantic relevance of s 1 in respect of s 2 in the context of knowledge B, i.e.", "cite_spans": [ { "start": 201, "end": 214, "text": "(Miller 1999;", "ref_id": "BIBREF16" }, { "start": 215, "end": 228, "text": "Fellbaum 1998", "ref_id": "BIBREF5" }, { "start": 675, "end": 689, "text": "Green 1997 and", "ref_id": "BIBREF8" }, { "start": 690, "end": 703, "text": "St-Onge 1998)", "ref_id": "BIBREF11" }, { "start": 769, "end": 796, "text": "(Budanitsky and Hirst 2001)", "ref_id": "BIBREF2" }, { "start": 835, "end": 862, "text": "(Budanitsky and Hirst 2001)", "ref_id": "BIBREF2" }, { "start": 886, "end": 899, "text": "(Resnik 1995;", "ref_id": "BIBREF18" }, { "start": 900, "end": 923, "text": "Jiang and Conrath 1997;", "ref_id": "BIBREF12" }, { "start": 924, "end": 951, "text": "Mihalcea and Moldovan 1999)", "ref_id": null }, { "start": 954, "end": 967, "text": "Miller (1999)", "ref_id": "BIBREF16" }, { "start": 970, "end": 993, "text": "Harabagiu et al. (2002)", "ref_id": "BIBREF10" }, { "start": 998, "end": 1026, "text": "De Boni and Manandhar (2002)", "ref_id": "BIBREF4" }, { "start": 1131, "end": 1148, "text": "(Lee et al. 
2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Metric", "sec_num": "6" }, { "text": "semantic-relevance(s 1 , s 2 , B) = n \u2208 \u211d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Metric", "sec_num": "6" }, { "text": "semantic-relevance(s 1 , s, B) < semantic-relevance(s 2 , s, B) represents the fact that sentence s 1 is less relevant than s 2 in respect to the sentence s and the context B. In our experiments, B was taken to be the set of semantic relations given by WordNet. Clearly, the use of a different knowledge base would give different results, depending on its completeness and correctness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Metric", "sec_num": "6" }, { "text": "In order to calculate the semantic similarity between a sentence s 1 and another sentence s 2 , s 1 and s 2 were considered as sets P and Q of word stems. The similarity between each word in the question and each word in the answer was then calculated and the sum of the closest matches gave the overall similarity. In other words, given two sets Q and P, where Q={qw 1 ,qw 2 ,\u2026,qw n } and P={pw 1 ,pw 2 ,\u2026,pw m }, the similarity between Q and P is given by 1
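To make the above concrete, the following is a minimal Python sketch of the three recognition conditions of Section 5, with plain word overlap standing in for the WordNet-based sentence similarity of Section 6. It is not the authors' implementation: the use of NLTK for tokenisation and POS tagging, the 0.5 similarity threshold, and all function names are assumptions introduced for illustration.

```python
# Minimal sketch (not the authors' system): the three clarification-recognition
# conditions, with word overlap standing in for the WordNet-based similarity.
from typing import List

import nltk  # assumes the NLTK tokenizer and POS-tagger models are installed

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}
POSSESSIVES = {"his", "her", "hers", "its", "their", "theirs"}


def tokens(text: str) -> List[str]:
    return nltk.word_tokenize(text.lower())


def content_words(text: str) -> set:
    return {t for t in tokens(text) if t.isalpha()}


def has_verb(text: str) -> bool:
    # Any verb tag (VB, VBD, VBZ, ...) counts.
    return any(tag.startswith("VB")
               for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))


def nouns(text: str) -> set:
    # Common and proper nouns (NN, NNS, NNP, NNPS).
    return {tok.lower() for tok, tag in nltk.pos_tag(nltk.word_tokenize(text))
            if tag.startswith("NN")}


def similarity(current: str, previous: str) -> float:
    # Stand-in for semantic-relevance: fraction of the current question's
    # content words that also occur in the previous question
    # (stemming, synonyms and WordNet relations omitted).
    cw, pw = content_words(current), content_words(previous)
    return len(cw & pw) / len(cw) if cw else 0.0


def is_clarification(q0: str, history: List[str], threshold: float = 0.5) -> bool:
    """Clarification_Question(q0, q-1..q-n): only the 10 most recently
    asked questions are considered, as in the paper."""
    recent = history[-10:]
    if not recent:
        return False
    # 1. Pronouns or possessive adjectives (referent resolution simplified
    #    to mere presence of such a word).
    if any(w in PRONOUNS | POSSESSIVES for w in tokens(q0)):
        return True
    # 2. No verb in the question (e.g. "On what body of water?").
    if not has_verb(q0):
        return True
    # 3. Repeated nouns, or strong similarity to a recent question.
    return any(nouns(q0) & nouns(prev) or similarity(q0, prev) >= threshold
               for prev in recent)


# Example from the paper's data: noun repetition ("Varyag") triggers condition 3.
# is_clarification("How long was the Varyag?",
#                  ["What type of vessel was the modern Varyag?"])  # -> True
```

A fuller implementation would replace similarity() with the WordNet-based semantic-relevance function described above and would check that a pronoun genuinely lacks a referent within the current question before treating it as a back-reference.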