{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:35.451190Z" }, "title": "Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System", "authors": [ { "first": "Yuya", "middle": [], "last": "Nakano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "postCode": "8916-5, 6300192", "settlement": "Takayama, Ikoma, Nara", "country": "Japan" } }, "email": "" }, { "first": "Seiya", "middle": [], "last": "Kawano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "postCode": "8916-5, 6300192", "settlement": "Takayama, Ikoma, Nara", "country": "Japan" } }, "email": "" }, { "first": "Koichiro", "middle": [], "last": "Yoshino", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "postCode": "8916-5, 6300192", "settlement": "Takayama, Ikoma, Nara", "country": "Japan" } }, "email": "" }, { "first": "Katsuhito", "middle": [], "last": "Sudoh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "postCode": "8916-5, 6300192", "settlement": "Takayama, Ikoma, Nara", "country": "Japan" } }, "email": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "postCode": "8916-5, 6300192", "settlement": "Takayama, Ikoma, Nara", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. 
Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. A clarifying question generation method based on a case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question answering (QA) is a conventional task of natural language processing to provide answers for given user questions. The advance of neural network-based QA systems has led to a variety of benchmark datasets of the QA task (Rajpurkar et al., 2016; . These benchmarks define the problem of QA as predicting a corresponding phrase (span) in documents to a given question when the system has both questions and target documents.", "cite_spans": [ { "start": 228, "end": 252, "text": "(Rajpurkar et al., 2016;", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most QA tasks defined in existing benchmark QA datasets assume that the given questions have enough information for answering. However, real questions given by users are often ambiguous because users frequently forget to mention important terms or may hesitate. It is thus not always easy to derive clear answers for such ambiguous user questions. For example, when a user says, \"What is the masterpiece drawn by Leonardo da Vinci?\", the system cannot determine an answer because Leonardo da Vinci created several notable masterpieces ( Figure 1 ; ambiguous Q). Taylor (Taylor, 1962) defined four levels of user states in information search.", "cite_spans": [ { "start": 570, "end": 584, "text": "(Taylor, 1962)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 538, "end": 546, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Q1 The actual, but unexpressed request \u2022 Q2 The conscious, within-brain description of the request \u2022 Q3 The formal statement of the request \u2022 Q4 The request as presented to the dialogue agent Most existing QA systems target Q3 or Q4; however, systems are also required to answer questions categorized into Q2. In other words, user questions do not always contain sufficient information for finding the answer; however, systems can fill in the gap by asking users back directly (Small et al., 2003; Bertomeu et al., 2006; Kato et al., 2006; Aliannejadi et al., 2020) . SQuAD 2.0 (Rajpurkar et al., 2018) defined \"unanswerable questions\" in their dataset; however, our problem definition is that the system has potential answers but does not have enough information to reach them. Using clarifying questions is a common method in conversational search (Radlinski and Craswell, 2017; Trippas et al., 2018; Qu et al., 2020) ; it ascertains the user's retrieval intent with questions if the system cannot capture this from the initial request. Thus, the system can get additional information to the initial request using a clarifying question to make the user's intent clearer. In the previous example, the system can ask the user, \"Which museum displays this masterpiece?\" or \"What is the motif?\" to disambiguate possible answers to the given question ( Figure 1 ; clarifying Q1 and Q2). 
Some existing work tackled this problem on a QA system using question paraphrasing (Otsuka et al., 2019) and building ambiguous question answering datasets (Min et al., 2020) . However, it is not easy to build a dataset that covers any variation of ambiguous questions because of the diverse variety of ambiguity in questions ( Figure 1 ; Problem 1). Moreover, even if we can define the variation of ambiguity; it is still challenging to find appropriate clarifying questions for the disambiguation to shape the system answers ( Figure 1 ; Problem 2).", "cite_spans": [ { "start": 480, "end": 500, "text": "(Small et al., 2003;", "ref_id": "BIBREF24" }, { "start": 501, "end": 523, "text": "Bertomeu et al., 2006;", "ref_id": "BIBREF2" }, { "start": 524, "end": 542, "text": "Kato et al., 2006;", "ref_id": "BIBREF9" }, { "start": 543, "end": 568, "text": "Aliannejadi et al., 2020)", "ref_id": "BIBREF0" }, { "start": 581, "end": 605, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF19" }, { "start": 852, "end": 882, "text": "(Radlinski and Craswell, 2017;", "ref_id": "BIBREF18" }, { "start": 883, "end": 904, "text": "Trippas et al., 2018;", "ref_id": "BIBREF27" }, { "start": 905, "end": 921, "text": "Qu et al., 2020)", "ref_id": "BIBREF17" }, { "start": 1469, "end": 1490, "text": "(Otsuka et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1542, "end": 1560, "text": "(Min et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1352, "end": 1360, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1714, "end": 1722, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1915, "end": 1923, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sentence structures have an essential role in clarifying the meaning because we control the sentence clarity by modifiers in syntax. This indicates that the sentence generation system can also control sentences' clarity by focusing on sentence structures. Based on this idea, in this work, we propose a pseudo ambiguous question generation method for covering variations of the ambiguous question, which are derived from clear questions collected in existing QA datasets ( Figure 1 ; Solution 1). The proposed method focuses on the syntax structures of question sentences to add ambiguity by eliminating some parts while considering grammatical roles from syntax point of view. We also propose a clarifying question generation method based on the case frame, which uses the syntax and semantic information of ambiguous questions (Figure 1 ; Solution 2). The clarifying question generation makes it possible to disambiguate the user's meaning by interacting with the user directly to improve the QA system performance.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 481, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 829, "end": 838, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conducted two experiments to investigate the quality of proposed generation systems. Qualities of the pseudo ambiguous questions are evaluated by both the QA system and the human subjective test. 
The performance of the clarifying question generation is investigated through QA system performance using both the ambiguous questions and the answers to the clarifying questions given by crowdworkers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 2 sets forth our problem definition and system overview. Section 3 describes the pseudo ambiguous question generation method. Section 4 explains the proposed clarifying question generation method that uses sentence structures. Section 5 shows the evaluation setting and system performance to verify the ability of our generation system. We clarify the position of our system in relation to existing systems in Section 6, and then conclude this work in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our final goal is to build a clarifying question answering system that can ask a question back to users if the given questions do not contain sufficient information to distinguish the answer. We call such questions ambiguous questions. Figure 2 shows the overall system. We extract questions from existing QA datasets to modify them into pseudo ambiguous questions because building ambiguous question datasets is costly (Aliannejadi et al., 2019; Xu et al., 2019) . Most of the existing QA datasets consist of pairs of clear questions and corresponding text spans on target documents. These questions are defined clearly to distinguish the answer terms from the document. In other words, if human experts receive these questions, they can find the answer from the documents even if it takes a lot of time. Our proposal eliminates some important parts of these questions to generate pseudo ambiguous questions using their syntax information. In the example presented in Figure 2 , the system adds ambiguity to the question by removing the verbal phrase that corresponds to the verb \"developed.\" When the QA system receives an ambiguous question from the pseudo ambiguous question generator, it needs to generate a clarifying question.", "cite_spans": [ { "start": 421, "end": 447, "text": "(Aliannejadi et al., 2019;", "ref_id": "BIBREF1" }, { "start": 448, "end": 464, "text": "Xu et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 239, "end": 245, "text": "Figure", "ref_id": null }, { "start": 970, "end": 978, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "System overview", "sec_num": "2" }, { "text": "(Figure 2: System overview.) We focus on predicates in the ambiguous question and their missing cases on the syntax to generate the clarifying question. We used the case frame dictionary to estimate the missing case of the extracted predicates. In the example in Figure 2, the system generates the clarifying question \"When did the writer have a series?\" 1 because the system found that the adverbial modifier of \"had\" in the ambiguous question is missing. The system receives the answer to the clarifying question and then runs the QA model using both the ambiguous question and the answer to the clarifying question. 
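For concreteness, the answering step of this loop can be sketched as follows; the sketch assumes a generic extractive QA model loaded through the Hugging Face pipeline API and a toy context, not the exact BERT model fine-tuned in this work.

```python
# Minimal sketch of the final answering step in Figure 2: the ambiguous question
# and the user's reply to the clarifying question are concatenated and passed to
# an extractive QA model. The pipeline model below is a generic stand-in, not the
# fine-tuned model used in the paper.
from transformers import pipeline

qa_model = pipeline("question-answering",
                    model="distilbert-base-cased-distilled-squad")

def answer_with_clarification(ambiguous_question, reply_to_clarifying_question, context):
    # Concatenation step: append the user's reply to the ambiguous question.
    combined = f"{ambiguous_question} {reply_to_clarifying_question}"
    return qa_model(question=combined, context=context)["answer"]

# Toy context, for illustration only (not a real HotpotQA passage).
context = ("The writer had a series that started in 1997. "
           "The first comic book he wrote was Transmetropolitan.")
print(answer_with_clarification(
    "What was the first comic book written by the writer who had a series?",
    "The writer had the series in 1997.",
    context))
```

In the actual system, the context is the document set paired with the HotpotQA question, and the QA model is the BERT-based model described in Section 5.1. 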
Technical details are described in the following sections.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "System overview", "sec_num": "2" }, { "text": "It is not realistic to collect all possible varieties of ambiguous questions because possible ambiguous questions given to the QA system are diverse and depend on the situation that the users are facing. In this paper, we present a method to generate pseudo ambiguous questions by modifying questions in existing QA datasets. We apply syntax parsing to question sentences to focus on modifiers, which have a role in clarifying the question's intent, and then eliminate them from the questions to make the sentences ambiguous. This section describes the generation process and its evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo ambiguous question generation", "sec_num": "3" }, { "text": "A generation example is shown in Figure 3 . In this example, the system generates an ambiguous question \"What was the first comic book written by the writer who had a series?\" from the original question \"What was the first comic book written by the writer who had a series developed into a 2010 film with Bruce Willis and Morgan Freeman?\" while eliminating the verbal phrase indicated by \"developed\" because the phrase describes the detail of the antecedent \"a series.\" We use the Stanford parser (Manning et al., 2014) 2 to get the syntax. Our system focuses on a verbal phrase (VP) and a prepositional phrase (PP) as chunks to be removed. ", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 41, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Question generation using syntax information", "sec_num": "3.1" }, { "text": "We evaluated the proposed pseudo ambiguous question generation from two viewpoints: increased ambiguity and sentence quality, measured by QA system accuracy and human subjective evaluation, respectively. In the experiment, we used the HotpotQA dataset 3 , which consists of training and development sets. Note that the test set is not distributed because it is reserved for their leaderboard; we therefore used the development set as our test set. We used the training set to train the QA model to be used for the first evaluation. We modified all 7,405 sentences in the development set into pseudo ambiguous questions. As the QA model, we used a BERT-based model with the same setting (Devlin et al., 2019) , which predicts a span in the given document set. Our system generated one ambiguous question for each original question in this evaluation by eliminating the shortest phrase. We tried three elimination strategies: removing a VP, removing a PP, and removing the shortest of the VP and PP phrases (Mixed).", "cite_spans": [ { "start": 662, "end": 683, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of pseudo ambiguous questions", "sec_num": "3.2" }, { "text": "We used exact matching (EM) and F1 scores to evaluate the QA accuracy. 
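As a reference point, a minimal sketch of these two metrics in their standard SQuAD-style formulation (assumed here, since the exact normalization details are not spelled out in this paper) is:

```python
# Standard SQuAD-style exact match (EM) and token-level F1, assumed here as the
# evaluation metrics; answers are normalized before comparison.
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, remove punctuation and articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Warren Ellis", "warren ellis"))           # 1.0
print(f1_score("the writer Warren Ellis", "Warren Ellis"))   # 0.8
```
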
EM indicates the exact matching accuracy of the extracted answer from the target documents. QA answers often consist of several words; thus, the harmonic mean of precision and recall of word matching is also used (F1). Table 1 shows the result, which indicates that the accuracy of the QA system decreased in every condition, even though our system removed only the shortest phrase for each question. VP had the most significant impact on decreasing the score; this is probably because VPs cover wider spans than PPs.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation on QA accuracy", "sec_num": "3.2.1" }, { "text": "In the human subjective evaluation, we hired three annotators who have comparable English reading skills. We randomly sampled 200 sentences from the generated 7,405 sentences for the evaluation. Table 2 shows the result. # indicates frequencies. We categorized the selected 200 sentences into \"Normal\" and \"Irregular\" forms with their interrogative position. The \"Normal\" form sentences start from the interrogative. The \"Irregular\" form has the interrogative in other positions. These results verified that the \"Mixed\" strategy achieved a suitable naturalness score of 2.371. However, the \"VP\" strategy had lower scores because it eliminates wider spans and often removes necessary parts of questions. The \"Normal\" form had better scores than the \"Irregular\" form. Their sentence structures probably cause this; interrogatives in the \"Irregular\" form are sometimes placed on the leaves of syntax trees.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation of sentence quality", "sec_num": "3.2.2" }, { "text": "We built a clarifying question generation system toward a clarifying question answering system that asks a question back to the questioners. The proposed system generates clarifying questions using predicate-argument structures; it finds predicates in ambiguous questions and generates questions to clarify their arguments. We used the case frame dictionary (Kawahara and Kurohashi, 2006; Kawahara et al., 2014) for the generation, which consists of frequencies of cases and arguments depending on predicates. This section describes the technical details of clarifying question generation.", "cite_spans": [ { "start": 354, "end": 384, "text": "(Kawahara and Kurohashi, 2006;", "ref_id": "BIBREF10" }, { "start": 385, "end": 407, "text": "Kawahara et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Clarifying question generation", "sec_num": "4" }, { "text": "Words or phrases that have specific roles with respect to predicates in dependency structures are called arguments, and their semantic/syntactic roles are called cases. For example, in the sentence \"I saw a girl,\" \"see (saw)\" is a predicate, and \"I\" and \"a girl\" have roles to the predicate as \"nsubj (noun subject)\" and \"dobj (direct object).\" (Table 4 shows the frequency of each case in the training data.) The case frame is a statistically collected dictionary consisting of cases, arguments, and frequencies (case frame frequency) for each predicate. Kawahara et al., (2014) distribute a case frame dictionary, which is built from parsing results of the Stanford parser on a billion-sentence English corpus. An example of the case frame dictionary is shown in Table 3 . 
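To make the shape of such a resource concrete, the following toy sketch represents case frames as a nested mapping from a predicate sense to cases to argument counts; the entries and counts are hypothetical and only illustrate the structure, not the contents of the actual dictionary of Kawahara et al. (2014).

```python
# Toy sketch of a case frame dictionary: predicate sense -> case -> argument counts.
# All entries below are hypothetical and only illustrate the shape of the resource.
case_frames = {
    "see.1": {
        "nsubj":  {"i": 1200, "we": 800, "viewer": 150},
        "dobj":   {"girl": 90, "movie": 400, "result": 250},
        "advmod": {"yesterday": 60, "there": 45},
    },
    "have.2": {
        "nsubj":  {"writer": 70, "company": 300},
        "dobj":   {"series": 120, "idea": 210},
        "advmod": {"then": 30, "recently": 25},
    },
}

def missing_cases(predicate_sense, observed_triples):
    """Cases listed in the case frame but absent from the parsed question."""
    observed = {case for pred, case, _ in observed_triples if pred == predicate_sense}
    return set(case_frames[predicate_sense]) - observed

# "the writer who had a series" fills nsubj and dobj of "have.2", so the
# adverbial modifier is detected as a missing (askable) case.
parsed = [("have.2", "nsubj", "writer"), ("have.2", "dobj", "series")]
print(missing_cases("have.2", parsed))  # {'advmod'}
```

Missing cases extracted this way are the slots that a clarifying question can ask about, as described in Section 4.2. 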
Each predicate entry has a corresponding predicate sense with its usage (see numbers after predicates in Table 3 ).", "cite_spans": [ { "start": 524, "end": 547, "text": "Kawahara et al., (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 4", "ref_id": null }, { "start": 737, "end": 744, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 852, "end": 859, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Case frame", "sec_num": "4.1" }, { "text": "Our clarifying question generation outputs clarifying questions to a given ambiguous question sentence by the following four steps. Figure 4 illustrates the generation and selection process. We used the Stanford parser in predicate identification, using verbal tags: VB, VBD, VBG, VBN, VBP, and VBZ. We extracted triples of a predicate, an argument, and its case of these identified predicates.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 140, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Generation and selection process", "sec_num": "4.2" }, { "text": "In the missing case extraction, the system extracts missing cases (possible but unseen cases) of identified predicates. The system generates clarifying questions for filling these missing cases. In the example of Figure 4 , the \"adverbial modifiers (adv-mods)\" of \"write\" and \"have\" are extracted.", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 221, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Predicate identification 2. Missing case extraction 3. Target case decision 4. Interrogative word decision", "sec_num": "1." }, { "text": "Target case decision prioritizes missing cases with case frequency and the relative position of predicates; frequent cases and predicates on postposed places have higher priority because frequent cases in questions probably contain essential information. Case frequencies are calculated from the QA system's training data, in our case, the training set of HotpotQA. Any questions in the training set are parsed to count the case frequency as shown in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 451, "end": 458, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Predicate identification 2. Missing case extraction 3. Target case decision 4. Interrogative word decision", "sec_num": "1." }, { "text": "Once the target predicate and the target case are decided, the case frame dictionary is used again to determine the interrogative word. The system looks up the entry of the decided predicate and case in the dictionary. Then the system picks up the most frequent interrogative word corresponding to them (interrogative word decision). The system generates clarifying questions using the decided interrogative word, predicate, and depending phrase to the predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicate identification 2. Missing case extraction 3. Target case decision 4. Interrogative word decision", "sec_num": "1." }, { "text": "We evaluated the proposed clarifying question generation system. We gave the pseudo ambiguous question generated by the method presented in Section 3 to the clarifying question generation described in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We used the HotpotQA dataset as the original QA dataset of our system. 
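As a compact summary of the four-step procedure in Section 4.2, the sketch below strings the steps together; the parse triples, case frequency table, and interrogative table are hypothetical inputs standing in for the Stanford parser output, the HotpotQA training-set counts, and the case frame dictionary, respectively.

```python
# Sketch of Section 4.2: (1) predicate identification, (2) missing case extraction,
# (3) target case decision, (4) interrogative word decision. The `case_frame` table
# is a simplified view in which each case is already mapped to its most frequent
# interrogative word; all example values are hypothetical.
VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}

def select_clarification_target(tokens, triples, case_freq, case_frame):
    # 1. Predicate identification: tokens with verbal POS tags.
    predicates = [tok for tok, pos in tokens if pos in VERB_TAGS]

    # 2. Missing case extraction: cases listed for the predicate but not observed
    #    among the parsed (predicate, case, argument) triples.
    candidates = []
    for position, pred in enumerate(predicates):
        observed = {case for p, case, _ in triples if p == pred}
        for case in case_frame.get(pred, {}):
            if case not in observed:
                candidates.append((pred, case, position))

    # 3. Target case decision: prefer frequent cases (counted over the training
    #    questions) and postposed predicates.
    pred, case, _ = max(candidates, key=lambda c: (case_freq.get(c[1], 0), c[2]))

    # 4. Interrogative word decision: the most frequent interrogative word for the
    #    chosen predicate/case entry.
    return pred, case, case_frame[pred][case]

tokens = [("What", "WP"), ("was", "VBD"), ("written", "VBN"),
          ("writer", "NN"), ("had", "VBD"), ("series", "NN")]
triples = [("written", "nsubjpass", "comic book"), ("written", "prep_by", "writer"),
           ("had", "nsubj", "writer"), ("had", "dobj", "series")]
case_freq = {"advmod": 5000, "iobj": 300}
case_frame = {"had": {"nsubj": "who", "dobj": "what", "advmod": "when"},
              "written": {"nsubjpass": "what", "prep_by": "who"}}
print(select_clarification_target(tokens, triples, case_freq, case_frame))
# -> ('had', 'advmod', 'when')
```

The selected predicate, missing case, and interrogative word are then realized as a question such as "When did the writer have a series?" using the phrase depending on the predicate. 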
The HotpotQA dataset records many complicated sentences with several modifiers because the dataset was built for QA systems with multi-hop reasoning. As the QA model, we used a BERT-based model with the same setting (Devlin et al., 2019) , which predicts a span in the given document set. Specifically, we used the BERT-Base-Uncased model as the pre-trained model. In the fine-tuning, the batch size was 12, the learning rate was 3e\u22125 , and the number of epochs was 2.", "cite_spans": [ { "start": 287, "end": 308, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setting", "sec_num": "5.1" }, { "text": "(Figure 4: Procedure to generate clarification questions.) As indicated in Figure 2 , the pseudo ambiguous question is given to the system and then the system generates a clarifying question for the ambiguous question. The system receives the user's reply to the clarifying question in the evaluation. In our evaluation, we allowed only one clarification for each question.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 4", "ref_id": null }, { "start": 108, "end": 116, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "We generated pseudo ambiguous questions from the development set of the HotpotQA dataset as described in Section 3. In this experiment, we generated several pseudo ambiguous questions from one sentence with the following conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "1. Eliminated words are less than 50% of the original question. 2. Eliminated words do not contain any interrogative words. 3. Eliminated parts are selected from both VPs and PPs. 4. QA system results are changed from correct to incorrect by the modification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "The first and second points are necessary to generate interrogative sentences. For the fourth point, we input both the original question and the pseudo ambiguous question with the elimination to a QA model and compared their results as shown in Figure 5 . This is because our focus in this experiment is whether the clarifying question can recover important information by asking a question back to the user. We finally selected 850 sentences that match the above conditions. We generated clarifying questions for these 850 pseudo ambiguous questions. We used crowdsourcing to add the answer to the clarifying question. 
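The selection conditions above can be summarized as a simple filter; the sketch below is illustrative and assumes a caller-supplied `qa_is_correct` function that runs the QA model and checks its prediction against the gold answer, with condition 3 enforced at generation time.

```python
# Illustrative filter for the selection conditions above. `qa_is_correct` is an
# assumed callable (question, gold_answer) -> bool that runs the QA model and
# compares its prediction with the gold answer (e.g., by exact match).
from collections import Counter

INTERROGATIVES = {"what", "who", "whom", "whose", "which", "when", "where", "why", "how"}

def keep_pseudo_ambiguous(original_q, ambiguous_q, gold_answer, qa_is_correct):
    orig_tokens = original_q.lower().split()
    ambig_tokens = ambiguous_q.lower().split()
    # Rough word-level diff; the system eliminates whole VP/PP subtrees, so a
    # multiset difference is an approximation for this sketch.
    eliminated = list((Counter(orig_tokens) - Counter(ambig_tokens)).elements())

    # Condition 1: less than 50% of the original words are eliminated.
    if len(eliminated) >= 0.5 * len(orig_tokens):
        return False
    # Condition 2: no interrogative word is eliminated.
    if any(tok.strip("?,.") in INTERROGATIVES for tok in eliminated):
        return False
    # Condition 3 (eliminated parts come from both VPs and PPs) is enforced at
    # generation time, not here.
    # Condition 4: the modification flips the QA result from correct to incorrect.
    return qa_is_correct(original_q, gold_answer) and not qa_is_correct(ambiguous_q, gold_answer)
```
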
We showed the original question as \"intent,\" the pseudo ambiguous question as \"your question,\" and the clarifying question as \"clarification question\" to the crowdworkers and gave them the following instructions:", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 253, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "Assume that you are talking with a chat assistant. \"Intention\" indicates what you wanted to ask, and \"your question\" indicates what you said to the system. The system says a \"clarification question\" as a response to your question. First, select Yes/No according to whether the \"clarification question\" correctly specifies missing information of your \"intention\" or not. Then, write your answer for the \"clarification question\" in the shortest terms. Do not write the original question itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "The crowdworkers thus evaluate the correctness of clarifying questions and then input the answer to the clarifying question. We assigned five crowdworkers for each sample and then determined the correctness label by the majority. We used all responses to clarifying questions to calculate the QA model accuracy. In other words, our evaluation score is calculated from 850 \u00d7 5 = 4,250 samples. We concatenated the received answers with the ambiguous questions to be used as the input of the QA model. We used the same QA model as in Section 3.2, the BERT-based fine-tuned model. (Table 5: Evaluation scores of the QA system given both ambiguous questions and answers to the clarifying questions. Category means the correctness label of the clarifying questions. #q and #eval indicate the numbers of used questions and evaluation samples.)", "cite_spans": [], "ref_spans": [ { "start": 577, "end": 584, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Question generation", "sec_num": null }, { "text": "For the correctness of clarifying questions, the ratio of samples evaluated as \"Yes\" was 486/850 = 0.572. This indicates that our clarifying question generation method based on sentence structure and the case frame dictionary successfully generated clarifying questions for the majority of the questions; however, we still need to refine the method by focusing on the content words of questions. Table 5 shows the accuracy of the QA system when inputting both the ambiguous questions and the answers to the generated clarifying questions. Note that scores are 0.0% if we give only ambiguous questions and 100.0% if we give the original question before adding the ambiguity. These results show that our clarifying questions recover 50% of the information that is lost in the modification process of a pseudo ambiguous question, through interactions. Table 6 shows examples from the evaluation. In example 1, the pseudo ambiguous question generation removed the term \"Jerry Goldsmith\" and the clarifying question successfully got the word to recover the information. 
In example 2, the system also succeeded in recovering the removed information, but the QA system failed to output the correct answer by a narrow margin. In examples 3 and 7, the system's clarifying question is not appropriate, but the system output the correct answer. In examples 6 and 7, users may have misunderstood their task and posed a new question to clarify their original question. Recent search system interfaces probably cause this; users usually give a new query to the system if their first search fails. We can still improve the clarification quality in some cases; however, the system could obtain additional information to recover the lost information even when it failed to ask the users back correctly. In general, when the ambiguous question was generated by eliminating PPs, our clarifying question successfully worked in many cases to ask back for the eliminated phrase. Recovering VPs was more difficult for the system.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 388, "text": "Table 5", "ref_id": null }, { "start": 811, "end": 818, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "5.2" }, { "text": "We built a generation system that clarifies users' requests with clarifying questions when the users' questions are ambiguous. There are two major approaches for building a QA system that can elicit additional information beyond the initial ambiguous user query. One approach is based on paraphrasing, which paraphrases ambiguous sentences into clear sentences. The other major approach is using clarifying or confirmation questions, which is similar to our system. This section describes relationships to these works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "6" }, { "text": "The paraphrasing approach's critical idea is converting given user questions into other forms (McKeown, 1983; Buck et al., 2017; Dong et al., 2017) . This idea is similar to query expansion, which is used in the information retrieval area. It is often difficult for users to express their questions in clear language. This difficulty often causes ambiguous questions. This line of work tackled the problem by presenting possible paraphrases of the given ambiguous question with their answers. However, such approaches do not work well if the paraphrased questions do not contain the appropriate question for the user. Moreover, the system needs paraphrasing datasets to learn the paraphrasing models, which requires enormous annotation costs in the open domain (Min et al., 2020) . Otsuka et al., (2019) used syntactic structures to generate pseudo training examples for the paraphrasing approach. Our approach is similar to theirs; however, we also used statistical information from the case frame to identify the point to be clarified, realizing a dialogue-based system. 
The dialogue-based approach has an advantage in decreasing user interaction costs if the system can predict the clarifying point appropriately.", "cite_spans": [ { "start": 92, "end": 107, "text": "(McKeown, 1983;", "ref_id": "BIBREF13" }, { "start": 108, "end": 126, "text": "Buck et al., 2017;", "ref_id": "BIBREF3" }, { "start": 127, "end": 145, "text": "Dong et al., 2017)", "ref_id": "BIBREF5" }, { "start": 757, "end": 775, "text": "(Min et al., 2020)", "ref_id": "BIBREF14" }, { "start": 778, "end": 799, "text": "Otsuka et al., (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing approach", "sec_num": "6.1" }, { "text": "The second approach is giving clarifying questions to users, which is closer to our approach. The clarifying strategy has been used widely in conventional spoken dialogue systems because the systems sometimes fail the task because of ambiguity caused by speech recognition or natural language understanding errors (Misu and Kawahara, 2006 ; Stoyanchev et al., 2014). (Table 6: Examples of clarifying question answering. O, A, and C indicate an original question, the ambiguous question generated from the original question, and the generated clarifying question, respectively. Crowdworkers saw these contexts and input \"(R) reply to C\". G is the correct answer to question O and QA w/ A is the output of the QA model given only the ambiguous question. QA w/ A+R uses both the ambiguous question and the reply to the clarifying question given by the crowdworkers.) Our system uses this idea to tackle a problem of question ambiguity in the QA system caused by the user's ability or lack of knowledge. In recent QA systems, there is a study that learns a re-ranking function for clarifying questions with deep neural networks (Rao and Daum\u00e9 III, 2018) . They also proposed a model based on a generative neural network to generate clarifying questions (Rao and Daum\u00e9 III, 2019) . These studies require triples of an ambiguous question, a clarifying question, and a corresponding fact. Building a large dataset to cover open-domain QA is costly. Our system does not require such a data preparation cost and uses a general syntactic parser and the case frame dictionary built without task-specific annotations. The system can work on any QA dataset already developed in existing work on QA systems.", "cite_spans": [ { "start": 306, "end": 330, "text": "(Misu and Kawahara, 2006", "ref_id": "BIBREF15" }, { "start": 1109, "end": 1134, "text": "(Rao and Daum\u00e9 III, 2018)", "ref_id": "BIBREF21" }, { "start": 1234, "end": 1259, "text": "(Rao and Daum\u00e9 III, 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 339, "end": 346, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Clarifying approach", "sec_num": "6.2" }, { "text": "Question generation is also widely researched by using generative models (Duan et al., 2017; Du et al., 2017; Sasazawa et al., 2019) or syntactic rules (Heilman and Smith, 2010) . 
Our clarifying question generation is motivated by them.", "cite_spans": [ { "start": 73, "end": 92, "text": "(Duan et al., 2017;", "ref_id": "BIBREF7" }, { "start": 93, "end": 109, "text": "Du et al., 2017;", "ref_id": "BIBREF6" }, { "start": 110, "end": 132, "text": "Sasazawa et al., 2019)", "ref_id": "BIBREF23" }, { "start": 152, "end": 177, "text": "(Heilman and Smith, 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Clarifying approach", "sec_num": "6.2" }, { "text": "In this paper, we worked on building a clarifying question answering system for ambiguous questions, i.e., questions with some necessary information dropped. We proposed two generation methods toward the clarifying question answering system: pseudo ambiguous question generation based on syntax and clarifying question generation based on sentence structures and case frame dictionaries. Our experimental results revealed that these generation methods worked to drop and to regain the important information in the original clear questions. The system used domain-independent syntactic and semantic information of questions; thus, the method can be applied to various QA domains. Moreover, our method does not require data annotation; we can extend existing QA datasets for the clarifying QA task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As future work, we can integrate our model with other generative models. Another approach is to use pseudo ambiguous questions as training data for QA-related modules such as discriminative systems that predict or score given questions. Improving the model architecture is another issue, for example, network design to feed the whole dialogue history to the QA network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Formally, this question should be \"When did the writer have the series,\" but here we explain the system process with our system outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/software/lex-parser.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Table 6 (additional example rows): Example 1 -- (O) What is the name of the executive producer of the film that has a score composed by Jerry Goldsmith? (A) What is the name of the executive producer of the film that has a score composed? (C) which composed? Another original question -- The lamp used in many lighthouses is similar to this type of lamp patented in 1780 by Aime Argand? Example 6 -- (O) Which other Mexican Formula One race car driver has held the podium besides the Force India driver born in ... (A) Which other Mexican Formula One race car driver has held the podium besides the Force India driver? (C) where did the car hold? (R) When was the force India driver born? 
", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 330, "text": "(", "ref_id": null } ], "eq_spans": [], "section": "Methods sentence (O) original", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Convai3: Generating clarifying questions for opendomain dialogue systems (clariq)", "authors": [ { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kiseleva", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Chuklin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Burtsev", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.11352" ] }, "num": null, "urls": [], "raw_text": "Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail Burtsev. 2020. Convai3: Generating clarifying questions for open- domain dialogue systems (clariq). arXiv preprint arXiv:2009.11352.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Asking clarifying questions in open-domain information-seeking conversations", "authors": [ { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Crestani", "suffix": "" }, { "first": "W Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 42nd international acm sigir conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "475--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W Bruce Croft. 2019. Asking clari- fying questions in open-domain information-seeking conversations. In Proceedings of the 42nd interna- tional acm sigir conference on research and develop- ment in information retrieval, pages 475-484.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Contextual phenomena and thematic relations in database qa dialogues: results from a wizard-of-oz experiment", "authors": [ { "first": "N\u00faria", "middle": [], "last": "Bertomeu", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Hans-Ulrich", "middle": [], "last": "Krieger", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "J\u00f6rg", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "N\u00faria Bertomeu, Hans Uszkoreit, Anette Frank, Hans- Ulrich Krieger, and Brigitte J\u00f6rg. 2006. Contextual phenomena and thematic relations in database qa dialogues: results from a wizard-of-oz experiment. 
In Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006, pages 1-8.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Ask the right questions: Active question reformulation with reinforcement learning", "authors": [ { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Jannis", "middle": [], "last": "Bulian", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Gajewski", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.07830" ] }, "num": null, "urls": [], "raw_text": "Christian Buck, Jannis Bulian, Massimiliano Cia- ramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2017. Ask the right ques- tions: Active question reformulation with reinforce- ment learning. arXiv preprint arXiv:1705.07830.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning to paraphrase for question answering", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "875--886", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question an- swering. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875-886.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning to ask: Neural question generation for reading comprehension", "authors": [ { "first": "Xinya", "middle": [], "last": "Du", "suffix": "" }, { "first": "Junru", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1342--1352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1342-1352.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Question generation for question answering", "authors": [ { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "866--874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866- 874.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Good question! statistical ranking for question generation", "authors": [ { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "609--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609-617, Los Angeles, California. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Woz simulation of interactive question answering", "authors": [ { "first": "Tsuneaki", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Fumito", "middle": [], "last": "Fukumoto", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Masui", "suffix": "" }, { "first": "", "middle": [], "last": "Kando", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsuneaki Kato, Jun'ichi Fukumoto, Fumito Masui, and Noriko Kando. 2006. Woz simulation of interactive question answering. 
In Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006, pages 9-16.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Case frame compilation from the web using highperformance computing", "authors": [ { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2006, "venue": "LREC", "volume": "", "issue": "", "pages": "1344--1347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2006. Case frame compilation from the web using high- performance computing. In LREC, pages 1344- 1347.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Inducing example-based semantic frames from a massive amount of verb uses", "authors": [ { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Octavian", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "58--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Kawahara, Daniel Peterson, Octavian Popescu, and Martha Palmer. 2014. Inducing example-based semantic frames from a massive amount of verb uses. In Proceedings of the 14th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 58-67.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Manning", "suffix": "" }, { "first": "John", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Bauer", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "David", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Paraphrasing questions using given and new information", "authors": [ { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1983, "venue": "American Journal of Computational Linguistics", "volume": "9", "issue": "1", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen McKeown. 1983. Paraphrasing questions us- ing given and new information. 
American Journal of Computational Linguistics, 9(1):1-10.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Ambigqa: Answering ambiguous open-domain questions", "authors": [ { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5783--5797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. Ambigqa: Answering am- biguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 5783- 5797.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dialogue strategy to clarify user's queries for document retrieval system with speech interface", "authors": [ { "first": "Teruhisa", "middle": [], "last": "Misu", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" } ], "year": 2006, "venue": "Speech Communication", "volume": "48", "issue": "9", "pages": "1137--1150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruhisa Misu and Tatsuya Kawahara. 2006. Dialogue strategy to clarify user's queries for document re- trieval system with speech interface. Speech Commu- nication, 48(9):1137-1150.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Specific question generation for reading comprehension", "authors": [ { "first": "Atsushi", "middle": [], "last": "Otsuka", "suffix": "" }, { "first": "Kyosuke", "middle": [], "last": "Nishida", "suffix": "" }, { "first": "Itsumi", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Hisako", "middle": [], "last": "Asano", "suffix": "" }, { "first": "Junji", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI 2019 Reasoning and Complex QA Workshop", "volume": "", "issue": "", "pages": "12--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atsushi Otsuka, Kyosuke Nishida, Itsumi Saito, Hisako Asano, and Junji Tomita. 2019. Specific question generation for reading comprehension. Proceedings of the AAAI 2019 Reasoning and Complex QA Work- shop, pages 12-20.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Open-retrieval conversational question answering", "authors": [ { "first": "Chen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Cen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "539--548", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W Bruce Croft, and Mohit Iyyer. 2020. Open-retrieval con- versational question answering. 
In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 539-548.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A theoretical framework for conversational search", "authors": [ { "first": "Filip", "middle": [], "last": "Radlinski", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 conference on conference human information interaction and retrieval", "volume": "", "issue": "", "pages": "117--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Radlinski and Nick Craswell. 2017. A theoretical framework for conversational search. In Proceedings of the 2017 conference on conference human infor- mation interaction and retrieval, pages 117-126.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Know what you don't know: Unanswerable questions for squad", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "784--789", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784-789.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2737--2746", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudha Rao and Hal Daum\u00e9 III. 2018. Learning to ask good questions: Ranking clarification questions us- ing neural expected value of perfect information. 
In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2737-2746.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Answer-based adversarial training for generating clarification questions", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "143--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudha Rao and Hal Daum\u00e9 III. 2019. Answer-based adversarial training for generating clarification ques- tions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 143-155.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Neural question generation using interrogative phrases", "authors": [ { "first": "Yuichi", "middle": [], "last": "Sasazawa", "suffix": "" }, { "first": "Sho", "middle": [], "last": "Takase", "suffix": "" }, { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "106--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuichi Sasazawa, Sho Takase, and Naoaki Okazaki. 2019. Neural question generation using interrogative phrases. In Proceedings of the 12th International Conference on Natural Language Generation, pages 106-111.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Hitiqa: A data driven approach to interactive question answering: A preliminary report", "authors": [ { "first": "G", "middle": [], "last": "Sharon", "suffix": "" }, { "first": "Nobuyuki", "middle": [], "last": "Small", "suffix": "" }, { "first": "Tomek", "middle": [], "last": "Shimizu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Strzalkowski", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2003, "venue": "New Directions in Question Answering", "volume": "", "issue": "", "pages": "94--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon G Small, Nobuyuki Shimizu, Tomek Strza- lkowski, and Ting Liu. 2003. Hitiqa: A data driven approach to interactive question answering: A pre- liminary report. In New Directions in Question An- swering, pages 94-104.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Towards natural clarification questions in dialogue systems", "authors": [ { "first": "Svetlana", "middle": [], "last": "Stoyanchev", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2014, "venue": "AISB symposium on questions, discourse and dialogue", "volume": "20", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svetlana Stoyanchev, Alex Liu, and Julia Hirschberg. 2014. Towards natural clarification questions in di- alogue systems. 
In AISB symposium on questions, discourse and dialogue, volume 20.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The process of asking questions", "authors": [ { "first": "S", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 1962, "venue": "", "volume": "13", "issue": "", "pages": "391--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert S Taylor. 1962. The process of asking questions. American documentation, 13(4):391-396.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Informing the design of spoken conversational search: Perspective paper", "authors": [ { "first": "Damiano", "middle": [], "last": "Johanne R Trippas", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Spina", "suffix": "" }, { "first": "Hideo", "middle": [], "last": "Cavedon", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Joho", "suffix": "" }, { "first": "", "middle": [], "last": "Sanderson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Human Information Interaction & Retrieval", "volume": "", "issue": "", "pages": "32--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johanne R Trippas, Damiano Spina, Lawrence Cavedon, Hideo Joho, and Mark Sanderson. 2018. Informing the design of spoken conversational search: Perspec- tive paper. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval, pages 32-41.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Asking clarification questions in knowledgebased question answering", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yuechen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Sun", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1618--1629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, and SUN Xu. 2019. Asking clarification questions in knowledge- based question answering. 
In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1618-1629.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2369--2380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369-2380.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Towards conversational search and recommendation: System ask, user respond", "authors": [ { "first": "Yongfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qingyao", "middle": [], "last": "Ai", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th acm international conference on information and knowledge management", "volume": "", "issue": "", "pages": "177--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongfeng Zhang, Xu Chen, Qingyao Ai, Liu Yang, and W Bruce Croft. 2018. Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th acm international conference on information and knowledge management, pages 177-186.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The problem of clarifying QA", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "What)) (SQ (VBD was) (NP (DT the) (JJ first) (JJ comic) (NN book)) (VP (VBN written) (PP (IN by) (NP (NP (DT the) (NN writer)) (SBAR (WHNP (WP who)) (S (VP (VBD had) (NP (NP (DT a) (NN series)) (SBAR (S (VP (VBD developed) (PP (IN into) (NP (DT a) (CD 2010) (NN film))) (PP (IN with) (NP (NP (NNP Bruce) (NNP Willis)) (CC and) (NP (NNP Morgan) (NNP Freeman))))))))))))))) (. ?))) Generation of ambiguous question with removal of verbal phrase (VP)", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Comparison of QA results", "type_str": "figure", "num": null, "uris": null }, "TABREF4": { "num": null, "html": null, "text": "Human evaluation of sentence quality skills to natives and asked them to evaluate sentences using the following three grades.", "type_str": "table", "content": "
\u2022 3: Fluent English sentence
\u2022 2: Grammatically correct English sentence
\u2022 1: Incorrect English sentence
" }, "TABREF6": { "num": null, "html": null, "text": "Examples in case frame", "type_str": "table", "content": "
Case            Freq.      Case            Freq.
nmod           81,442      amod              951
nsubj          60,702      parataxis         452
dobj           49,679      acl:relcl         444
nsubjpass      23,910      acl               285
advmod         17,991      cc:preconj        282
dep             6,817      csubjpass         218
conj            5,335      nmod:poss         177
cc              5,152      nummod            175
advcl           4,943      csubj             143
xcomp           4,521      expl              108
ccomp           4,461      iobj              100
compound        1,740      neg                83
cop             1,554      mwe                62
case            1,529      appos              37
compound:prt    1,344      nmod:npmod         27
nmod:tmod       1,132      discourse           6
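The relations listed above are Stanford/Universal Dependencies labels whose frequencies appear in the case frame dictionary. As a minimal, hypothetical sketch of how such counts could be collected (assuming spaCy and an English model, which are not necessarily the parser or corpus behind this table), one can tally the dependency relations governed by each verb lemma:

```python
# Minimal sketch (not the authors' code): count the dependency relations
# ("cases") governed by each verb lemma to approximate a case frame
# dictionary. spaCy and en_core_web_sm are assumptions for illustration.
from collections import Counter, defaultdict

import spacy

nlp = spacy.load("en_core_web_sm")


def update_case_frames(sentences, case_frames=None):
    """Accumulate relation counts per verb lemma over a list of sentences."""
    if case_frames is None:
        case_frames = defaultdict(Counter)
    for doc in nlp.pipe(sentences):
        for token in doc:
            if token.pos_ == "VERB":
                for child in token.children:
                    case_frames[token.lemma_][child.dep_] += 1
    return case_frames


frames = update_case_frames(
    ["What was the first comic book written by the writer?"]
)
print(dict(frames["write"]))  # relation counts for "write"; labels depend on the model
```

Summing such per-verb counters over a question corpus yields a frequency table of the same shape as the one above.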
" } } } }