{ "paper_id": "Q19-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:09:05.240739Z" }, "title": "CoQA: A Conversational Question Answering Challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "sivar@cs.stanford.edu" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "manning@cs.stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github. io/coqa.", "pdf_parse": { "paper_id": "Q19-1016", "_pdf_hash": "", "abstract": [ { "text": "Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github. io/coqa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We ask other people a question to either seek or test their knowledge about a subject. Depending on their answer, we follow up with another question and their second answer builds on what has already been discussed. This incremental aspect makes human conversations succinct. 
An inability to build and maintain common ground in this way is part of why virtual assistants usually don't seem like competent conversational partners. In this * The first two authors contributed equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "paper, we introduce CoQA, 1 a Conversational Question Answering dataset for measuring the ability of machines to participate in a questionanswering style conversation. In CoQA, a machine has to understand a text passage and answer a series of questions that appear in a conversation. We develop CoQA with three main goals in mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first concerns the nature of questions in a human conversation. Figure 1 shows a conversation between two humans who are reading a passage, one acting as a questioner and the other as an answerer. In this conversation, every question after the first is dependent on the conversation history. For instance, Q 5 (Who?) is only a single word and is impossible to answer without knowing what has already been said. Posing short questions is an effective human conversation strategy, but such questions are difficult for machines to parse. As is well known, state-of-the-art models rely heavily on lexical similarity between a question and a passage (Chen et al., 2016; Weissenborn et al., 2017) . At present, there are no largescale reading comprehension datasets that contain questions that depend on a conversation history (see Table 1 ) and this is what CoQA is mainly developed for. 2 The second goal of CoQA is to ensure the naturalness of answers in a conversation. Many existing QA datasets restrict answers to contiguous text spans in a given passage (Table 1) . Such answers are not always natural-for example, there is no span-based answer to Q 4 (How many?) in Figure 1 . In CoQA, we propose that the answers can be free-form text, while for each answer, we also provide a text span from the passage as a rationale to the answer. Therefore, the answer to Q 4 is simply Three and its rationale spans across multiple sentences. Free-form answers have been studied in previous reading comprehension datasets for example, MS MARCO (Nguyen et al., 2016) and NarrativeQA (Ko\u010disk\u1ef3 et al., 2018) , and metrics such as BLEU or ROUGE are used for evaluation due to the high variance of possible answers. One key difference in our setting is that we require answerers to first select a text span as the rationale and then edit it to obtain a free-form answer. 
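To make the collection protocol concrete, each turn pairs a free-form answer with the rationale span the answerer highlighted. The record below is an illustrative sketch only; the field names and offsets are assumptions for exposition, not the schema of the released data files.

```python
# One conversation turn, as described above: the answerer highlights a rationale
# span in the passage and then edits the copied text into a free-form answer.
# Field names and offsets are hypothetical, for illustration only.
turn = {
    "turn_id": 4,
    "question": "How many?",          # short, history-dependent question
    "answer": "Three",                # free-form answer, not a contiguous span
    "rationale": {
        "span_start": 215,            # character offsets into the passage (hypothetical)
        "span_end": 310,
        "span_text": "...",           # highlighted evidence, possibly spanning sentences
    },
}
```

A full example is then a passage together with an ordered list of such turns.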
3 Our method strikes a balance between naturalness of answers and reliable automatic evaluation, and it results in a high human agreement (88.8% F1 word overlap among human annotators).", "cite_spans": [ { "start": 649, "end": 668, "text": "(Chen et al., 2016;", "ref_id": "BIBREF6" }, { "start": 669, "end": 694, "text": "Weissenborn et al., 2017)", "ref_id": "BIBREF47" }, { "start": 887, "end": 888, "text": "2", "ref_id": null }, { "start": 1538, "end": 1559, "text": "(Nguyen et al., 2016)", "ref_id": "BIBREF31" }, { "start": 1576, "end": 1598, "text": "(Ko\u010disk\u1ef3 et al., 2018)", "ref_id": "BIBREF24" }, { "start": 1860, "end": 1861, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 830, "end": 837, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1059, "end": 1068, "text": "(Table 1)", "ref_id": "TABREF1" }, { "start": 1172, "end": 1180, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The third goal of CoQA is to enable building QA systems that perform robustly across domains. The current QA datasets mainly focus on a single domain, which makes it hard to test the generalization ability of existing models. Hence we collect our dataset from seven different domainschildren's stories, literature, middle and high school English exams, news, Wikipedia, Reddit, and science. The last two are used for out-of-domain evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize, CoQA has the following key characteristics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 It consists of 127k conversation turns collected from 8k conversations over text passages. The average conversation length is 15 turns, and each turn consists of a question and an answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 It contains free-form answers and each answer has a span-based rationale highlighted in the passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Its text passages are collected from seven diverse domains: five are used for in-domain evaluation and two are used for out-ofdomain evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Almost half of CoQA questions refer back to conversational history using anaphors, and a large portion require pragmatic reasoning, making it challenging for models that rely on lexical cues alone. We benchmark several deep neural network models, building on top of state-ofthe-art conversational and reading comprehension models (Section 5). The best-performing system achieves an F1 score of 65.4%. In contrast, humans achieve 88.8% F1, 23.4% F1 higher, indicating that there is a considerable room for improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a passage and a conversation so far, the task is to answer the next question in the conversation. Each turn in the conversation contains a question and an answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "For the example in Figure 2 , the conversation begins with question Q 1 . 
We answer Q 1 with A 1 based on the evidence R 1 , which is a contiguous text span from the passage. In this example, the answerer only wrote the Governor as the answer but selected a longer rationale The Virginia governor's race.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 27, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "When we come to Q 2 (Where?), we must refer back to the conversation history, otherwise its answer could be Virginia or Richmond or something else. In our task, conversation history is indispensable for answering many questions. We use conversation history Q 1 and A 1 to answer Q 2 with A 2 based on the evidence R 2 . Formally, to answer Q n , it depends on the conversation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "MCTest (Richardson et al., 2013) Multiple choice Children's stories CNN/Daily Mail (Hermann et al., 2015) Spans News Children's book test (Hill et al., 2016) Multiple choice Children's stories SQuAD (Rajpurkar et al., 2016) Spans Wikipedia MS MARCO (Nguyen et al., 2016) Free-form text, Unanswerable Web Search NewsQA (Trischler et al., 2017) Spans News SearchQA (Dunn et al., 2017) Spans Jeopardy TriviaQA (Joshi et al., 2017) Spans Trivia RACE (Lai et al., 2017) Multiple choice Mid/High School Exams Narrative QA (Ko\u010disk\u1ef3 et al., 2018) Free-form text Movie Scripts, Literature SQuAD 2.0 (Rajpurkar et al., 2018) Spans history: Q 1 , A 1 , . . ., Q n\u22121 , A n\u22121 . For an unanswerable question, we give unknown as the final answer and do not highlight any rationale. In this example, we observe that the entity of focus changes as the conversation progresses. The questioner uses his to refer to Terry in Q 4 and he to Ken in Q 5 . If these are not resolved correctly, we end up with incorrect answers. The conversational nature of questions requires us to reason from multiple sentences (the current question and the previous questions or answers, and sentences from the passage). It is common that a single question may require a rationale spanning across multiple sentences (e.g., Q 1 , Q 4 , and Q 5 in Figure 1 ). We describe additional question and answer types in Section 4.", "cite_spans": [ { "start": 7, "end": 32, "text": "(Richardson et al., 2013)", "ref_id": "BIBREF37" }, { "start": 68, "end": 105, "text": "CNN/Daily Mail (Hermann et al., 2015)", "ref_id": null }, { "start": 138, "end": 157, "text": "(Hill et al., 2016)", "ref_id": "BIBREF17" }, { "start": 199, "end": 223, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF36" }, { "start": 249, "end": 270, "text": "(Nguyen et al., 2016)", "ref_id": "BIBREF31" }, { "start": 318, "end": 342, "text": "(Trischler et al., 2017)", "ref_id": "BIBREF45" }, { "start": 363, "end": 382, "text": "(Dunn et al., 2017)", "ref_id": "BIBREF12" }, { "start": 407, "end": 427, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF21" }, { "start": 446, "end": 464, "text": "(Lai et al., 2017)", "ref_id": "BIBREF26" }, { "start": 516, "end": 538, "text": "(Ko\u010disk\u1ef3 et al., 2018)", "ref_id": "BIBREF24" }, { "start": 590, "end": 614, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 1307, "end": 1315, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conversational Answer Type Domain", "sec_num": null }, { "text": "Note that we collect rationales as (optional) evidence to help answer questions. 
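Viewed as a prediction problem, each turn maps the passage, the turns so far, and the current question to an answer, with unknown reserved for unanswerable questions. The sketch below is a minimal interface with hypothetical names; the driver loop feeds gold answers back into the history, matching the evaluation setup described later in the paper.

```python
from typing import List, Optional, Tuple

Turn = Tuple[str, str]  # (question, answer) of one previous turn

def answer_turn(passage: str, history: List[Turn],
                question: str) -> Tuple[str, Optional[str]]:
    """Predict A_n for Q_n given the passage and (Q_1, A_1), ..., (Q_{n-1}, A_{n-1}).

    Returns a free-form answer plus an optional rationale span; unanswerable
    questions should yield ("unknown", None).
    """
    raise NotImplementedError  # model-specific

def run_conversation(passage: str, questions: List[str],
                     gold_answers: List[str]) -> List[str]:
    """Answer each question in turn, conditioning on gold answers from prior turns."""
    history: List[Turn] = []
    predictions = []
    for q, gold in zip(questions, gold_answers):
        pred, _rationale = answer_turn(passage, history, q)
        predictions.append(pred)
        history.append((q, gold))   # gold history, as in the paper's evaluation
    return predictions
```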
However, they are not provided at testing time. A model needs to decide on the evidence by itself and derive the final answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversational Answer Type Domain", "sec_num": null }, { "text": "For each conversation, we use two annotators, a questioner and an answerer. This setup has several advantages over using a single annotator to act both as a questioner and an answerer: 1) when two annotators chat about a passage, their dialogue flow is natural; 2) when one annotator responds with a vague question or an incorrect answer, the other can raise a flag, which we use to identify bad workers; and 3) the two annotators can discuss guidelines (through a separate chat window) when they have disagreements. These measures help to prevent spam and to obtain high agreement data. 4 We use Amazon Mechanical Turk to pair workers on a passage through the ParlAI MTurk API (Miller et al., 2017) .", "cite_spans": [ { "start": 588, "end": 589, "text": "4", "ref_id": null }, { "start": 678, "end": 699, "text": "(Miller et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Collection", "sec_num": "3" }, { "text": "We have different interfaces for a questioner and an answerer (see Appendix). A questioner's role is to ask questions, and an answerer's role is to answer questions in addition to highlighting rationales. Both questioner and answerer see the conversation that happened until now, that is, questions and answers from previous turns and rationales are kept hidden. While framing a new question, we want questioners to avoid using exact words in the passage in order to increase lexical diversity. When they type a word that is already present in the passage, we alert them to paraphrase the question if possible. While answering, we want answerers to stick to the vocabulary in the passage in order to limit the number of possible answers. We encourage this by asking them to first highlight a rationale (text span), which is then automatically copied into the answer box, and we further ask them to edit the copied text to generate a natural answer. We found 78% of the answers have at least one edit such as changing a word's case or adding a punctuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Interface", "sec_num": "3.1" }, { "text": "We select passages from seven diverse domains: children's stories from MCTest (Richardson et al., 2013) , literature from Project Gutenberg, 5 middle and high school English exams from RACE (Lai et al., 2017) , news articles from CNN (Hermann et al., 2015) , articles from Wikipedia, Reddit articles from the Writing Prompts dataset (Fan et al., 2018) , and science articles from AI2 Science Questions (Welbl et al., 2017) .", "cite_spans": [ { "start": 78, "end": 103, "text": "(Richardson et al., 2013)", "ref_id": "BIBREF37" }, { "start": 190, "end": 208, "text": "(Lai et al., 2017)", "ref_id": "BIBREF26" }, { "start": 230, "end": 256, "text": "CNN (Hermann et al., 2015)", "ref_id": null }, { "start": 333, "end": 351, "text": "(Fan et al., 2018)", "ref_id": "BIBREF14" }, { "start": 402, "end": 422, "text": "(Welbl et al., 2017)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Passage Selection", "sec_num": "3.2" }, { "text": "Not all passages in these domains are equally good for generating interesting conversations. A passage with just one entity often results in questions that entirely focus on that entity. 
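The selection step described in the next sentences keeps passages that mention multiple entities and contain pronominal references, and truncates long articles to roughly 200 words. The sketch below illustrates such a filter; it uses spaCy purely as a stand-in for the Stanford CoreNLP pipeline the authors actually used, and the thresholds and word-level truncation are assumptions.

```python
from typing import Optional
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in for the Stanford CoreNLP pipeline

def keep_passage(text: str, max_words: int = 200) -> Optional[str]:
    """Keep a passage only if it mentions several distinct entities and has a
    pronominal reference; truncate to about max_words words (thresholds illustrative)."""
    truncated = " ".join(text.split()[:max_words])  # crude stand-in for paragraph-level truncation
    doc = nlp(truncated)
    n_entities = len({ent.text.lower() for ent in doc.ents})
    n_pronouns = sum(1 for tok in doc if tok.pos_ == "PRON")
    return truncated if n_entities >= 2 and n_pronouns >= 1 else None
```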
Therefore, we select passages with multiple entities, events, and pronominal references using Stanford CoreNLP . We truncate long articles to the first few paragraphs that result in around 200 words. Table 2 shows the distribution of domains. We reserve the Reddit and Science domains for outof-domain evaluation. For each in-domain dataset, we split the data such that there are 100 passages in the development set, 100 passages in the test set, and the rest in the training set. For each out- ", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 394, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Passage Selection", "sec_num": "3.2" }, { "text": "Some questions in CoQA may have multiple valid answers. For example, another answer to Q 4 in Figure 2 is A Republican candidate. In order to account for answer variations, we collect three additional answers for all questions in the development and test data. Because our data are conversational, questions influence answers, which in turn influence the follow-up questions.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 102, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Collecting Multiple Answers", "sec_num": "3.3" }, { "text": "In the previous example, if the original answer was A Republican Candidate, then the following question Which party does he belong to? would not have occurred in the first place. When we show questions from an existing conversation to new answerers, it is likely they will deviate from the original answers, which makes the conversation incoherent. It is thus important to bring them to a common ground with the original answer. We achieve this by turning the answer collection task into a game of predicting original answers. First, we show a question to an answerer, and when she answers it, we show the original answer and ask her to verify if her answer matches the original. For the next question, we ask her to guess the original answer and verify again. We repeat this process with the same answerer until the conversation is complete. The entire conversation history is shown at each turn (question, answer, original answer for all previous turns but not the rationales). In our pilot experiment, the human F1 score is increased by 5.4% when we use this verification setup. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting Multiple Answers", "sec_num": "3.3" }, { "text": "What makes the CoQA dataset conversational compared to existing reading comprehension datasets like SQuAD? What linguistic phenomena do the questions in CoQA exhibit? How does the conversation flow from one turn to the next? We answer these questions in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Analysis", "sec_num": "4" }, { "text": "SQuAD has been the main benchmark for reading comprehension. In the following, we perform an in-depth comparison of CoQA and the latest version of SQuAD (Rajpurkar et al., 2018) . Because a conversation is spread over multiple turns, we expect conversational questions and answers to be shorter than in a standalone interaction. In fact, questions in CoQA can be made up of just one or two words (who?, when?, why?). As seen in Table 3, Table 4 : Distribution of answer types in SQuAD and CoQA. is only 5.5 words long whereas it is 10.1 for SQuAD. The answers are a bit shorter in CoQA than SQuAD because of the free-form nature of the answers. 
Table 4 provides insights into the type of answers in SQuAD and CoQA. While the original version of SQuAD (Rajpurkar et al., 2016) (Rajpurkar et al., 2018) focuses solely on obtaining them, resulting in higher frequency than in CoQA. SQuAD has 100% span-based answers by design, whereas in CoQA, 66.8% of the answers overlap with the passage after ignoring punctuation and case mismatches. 6 The rest of the answers, 33.2%, do not exactly overlap with the passage (see Section 4.3). It is worth noting that CoQA has 11.1% and 8.7% questions with yes or no as answers whereas SQuAD has 0%. Both datasets have a high number of named entities and noun phrases as answers.", "cite_spans": [ { "start": 153, "end": 177, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF35" }, { "start": 751, "end": 775, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF36" }, { "start": 776, "end": 800, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF35" }, { "start": 1035, "end": 1036, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 428, "end": 436, "text": "Table 3,", "ref_id": "TABREF5" }, { "start": 437, "end": 444, "text": "Table 4", "ref_id": null }, { "start": 645, "end": 652, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Comparison with SQuAD 2.0", "sec_num": "4.1" }, { "text": "We further analyze the questions for their relationship with the passages and the conversation history. We sample 150 questions in the development set and annotate various phenomena as shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Linguistic Phenomena", "sec_num": "4.2" }, { "text": "If a question contains at least one content word that appears in the rationale, we classify it as lexical match. These constitute around 29.8% of the questions. If it has no lexical match but is a paraphrase of the rationale, we classify it as paraphrasing. These questions contain phenomena such as synonymy, antonymy, hypernymy, hyponymy, and negation. These constitute a large portion of questions, around 43.0%. The rest, 27.2%, have no lexical cues, and we classify them as pragmatics. These include phenomena like common sense and presupposition. For example, the question Was he loud and boisterous? is not a direct paraphrase of the rationale he dropped his feet with the lithe softness of a cat but the rationale combined with world knowledge can answer this question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Phenomena", "sec_num": "4.2" }, { "text": "For the relationship between a question and its conversation history, we classify questions into whether they are dependent or independent of the conversation history. If dependent, whether the questions contain an explicit marker or not. Our analysis shows that around 30.5% questions do not rely on coreference with the conversational history and are answerable on their own. Almost half of the questions (49.7%) contain explicit coreference markers such as he, she, it. These either refer to an entity or an event introduced in the conversation. The remaining 19.8% do not have explicit coreference markers but refer to an entity or event implicitly (these are often cases of ellipsis, as in the examples in Table 5 ). how long did it take to get to the fire? 3.4% A: Until supper time! R: By the time they arrived, it was almost supper time. Adverb deletion Q: What had happened to the ice? 
3.0% A: It had changed R: It had somewhat changed its formation when they approached it Conjunction insertion Q: what else do they get for their work? 1.3% A: potatoes and carrots R: paid well, both in potatoes, carrots Noun insertion Q: Who did 1.3% A: Comedy Central employee R: But it was a Comedy Central account Coreference deletion Q: What is the story about? 1.2% A: A girl and a dog R: This is the story of a young girl and her dog Noun deletion Q: What is the ranking in the country in terms of people studying? 0.8% A: the fourth largest population R: and has the fourth largest student population Possesive insertion Q: Whose diary was it? 0.8% A: Deborah Logan's R: a 120-page diary kept 190 years ago by Deborah Logan Article deletion Q: why? 0.8% A: They were going to the circus R: They all were going to the circus to see the clowns Table 6 : Analysis of answers that don't overlap with the passage.", "cite_spans": [], "ref_spans": [ { "start": 711, "end": 718, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1744, "end": 1751, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Phenomena", "sec_num": "4.2" }, { "text": "Because of the free-form nature of CoQA's answers, around 33.2% of them do not exactly overlap with the given passage. We analyze 100 conversations to study the behavior of such answers. 7 As shown in Table 6 , the answers Yes and No constitute 48.5% and 30.3%, respectively, totaling 78.8%. The next majority, around 14.3%, are edits to text spans to improve the fluency (naturalness) of answers. More than two thirds of these edits are just one-word edits, either inserting or deleting a word. This indicates that text spans are a good approximation for natural answers-positive news for span-based reading comprehension models. The remaining one third involve multiple edits. Although multiple edits are challenging to evaluate using automatic metrics, we observe that many of these answers partially overlap with passage, indicating that word overlap is still a reliable automatic evaluation metric in our setting. The rest of the answers include counting (5.1%) and selecting a choice from the question (1.8%).", "cite_spans": [ { "start": 187, "end": 188, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analysis of Free-form Answers", "sec_num": "4.3" }, { "text": "A coherent conversation must have smooth transitions between turns. We expect the narrative structure of the passage to influence our conversation flow. We split each passage into 10 uniform chunks, and identify chunks of interest in a given turn and its transition based on rationale spans. Figure 4 shows the conversation flow of the first 10 turns. The starting turns tend to focus on the first few chunks and as the conversation advances, the focus shifts to the later chunks. Moreover, the turn transitions are smooth, with the focus often remaining in the same chunk or moving to a neighboring chunk. Most frequent transitions happen to the first and the last chunks, and likewise these chunks have diverse outward transitions.", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 300, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Conversation Flow", "sec_num": "4.4" }, { "text": "Given a passage p, the conversation history {q 1 , a 1 , . . . q i\u22121 , a i\u22121 }, and a question q i , the task is to predict the answer a i . Gold answers a 1 , a 2 , . . . 
, a i\u22121 are used to predict a i , similar to the setup discussed in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5" }, { "text": "Our task can either be modeled as a conversational response generation problem or a reading comprehension problem. We evaluate strong baselines from each modeling type and a combination of the two on CoQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5" }, { "text": "Sequence-to-sequence (seq2seq) models have shown promising results for generating conversational responses (Vinyals and Le, 2015; Zhang et al., 2018 ). Motivated by their success, we use a sequence-to-sequence with attention model for generating answers (Bahdanau et al., 2015) . We append the conversation history and the current question to the passage, as p q i\u2212n a i\u2212n . . . q i\u22121 a i\u22121 q i , and feed it into a bidirectional long short-term memory (LSTM) encoder, where n is the size of the history to be used. We generate the answer using an LSTM decoder which attends to the encoder states. Additionally, as the answer words are likely to appear in the original passage, we employ a copy mechanism in the decoder which allows to (optionally) copy a word from the passage (Gu et al., 2016; See et al., 2017) . This model is referred to as the Pointer-Generator network, PGNet.", "cite_spans": [ { "start": 107, "end": 129, "text": "(Vinyals and Le, 2015;", "ref_id": "BIBREF46" }, { "start": 130, "end": 148, "text": "Zhang et al., 2018", "ref_id": "BIBREF53" }, { "start": 254, "end": 277, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 798, "end": 815, "text": "(Gu et al., 2016;", "ref_id": "BIBREF15" }, { "start": 816, "end": 833, "text": "See et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Conversational Models", "sec_num": "5.1" }, { "text": "The state-of-the-art reading comprehension models for extractive question answering focus on finding a span in the passage that matches the question best (Seo et al., 2016; Chen et al., 2017; Yu et al., 2018) . Because their answers are limited to spans, they cannot handle questions whose answers do not overlap with the passage (e.g., Q 3 , Q 4 , and Q 5 in Figure 1 ). However, this limitation makes them more effective learners than conversational models, which have to generate an answer from a large space of pre-defined vocabulary.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Seo et al., 2016;", "ref_id": "BIBREF42" }, { "start": 173, "end": 191, "text": "Chen et al., 2017;", "ref_id": "BIBREF7" }, { "start": 192, "end": 208, "text": "Yu et al., 2018)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 360, "end": 368, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Reading Comprehension Models", "sec_num": "5.2" }, { "text": "We use the Document Reader (DrQA) model of Chen et al. (2017) , which has demonstrated strong performance on multiple datasets (Rajpurkar et al., 2016; Labutov et al., 2018) . Because DrQA requires text spans as answers during training, we select the span that has the highest lexical overlap (F1 score) with the original answer as the gold answer. If the answer appears multiple times in the story we use the rationale to find the correct one. If any answer word does not appear in the story, we fall back to an additional unknown token as the answer (about 17% in the training set). 
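The span-selection step just described can be made concrete: score candidate passage spans by word-overlap F1 against the free-form answer and keep the best one, falling back to an unknown label when no answer word occurs in the passage. The brute-force sketch below uses a simplified normalization (lowercasing, dropping punctuation and articles, in the spirit of the evaluation metric); the authors' exact tokenization, and their use of the rationale to disambiguate repeated matches, are omitted.

```python
import re
from collections import Counter
from typing import List

def normalize(text: str) -> List[str]:
    """Lowercase, strip punctuation, and drop the articles a/an/the."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return [t for t in text.split() if t not in {"a", "an", "the"}]

def f1(pred: List[str], gold: List[str]) -> float:
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def best_training_span(passage: str, answer: str, max_span_len: int = 30) -> str:
    """Pick the passage span with the highest word-overlap F1 against the answer."""
    gold = normalize(answer)
    if not set(gold) & set(normalize(passage)):
        return "unknown"                      # no answer word appears in the passage
    words = passage.split()
    best_span, best_score = "unknown", 0.0
    for i in range(len(words)):
        for j in range(i + 1, min(i + 1 + max_span_len, len(words) + 1)):
            span = " ".join(words[i:j])
            score = f1(normalize(span), gold)
            if score > best_score:
                best_span, best_score = span, score
    return best_span
```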
We prepend each question with its past questions and answers to account for conversation history, similar to the conversational models.", "cite_spans": [ { "start": 43, "end": 61, "text": "Chen et al. (2017)", "ref_id": "BIBREF7" }, { "start": 127, "end": 151, "text": "(Rajpurkar et al., 2016;", "ref_id": "BIBREF36" }, { "start": 152, "end": 173, "text": "Labutov et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension Models", "sec_num": "5.2" }, { "text": "Considering that a significant portion of answers in our dataset are yes or no (Table 4) , we also include an augmented reading comprehension model for comparison. We add two additional tokens, yes and no, to the end of the passage-if the gold answer is yes or no, the model is required to predict the corresponding token as the gold span; otherwise it does the same as the previous model. We refer to this model as Augmented DrQA.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 88, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "Reading Comprehension Models", "sec_num": "5.2" }, { "text": "Finally, we propose a model that combines the advantages from both conversational models and extractive reading comprehension models. We use DrQA with PGNet in a combined model, in which DrQA first points to the answer evidence in the text, and PGNet naturalizes the evidence into an answer. For example, for Q 5 in Figure 1 , we expect that DrQA first predicts the rationale R 5 , and then PGNet generates A 5 from R 5 .", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 324, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A Combined Model", "sec_num": "5.3" }, { "text": "We make a few changes to DrQA and PGNet based on empirical performance. For DrQA, we require the model to predict the answer directly if the answer is a substring of the rationale, and to predict the rationale otherwise. For PGNet, we provide the current question and DrQA's span predictions as input to the encoder and the decoder aims to predict the final answer. 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Combined Model", "sec_num": "5.3" }, { "text": "Following SQuAD, we use macro-average F1 score of word overlap as our main evaluation metric. 9 We use the gold answers of history to predict the next answer. In SQuAD, for computing a model's performance, each individual prediction is compared against n human answers resulting in n F1 scores, the maximum of which is chosen as the prediction's F1. 10 For each question, we average out F1 across these n sets, both for humans and models. In our final evaluation, we use n = 4 human answers for every question (the original answer and 3 additionally collected answers). The articles a, an, and the and punctuations are excluded in evaluation.", "cite_spans": [ { "start": 94, "end": 95, "text": "9", "ref_id": null }, { "start": 350, "end": 352, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "6.1" }, { "text": "For all the experiments of seq2seq and PGNet, we use the OpenNMT toolkit (Klein et al., 2017) and its default settings: 2-layers of LSTMs with 500 hidden units for both the encoder and the decoder. The models are optimized using SGD, with an initial learning rate of 1.0 and a decay rate of 0.5. 
A dropout rate of 0.3 is applied to all layers.", "cite_spans": [ { "start": 73, "end": 93, "text": "(Klein et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "For the DrQA experiments, we use the implementation from the original paper (Chen et al., 2017) . We tune the hyperparameters on the development data: the number of turns to use from the conversation history, the number of layers, number of each hidden units per layer, and dropout rate. The best configuration we find is 3 layers of LSTMs with 300 hidden units for each layer. A dropout rate of 0.4 is applied to all LSTM layers and a dropout rate of 0.5 is applied to word embeddings. We used Adam to optimize DrQA models.", "cite_spans": [ { "start": 76, "end": 95, "text": "(Chen et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "We initialized the word projection matrix with GloVe (Pennington et al., 2014) for conversational models and fastText (Bojanowski et al., 2017) for reading comprehension models, based on empirical performance. We update the projection matrix during training in order to learn embeddings for delimiters such as .", "cite_spans": [ { "start": 53, "end": 78, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF34" }, { "start": 118, "end": 143, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "In-domain Out-of-dom.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "In-domain Out-of-dom. Table 7 presents the results of the models on the development and test data. Considering the results on the test set, the seq2seq model performs the worst, generating frequently occurring answers irrespective of whether these answers appear in the passage or not, a well known behavior of conversational models . PGNet alleviates the frequent response problem by focusing on the vocabulary in the passage and it outperforms seq2seq by 17.8 points. However, it still lags behind DrQA by 8.5 points. A reason could be that PGNet has to memorize the whole passage before answering a question, a huge overhead that DrQA avoids. But DrQA fails miserably in answering questions with answers that do not overlap with the passage (see row No span found in Table 8 ). The augmented DrQA circumvents this problem with additional yes/no tokens, giving it a boost of 12.8 points. When DrQA is fed into PGNet, we empower both DrQA and PGNet-DrQA in producing free-form answers, PGNet in focusing on the rationale instead of the passage. This combination outperforms vanilla PGNet and DrQA models by 21.0 and 12.5 points, respectively, and is competitive with the augmented DrQA (65.1 vs. 65.4).", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 7", "ref_id": "TABREF10" }, { "start": 770, "end": 777, "text": "Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "Models vs. Humans The human performance on the test data is 88.8 F1, a strong agreement indicating that the CoQA's questions have concrete answers. Our best model is 23.4 points behind humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "In-domain vs. 
Out-of-domain All models perform worse on out-of-domain datasets compared with in-domain datasets. The best model drops by 6.6 points. For in-domain results, both the best model and humans find the literature domain harder than the others because literature's vocabulary requires proficiency in English. For out-of-domain results, the Reddit domain is apparently harder. Whereas humans achieve high performance on children's stories, models perform poorly, probably because of the fewer training examples in this domain compared with others. 11 Both humans and models find Wikipedia easy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "Error Analysis Table 8 presents fine-grained results of models and humans on the development set. We observe that humans have the highest disagreement on the unanswerable questions. The human agreement on answers that do not overlap with passage is lower than on answers that do overlap. This is expected because our evaluation metric is based on word overlap rather than on the meaning of words. For the question did Jenny like her new room?, human answers she loved it and yes are both accepted. Finding the perfect evaluation metric for abstractive responses is still a challenging problem (Liu et al., 2016; Chaganty et al., 2018) and beyond the scope of our work. For our models' performance, seq2seq and PGNet perform well on non-overlapping answers, and DrQA performs well on overlapping answers, thanks to their respective designs. The augmented and combined models improve on both categories. Among the different question types, humans find lexical matches the easiest, followed by paraphrasing, and pragmatics the hardest-this is expected because questions with lexical matches and paraphrasing share some similarity with the passage, thus making them relatively easier to answer Table 9 : Results on the development set with different history sizes. History size indicates the number of previous turns prepended to the current question. Each turn contains a question and its answer.", "cite_spans": [ { "start": 593, "end": 611, "text": "(Liu et al., 2016;", "ref_id": "BIBREF28" }, { "start": 612, "end": 634, "text": "Chaganty et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 8", "ref_id": "TABREF11" }, { "start": 1190, "end": 1197, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "than pragmatic questions. This is also the case with the combined model, but we could not explain the behavior of other models. Where humans find the questions without coreferences easier than those with coreferences, the models behave sporadically. Humans find implicit coreferences easier than explicit coreferences. A conjecture is that implicit coreferences depend directly on the previous turn, whereas explicit coreferences may have long distance dependency on the conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "Importance of conversation history Finally, we examine how important the conversation history is for the dataset. Table 9 presents the results with a varied number of previous turns used as conversation history. All models succeed at leveraging history but the gains are little beyond one previous turn. As we increase the history size, the performance decreases. 
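The history-size results above correspond to a simple preprocessing choice: prepend the last n previous (question, answer) turns to the current question (and, for the seq2seq and PGNet models, append everything to the passage) before encoding. A sketch is shown below; the delimiter strings are placeholders, since the special separator tokens referred to in the paper are not legible in this parse.

```python
from typing import List, Tuple

def build_input(passage: str, history: List[Tuple[str, str]], question: str,
                n_history: int = 2, q_tok: str = "<q>", a_tok: str = "<a>") -> str:
    """Concatenate the passage, the last n_history turns, and the current question.

    q_tok / a_tok are placeholder delimiters; the exact special tokens used in
    the paper are not reproduced here.
    """
    parts = [passage]
    for q, a in (history[-n_history:] if n_history > 0 else []):
        parts += [q_tok, q, a_tok, a]
    parts += [q_tok, question]
    return " ".join(parts)

# With n_history=1, only the immediately preceding turn is kept, which the
# results above suggest already captures most of the useful context.
```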
We also perform an experiment on humans to measure the trade-off between their performance and the number of previous turns shown. Based on the heuristic that short questions likely depend on the conversation history, we sample 300 one or two word questions, and collect answers to these varying the number of previous turns shown.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 121, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "When we do not show any history, human performance drops to 19.9 F1, as opposed to 86.4 F1 when full history is shown. When the previous turn (question and answer) is shown, their performance boosts to 79.8 F1, suggesting that the previous turn plays an important role in understanding the current question. If the last two turns are shown, they reach up to 85.3 F1, almost close to the performance when the full history is shown. This suggests that most questions in a conversation have a limited dependency within a bound of two turns. DrQA is a bit better (0.3 F1 on the testing set) than the combined model, the latter model has the following benefits: 1) The combined model provides a rationale for every answer, which can be used to justify whether the answer is correct or not (e.g., yes/no questions); and 2) we don't have to decide on the set of augmented classes beforehand, which helps in answering a wide range of questions like counting and multiple choice (Table 10) . We also look closer into the outputs of the two models. Although the combined model is still far from perfect, it does correctly as desired in many examples-for example, for a counting question, it predicts a rationale current affairs, politics, and culture and generates an answer three; for a question With who?, it predicts a rationale Mary and her husband, Rick, and then compresses it into Mary and Rick for improving the fluency; and for a multiple choice question Does this help or hurt their memory of the event? it predicts a rationale this obsession may prevent their brains from remembering and answers hurt. We think there is still great room for improving the combined model and we leave it to future work.", "cite_spans": [], "ref_spans": [ { "start": 970, "end": 980, "text": "(Table 10)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.3" }, { "text": "We organize CoQA's relation to existing work under the following criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Knowledge source We answer questions about text passages-our knowledge source. Another common knowledge source is machine-friendly databases, which organize world facts in the form of a table or a graph (Berant et al., 2013; Pasupat and Liang, 2015; Bordes et al., 2015; Saha et al., 2018; Talmor and Berant, 2018) . However, understanding their structure requires expertise, making it challenging to crowd-source large QA datasets without relying on templates. 
Like passages, other human-friendly sources are images and videos (Antol et al., 2015; Das et al., 2017; Hori et al., 2018) .", "cite_spans": [ { "start": 203, "end": 224, "text": "(Berant et al., 2013;", "ref_id": "BIBREF2" }, { "start": 225, "end": 249, "text": "Pasupat and Liang, 2015;", "ref_id": "BIBREF33" }, { "start": 250, "end": 270, "text": "Bordes et al., 2015;", "ref_id": "BIBREF4" }, { "start": 271, "end": 289, "text": "Saha et al., 2018;", "ref_id": null }, { "start": 290, "end": 314, "text": "Talmor and Berant, 2018)", "ref_id": "BIBREF44" }, { "start": 528, "end": 548, "text": "(Antol et al., 2015;", "ref_id": "BIBREF0" }, { "start": 549, "end": 566, "text": "Das et al., 2017;", "ref_id": "BIBREF10" }, { "start": 567, "end": 585, "text": "Hori et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Naturalness There are various ways to curate questions: removing words from a declarative sentence to create a fill-in-the-blank question (Hermann et al., 2015) , using a hand-written grammar to create artificial questions Welbl et al., 2018) , paraphrasing artificial questions to natural questions (Saha et al., 2018; Talmor and Berant, 2018) , or, in our case, letting humans ask natural questions (Rajpurkar et al., 2016; Nguyen et al., 2016) . While the former enable collecting large and cheap datasets, the latter enable collecting natural questions. Recent efforts emphasize collecting questions without seeing the knowledge source in order to encourage the independence of question and documents (Joshi et al., 2017; Dunn et al., 2017; Ko\u010disk\u1ef3 et al., 2018 ). Because we allow a questioner to see the passage, we incorporate measures to increase independence, although complete independence is not attainable in our setup (Section 3.1). However, an advantage of our setup is that the questioner can validate the answerer on the spot resulting in high agreement data.", "cite_spans": [ { "start": 138, "end": 160, "text": "(Hermann et al., 2015)", "ref_id": "BIBREF16" }, { "start": 223, "end": 242, "text": "Welbl et al., 2018)", "ref_id": "BIBREF49" }, { "start": 300, "end": 319, "text": "(Saha et al., 2018;", "ref_id": null }, { "start": 320, "end": 344, "text": "Talmor and Berant, 2018)", "ref_id": "BIBREF44" }, { "start": 401, "end": 425, "text": "(Rajpurkar et al., 2016;", "ref_id": "BIBREF36" }, { "start": 426, "end": 446, "text": "Nguyen et al., 2016)", "ref_id": "BIBREF31" }, { "start": 705, "end": 725, "text": "(Joshi et al., 2017;", "ref_id": "BIBREF21" }, { "start": 726, "end": 744, "text": "Dunn et al., 2017;", "ref_id": "BIBREF12" }, { "start": 745, "end": 765, "text": "Ko\u010disk\u1ef3 et al., 2018", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Conversational Modeling Our focus is on questions that appear in a conversation. Iyyer et al. (2017) and Talmor and Berant (2018) break down a complex question into a series of simple questions mimicking conversational QA. Our work is closest to Das et al. (2017) and Saha et al. (2018) , who perform conversational QA on images and a knowledge graph, respectively, with the latter focusing on questions obtained by paraphrasing templates.", "cite_spans": [ { "start": 81, "end": 100, "text": "Iyyer et al. (2017)", "ref_id": "BIBREF20" }, { "start": 105, "end": 129, "text": "Talmor and Berant (2018)", "ref_id": "BIBREF44" }, { "start": 246, "end": 263, "text": "Das et al. 
(2017)", "ref_id": "BIBREF10" }, { "start": 268, "end": 286, "text": "Saha et al. (2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "In parallel to our work, Choi et al. (2018) also created a dataset of conversations in the form of questions and answers on text passages. In our interface, we show a passage to both the questioner and the answerer, whereas their interface only shows a title to the questioner and the full passage to the answerer. Because their setup encourages the answerer to reveal more information for the following questions, their average answer length is 15.1 words (our average is 2.7). While the human performance on our test set is 88.8 F1, theirs is 74.6 F1. Moreover, although CoQA's answers can be freeform text, their answers are restricted only to extractive text spans. Our dataset contains passages from seven diverse domains, whereas their dataset is built only from Wikipedia articles about people.", "cite_spans": [ { "start": 25, "end": 43, "text": "Choi et al. (2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Concurrently, Saeidi et al. (2018) created a conversational QA dataset for regulatory text such as tax and visa regulations. Their answers are limited to yes or no along with a positive characteristic of permitting to ask clarification questions when a given question cannot be answered. Elgohary et al. (2018) proposed a sequential question answering dataset collected from Quiz Bowl tournaments, where a sequence contains multiple related questions. These questions are related to the same concept while not focusing on the dialogue aspects (e.g., coreference). Zhou et al. (2018) is another dialogue dataset based on a single movie-related Wikipedia article, in which two workers are asked to chat about the content. Their dataset is more like chit-chat style conversations whereas our dataset focuses on multi-turn question answering.", "cite_spans": [ { "start": 288, "end": 310, "text": "Elgohary et al. (2018)", "ref_id": "BIBREF13" }, { "start": 564, "end": 582, "text": "Zhou et al. (2018)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Reasoning Our dataset is a testbed of various reasoning phenomena occurring in the context of a conversation (Section 4). Our work parallels a growing interest in developing datasets that test specific reasoning abilities: algebraic reasoning (Clark, 2015) , logical reasoning , common sense reasoning (Ostermann et al., 2018) , and multi-fact reasoning (Welbl et al., 2018; Khashabi et al., 2018; Talmor and Berant, 2018) .", "cite_spans": [ { "start": 243, "end": 256, "text": "(Clark, 2015)", "ref_id": "BIBREF9" }, { "start": 302, "end": 326, "text": "(Ostermann et al., 2018)", "ref_id": "BIBREF32" }, { "start": 354, "end": 374, "text": "(Welbl et al., 2018;", "ref_id": "BIBREF49" }, { "start": 375, "end": 397, "text": "Khashabi et al., 2018;", "ref_id": "BIBREF22" }, { "start": 398, "end": 422, "text": "Talmor and Berant, 2018)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "Recent Progress on CoQA Since we first released the dataset in August 2018, the progress of developing better models on CoQA has been rapid. Instead of simply prepending the current question with its previous questions and answers, Huang et al. 
(2019) proposed a more sophisticated solution to effectively stack single-turn models along the conversational flow. Others (e.g., Zhu et al., 2018) attempted to incorporate the most recent pretrained language representation model BERT (Devlin et al., 2018) 12 into CoQA and demonstrated superior results. As of the time we finalized the paper (Jan 8, 2019), the state-of-art F1 score on the test set was 82.8.", "cite_spans": [ { "start": 376, "end": 393, "text": "Zhu et al., 2018)", "ref_id": "BIBREF55" }, { "start": 481, "end": 502, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "7" }, { "text": "In this paper, we introduced CoQA, a large scale dataset for building conversational question answering systems. Unlike existing reading comprehension datasets, CoQA contains conversational questions, free-form answers along with 12 Pretrained BERT models were released in November 2018, which have demonstrated large improvements across a wide variety of NLP tasks. text spans as rationales, and text passages from seven diverse domains. We hope this work will stir more research in conversational modeling, a key ingredient for enabling natural human-machine communication. ", "cite_spans": [ { "start": 230, "end": 232, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "First each worker has to pass a qualification test that assesses their understanding of the guidelines of conversational QA. The success rate for the qualification test is 57% with 960 attempted workers. The guidelines indicate this is a conversation about a passage in the form of questions and answers, an example conversation and do's and don'ts. However, we give complete freedom for the workers to judge what is good and bad during the real conversation. This helped us in curating diverse categories of questions that were not present in the guidelines (e.g., true or false, fill in the blank and time series questions). We pay workers an hourly wage around 8-15 USD. Figure 5 shows the annotation interfaces for both questioners and answerers.", "cite_spans": [], "ref_spans": [ { "start": 674, "end": 682, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Appendix Worker Selection", "sec_num": null }, { "text": "We provide additional examples in Figure 7 and ", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 42, "text": "Figure 7", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Additional Examples", "sec_num": null }, { "text": "CoQA is pronounced as coca.2 Concurrent with our work,Choi et al. (2018) also created a conversational dataset with a similar goal, but it differs in many aspects. We discuss the details in Section 7.1 Transactions of the Association for Computational Linguistics, vol. 7, pp. 1-18, 2019. Action Editor: Scott Wen-tau Yih. Submission batch: 10/2018; Revision batch: 1/2019; Published 5/2019. c 2019 Association for Computational Linguistics. 
Distributed under a CC-BY 4.0 license.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In contrast, in NarrativeQA, the annotators were encouraged to use their own words and copying was not allowed in their interface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Due to Amazon Mechanical Turk terms of service, we allowed a single worker to act both as a questioner and an answerer after a minute of waiting. This constitutes around 12% of the data. We include this data in the training set only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Project Gutenberg https://www.gutenberg.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If punctuation and case are not ignored, only 37% of the answers can be found as spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We only pick the questions in which none of its answers can be found as a span in the passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We feed DrQA's oracle spans into PGNet during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "SQuAD also uses exact-match metric, however, we think F1 is more appropriate for our dataset because of the freeform answers.10 However, for computing human performance, a human prediction is only compared against n \u2212 1 human answers, resulting in underestimating human performance. We fix this bias by partitioning n human answers into n different sets, each set containing n\u22121 answers, similar toChoi et al. (2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We collect children's stories from MCTest, which contains only 660 passages in total, of which we use 200 stories for the development and the test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank MTurk workers, especially the Master Chatters and the MTC forum members, for contributing to the creation of CoQA, for giving feedback on various pilot interfaces, and for promoting our hits enthusiastically on various forums. CoQA has been made possible with financial support from the Facebook ParlAI and the Amazon Research awards, and gift funding from Toyota Research Institute. Danqi is supported by a Facebook PhD fellowship. We also would like to thank the members of the Stanford NLP group for critical feedback on the interface and experiments. We especially thank Drew Arad Hudson for participating in initial discussions, and Matthew Lamm for proof-reading the paper. We also thank the VQA team and Spandana Gella for their help in generating Figure 3. 
", "cite_spans": [], "ref_spans": [ { "start": 778, "end": 787, "text": "Figure 3.", "ref_id": null } ], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "VQA: Visual question answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "2425--2433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Vi- sual question answering. In International Conference on Computer Vision (ICCV), pages 2425-2433. Santiago, Chile.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Repre- sentations (ICLR). San Diego, CA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semantic parsing on Freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empir- ical Methods in Natural Language Processing (EMNLP), pages 1533-1544. Seattle, WA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vec- tors with subword information. 
Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Large-scale simple question answering with memory networks", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.02075" ] }, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The price of debiasing automatic metrics in natural language evaluation", "authors": [ { "first": "Arun", "middle": [], "last": "Chaganty", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mussmann", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "643--653", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In Asso- ciation for Computational Linguistics (ACL), pages 643-653. Melbourne, Australia.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A thorough examination of the CNN/Daily Mail reading comprehension task", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "2358--2367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. Association for Computational Linguistics (ACL), pages 2358-2367.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Reading Wikipedia to answer open-domain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1870--1879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Asso- ciation for Computational Linguistics (ACL), pages 1870-1879. 
Vancouver, Canada.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "QuAC: Question Answering in Context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question Answering in Context. In Empirical Methods in Natural Language Processing (EMNLP), pages 2174-2184. Brussels, Belgium.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Elementary school science and math tests as a driver for AI: Take the Aristo Challenge! In Association for the Advancement of Artificial Intelligence (AAAI)", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "4019--4021", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Clark. 2015. Elementary school science and math tests as a driver for AI: Take the Aristo Challenge! In Association for the Advancement of Artificial Intelligence (AAAI), pages 4019-4021. Austin, TX.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Visual dialog", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Khushi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Deshraj", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "326--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Computer Vision and Pattern Recognition (CVPR), pages 326-335. Honolulu, HI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "BERT: Pretraining of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
BERT: Pre- training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SearchQA: A new Q&A dataset augmented with context from a search engine", "authors": [ { "first": "Matthew", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Levent", "middle": [], "last": "Sagun", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "V", "middle": [ "Ugur" ], "last": "Guney", "suffix": "" }, { "first": "Volkan", "middle": [], "last": "Cirik", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05179" ] }, "num": null, "urls": [], "raw_text": "Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dataset and baselines for sequential open-domain question answering", "authors": [ { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2018, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1077--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmed Elgohary, Chen Zhao, and Jordan Boyd- Graber. 2018. Dataset and baselines for se- quential open-domain question answering. In Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1077-1083. Brussels, Belgium.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hierarchical neural story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "889--898", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Association for Computational Linguistics (ACL), pages 889-898. Melbourne, Australia.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Incorporating copying mechanism in sequence-to-sequence learning", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1631--1640", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics (ACL), pages 1631-1640. 
Berlin, Germany.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), pages 1693-1701. Montreal, Canada.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Goldilocks Principle: Reading children's books with explicit memory representations", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading children's books with explicit memory representations. In International Conference on Learning Representations (ICLR). San Juan, Puerto Rico.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Endto-end audio visual scene-aware dialog using multimodal attention-based video features", "authors": [ { "first": "Chiori", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Huda", "middle": [], "last": "Alamri", "suffix": "" }, { "first": "Jue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Gordon", "middle": [], "last": "Winchern", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Cherian", "suffix": "" }, { "first": "Tim", "middle": [ "K" ], "last": "Marks", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Cartillier", "suffix": "" }, { "first": "Raphael", "middle": [ "Gontijo" ], "last": "Lopes", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ifran", "middle": [], "last": "Essa", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.08409" ] }, "num": null, "urls": [], "raw_text": "Chiori Hori, Huda Alamri, Jue Wang, Gordon Winchern, Takaaki Hori, Anoop Cherian, Tim K Marks, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Ifran Essa, Dhruv Batra, and Devi Parikh. 2018. End- to-end audio visual scene-aware dialog using multimodal attention-based video features. 
arXiv preprint arXiv:1806.08409.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "FlowQA: Grasping flow in history for conversational machine comprehension", "authors": [ { "first": "", "middle": [], "last": "Hsin-Yuan", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. FlowQA: Grasping flow in history for conversational machine comprehension. In International Conference on Learning Repre- sentations (ICLR). New Orleans, LA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Search-based neural structured learning for sequential question answering", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1821--1831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Association for Computational Linguistics (ACL), pages 1821-1831. Vancouver, Canada.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1601--1611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Associa- tion for Computational Linguistics (ACL), pages 1601-1611. Vancouver, Canada.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Looking beyond the surface: A challenge set for reading comprehension over multiple sentences", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "252--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In North American Chapter of the Association for Computational Linguistics (NAACL), pages 252-262. 
New Orleans, LA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Association for Com- putational Linguistics (ACL), pages 67-72. Vancouver, Canada.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The NarrativeQA Reading Comprehension Challenge", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schwarz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "317--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The NarrativeQA Reading Comprehension Chal- lenge. Transactions of the Association for Computational Linguistics, 6:317-328.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multi-relational question answering from narratives: Machine reading and reasoning in simulated worlds", "authors": [ { "first": "Igor", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Anusha", "middle": [], "last": "Prakash", "suffix": "" }, { "first": "Amos", "middle": [], "last": "Azaria", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "833--844", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Labutov, Bishan Yang, Anusha Prakash, and Amos Azaria. 2018. Multi-relational question answering from narratives: Machine reading and reasoning in simulated worlds. In Association for Computational Linguistics (ACL), pages 833-844. 
Melbourne, Australia.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "RACE: Largescale ReAding Comprehension Dataset From Examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large- scale ReAding Comprehension Dataset From Examinations. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A diversitypromoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "110--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity- promoting objective function for neural conver- sation models. In North American Chapter of the Association for Computational Linguistics (NAACL), pages 110-119. San Diego, CA.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "authors": [ { "first": "Chia-Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Noseworthy", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2122--2132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of un- supervised evaluation metrics for dialogue response generation. In Empirical Methods in Natural Language Processing (EMNLP), pages 2122-2132. 
Austin, TX.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Asso- ciation for Computational Linguistics (ACL), pages 55-60. Baltimore, MD.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "ParlAI: A dialog research software platform", "authors": [ { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Will", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2017, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog re- search software platform. In Empirical Methods in Natural Language Processing (EMNLP), pages 79-84. Copenhagen, Denmark.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "MS MARCO: A human generated MAchine Reading COmprehension dataset", "authors": [ { "first": "Tri", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Mir", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Tiwary", "suffix": "" }, { "first": "Rangan", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.09268" ] }, "num": null, "urls": [], "raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human gen- erated MAchine Reading COmprehension dataset. 
arXiv preprint arXiv:1611.09268.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "SemEval-2018 Task 11: Machine comprehension using commonsense knowledge", "authors": [ { "first": "Simon", "middle": [], "last": "Ostermann", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Ashutosh", "middle": [], "last": "Modi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2018, "venue": "International Workshop on Semantic Evaluation (SemEval)", "volume": "", "issue": "", "pages": "747--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. SemEval-2018 Task 11: Machine comprehen- sion using commonsense knowledge. In Inter- national Workshop on Semantic Evaluation (SemEval), pages 747-757. New Orleans, LA.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Compositional semantic parsing on semi-structured tables", "authors": [ { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP)", "volume": "", "issue": "", "pages": "1470--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Panupong Pasupat and Percy Liang. 2015. Com- positional semantic parsing on semi-structured tables. In Association for Computational Lin- guistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1470-1480. Beijing, China.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Doha, Qatar.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Know what you don't know: Unanswerable questions for SQuAD", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "784--789", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Un- answerable questions for SQuAD. In Asso- ciation for Computational Linguistics (ACL), pages 784-789. 
Melbourne, Australia.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine compre- hension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 2383-2392. Austin, TX.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "MCTest: A challenge dataset for the open-domain machine comprehension of text", "authors": [ { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Christopher", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Burges", "suffix": "" }, { "first": "", "middle": [], "last": "Renshaw", "suffix": "" } ], "year": 2013, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "193--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine compre- hension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 193-203. Seattle, WA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Interpretation of natural language rules in conversational machine reading", "authors": [ { "first": "Marzieh", "middle": [], "last": "Saeidi", "suffix": "" }, { "first": "Max", "middle": [], "last": "Bartolo", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Sheldon", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2087--2097", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rockt\u00e4schel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In Em- pirical Methods in Natural Language Pro- cessing (EMNLP), pages 2087-2097. 
Brussels, Belgium.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph", "authors": [ { "first": "Karthik", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Sarath", "middle": [], "last": "Sankaranarayanan", "suffix": "" }, { "first": "", "middle": [], "last": "Chandar", "suffix": "" } ], "year": 2018, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "705--713", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In Association for the Advancement of Artificial Intelligence (AAAI), pages 705-713. New Orleans, LA.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1073--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summa- rization with pointer-generator networks. In Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 1073-1083. Vancouver, Canada.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Bidirectional attention flow for machine comprehension", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In International Conference on Learning Repre- sentations (ICLR). San Juan, Puerto Rico.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Generative deep neural networks for dialogue: A short review", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.06216" ] }, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2016. Generative deep neural networks for dialogue: A short review. 
arXiv preprint arXiv:1611.06216.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The Web as a knowledge-base for answering complex questions", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2018, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "641--651", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Talmor and Jonathan Berant. 2018. The Web as a knowledge-base for answering complex questions. In North American Chapter of the Association for Computational Linguistics (NAACL), pages 641-651. New Orleans, LA.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "NewsQA: A machine comprehension dataset", "authors": [ { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xingdi", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "Kaheer", "middle": [], "last": "Suleman", "suffix": "" } ], "year": 2017, "venue": "Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "191--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Workshop on Rep- resentation Learning for NLP, pages 191-200. Vancouver, Canada.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.05869" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neu- ral conversational model. arXiv preprint arXiv:1506.05869.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Making neural QA as simple as possible but not simpler", "authors": [ { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Wiese", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Seiffe", "suffix": "" } ], "year": 2017, "venue": "Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "271--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Computa- tional Natural Language Learning (CoNLL), pages 271-280. Vancouver, Canada.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Crowdsourcing multiple choice Science Questions", "authors": [ { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2017, "venue": "Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "94--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. 
Crowdsourcing multiple choice Science Questions. In Workshop on Noisy User-generated Text, pages 94-106. Copenhagen, Denmark.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Constructing datasets for multihop reading comprehension across documents", "authors": [ { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "287--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi- hop reading comprehension across docu- ments. Transactions of the Association for Computational Linguistics, 6:287-302.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Towards AI-complete question answering: A set of prerequisite toy tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tomas Mikolov. 2016. Towards AI-complete question answering: A set of prerequisite toy tasks. In International Con- ference on Learning Representations (ICLR).", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Fast and accurate reading comprehension by combining self-attention and convolution", "authors": [ { "first": "Adams", "middle": [ "Wei" ], "last": "Yu", "suffix": "" }, { "first": "David", "middle": [], "last": "Dohan", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combin- ing self-attention and convolution. In Interna- tional Conference on Learning Representations (ICLR). 
Vancouver, Canada.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Personalizing dialogue agents: I have a dog, do you have pets too?", "authors": [ { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "2204--2213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Associ- ation for Computational Linguistics (ACL), pages 2204-2213. Melbourne, Australia.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "A dataset for document grounded conversations", "authors": [ { "first": "Kangyan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "708--713", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded conversations. In Empirical Methods in Natural Language Processing (EMNLP), pages 708-713. Brussels, Belgium.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "SDNet: Contextualized attentionbased deep network for conversational question answering", "authors": [ { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.03593" ] }, "num": null, "urls": [], "raw_text": "Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. SDNet: Contextualized attention- based deep network for conversational question answering. arXiv preprint arXiv:1812.03593.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "A conversation from the CoQA dataset. Each turn contains a question (Q i ), an answer (A i ), and a rationale (R i ) that supports the answer.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "A conversation showing coreference chains in color. The entity of focus changes in Q4, Q5, and Q6.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Distribution of trigram prefixes of questions in SQuAD and CoQA.", "uris": null }, "FIGREF3": { "type_str": "figure", "num": null, "text": "Figure 3(a) and Figure 3(b) show the distribution of frequent trigram prefixes. Because of the freeform nature of answers, we expect a richer variety of questions in CoQA than in SQuAD. While nearly half of SQuAD questions are dominated by what questions, the distribution of CoQA is spread across multiple question types. Several sectors indicated by prefixes did, was, is, does, and and are frequent in CoQA but are completely absent in SQuAD. 
Whereas coreferences are non-existent in SQuAD, almost every sector of CoQA contains coreferences (he, him, she, it, they), indicating that CoQA is highly conversational.", "uris": null }, "FIGREF4": { "type_str": "figure", "num": null, "text": "Chunks of interest as a conversation progresses. Each chunk is one tenth of a passage. The x-axis indicates the turn number and the y-axis indicates the chunk containing the rationale. The height of a chunk indicates the concentration of conversation in that chunk. The width of the bands is proportional to the frequency of transition between chunks from one turn to the next.", "uris": null }, "FIGREF5": { "type_str": "figure", "num": null, "text": "Annotation interfaces for questioner (top) and answerer (bottom).", "uris": null }, "FIGREF7": { "type_str": "figure", "num": null, "text": "In this example, the questioner explores questions related to time.", "uris": null }, "FIGREF8": { "type_str": "figure", "num": null, "text": "A conversation containing No and unknown as answers.", "uris": null }, "TABREF1": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Comparison of CoQA with existing reading comprehension datasets." }, "TABREF3": { "type_str": "table", "num": null, "content": "
Distribution of domains in CoQA.
For each out-of-domain dataset, we only have 100 passages in the test set.
", "html": null, "text": "" }, "TABREF4": { "type_str": "table", "num": null, "content": "
                  SQuAD   CoQA
Passage Length      117    271
Question Length    10.1    5.5
Answer Length       3.2    2.7
", "html": null, "text": "on average, a question in CoQA" }, "TABREF5": { "type_str": "table", "num": null, "content": "
                  SQuAD    CoQA
Answerable        66.7%   98.7%
Unanswerable      33.3%    1.3%
Span found       100.0%   66.8%
No span found      0.0%   33.2%
Named Entity      35.9%   28.7%
Noun Phrase       25.0%   19.6%
Yes                0.0%   11.1%
No                 0.1%    8.7%
Number            16.5%    9.8%
Date/Time          7.1%    3.9%
Other             15.5%   18.1%
", "html": null, "text": "Average number of words in passage, question, and answer in SQuAD and CoQA." }, "TABREF7": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Linguistic phenomena in CoQA questions. not have any unanswerable questions, the later version" }, "TABREF8": { "type_str": "table", "num": null, "content": "
Answer Type (Percentage), with example question (Q), answer (A), and rationale (R)
Yes (48.5%)
  Q: is MedlinePlus optimized for mobile?
  A: Yes
  R: There is also a site optimized for display on mobile devices
No (30.3%)
  Q: Is it played outside?
  A: No
  R: AFL is the highest level of professional indoor American football
Fluency (14.3%)
  Q: Why?
  A: so the investigation could continue
  R: while the investigation continued
Counting (5.1%)
  Q: how many languages is it offered in?
  A: Two
  R: The service provides curated consumer health information in English and Spanish
Multiple choice (1.8%)
  Q: Is Jenny older or younger?
  A: Older
  R: her baby sister is crying so loud that Jenny can't hear herself
Fine-grained breakdown of Fluency
Multiple edits (41.4%)
  Q: What did she try just before that?
  A: She gave her a toy horse.
  R: She would give her baby sister one of her toy horses. (morphology: give → gave, horses → horse; delete: would, baby sister one of her; insert: a)
Coreference insertion (16.0%)
  Q: what is the cost to end users?
  A: It is free
  R: The service is funded by the NLM and is free to users
Morphology (13.9%)
  Q: Who was messing up the neighborhoods?
  A: vandals
  R: vandalism in the neighborhoods
Article insertion (7.2%)
  Q: What would they cut with?
  A: an ax
  R: the heavy ax
Adverb insertion (4.2%)
  Q: How old was the diary?
  A: 190 years old
  R: kept 190 years ago
Adjective deletion (4.2%)
  Q: What type of book?
  A: A diary.
  R: a 120-page diary
Preposition insertion
", "html": null, "text": "She gave her a toy horse. R: She would give her baby sister one of her toy horses. (morphology: give \u2192 gave, horses \u2192 horse; delete: would, baby sister one of her; insert: a)" }, "TABREF10": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Models and human performance (F1 score) on the development and the test data." }, "TABREF11": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Fine-grained results of different question and answer types in the development set. For the question type results, we only analyze 150 questions as described in Section 4.2." }, "TABREF14": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Error analysis of questions with answers that do not overlap with the text passage." } } } }