{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:11:06.291045Z" }, "title": "Querent Intent in Multi-Sentence Questions", "authors": [ { "first": "Laurie", "middle": [], "last": "Burchell", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "laurie.burchell@ed.ac.uk" }, { "first": "Jie", "middle": [], "last": "Chi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "jie.chi@ed.ac.uk" }, { "first": "Tom", "middle": [], "last": "Hosking", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "tom.hosking@ed.ac.uk" }, { "first": "Nina", "middle": [], "last": "Markl", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "nina.markl@ed.ac.uk" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multi-sentence questions (MSQs) are sequences of questions connected by relations which, unlike sequences of standalone questions, need to be answered as a unit. Following Rhetorical Structure Theory (RST), we recognise that different \"question discourse relations\" between the subparts of MSQs reflect different speaker intents, and consequently elicit different answering strategies. Correctly identifying these relations is therefore a crucial step in automatically answering MSQs. We identify five different types of MSQs in English, and define five novel relations to describe them. We extract over 162,000 MSQs from Stack Exchange to enable future research. Finally, we implement a high-precision baseline classifier based on surface features.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Multi-sentence questions (MSQs) are sequences of questions connected by relations which, unlike sequences of standalone questions, need to be answered as a unit. Following Rhetorical Structure Theory (RST), we recognise that different \"question discourse relations\" between the subparts of MSQs reflect different speaker intents, and consequently elicit different answering strategies. Correctly identifying these relations is therefore a crucial step in automatically answering MSQs. We identify five different types of MSQs in English, and define five novel relations to describe them. We extract over 162,000 MSQs from Stack Exchange to enable future research. Finally, we implement a high-precision baseline classifier based on surface features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A multi-sentence question (MSQ) is a dialogue turn that contains more than one question (cf. Ex. (1)). We refer to the speaker of such a turn as a querent (i.e., one who seeks).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) Querent: How can I transport my cats if I am moving a long distance? (Q1) For example, flying them from NYC to London? 
(Q2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A standard question answering system might consider these questions separately:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) A1: You can take them in the car with you. A2: British Airways fly from NYC to London.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, this na\u00efve approach does not result in a good answer, since the querent intends that an answer take both questions into account: in (1), Q2 clarifies that taking pets by car is not a relevant option. The querent is likely looking for an answer like (3):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) A: British Airways will let you fly pets from NYC to London.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Whilst question answering (QA) has received significant research attention in recent years (Joshi et al., 2017; Agrawal et al., 2017) , there is little research to date on answering MSQs, despite their prevalence in English. Furthermore, existing QA datasets are not appropriate for the study of MSQs as they tend to be sequences of standalone questions constructed in relation to a text by crowdworkers (e.g. SQuAD (Rajpurkar et al., 2016) ). We are not aware of any work that has attempted to improve QA performance on MSQs, despite the potential for obvious errors as in the example above.", "cite_spans": [ { "start": 91, "end": 111, "text": "(Joshi et al., 2017;", "ref_id": "BIBREF8" }, { "start": 112, "end": 133, "text": "Agrawal et al., 2017)", "ref_id": "BIBREF0" }, { "start": 416, "end": 440, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution towards the broader research goal of automatically answering MSQs is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We create a new dataset of 162,745 English two-question MSQs from Stack Exchange.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We define five types of MSQ according to how they are intended to be answered, inferring intent from relations between them. \u2022 We design a baseline classifier based on surface features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior work on QA has focused on either single questions contained within dialogue (Choi et al., 2018; Reddy et al., 2019; Saeidi et al., 2018; Clark et al., 2018) , or questions composed of two or more sentences crowd-sourced by community QA (cQA) services (John and Kurian, 2011; Tamura et al., 2005) . Our definition of MSQs is similar to the latter, but it should be noted that sentences in existing cQA datasets can be declarative or standalone, while in our case they must be a sequence of questions that jointly imply some user intent. Popular tasks on cQA have only considered the semantics of individual questions and answers, while we are more focused on interactions between questions. Huang et al. (2008) and Krishnan et al. 
(2005) classify questions to improve QA performance, but their work is limited to standalone questions. Ciurca (2019) was the first to identify MSQs as a distinct phenomenon, and curated a small dataset consisting of 300 MSQs extracted from Yahoo Answers. However, this dataset is too small to enable significant progress on automatic classification of MSQ intent.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Choi et al., 2018;", "ref_id": "BIBREF1" }, { "start": 102, "end": 121, "text": "Reddy et al., 2019;", "ref_id": "BIBREF14" }, { "start": 122, "end": 142, "text": "Saeidi et al., 2018;", "ref_id": "BIBREF17" }, { "start": 143, "end": 162, "text": "Clark et al., 2018)", "ref_id": "BIBREF3" }, { "start": 257, "end": 280, "text": "(John and Kurian, 2011;", "ref_id": "BIBREF7" }, { "start": 281, "end": 301, "text": "Tamura et al., 2005)", "ref_id": "BIBREF20" }, { "start": 696, "end": 715, "text": "Huang et al. (2008)", "ref_id": "BIBREF6" }, { "start": 720, "end": 742, "text": "Krishnan et al. (2005)", "ref_id": "BIBREF10" }, { "start": 840, "end": 853, "text": "Ciurca (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "Stack Exchange is a network of question-answering sites, where each site covers a particular topic. Questions on Stack Exchange are formatted to have a short title and then a longer body describing the question, meaning that it is far more likely to contain MSQs than other question answering sites, which tend to focus attention on the title with only a short amount of description after the title. There is a voting system which allows us to proxy well-formedness, since badly-formed questions are likely to be rated poorly. It covers a variety of topics, meaning that we can obtain questions from a variety of domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Large-scale MSQ dataset", "sec_num": "3" }, { "text": "To obtain the data, we used the Stack Exchange Data Explorer 1 , an open source tool for running arbitrary queries against public data from the Stack Exchange network. We chose 93 sites within the network, and queried each site for entries with at least two question marks in the body of the question. We removed any questions with T E X and mark-up tags, then replaced any text matching a RegEx pattern for a website with '[website]'. From this cleaned text, we extracted pairs of MSQs by splitting the cleaned body of the question into sentences, then finding two adjacent sentences ending in '?'. We removed questions under 5 or over 300 characters in length. Finally, we removed any question identified as non-English using langid.py (Lui and Baldwin, 2012) . Many of the questions labelled as 'non-English' were in fact badly formed English, making language identification a useful pre-processing step.", "cite_spans": [ { "start": 738, "end": 761, "text": "(Lui and Baldwin, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Large-scale MSQ dataset", "sec_num": "3" }, { "text": "After cleaning and processing, we extracted 162,745 questions from 93 topics 2 . A full list of topics and the number of questions extracted from each is given in Appendix A. 
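As a rough, illustrative sketch of the extraction heuristic described above (and not the released pipeline itself), the following Python snippet shows how adjacent question pairs could be pulled from a cleaned question body; the function and constant names are our own placeholders, the URL pattern stands in for the unstated RegEx, and TeX/mark-up removal is omitted.

# Illustrative sketch only: candidate two-question MSQs from a cleaned Stack Exchange
# question body, following the filtering steps described in Section 3.
# Names such as extract_msq_pairs, MIN_LEN and MAX_LEN are hypothetical.
import re
import langid  # langid.py (Lui and Baldwin, 2012)

MIN_LEN, MAX_LEN = 5, 300  # character limits per question, as stated above

def clean_body(body: str) -> str:
    # Replace anything that looks like a website address with a placeholder token.
    return re.sub(r"https?://\S+", "[website]", body)

def extract_msq_pairs(body: str):
    text = clean_body(body)
    # Naive sentence split on terminal punctuation; a production pipeline would use a tokeniser.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    pairs = []
    for q1, q2 in zip(sentences, sentences[1:]):
        if not (q1.endswith("?") and q2.endswith("?")):
            continue  # keep only adjacent sentences that both end in '?'
        if not all(MIN_LEN <= len(q) <= MAX_LEN for q in (q1, q2)):
            continue  # length filter
        lang, _ = langid.classify(q1 + " " + q2)
        if lang != "en":
            continue  # drop pairs identified as non-English
        pairs.append((q1, q2))
    return pairs

print(extract_msq_pairs(
    "How can I transport my cats if I am moving a long distance? "
    "For example, flying them from NYC to London?"))

The sentence splitter above is only a stand-in; the exact cleaning and splitting used for the dataset are as described in the text.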
We restrict the dataset to pairs of questions, leaving longer sequences of MSQs for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Large-scale MSQ dataset", "sec_num": "3" }, { "text": "MSQs are distinct from sequences of standalone questions in that their subparts need to be considered as a unit (see (1) in Section 1). This is because they form a discourse: a coherent sequence of utterances (Hobbs, 1979) . In declarative sentences, the relationship between their different parts is specified by \"discourse relations\" (Stede, 2011; Kehler, 2006) , which may be signalled with discourse markers (e.g. if, because) or discourse adverbials (e.g. as a result, see Rohde et al. (2015) ). We propose adapting the notion of discourse relations to interrogatives.", "cite_spans": [ { "start": 209, "end": 222, "text": "(Hobbs, 1979)", "ref_id": "BIBREF5" }, { "start": 336, "end": 349, "text": "(Stede, 2011;", "ref_id": "BIBREF19" }, { "start": 350, "end": 363, "text": "Kehler, 2006)", "ref_id": "BIBREF9" }, { "start": 478, "end": 497, "text": "Rohde et al. (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "MSQ type as a proxy for speaker intent", "sec_num": "4" }, { "text": "A particularly useful approach to discourse relations in the context of MSQs is Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , which understands them to be an expression of the speaker's communicative intent. Listeners can infer this intent under the assumptions that speakers are \"cooperative\" and keep their contributions as brief and relevant as possible (Grice, 1975) . Transposing this theory to interrogatives, we can conceptualise the querent's communicative intent as a specific kind of answer. Reflecting this intent, the relation suggests an answering strategy.", "cite_spans": [ { "start": 114, "end": 139, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF12" }, { "start": 373, "end": 386, "text": "(Grice, 1975)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "MSQ type as a proxy for speaker intent", "sec_num": "4" }, { "text": "We introduce five types of \"question discourse relations\" with a prototypical example from our data set, highlighting the inferred intent and the proposed answering strategy in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "MSQ type as a proxy for speaker intent", "sec_num": "4" }, { "text": "How often should I feed it? Intent Two questions on the same topic (the querent's kitten). Strategy Resolve coreference and answer both questions separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEPARABLE Example What's the recommended kitten food?", "sec_num": null }, { "text": "Example Is Himalayan pink salt okay to use in fish tanks?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REFORMULATED", "sec_num": null }, { "text": "I read that aquarium salt is good but would pink salt work? Intent Speaker wants to paraphrase Q1 (perhaps for clarity). Strategy Answer one of the two questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REFORMULATED", "sec_num": null }, { "text": "Example Is it normal for my puppy to eat so quickly?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISJUNCTIVE", "sec_num": null }, { "text": "Or should I take him to the vet? Intent Querent offers two potential answers in the form of polar questions. 
Strategy Select one of the answers offered (e.g. \"Yes, it is normal\") or reject both (e.g. \"Neithertry feeding it less but more often\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISJUNCTIVE", "sec_num": null }, { "text": "Example Has something changed that is making cats harder to buy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONDITIONAL", "sec_num": null }, { "text": "If so, what changed? Intent Q2 only matters if the answer to Q1 is \"yes\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONDITIONAL", "sec_num": null }, { "text": "Strategy First consider what the answer to Q1 is and then answer Q2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONDITIONAL", "sec_num": null }, { "text": "Example How can I transport my cats if I am moving a long distance?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ELABORATIVE", "sec_num": null }, { "text": "For example, flying them from NYC to London? Intent Querent wants a more specific answer. Strategy Combine context and answer the second question only. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ELABORATIVE", "sec_num": null }, { "text": "Since Ciurca (2019) found that using conventional discourse parsers created for declaratives is not suitable for extracting discourse relations from MSQs, we design our own annotation scheme and use it to implement a baseline classifier. Following previous work on extracting discourse relations (Rohde et al., 2015) , we use discourse markers and discourse adverbials alongside other markers indicative of the structure of the question (listed in appendix C) to identify explicitly signalled relations. 3 We construct a high-precision, low-recall set of rules to distinguish the most prototypical forms of the five types using combinations of binary contrastive features. To derive the relevant features, we consider the minimal edits to examples of MSQs required to break or change the type of discourse relation between their parts. We then define a feature mask for each MSQ type which denotes whether each feature is required, disallowed or ignored by that type. Each mask is mutually exclusive by design.", "cite_spans": [ { "start": 6, "end": 19, "text": "Ciurca (2019)", "ref_id": "BIBREF2" }, { "start": 296, "end": 316, "text": "(Rohde et al., 2015)", "ref_id": "BIBREF16" }, { "start": 504, "end": 505, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "Given a pair of questions, the system enumerates the values of each feature, and compares to the definitions in Appendix B. If a match is found, the pair is assigned the corresponding MSQ label, otherwise annotation and our classifier. While SEPARABLE MSQs appear to be the most prevalent, the classifier identifies only a small fraction of them, implying that they are likely to be implicitly signalled. DISJUNCTIVE and CONDI-TIONAL are the most likely to be explicitly signalled. it is assigned UNKNOWN. 
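As a minimal sketch of this matching step (assuming the binary surface features have already been extracted), the snippet below uses invented feature names and masks rather than the actual definitions, which are given in Appendices B and C.

# Illustrative sketch of the mask-matching step described above.
# Feature names and masks here are hypothetical stand-ins for Appendix B.
from typing import Dict

REQUIRED, DISALLOWED = "+", "-"

# '+' = feature required, '-' = feature disallowed; features omitted from a mask are ignored.
MASKS: Dict[str, Dict[str, str]] = {
    "CONDITIONAL": {"q1_polar": REQUIRED, "q2_if_marker": REQUIRED, "disjunction": DISALLOWED},
    "DISJUNCTIVE": {"q1_polar": REQUIRED, "q2_if_marker": DISALLOWED, "disjunction": REQUIRED},
}

def classify(features: Dict[str, bool]) -> str:
    """Return the MSQ type whose mask is satisfied, else UNKNOWN.

    Because the real masks are mutually exclusive by design, at most one can match."""
    for msq_type, mask in MASKS.items():
        satisfied = True
        for feature, constraint in mask.items():
            present = features.get(feature, False)
            if (constraint == REQUIRED and not present) or (constraint == DISALLOWED and present):
                satisfied = False
                break
        if satisfied:
            return msq_type
    return "UNKNOWN"

# "Has something changed that is making cats harder to buy? If so, what changed?"
print(classify({"q1_polar": True, "q2_if_marker": True, "disjunction": False}))  # CONDITIONAL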
This process is illustrated in Appendix D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "U N K S E P A R A B L E R E F O R M U L A T I O N E L A B O R A T I V E D I S J U N C T I V E C O N D I T I O N A L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "S E P A R A B L E R E F O R M U L A T IO N E L A B O R A T IV E D IS JU N C T IV E C O N D", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "To evaluate our classifier, 420 MSQs from our test set were annotated by two native speakers. We then evaluate the classifier on the subset of 271 samples for which both annotators agreed on the MSQ type. The resultant confusion matrix is shown in Figure 1a , with the classifier achieving 82.9% precision and 26.5% recall.", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 257, "text": "Figure 1a", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "Overall, we find that our classifier performs well for a heuristic approach, but that real world data contains many subtleties that can break our assumptions. During the annotation process, we found many instances of single questions followed by a question which fulfils a purely social function, such as \"Is it just me or this a problem?\" (a phatic question, see Robinson et al. (1992) ). MSQs can also exhibit more than one intent, presenting a challenge for both our classifier and the expert annotators (see Appendix E).", "cite_spans": [ { "start": 364, "end": 386, "text": "Robinson et al. (1992)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "A limitation of our classifier is the focus on explicit MSQs, which can be identified with well-defined features. The low recall of our classifier indicates that MSQs are often implicit, missing certain markers or not completely fulfilling the distinguishing requirements. Figure 1b shows that while DISJUNCTIVE and CONDITIONAL MSQs are often explicitly signalled, the other types are likely to be implicit.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 282, "text": "Figure 1b", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Classification using contrastive features", "sec_num": "5" }, { "text": "Inspired by the role of discourse relations in MSQ answering strategies, we propose a novel definition of five different categories of MSQs based on their corresponding speaker intents. We introduce a rich and diversified multi-sentence questions dataset, which contains 162,000 MSQs extracted from Stack Exchange. This achieves our goal of providing a resource for further study of MSQs. Additionally, we implement a baseline classifier based on surface features as a prelimininary step towards successful answering strategies for MSQs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Future work could improve on our classifier by considering implicit MSQs, with one potential approach being to transform explicit MSQs into implicit examples by removing some markers while ensuring the relation is still valid. 
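Purely as an illustration of that transformation (not something implemented here), one might delete the explicit markers from a pair and keep only those candidates that a human judge confirms still carry the same relation; the marker list and function name below are placeholders.

# Illustrative only: turn an explicitly signalled MSQ into a candidate implicit one by
# deleting its discourse marker; whether the relation survives still needs human checking.
import re

MARKERS = ["if so,", "if not,", "for example,", "for instance,"]  # small placeholder subset of Appendix C

def drop_markers(question: str) -> str:
    out = question
    for marker in MARKERS:
        out = re.sub(re.escape(marker), "", out, flags=re.IGNORECASE)
    out = re.sub(r"\s+", " ", out).strip()
    return out[:1].upper() + out[1:] if out else out

print(drop_markers("If so, what changed?"))  # -> "What changed?"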
Other areas for further work include implementing appropriate answering strategies for different types of MSQs, and investigating whether and how longer chains of MSQs differ compared to pairs of connected questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Note that these requirements do not apply in all cases: if all conditions are met then we assert that a pair of questions must be of that type, but the absence of a feature does not forbid that relation from being present. A list of the lexical markers used to define features is given in Appendix C. Like discourse relations in declaratives, question discourse relations may be implicit, i.e. not marked with a connective or other marker but inferable to the listener. These implicit relations continue to be very challenging for automatic systems (Sporleder and Lascarides, 2008) and we do not attempt to handle them. Table 3 : Definitions for each MSQ type. '+' indicates that a feature is required, while '-' means the feature is disallowed. Types can be ignored for some features, meaning that the features are neither disallowed nor required.", "cite_spans": [ { "start": 549, "end": 581, "text": "(Sporleder and Lascarides, 2008)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 620, "end": 627, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "B Contrastive features", "sec_num": null }, { "text": "We include the case where Q2 is a statement, as in \"How can I transport my cats if I am moving a long distance? For example, to London.\". Although these forms of MSQ do not appear in our dataset due to the filtering method used, they are in general valid, and we include them for completeness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Contrastive features", "sec_num": null }, { "text": "To evaluate semantic similarity between Q1 and Q2, we calculated the cosine similarity between the mean of the words vectors, and compared to a threshold of 0.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Contrastive features", "sec_num": null }, { "text": "Some of the discourse markers are drawn from the Penn Discourse Tree Bank (PDTB) annotation scheme (Webber et al., 2019) .", "cite_spans": [ { "start": 99, "end": 120, "text": "(Webber et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "C Lexical Markers", "sec_num": null }, { "text": "To identify cases of anaphora, we searched for the following pronoun strings (and it's) in the second question: she, he, it, they, her, his, its, their, them, it's", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.1 Anaphora Markers", "sec_num": null }, { "text": "To identify cases of verb ellipsis, we searched for the following pro-forms of full verb phrases in the second question: do so, did so, does so, do it, do too, does too, did too, did it too, do it too, does it too does, did, didn't, will, won't, would, is, are, were, weren't, wasn't, can, can't, could, must, have, has, had, hasn't, haven't, should, shouldn't, may, might, shall, who, what, where, when, why, how, which C.5 Conditional (\"if\") Markers if so, accordingly, then, as a result, it follows, subsequently, consequently, if yes, if not, if the answer is yes, if the answer is no C.6 Elaborative Markers for instance, for example, e.g., specifically, particularly, in particular, more specifically, more precisely, therefore C.7 Separable Markers also, secondly, next, related, 
relatedly, similarly, furthermore 14 11 3 3 0 1 3 93 31 16 2 1 0 9 36 10 0 1 12 12 19 39 3 0 0 0 2 0 43 1 2 9 0 0 0 46 Figure 3 : Confusion matrix between the two annotators who labelled our test set. While there is good agreement on the MSQ types that are often explicitly signalled (DISJUNCTIVE and CONDITIONAL), the other types are often more subtle, and examples may involve multiple intents.", "cite_spans": [ { "start": 214, "end": 219, "text": "does,", "ref_id": null }, { "start": 220, "end": 224, "text": "did,", "ref_id": null }, { "start": 225, "end": 232, "text": "didn't,", "ref_id": null }, { "start": 233, "end": 238, "text": "will,", "ref_id": null }, { "start": 239, "end": 245, "text": "won't,", "ref_id": null }, { "start": 246, "end": 252, "text": "would,", "ref_id": null }, { "start": 253, "end": 256, "text": "is,", "ref_id": null }, { "start": 257, "end": 261, "text": "are,", "ref_id": null }, { "start": 262, "end": 267, "text": "were,", "ref_id": null }, { "start": 268, "end": 276, "text": "weren't,", "ref_id": null }, { "start": 277, "end": 284, "text": "wasn't,", "ref_id": null }, { "start": 285, "end": 289, "text": "can,", "ref_id": null }, { "start": 290, "end": 296, "text": "can't,", "ref_id": null }, { "start": 297, "end": 303, "text": "could,", "ref_id": null }, { "start": 304, "end": 309, "text": "must,", "ref_id": null }, { "start": 310, "end": 315, "text": "have,", "ref_id": null }, { "start": 316, "end": 320, "text": "has,", "ref_id": null }, { "start": 321, "end": 325, "text": "had,", "ref_id": null }, { "start": 326, "end": 333, "text": "hasn't,", "ref_id": null }, { "start": 334, "end": 342, "text": "haven't,", "ref_id": null }, { "start": 343, "end": 350, "text": "should,", "ref_id": null }, { "start": 351, "end": 361, "text": "shouldn't,", "ref_id": null }, { "start": 362, "end": 366, "text": "may,", "ref_id": null }, { "start": 367, "end": 373, "text": "might,", "ref_id": null }, { "start": 374, "end": 380, "text": "shall,", "ref_id": null } ], "ref_spans": [ { "start": 906, "end": 914, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "C.2 Verb Ellipsis Markers", "sec_num": null }, { "text": "C.3 Polar Question Markers do,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.2 Verb Ellipsis Markers", "sec_num": null }, { "text": "https://data.stackexchange.com/ 2 Our dataset is available at https://github.com/laurieburchell/multi-sentence-questions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since implicit discourse relations are pervasive and challenging to automatic systems(Sporleder and Lascarides, 2008), we make no attempt to extract them here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh. 
We would like to thank Bonnie Webber for her supervision, and Ivan Titov and Adam Lopez for their useful advice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "VQA: Visual question answering", "authors": [ { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "4--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2017. VQA: Visual question answering. International Journal of Computer Vision, 123(1):4-31, May.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Quac: Question answering in context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.07036" ] }, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Sense classification of multi-sentence questions", "authors": [ { "first": "Tudor", "middle": [], "last": "Ciurca", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tudor Ciurca. 2019. Sense classification of multi-sentence questions. Master's thesis, School of Informatics, University of Edinburgh.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Think you have solved question answering? Try ARC, the AI2 reasoning challenge", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Cowhey", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Carissa", "middle": [], "last": "Schoenick", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.05457" ] }, "num": null, "urls": [], "raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. 
Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Logic and conversation", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Grice", "suffix": "" } ], "year": 1975, "venue": "Syntax and Semantics 3", "volume": "", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. P. Grice. 1975. Logic and conversation. In P Cole and J Morgan, editors, Syntax and Semantics 3, pages 41-58. Academic Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coherence and coreference", "authors": [ { "first": "R", "middle": [], "last": "Jerry", "suffix": "" }, { "first": "", "middle": [], "last": "Hobbs", "suffix": "" } ], "year": 1979, "venue": "Cognitive Science", "volume": "3", "issue": "1", "pages": "67--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerry R Hobbs. 1979. Coherence and coreference. Cognitive Science, 3(1):67-90.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Question classification using head words and their hypernyms", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Thint", "suffix": "" }, { "first": "Zengchang", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "927--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Marcus Thint, and Zengchang Qin. 2008. Question classification using head words and their hypernyms. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 927-936, Honolulu, Hawaii, October. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Research issues in community based question answering", "authors": [ { "first": "Blooma", "middle": [], "last": "John", "suffix": "" }, { "first": "Jayan", "middle": [], "last": "Kurian", "suffix": "" } ], "year": 2011, "venue": "PACIS 2011 -15th Pacific Asia Conference on Information Systems: Quality Research in Pacific", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blooma John and Jayan Kurian. 2011. Research issues in community based question answering. In PACIS 2011 - 15th Pacific Asia Conference on Information Systems: Quality Research in Pacific, page 29, 01.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. 
CoRR, abs/1705.03551.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Discourse coherence", "authors": [ { "first": "Andrew", "middle": [], "last": "Kehler", "suffix": "" } ], "year": 2006, "venue": "The Handbook of Pragmatics", "volume": "", "issue": "", "pages": "241--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Kehler. 2006. Discourse coherence. In Laurence Horn and Gregory Ward, editors, The Handbook of Pragmatics, pages 241-265. Blackwell Publishing Ltd.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Enhanced answer type inference from questions using sequential models", "authors": [ { "first": "Vijay", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Sujatha", "middle": [], "last": "Das", "suffix": "" }, { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "315--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay Krishnan, Sujatha Das, and Soumen Chakrabarti. 2005. Enhanced answer type inference from questions us- ing sequential models. In Proceedings of Human Language Technology Conference and Conference on Empir- ical Methods in Natural Language Processing, pages 315-322, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "2012. langid.py: An off-the-shelf language identification tool", "authors": [ { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": null, "venue": "Proceedings of the ACL 2012 System Demonstrations", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "authors": [ { "first": "C", "middle": [], "last": "William", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Mann", "suffix": "" }, { "first": "", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text: Interdisciplinary Journal for the Study of Discourse", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. 
Text: Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "CoQA: A conversational question answering challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "249--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, March.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "How Are You?\": Negotiating Phatic Communion", "authors": [ { "first": "Nikolas", "middle": [], "last": "Jeffrey D Robinson", "suffix": "" }, { "first": "Justine", "middle": [], "last": "Coupland", "suffix": "" }, { "first": "", "middle": [], "last": "Coupland", "suffix": "" } ], "year": 1992, "venue": "Language in Society", "volume": "21", "issue": "2", "pages": "207--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey D Robinson, Nikolas Coupland, and Justine Coupland. 1992. \"How Are You?\": Negotiating Phatic Communion. Language in Society, 21(2):207-230.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Recovering discourse relations: Varying influence of discourse adverbials", "authors": [ { "first": "Hannah", "middle": [], "last": "Rohde", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Dickinson", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah Rohde, Anna Dickinson, Chris Clark, Annie Louis, and Bonnie Webber. 2015. Recovering discourse relations: Varying influence of discourse adverbials. 
In Proceedings of the First Workshop on Linking Compu- tational Models of Lexical, Sentential and Discourse-level Semantics, pages 22-31.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Interpretation of natural language rules in conversational machine reading", "authors": [ { "first": "Marzieh", "middle": [], "last": "Saeidi", "suffix": "" }, { "first": "Max", "middle": [], "last": "Bartolo", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Sheldon", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rockt\u00e4schel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. CoRR, abs/1809.01494.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Using automatically labelled examples to classify rhetorical relations: an assessment", "authors": [ { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2008, "venue": "Natural Language Engineering", "volume": "14", "issue": "3", "pages": "369--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: an assessment. Natural Language Engineering, 14(3):369-416.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Discourse processing", "authors": [ { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2011, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "4", "issue": "3", "pages": "1--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Stede. 2011. Discourse processing. Synthesis Lectures on Human Language Technologies, 4(3):1-165, nov.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Classification of multiple-sentence questions", "authors": [ { "first": "Akihiro", "middle": [], "last": "Tamura", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2005, "venue": "Second International Joint Conference on Natural Language Processing: Full Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akihiro Tamura, Hiroya Takamura, and Manabu Okumura. 2005. Classification of multiple-sentence questions. 
In Second International Joint Conference on Natural Language Processing: Full Papers.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Penn discourse treebank 3.0 annotation manual", "authors": [ { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Webber, Rashmi Prasad, Alan Lee, and Aravind Joshi. 2019. The Penn discourse treebank 3.0 annotation manual.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Confusion matrix for our classifier evaluated on our handannotated test set. The classifier can reliably detect DIS-JUNCTIVE and CONDITIONAL MSQs, achieving a high overall precision score of 82.9%." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Counts of each MSQ type in our test set, according to our" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 1" }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "A visualisation of the labelling process. The system checks for the presence of each feature in the input text (circled in blue) and constructs a feature vector for the pair of questions. This feature vector is compared to the definitions, and if a match is found the questions are assigned the corresponding label. Note that not all features are shown." }, "TABREF0": { "type_str": "table", "num": null, "html": null, "content": "