{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:44.149655Z" }, "title": "What Would it Take to get Biomedical QA Systems into Practice?", "authors": [ { "first": "Gregory", "middle": [], "last": "Kell", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "", "affiliation": {}, "email": "b.wallace@northeastern.edu" }, { "first": "Andr\u00e9", "middle": [], "last": "Jaun", "suffix": "", "affiliation": {}, "email": "ajaun@metadvice.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Medical question answering (QA) systems have the potential to answer clinicians' uncertainties about treatment and diagnosis ondemand, informed by the latest evidence. However, despite the significant progress in general QA made by the NLP community, medical QA systems are still not widely used in clinical environments. One likely reason for this is that clinicians may not readily trust QA system outputs, in part because transparency, trustworthiness, and provenance have not been key considerations in the design of such models. In this paper we discuss a set of criteria that, if met, we argue would likely increase the utility of biomedical QA systems, which may in turn lead to adoption of such systems in practice. We assess existing models, tasks, and datasets with respect to these criteria, highlighting shortcomings of previously proposed approaches and pointing toward what might be more usable QA systems.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Medical question answering (QA) systems have the potential to answer clinicians' uncertainties about treatment and diagnosis ondemand, informed by the latest evidence. However, despite the significant progress in general QA made by the NLP community, medical QA systems are still not widely used in clinical environments. One likely reason for this is that clinicians may not readily trust QA system outputs, in part because transparency, trustworthiness, and provenance have not been key considerations in the design of such models. In this paper we discuss a set of criteria that, if met, we argue would likely increase the utility of biomedical QA systems, which may in turn lead to adoption of such systems in practice. We assess existing models, tasks, and datasets with respect to these criteria, highlighting shortcomings of previously proposed approaches and pointing toward what might be more usable QA systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "During consultations in primary care, clinicians generate at least one question for every two patients (Del Fiol et al., 2014) . Nonetheless, clinicians look for answers to only half of the questions due to time constraints and the belief that answers to certain questions do not exist (Del Fiol et al., 2014) , despite the plethora of available evidence (Bastian et al., 2010) . 
When clinicians do search for answers, they usually spend fewer than three minutes per question doing so (Del Fiol et al., 2014; Hoogendam et al., 2008) .", "cite_spans": [ { "start": 103, "end": 126, "text": "(Del Fiol et al., 2014)", "ref_id": "BIBREF11" }, { "start": 286, "end": 309, "text": "(Del Fiol et al., 2014)", "ref_id": "BIBREF11" }, { "start": 355, "end": 377, "text": "(Bastian et al., 2010)", "ref_id": "BIBREF3" }, { "start": 485, "end": 508, "text": "(Del Fiol et al., 2014;", "ref_id": "BIBREF11" }, { "start": 509, "end": 532, "text": "Hoogendam et al., 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our focus in this paper is on questions pertaining to patient care decisions, for example seeking guidance about diagnosis or treatment. Ideally, clinicians would search for answers to such questions with reference to high-quality studies and up-to-date evidence syntheses, typically indexed in medical databases such as PubMed 1 and the Cochrane Library. 2 This practice of emphasizing use of rigorous empirical evidence is known as evidence-based medicine (EBM). Under this framework, evidence compiled from all relevant highquality research (in the form of, e.g., systematic reviews and rigorously produced clinical guidelines) is preferred to individual studies or expert opinion (Ebell et al., 2004; Guyatt et al., 2008; Alper and Haynes, 2016a) . Unfortunately, searching existing sources for relevant, high-quality information is onerous. Due to the time constraints imposed on clinicians, this leads to widespread reliance on general information sources such as Google (Hider et al., 2009) . However, while simple to use, generalpurpose search engines rank results according to criteria not directly aligned with EBM principles such as rigour, comprehensiveness, and reliability (Hider et al., 2009) .", "cite_spans": [ { "start": 684, "end": 704, "text": "(Ebell et al., 2004;", "ref_id": "BIBREF12" }, { "start": 705, "end": 725, "text": "Guyatt et al., 2008;", "ref_id": "BIBREF16" }, { "start": 726, "end": 750, "text": "Alper and Haynes, 2016a)", "ref_id": "BIBREF1" }, { "start": 977, "end": 997, "text": "(Hider et al., 2009)", "ref_id": "BIBREF17" }, { "start": 1187, "end": 1207, "text": "(Hider et al., 2009)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Aside from internet search, clinicians often engage in informal discussions about decisions with colleagues in what are sometimes referred to as \"curbside consultations\" Champion, 2017, 2020; O'leary and Mhaolr\u00fanaigh, 2012) . It is common for practitioners to engage in at least one such discussion per week for practical reasons, including convenience, or an urgent need for information (Smith, 1996) . These inform the \"mindlines\" that clinicians acquire over their careers (i.e., mental models of medicine) and that are also based on other sources including guideline documents, training, background reading, and experience (Gabbay and le May, 2016) . 
However, the information exchanged in informal consultations may be inaccurate, incomplete, and lead to practice influenced more by expert opinion than the scientific literature (Papermaster and Champion, 2017) .", "cite_spans": [ { "start": 170, "end": 191, "text": "Champion, 2017, 2020;", "ref_id": null }, { "start": 192, "end": 223, "text": "O'leary and Mhaolr\u00fanaigh, 2012)", "ref_id": "BIBREF29" }, { "start": 388, "end": 401, "text": "(Smith, 1996)", "ref_id": "BIBREF41" }, { "start": 627, "end": 652, "text": "(Gabbay and le May, 2016)", "ref_id": "BIBREF15" }, { "start": 833, "end": 865, "text": "(Papermaster and Champion, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Medical question answering (QA) systems have the potential to address these issues by answering clinicians' questions in real-time on the ba-sis of the latest evidence. This has motivated development of QA systems and associated medical QA datasets used to train them. For example, BioASQ (Tsatsaronis et al., 2015) and PubMedQA (Jin et al., 2019) have been created to train and evaluate systems that answer clinicians' questions based on medical research literature, while em-rQA (Pampari et al., 2018) , emrKBQA (Raghavan et al., 2021) and why-QA (Fan, 2019) were constructed using queries concerning patient data from electronic health records (EHRs). MEDIQA-QA (Ben Abacha et al., 2019) and LiveQA-Medical (Abacha et al., 2017) are datasets designed for systems that answer consumer (patient) queries. MEDIQA-AnS (Savery et al., 2020) accompanies the answers from MEDIQA-QA with summaries that consumers would understand more easily. Systems for QA over EHRs aim to answer questions about the medical history or prior care of individual patients. By contrast, our focus here is on systems that can provide general evidence-based guidance in response to queries; we therefore omit emrQA, emrKBQA and why-QA from our discussion.", "cite_spans": [ { "start": 289, "end": 315, "text": "(Tsatsaronis et al., 2015)", "ref_id": null }, { "start": 329, "end": 347, "text": "(Jin et al., 2019)", "ref_id": "BIBREF20" }, { "start": 481, "end": 503, "text": "(Pampari et al., 2018)", "ref_id": "BIBREF30" }, { "start": 514, "end": 537, "text": "(Raghavan et al., 2021)", "ref_id": null }, { "start": 549, "end": 560, "text": "(Fan, 2019)", "ref_id": "BIBREF14" }, { "start": 655, "end": 690, "text": "MEDIQA-QA (Ben Abacha et al., 2019)", "ref_id": null }, { "start": 695, "end": 731, "text": "LiveQA-Medical (Abacha et al., 2017)", "ref_id": null }, { "start": 817, "end": 838, "text": "(Savery et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing biomedical QA systems that answer questions with reference to the medical literature typically provide answers in the form of yes/no, factoids, lists, and/or definitions (Sarrouti and Ouatik El Alaoui, 2020; Ben Abacha and Zweigenbaum, 2015; Cao et al., 2011; Zahid et al., 2018; Yu et al., 2007) without supplying justifications, e.g., source journals, extracted text snippets, and/or associated statistics. 
However, this answer format does not readily translate into clinical practice.", "cite_spans": [ { "start": 179, "end": 216, "text": "(Sarrouti and Ouatik El Alaoui, 2020;", "ref_id": "BIBREF37" }, { "start": 217, "end": 250, "text": "Ben Abacha and Zweigenbaum, 2015;", "ref_id": "BIBREF5" }, { "start": 251, "end": 268, "text": "Cao et al., 2011;", "ref_id": "BIBREF7" }, { "start": 269, "end": 288, "text": "Zahid et al., 2018;", "ref_id": "BIBREF45" }, { "start": 289, "end": 305, "text": "Yu et al., 2007)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Take, for example, the question \"Which antibiotic should I use for urinary tract infections?\". A factoid-based QA system might (reasonably) return the answer \"trimethoprim 200mg\". However, a \"correct\" answer is not sufficient to translate into clinical use. An answer here is only as reliable as the source from which it was extracted. The source therefore needs to be judiciously chosen, and presented transparently. Furthermore, in this example, the knowledge of the best treatment requires information about the patients' age and any additional health problems (for instance, dosing may vary in children, or where someone has impaired kidney function). The optimal treatment might vary by location, reflecting local or individual bacterial resistance patterns (which frequently change over time), or vary depending on the cost of drug acquisition or availability. A factoid answer does not allow the possibility of changing practice, or providing critical information which is not a direct response to the narrow question asked (perhaps an antibiotic is not always needed). These issues both need to be considered in producing an answer, and need to be seen to have been considered by the clinician before s/he can feel confident in following the recommendation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this context, reliability is multi-faceted. For an answer to be reliable it must have been extracted from a trustworthy source, accurately transcribed, and relevant to the clinical context (was the dosing information extracted for the correct clinical condition?). It should also be locally applicable, and recent. In this example, a methodologically sound national clinical guideline is likely to be highly dependable, whereas a journal editorial or case study giving one expert's idiosyncratic opinion might be safely ignored. A question-answering system which does not understand the difference is not likely to be useful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We argue that the deployment of EBM-guided QA systems-by which we mean those intended to provide answers to clinical questions based on published evidence-in clinical practices is contingent on the outputs being reliable and actionable. Clinicians should be able to trust that the most robust evidence was retrieved, and that conflicting evidence was handled appropriately. Uncertainties associated with answers should be communicated to the clinician. 2 Desiderata for Medical QA What would be needed for clinicians to trust, and actually act upon answers provided by a QA system? 
In our view, the necessary criteria include: Provenance of the evidence and its reliability; Faithfulness of the evidence to the source, and; Transparency with respect to how answers are chosen, and how conflicting evidence is resolved. In accordance with these criteria, we suggest the following questions to assess the transparency of QA systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Do the answers come from reliable sources for health information? All research articles are not equal, and there exist mature approaches to help clinicians identify the most reliable advice from the health literature. Evidence-Based Medicine is one such framework in which the findings of the most rigorous study designs (typically high quality clinical guidelines, and systematic reviews of the primary literature) are preferred to case studies and observational research (Alper and Haynes, 2016b; Sackett et al., 1985) .", "cite_spans": [ { "start": 476, "end": 501, "text": "(Alper and Haynes, 2016b;", "ref_id": "BIBREF2" }, { "start": 502, "end": 523, "text": "Sackett et al., 1985)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More sophisticated approaches (e.g., risk of bias assessment tools and the GRADE framework; Higgins et al. 2011; Guyatt et al. 2008) go further by estimating how confident one should be in a research finding, taking into account aspects such as study type, the precision Figure 4 : Example of a medical QA system output that meets the criteria in \u00a72.", "cite_spans": [ { "start": 113, "end": 132, "text": "Guyatt et al. 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 271, "end": 279, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The assumption is that the clinician is based in Nottingham while the most relevant guideline is for Wirral Community Teaching hospital (which is in a different region). The corresponding text spans in the question and response are highlighted with the same color. of the statistical results, and whether problems in study design were likely to have led to bias. QA systems which take a naive approach to evidence extraction-for example, selecting an answer from an undifferentiated corpus of scientific literature, treating all studies as equally reliable-are likely to be considerably less useful to clinicians. This is particularly true because there is often no definitive \"correct\" answer to a query; an overview of the best available evidence is what is sought. We suggest that QA systems should aim to explicitly use more rigorous, theoretically informed approaches to sorting the literature, mirroring the best current practice of manual question answering and evidence synthesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Does the system provide guidance? When searching for answers clinicians are looking for guidance, not just information. Guidance consists of recommendations of what to do in various clinical situations, while Boolean or factoid answers appear more absolute. The demand for guidance is reflected by the fact that many questions are of the form \"Should I ...?\" (Del Fiol et al., 2014; Ely et al., 2000; Papermaster and Champion, 2017) . Therefore, the system could respond with \"study/review X suggests the following action... \". 
This response could encourage the clinician to engage with the guidance and think critically about how to apply it in practice.", "cite_spans": [ { "start": 362, "end": 385, "text": "(Del Fiol et al., 2014;", "ref_id": "BIBREF11" }, { "start": 386, "end": 403, "text": "Ely et al., 2000;", "ref_id": "BIBREF13" }, { "start": 404, "end": 435, "text": "Papermaster and Champion, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the aforementioned example on urinary tract infections (UTI), the NICE 4 guideline (NICE, 2019) recommends Nitrofurantoin under specific conditions: If the estimated glomerular filtration rate (eGFR) \u2265 45 ml/minute then 100 mg modified-release twice a day (or if unavailable, 50 mg four times a day) for 3 days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Are the answers useful in the context in which the provider is practicing? The usefulness of a QA system could be limited by factors such as drug availability, antibiotic resistance, and local or national funding/resources. Therefore, QA systems should account for the resources that are available to 4 The National Institute for Health and Care Excellence: the UK national health guideline producer clinicians when providing guidance. In addition, what is deemed as \"best practice\" may vary by location (i.e., region or country).", "cite_spans": [ { "start": 304, "end": 305, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If a clinician were to consult a QA system on whether \"men receiving long-term GnRH analogues for prostate cancer should be offered regular DEXA scans to monitor potential loss of bone density\", guidelines from Wirral Community Teaching Hospital might be retrieved. The clinician would need to decide whether the guidelines apply to their locality (e.g., Nottingham) where DEXA scans may or may not be readily available. 4. Is there sufficient \"rationale\" for the answer provided? Prior work has shown that users of QA systems prefer answers to consist of paragraph-sized chunks of text as opposed to concise phrases (Lin et al., 2003) . Lengthier \"answers\" provide context, and allow users to ensure that the information in the source text is consistent with the final answer. As answers should be faithful to the source, any generated summaries should probably be extractive rather than abstractive.", "cite_spans": [ { "start": 617, "end": 635, "text": "(Lin et al., 2003)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For example, the answer to \"what dose of flucloxacillin should I prescribe for a 5 year old child?\" could consist of the snippet highlighted in Figure 1 . However, in cases where the answer is derived from multiple sources it may be necessary to generate a summary.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "5. Does the system resolve conflicting evidence appropriately? Higher quality information should be prioritized using frameworks for rating the quality of evidence (Ebell et al., 2004; Guyatt et al., 2008; Alper and Haynes, 2016a) . 
If there are conflicts between equally relevant and reliable sources, the system should refrain from providing oversimplified guidance and inform the clinician of the conflicting sources. This could form the basis for further investigation by the clinician.", "cite_spans": [ { "start": 164, "end": 184, "text": "(Ebell et al., 2004;", "ref_id": "BIBREF12" }, { "start": 185, "end": 205, "text": "Guyatt et al., 2008;", "ref_id": "BIBREF16" }, { "start": 206, "end": 230, "text": "Alper and Haynes, 2016a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The query \"Should spinal manipulations be used to treat headaches?\" could return three conflicting systematic reviews: one concluding that they should (Bryans et al., 2011) and two others that judge the evidence to be inconclusive (Chaibi et al., 2011; Posadzki and Ernst, 2011) . A QA system should inform the clinician of these contradictions. An ideal system would assess the relative methodological quality of the reviews, and present the most rigorous and reliable first.", "cite_spans": [ { "start": 151, "end": 172, "text": "(Bryans et al., 2011)", "ref_id": "BIBREF6" }, { "start": 231, "end": 252, "text": "(Chaibi et al., 2011;", "ref_id": "BIBREF8" }, { "start": 253, "end": 278, "text": "Posadzki and Ernst, 2011)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "6. Does the system handle and communicate uncertainties adequately? When providing guidance, the system should communicate any sources of uncertainty. If appropriate, the system should abstain from providing explicit guidance (e.g., where information conflicts or where supporting evidence is either absent or of low quality).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the case of the regular DEXA scans for men receiving long-term GnRH analogues, the system should communicate its uncertainty on whether the guidelines from Wirral Community Teaching Hospital are applicable to the clinician's region.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Additionally, the question \"Does speech and language therapy help dysarthria after a brain injury?\" could return no relevant studies (Sellars et al., 2002) . It is important that the system explain that the question is unanswerable using the available literature.", "cite_spans": [ { "start": 133, "end": 155, "text": "(Sellars et al., 2002)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several research challenges associated with the above criteria. Reframing the QA task will require new datasets which include answers (with accompanying rationales) from trusted sources; rankings by evidence quality; locality and patient contextualizing information; and which incorporate real-world conflicting answers and questions which lack answers. Quantitative measures would need to be created to assess how well the datasets and systems meet each criterion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While we expect that an improved system using these criteria might be more trustworthy (and hence potentially help to translate health research more effectively into clinical practice), we note that our criteria need to be empirically tested. 
To achieve this, we need to move beyond dataset evaluation, and consider user-centred design methodology. Ultimately, we should aim to improve and evaluate systems through research conducted in real-world clinical practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We next review prior work on Biomedical QA with respect to the above criteria. We display typical responses of these systems in a hypothetical web interface, and assess how well these responses meet the criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The primary focus of prior medical QA work has been on developing systems that answer the following types of questions: boolean (yes/no), factoid, list (of factoids), and definitional, e.g. (Sarrouti and Ouatik El Alaoui, 2020; Ben Abacha and Zweigenbaum, 2015; Cao et al., 2011; Zahid et al., 2018; Yu et al., 2007) . Several datasets have been created to train and evaluate systems that handle the aforementioned question types, including BioASQ (Tsatsaronis et al., 2015) , em-rQA (Pampari et al., 2018) , emrKBQA (Raghavan et al., 2021) , PubMedQA (Jin et al., 2019) , why-QA (Fan, 2019) , MEDIQA-QA (Ben Abacha et al., 2019) , LiveQA-Medical (Abacha et al., 2017) and MEDIQA-AnS (Savery et al., 2020) . BioASQ, PubMedQA, MEDIQA-QA, MEDIQA-AnS and LiveQA-Medical derive answers from a corpus of biomedical literature, whereas emrQA, emrKBQA and why-QA are based on patient notes within EHRs. As stated above, our focus here is on systems that can answer general questions (independent of individual patients) based on the latest evidence, so we do not discuss emrQA, emrKBQA and why-QA. A comparison of the systems and datasets is provided in Table 1 . While BioASQ, MEDIQA-QA, MEDIQA-AnS and LiveQA-Medical are large-scale information retrieval (IR) and question answering (QA) datasets, PubMedQA is designed for \"reading comprehension\" question answering (RCQA) based on scientific abstracts. 
Each question of PubMedQA is accompanied by the abstract containing the answer.", "cite_spans": [ { "start": 190, "end": 227, "text": "(Sarrouti and Ouatik El Alaoui, 2020;", "ref_id": "BIBREF37" }, { "start": 228, "end": 261, "text": "Ben Abacha and Zweigenbaum, 2015;", "ref_id": "BIBREF5" }, { "start": 262, "end": 279, "text": "Cao et al., 2011;", "ref_id": "BIBREF7" }, { "start": 280, "end": 299, "text": "Zahid et al., 2018;", "ref_id": "BIBREF45" }, { "start": 300, "end": 316, "text": "Yu et al., 2007)", "ref_id": "BIBREF44" }, { "start": 448, "end": 474, "text": "(Tsatsaronis et al., 2015)", "ref_id": null }, { "start": 484, "end": 506, "text": "(Pampari et al., 2018)", "ref_id": "BIBREF30" }, { "start": 517, "end": 540, "text": "(Raghavan et al., 2021)", "ref_id": null }, { "start": 552, "end": 570, "text": "(Jin et al., 2019)", "ref_id": "BIBREF20" }, { "start": 580, "end": 591, "text": "(Fan, 2019)", "ref_id": "BIBREF14" }, { "start": 594, "end": 629, "text": "MEDIQA-QA (Ben Abacha et al., 2019)", "ref_id": null }, { "start": 632, "end": 668, "text": "LiveQA-Medical (Abacha et al., 2017)", "ref_id": null }, { "start": 684, "end": 705, "text": "(Savery et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 1147, "end": 1154, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "The BioASQ Phase B challenge comprises the following question types (Tsatsaronis et al., 2015 ):", "cite_spans": [ { "start": 68, "end": 93, "text": "(Tsatsaronis et al., 2015", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "\u2022 Exact: \"yes\" or \"no\", e.g., \"Is the protein Papilin secreted?\";", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "\u2022 Factoid: named entities, e.g., \"Name synonym of Acrokeratosis paraneoplastica.\";", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "\u2022 List: list of named entities, e.g., \"Which miR-NAs could be used as potential biomarkers for epithelial ovarian cancer?\";", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "\u2022 Ideal: paragraph-sized summaries (text spans), e.g., \"What is the effect of TRH on myocardial contractility?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "While BioASQ has been instrumental to the progress of the field (Nentidis et al., 2017 (Nentidis et al., , 2018 QA System/ Dataset D1 D2 D3 D4 D5 D6", "cite_spans": [ { "start": 64, "end": 86, "text": "(Nentidis et al., 2017", "ref_id": "BIBREF25" }, { "start": 87, "end": 111, "text": "(Nentidis et al., , 2018", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "BioASQ (Krallinger et al., 2020) PubMedQA (Jin et al., 2019) MEDIQA-QA (Ben Abacha et al., 2019) MEDIQA-AnS (Savery et al., 2020) LiveQA-Medical (Abacha et al., 2017) MEANS (Ben Abacha and Zweigenbaum, 2015) AskHERMES (Cao et al., 2011) CLINIQA (Zahid et al., 2018) MedQA (Yu et al., 2007) 2020; Krallinger et al., 2020) , it satisfies only one of the criteria we have enumerated above, namely 4. 
Figure 2 shows the expected output of a system developed using BioASQ. In this example, the extract is provided verbatim (criterion 4).", "cite_spans": [ { "start": 7, "end": 32, "text": "(Krallinger et al., 2020)", "ref_id": "BIBREF21" }, { "start": 42, "end": 60, "text": "(Jin et al., 2019)", "ref_id": "BIBREF20" }, { "start": 108, "end": 129, "text": "(Savery et al., 2020)", "ref_id": "BIBREF38" }, { "start": 218, "end": 236, "text": "(Cao et al., 2011)", "ref_id": "BIBREF7" }, { "start": 245, "end": 265, "text": "(Zahid et al., 2018)", "ref_id": "BIBREF45" }, { "start": 272, "end": 289, "text": "(Yu et al., 2007)", "ref_id": "BIBREF44" }, { "start": 296, "end": 320, "text": "Krallinger et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 397, "end": 405, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "However, the answer is sourced from a general review; these reviews are less reliable than guidelines or systematic reviews (criterion 1). Furthermore, the system outputs absolute answers rather than guidance (criterion 2) which limits their usefulness to clinicians. A more suitable answer would be \"the following guidance is provided in X...\". It is unclear what resources are available to the clinician and the BioASQ dataset does not account for this (criterion 3). There is no contradictory evidence in the example and BioASQ has been preprocessed to ensure there are no conflicting papers (criterion 5). Unless the trained model is acting on a curated knowledge base, it would not be robust to conflicts. Finally, the absolute nature of the answer does not allow the system to recognise and account for uncertainty (criterion 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "In contrast to BioASQ, PubMedQA provides answers to only Boolean (yes/no) questions, e.g. \"Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?\". Accompanying these responses is a \"long answer\", supplied in the form of the conclusions of the source abstracts. As per Figure 3 , the outputs of systems trained on PubMedQA can only satisfy criterion 4. The conclusion is given verbatim to support the short answer. Nevertheless, the source of the answer is not specified (criterion 1), the answer is absolute (criterion 2) and it does not account for any uncertainty (criterion 6). Systems developed using PubMedQA cannot ensure that the answer is useful to the clinician (criterion 3). Given the task is framed as \"reading comprehension\", there is only one abstract per question. This prevents systems from being trained to handle conflicts (criterion 5).", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 312, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "MEDIQA-QA is a consumer QA dataset whose answers consist of exact snippets from Medline-Plus. Consumer questions are more focused on general information, symptom or person/organization questions (Roberts and Demner-Fushman, 2016) . The answers that are required by consumers are less complex and more easily understandable than those given to clinicians (Savery et al., 2020) . This has motivated the development of MEDIQA-AnS which summarises the answers of MEDIQA-QA. 
As shown in Figures 7 and 8 , MEDIQA-QA satisfies desiderata 1, 2 and 4 while MEDIQA-AnS satisfies only 4.", "cite_spans": [ { "start": 195, "end": 229, "text": "(Roberts and Demner-Fushman, 2016)", "ref_id": "BIBREF35" }, { "start": 354, "end": 375, "text": "(Savery et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 482, "end": 497, "text": "Figures 7 and 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "Although the LiveQA-Medical dataset uses the same answers and sources as MEDIQA-QA and MEDIQA-AnS, it differs by providing answers to each subquestion of the query. Additionally, verbatim extracts of MedlinePlus are used in the responses ( Figure 9 ). Hence criteria 1, 2 and 4 are fulfilled.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 248, "text": "Figure 9", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "MEANS returns only an extract of the original source, without any contextualizing information (Figure 10 ), i.e., the provenance of the answer. Therefore, only condition 4 is satisfied.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 104, "text": "(Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "On the other hand, AskHERMES provides a list of answers which are labelled with topics from the question (Figure 11) . The extracts shown are from the original sources and are accompanied by links, authors, and dates. Thus, AskHERMES satisfies desiderata 1 and 4.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 116, "text": "(Figure 11)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "CLINIQA responds to queries with original abstracts that are accompanied with the PMID and the title of the source paper (Figure 12 ). However, the results are not ranked according to reliability, so only criteria 2 and 4 are met.", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 131, "text": "(Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "Finally, MedQA's answers comprise sourced extracts from Medline and Google:Definition ( Figure 13 ). Answers are not ranked according to reliability, so the system only satisfies criteria 2 and 4.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 98, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "None of the aforementioned datasets or systems address conflicts (criterion 5) or communicate uncertainty to clinicians (criterion 6). What might QA systems that satisfy all desiderata look like?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Medical QA Datasets and Systems", "sec_num": "3" }, { "text": "We have seen that systems trained on BioASQ and PubMedQA do not satisfy all the criteria defined in \u00a72. In this section we present illustrative outputs of hypothetical systems that meet the full set of criteria we have put forth. Figure 4 presents an example output which satisfies the criteria but where no conflicts occur (criterion 5). The answer is sourced from a systematic review (criterion 1) and is in the form of guidance (criterion 2). 
The guidance is actionable given the resources available (criterion 3), and the source extract is reproduced directly (criterion 4). The uncertainty in the answer is acknowledged (criterion 6) by noting the absence of relevant local and national guidelines. The corresponding words and phrases in the question, answer and title used to extract the text snippet are highlighted.", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 238, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Presentation of Answers", "sec_num": "4" }, { "text": "A demonstration of how conflicting evidence could be addressed is provided in Figure 5 . In this scenario, the question \"Should spinal manipulations be used to treat headaches?\" returned three contradictory systematic reviews (Bryans et al., 2011; Chaibi et al., 2011; Posadzki and Ernst, 2011) (criterion 1). Therefore, the system refrains from providing explicit guidance (criterion 6) and instead provides the clinician with the names and links of conflicting reviews (criterion 5). In addition, the clinician is able to investigate the contradictory snippets further by clicking on \"Conflicting Snippets\" which would show the snippets in Figure 6 . Criterion 4 is inapplicable in this case as no answer was retrieved from the documents.", "cite_spans": [ { "start": 226, "end": 247, "text": "(Bryans et al., 2011;", "ref_id": "BIBREF6" }, { "start": 248, "end": 268, "text": "Chaibi et al., 2011;", "ref_id": "BIBREF8" }, { "start": 269, "end": 294, "text": "Posadzki and Ernst, 2011)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 78, "end": 86, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 642, "end": 651, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Presentation of Answers", "sec_num": "4" }, { "text": "One promising direction which may permit improved handling of contradictory evidence involves use of argumentation-based logic to \"reason\" about multiple potentially conflicting inputs (Chapman et al., 2019; Cyras et al., 2018) , perhaps after explicitly inferring the reported findings concerning treatment efficacies (Lehman et al., 2019; Nye et al., 2020) . An alternative (more audacious) direction would be to generate comparative summaries for clinicians that compose narrative summaries of the evidence on a given topic from primary sources, including discussion of conflicting evidence (Wallace et al., 2020; Shah et al., 2021) .", "cite_spans": [ { "start": 185, "end": 207, "text": "(Chapman et al., 2019;", "ref_id": null }, { "start": 208, "end": 227, "text": "Cyras et al., 2018)", "ref_id": "BIBREF10" }, { "start": 319, "end": 340, "text": "(Lehman et al., 2019;", "ref_id": "BIBREF22" }, { "start": 341, "end": 358, "text": "Nye et al., 2020)", "ref_id": "BIBREF28" }, { "start": 594, "end": 616, "text": "(Wallace et al., 2020;", "ref_id": "BIBREF28" }, { "start": 617, "end": 635, "text": "Shah et al., 2021)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Presentation of Answers", "sec_num": "4" }, { "text": "Developing and assessing systems according to the criteria outlined in \u00a72 would ensure the output is useful, actionable and reliable to clinicians. 
It would additionally improve the accountability of both the clinician and the system as the form of the output would be conducive to debugging and root cause analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Presentation of Answers", "sec_num": "4" }, { "text": "We have introduced criteria for assessing the transparency of medical question answering systems. These have been guided by the following question: What would be needed for clinicians to trust, and act upon answers from a QA system? In part we have argued that these systems should be explicitly informed by principles of EBM. The adequacy of existing medical systems and datasets, including BioASQ, PubMedQA, MEDIQA-QA, MEDIQA-AnS, LiveQA-Medical, MEANS, AskHERMES, CLINIQA and MedQA, was assessed using the transparency criteria that we proposed. We found that they met some, but not all, of the conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We presented hypothetical examples of system outputs that satisfy all of the criteria and explained how they could be useful to clinicians. These included conflicts between sources of similar reliability. In these cases, the best course of action was to refrain from giving guidance and instead return the sources to the clinicians for further examination. The examples could form the basis of new datasets and systems that provide actionable answers to clinicians.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We believe that these avenues of investigation would assist with the deployment of medical QA systems, ultimately furthering the practice of EBM. (Yu et al., 2007) .", "cite_spans": [ { "start": 146, "end": 163, "text": "(Yu et al., 2007)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "https://pubmed.ncbi.nlm.nih.gov", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.cochranelibrary.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the National Institutes of Health (NIH), grant R01-LM012086.GK holds a doctoral studentship co-sponsored by Metadvice and the Guy's and St Thomas' Biomedical Research Centre.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Overview of the medical question answering task at trec 2017 liveqa", "authors": [ { "first": "Asma", "middle": [], "last": "Ben Abacha", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2017, "venue": "TREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asma Ben Abacha, Eugene Agichtein, Yuval Pinter, and Dina Demner-Fushman. 2017. Overview of the medical question answering task at trec 2017 liveqa. 
In TREC.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "EBHC pyramid 5.0 for accessing preappraised evidence and guidance", "authors": [ { "first": "S", "middle": [], "last": "Brian", "suffix": "" }, { "first": "R Brian", "middle": [], "last": "Alper", "suffix": "" }, { "first": "", "middle": [], "last": "Haynes", "suffix": "" } ], "year": 2016, "venue": "Evidence Based Medicine", "volume": "21", "issue": "4", "pages": "", "other_ids": { "DOI": [ "10.1136/ebmed-2016-110447" ] }, "num": null, "urls": [], "raw_text": "Brian S Alper and R Brian Haynes. 2016a. EBHC pyramid 5.0 for accessing preappraised evidence and guidance. Evidence Based Medicine, 21(4):123.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Ebhc pyramid 5.0 for accessing preappraised evidence and guidance", "authors": [ { "first": "S", "middle": [], "last": "Brian", "suffix": "" }, { "first": "R Brian", "middle": [], "last": "Alper", "suffix": "" }, { "first": "", "middle": [], "last": "Haynes", "suffix": "" } ], "year": 2016, "venue": "BMJ evidence-based medicine", "volume": "21", "issue": "4", "pages": "123--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian S Alper and R Brian Haynes. 2016b. Ebhc pyramid 5.0 for accessing preappraised evidence and guidance. BMJ evidence-based medicine, 21(4):123-125.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?", "authors": [ { "first": "Hilda", "middle": [], "last": "Bastian", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Glasziou", "suffix": "" }, { "first": "Iain", "middle": [ "Chalmers" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "PLOS Medicine", "volume": "7", "issue": "9", "pages": "", "other_ids": { "DOI": [ "10.1371/journal.pmed.1000326" ] }, "num": null, "urls": [], "raw_text": "Hilda Bastian, Paul Glasziou, and Iain Chalmers. 2010. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLOS Medicine, 7(9):e1000326. Publisher: Public Library of Science.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Overview of the MEDIQA 2019 shared task on textual inference, question entailment and question answering", "authors": [ { "first": "Asma", "middle": [], "last": "Ben Abacha", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Shivade", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", "volume": "", "issue": "", "pages": "370--379", "other_ids": { "DOI": [ "10.18653/v1/W19-5039" ] }, "num": null, "urls": [], "raw_text": "Asma Ben Abacha, Chaitanya Shivade, and Dina Demner-Fushman. 2019. Overview of the MEDIQA 2019 shared task on textual inference, question en- tailment and question answering. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 370-379, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "MEANS: A medical question-answering system combining NLP techniques and semantic Web technologies. 
Information Processing & Management", "authors": [ { "first": "Asma", "middle": [], "last": "Ben Abacha", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2015, "venue": "", "volume": "51", "issue": "", "pages": "570--594", "other_ids": { "DOI": [ "10.1016/j.ipm.2015.04.006" ] }, "num": null, "urls": [], "raw_text": "Asma Ben Abacha and Pierre Zweigenbaum. 2015. MEANS: A medical question-answering system combining NLP techniques and semantic Web tech- nologies. Information Processing & Management, 51(5):570-594.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evidence-Based Guidelines for the Chiropractic Treatment of Adults With Headache", "authors": [ { "first": "Roland", "middle": [], "last": "Bryans", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Descarreaux", "suffix": "" }, { "first": "Mireille", "middle": [], "last": "Duranleau", "suffix": "" }, { "first": "Henri", "middle": [], "last": "Marcoux", "suffix": "" }, { "first": "Brock", "middle": [], "last": "Potter", "suffix": "" }, { "first": "Rick", "middle": [], "last": "Ruegg", "suffix": "" }, { "first": "Lynn", "middle": [], "last": "Shaw", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Watkin", "suffix": "" }, { "first": "Eleanor", "middle": [], "last": "White", "suffix": "" } ], "year": 2011, "venue": "Journal of Manipulative and Physiological Therapeutics", "volume": "34", "issue": "", "pages": "274--289", "other_ids": { "DOI": [ "10.1016/j.jmpt.2011.04.008" ] }, "num": null, "urls": [], "raw_text": "Roland Bryans, Martin Descarreaux, Mireille Duran- leau, Henri Marcoux, Brock Potter, Rick Ruegg, Lynn Shaw, Robert Watkin, and Eleanor White. 2011. Evidence-Based Guidelines for the Chiro- practic Treatment of Adults With Headache. Jour- nal of Manipulative and Physiological Therapeutics, 34(5):274-289.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "AskHERMES: An online question answering system for complex clinical questions", "authors": [ { "first": "Yonggang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Feifan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pippa", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "Lamont", "middle": [], "last": "Antieau", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "James", "middle": [], "last": "Cimino", "suffix": "" }, { "first": "John", "middle": [], "last": "Ely", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2011, "venue": "Journal of biomedical informatics", "volume": "44", "issue": "", "pages": "277--88", "other_ids": { "DOI": [ "10.1016/j.jbi.2011.01.004" ] }, "num": null, "urls": [], "raw_text": "YongGang Cao, Feifan Liu, Pippa Simpson, Lamont Antieau, Andrew Bennett, James Cimino, John Ely, and Hong Yu. 2011. AskHERMES: An online ques- tion answering system for complex clinical ques- tions. 
Journal of biomedical informatics, 44:277- 88.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Manual therapies for migraine: a systematic review", "authors": [ { "first": "Aleksander", "middle": [], "last": "Chaibi", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Michael Bj\u00f8rn", "middle": [], "last": "Tuchin", "suffix": "" }, { "first": "", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2011, "venue": "The journal of headache and pain", "volume": "12", "issue": "2", "pages": "127--133", "other_ids": { "DOI": [ "10.1007/s10194-011-0296-6" ] }, "num": null, "urls": [], "raw_text": "Aleksander Chaibi, Peter J Tuchin, and Michael Bj\u00f8rn Russell. 2011. Manual therapies for migraine: a systematic review. The journal of headache and pain, 12(2):127-133. Edition: 2011/02/05 Pub- lisher: Springer Milan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Kai Essers, Isabel Sassoon, Sanjay Modgil, Simon Parsons, and Elizabeth I. Sklar. 2019. Computational argumentation-based clinical decision support", "authors": [ { "first": "Martin", "middle": [], "last": "Chapman", "suffix": "" }, { "first": "Panagiotis", "middle": [], "last": "Balatsoukas", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Ashworth", "suffix": "" }, { "first": "Vasa", "middle": [], "last": "Curcin", "suffix": "" }, { "first": "Nadin", "middle": [], "last": "K\u00f6kciyan", "suffix": "" } ], "year": null, "venue": "Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AA-MAS '19", "volume": "", "issue": "", "pages": "2345--2347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Chapman, Panagiotis Balatsoukas, Mark Ash- worth, Vasa Curcin, Nadin K\u00f6kciyan, Kai Es- sers, Isabel Sassoon, Sanjay Modgil, Simon Par- sons, and Elizabeth I. Sklar. 2019. Computational argumentation-based clinical decision support. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AA- MAS '19, page 2345-2347, Richland, SC. Interna- tional Foundation for Autonomous Agents and Mul- tiagent Systems.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Argumentation for explainable reasoning with conflicting medical recommendations", "authors": [ { "first": "K", "middle": [], "last": "Cyras", "suffix": "" }, { "first": "B", "middle": [], "last": "Delaney", "suffix": "" }, { "first": "Denys", "middle": [], "last": "Prociuk", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" }, { "first": "M", "middle": [], "last": "Chapman", "suffix": "" }, { "first": "Jes\u00fas", "middle": [], "last": "Dom\u00ednguez", "suffix": "" }, { "first": "V", "middle": [], "last": "Curcin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Cyras, B. Delaney, Denys Prociuk, Francesca Toni, M. Chapman, Jes\u00fas Dom\u00ednguez, and V. Curcin. 2018. Argumentation for explainable reasoning with conflicting medical recommendations. 
In MedRACER+WOMoCoE@KR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Clinical Questions Raised by Clinicians at the Point of Care: A Systematic Review", "authors": [ { "first": "Guilherme", "middle": [], "last": "Del Fiol", "suffix": "" }, { "first": "T", "middle": [ "Elizabeth" ], "last": "Workman", "suffix": "" }, { "first": "Paul", "middle": [ "N" ], "last": "Gorman", "suffix": "" } ], "year": 2014, "venue": "JAMA Internal Medicine", "volume": "174", "issue": "5", "pages": "710--718", "other_ids": { "DOI": [ "10.1001/jamainternmed.2014.368" ] }, "num": null, "urls": [], "raw_text": "Guilherme Del Fiol, T. Elizabeth Workman, and Paul N. Gorman. 2014. Clinical Questions Raised by Clin- icians at the Point of Care: A Systematic Review. JAMA Internal Medicine, 174(5):710-718.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Strength of Recommendation Taxonomy (SORT): A Patient-Centered Approach to Grading Evidence in the Medical Literature", "authors": [ { "first": "Mark", "middle": [], "last": "Ebell", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Siwek", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Woolf", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Susman", "suffix": "" }, { "first": "Bernard", "middle": [], "last": "Ewigman", "suffix": "" } ], "year": 2004, "venue": "The Journal of the American Board of Family Practice / American Board of Family Practice", "volume": "17", "issue": "", "pages": "59--67", "other_ids": { "DOI": [ "10.3122/jabfm.17.1.59" ] }, "num": null, "urls": [], "raw_text": "Mark Ebell, Jay Siwek, Barry Weiss, Steven Woolf, Jef- frey Susman, Bernard Ewigman, and Marjorie Bow- man. 2004. Strength of Recommendation Taxon- omy (SORT): A Patient-Centered Approach to Grad- ing Evidence in the Medical Literature. The Journal of the American Board of Family Practice / Ameri- can Board of Family Practice, 17:59-67.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A taxonomy of generic clinical questions: Classification study", "authors": [ { "first": "John", "middle": [], "last": "Ely", "suffix": "" }, { "first": "Jerome", "middle": [], "last": "Osheroff", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Gorman", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Ebell", "suffix": "" }, { "first": "M", "middle": [], "last": "Chambliss", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Pifer", "suffix": "" }, { "first": "P", "middle": [], "last": "Stavri", "suffix": "" } ], "year": 2000, "venue": "BMJ", "volume": "321", "issue": "", "pages": "429--461", "other_ids": { "DOI": [ "10.1136/bmj.321.7258.429" ] }, "num": null, "urls": [], "raw_text": "John Ely, Jerome Osheroff, Paul Gorman, Mark Ebell, M Chambliss, Eric Pifer, and P Stavri. 2000. A tax- onomy of generic clinical questions: Classification study. BMJ (Clinical research ed.), 321:429-32.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Annotating and characterizing clinical sentences with explicit why-QA cues", "authors": [ { "first": "Jungwei", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "101--106", "other_ids": { "DOI": [ "10.18653/v1/W19-1913" ] }, "num": null, "urls": [], "raw_text": "Jungwei Fan. 2019. Annotating and characterizing clinical sentences with explicit why-QA cues. 
In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 101-106, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mindlines: making sense of evidence in practice", "authors": [ { "first": "John", "middle": [], "last": "Gabbay", "suffix": "" }, { "first": "Andr\u00e9e", "middle": [], "last": "Le", "suffix": "" }, { "first": "May", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "British Journal of General Practice", "volume": "66", "issue": "649", "pages": "", "other_ids": { "DOI": [ "10.3399/bjgp16X686221" ] }, "num": null, "urls": [], "raw_text": "John Gabbay and Andr\u00e9e le May. 2016. Mindlines: making sense of evidence in practice. British Jour- nal of General Practice, 66(649):402.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "GRADE: An emerging consensus on rating quality of evidence and strength of recommendations", "authors": [ { "first": "Gordon", "middle": [], "last": "Guyatt", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Oxman", "suffix": "" }, { "first": "Gunn", "middle": [], "last": "Vist", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Kunz", "suffix": "" }, { "first": "Yngve", "middle": [], "last": "Falck-Ytter", "suffix": "" }, { "first": "Pablo", "middle": [], "last": "Alonso", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Sch\u00fcnemann", "suffix": "" } ], "year": 2008, "venue": "BMJ", "volume": "336", "issue": "", "pages": "924--930", "other_ids": { "DOI": [ "10.1136/bmj.39489.470347.AD" ] }, "num": null, "urls": [], "raw_text": "Gordon Guyatt, Andrew Oxman, Gunn Vist, Regina Kunz, Yngve Falck-Ytter, Pablo Alonso, and Hol- ger Sch\u00fcnemann. 2008. GRADE: An emerging con- sensus on rating quality of evidence and strength of recommendations. BMJ (Clinical research ed.), 336:924-6.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The information-seeking behavior of clinical staff in a large health care organization", "authors": [ { "first": "Phil", "middle": [], "last": "Hider", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "Marg", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Coughlan", "suffix": "" } ], "year": 2009, "venue": "Journal of the Medical Library Association : JMLA", "volume": "97", "issue": "", "pages": "47--50", "other_ids": { "DOI": [ "10.3163/1536-5050.97.1.009" ] }, "num": null, "urls": [], "raw_text": "Phil Hider, Gemma Griffin, Marg Walker, and Edward Coughlan. 2009. The information-seeking behavior of clinical staff in a large health care organization. 
Journal of the Medical Library Association : JMLA, 97:47-50.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The cochrane collaboration's tool for assessing risk of bias in randomised trials", "authors": [ { "first": "P", "middle": [ "T" ], "last": "Julian", "suffix": "" }, { "first": "", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "G", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "", "middle": [], "last": "Altman", "suffix": "" }, { "first": "C", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Peter", "middle": [], "last": "G\u00f8tzsche", "suffix": "" }, { "first": "David", "middle": [], "last": "J\u00fcni", "suffix": "" }, { "first": "", "middle": [], "last": "Moher", "suffix": "" }, { "first": "Jelena", "middle": [], "last": "Andrew D Oxman", "suffix": "" }, { "first": "", "middle": [], "last": "Savovi\u0107", "suffix": "" }, { "first": "F", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Jonathan Ac", "middle": [], "last": "Weeks", "suffix": "" }, { "first": "", "middle": [], "last": "Sterne", "suffix": "" } ], "year": 2011, "venue": "Bmj", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian PT Higgins, Douglas G Altman, Peter C G\u00f8tzsche, Peter J\u00fcni, David Moher, Andrew D Oxman, Jelena Savovi\u0107, Kenneth F Schulz, Laura Weeks, and Jonathan AC Sterne. 2011. The cochrane collaboration's tool for assessing risk of bias in randomised trials. Bmj, 343.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Answers to questions posed during daily patient care are more likely to be answered by UpToDate than PubMed", "authors": [ { "first": "Arjen", "middle": [], "last": "Hoogendam", "suffix": "" }, { "first": "Anton", "middle": [ "F" ], "last": "Stalenhoef", "suffix": "" }, { "first": "Pieter", "middle": [ "F" ], "last": "De Vries Robb\u00e9", "suffix": "" }, { "first": "P M", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Overbeke", "suffix": "" } ], "year": 2008, "venue": "Journal of medical Internet research", "volume": "10", "issue": "4", "pages": "29--29", "other_ids": { "DOI": [ "10.2196/jmir.1012" ] }, "num": null, "urls": [], "raw_text": "Arjen Hoogendam, Anton F H Stalenhoef, Pieter F de Vries Robb\u00e9, and A John P M Overbeke. 2008. Answers to questions posed during daily patient care are more likely to be answered by UpToDate than PubMed. Journal of medical Internet research, 10(4):e29-e29. Publisher: Gunther Eysenbach.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "PubMedQA: A Dataset for Biomedical Research Question Answering", "authors": [ { "first": "Qiao", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Zhengping", "middle": [], "last": "Liu", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Xinghua", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-1259" ] }, "num": null, "urls": [], "raw_text": "Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A Dataset for Biomedical Research Question Answer- ing. 
Pages: 2577.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bioasq at clef2020: Large-scale biomedical semantic indexing and question answering", "authors": [ { "first": "Martin", "middle": [], "last": "Krallinger", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" }, { "first": "A", "middle": [], "last": "Nentidis", "suffix": "" }, { "first": "G", "middle": [], "last": "Paliouras", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Villegas", "suffix": "" } ], "year": 2020, "venue": "Advances in Information Retrieval", "volume": "12036", "issue": "", "pages": "550--556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Krallinger, Anastasia Krithara, A. Nentidis, G. Paliouras, and Marta Villegas. 2020. Bioasq at clef2020: Large-scale biomedical semantic indexing and question answering. Advances in Information Retrieval, 12036:550 -556.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Inferring Which Medical Treatments Work from Reports of Clinical Trials", "authors": [ { "first": "Eric", "middle": [], "last": "Lehman", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Deyoung", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "3705--3717", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Lehman, Jay DeYoung, Regina Barzilay, and By- ron C. Wallace. 2019. Inferring Which Medical Treatments Work from Reports of Clinical Trials. In Proceedings of the Conference of the North Ameri- can Chapter of the Association for Computational Linguistics (NAACL), pages 3705-3717.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "What makes a good answer? the role of context in question answering", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Vineet", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Karun", "middle": [], "last": "Bakshi", "suffix": "" }, { "first": "David", "middle": [], "last": "Huynh", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Katz", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Karger", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IN-TERACT 2003", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Lin, Dennis Quan, Vineet Sinha, Karun Bak- shi, David Huynh, Boris Katz, and David R. Karger. 2003. What makes a good answer? the role of con- text in question answering. In Proceedings of IN- TERACT 2003, pages 25-32.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Results of the seventh edition of the BioASQ challenge", "authors": [ { "first": "Anastasios", "middle": [], "last": "Nentidis", "suffix": "" }, { "first": "Konstantinos", "middle": [], "last": "Bougiatiotis", "suffix": "" } ], "year": 2020, "venue": "Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "553--568", "other_ids": { "DOI": [ "10.1007/978-3-030-43887-6_51" ] }, "num": null, "urls": [], "raw_text": "Anastasios Nentidis, Konstantinos Bougiatiotis, Anas- tasia Krithara, and Georgios Paliouras. 2020. Re- sults of the seventh edition of the BioASQ challenge. 
In Machine Learning and Knowledge Discovery in Databases, pages 553-568. Springer International Publishing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Results of the fifth edition of the BioASQ challenge", "authors": [ { "first": "Anastasios", "middle": [], "last": "Nentidis", "suffix": "" }, { "first": "Konstantinos", "middle": [], "last": "Bougiatiotis", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" } ], "year": 2017, "venue": "Georgios Paliouras, and Ioannis Kakadiaris", "volume": "", "issue": "", "pages": "48--57", "other_ids": { "DOI": [ "10.18653/v1/W17-2306" ] }, "num": null, "urls": [], "raw_text": "Anastasios Nentidis, Konstantinos Bougiatiotis, Anas- tasia Krithara, Georgios Paliouras, and Ioannis Kakadiaris. 2017. Results of the fifth edition of the BioASQ challenge. In BioNLP 2017, pages 48-57, Vancouver, Canada,. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Results of the sixth edition of the BioASQ challenge", "authors": [ { "first": "Anastasios", "middle": [], "last": "Nentidis", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 6th BioASQ Workshop A challenge on large-scale biomedical semantic indexing and question answering", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W18-5301" ] }, "num": null, "urls": [], "raw_text": "Anastasios Nentidis, Anastasia Krithara, Konstanti- nos Bougiatiotis, Georgios Paliouras, and Ioannis Kakadiaris. 2018. Results of the sixth edition of the BioASQ challenge. In Proceedings of the 6th BioASQ Workshop A challenge on large-scale biomedical semantic indexing and question answer- ing, pages 1-10, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "UTI (lower): antimicrobial prescribing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NICE. 2019. UTI (lower): antimicrobial prescribing. https://www.nice.org.uk/guidance/ng109/resources/ visual-summary-pdf-6544021069.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Understanding clinical trial reports: Extracting medical entities and their relations", "authors": [ { "first": "Jay", "middle": [], "last": "Benjamin E Nye", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Deyoung", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Lehman", "suffix": "" }, { "first": "", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "J", "middle": [], "last": "Iain", "suffix": "" }, { "first": "Byron C", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.03550" ] }, "num": null, "urls": [], "raw_text": "Benjamin E Nye, Jay DeYoung, Eric Lehman, Ani Nenkova, Iain J Marshall, and Byron C Wallace. 2020. Understanding clinical trial reports: Ex- tracting medical entities and their relations. 
arXiv preprint arXiv:2010.03550.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Information-seeking behaviour of nurses: where is information sought and what processes are followed?", "authors": [ { "first": "Denise", "middle": [], "last": "Fiona", "suffix": "" }, { "first": "O'", "middle": [], "last": "Leary", "suffix": "" }, { "first": "Siobh\u00e1n Ni", "middle": [], "last": "Mhaolr\u00fanaigh", "suffix": "" } ], "year": 2012, "venue": "Journal of Advanced Nursing", "volume": "68", "issue": "2", "pages": "379--390", "other_ids": { "DOI": [ "10.1111/j.1365-2648.2011.05750.x" ] }, "num": null, "urls": [], "raw_text": "Denise Fiona O'leary and Siobh\u00e1n Ni Mhaolr\u00fanaigh. 2012. Information-seeking behaviour of nurses: where is information sought and what processes are followed? Journal of Advanced Nursing, 68(2):379- 390. Publisher: John Wiley & Sons, Ltd.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "emrQA: A Large Corpus for Question Answering on Electronic Medical Records", "authors": [ { "first": "Anusri", "middle": [], "last": "Pampari", "suffix": "" }, { "first": "Preethi", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anusri Pampari, Preethi Raghavan, Jennifer Liang, and Jian Peng. 2018. emrQA: A Large Corpus for Ques- tion Answering on Electronic Medical Records.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The common practice of \"curbside consultation\": A systematic review", "authors": [ { "first": "Amy", "middle": [], "last": "Papermaster", "suffix": "" }, { "first": "Jane", "middle": [ "Dimmitt" ], "last": "Champion", "suffix": "" } ], "year": 2017, "venue": "Journal of the American Association of Nurse Practitioners", "volume": "", "issue": "10", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Papermaster and Jane Dimmitt Champion. 2017. The common practice of \"curbside consultation\": A systematic review. Journal of the American Associ- ation of Nurse Practitioners, 29(10).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Exploring the use of curbside consultations for interprofessional collaboration and clinical decision-making", "authors": [ { "first": "E", "middle": [], "last": "Amy", "suffix": "" }, { "first": "Jane", "middle": [ "Dimmitt" ], "last": "Papermaster", "suffix": "" }, { "first": "", "middle": [], "last": "Champion", "suffix": "" } ], "year": 2020, "venue": "Journal of Interprofessional Care", "volume": "", "issue": "", "pages": "1--8", "other_ids": { "DOI": [ "10.1080/13561820.2020.1768057" ] }, "num": null, "urls": [], "raw_text": "Amy E Papermaster and Jane Dimmitt Champion. 2020. Exploring the use of curbside consulta- tions for interprofessional collaboration and clinical decision-making. Journal of Interprofessional Care, pages 1-8. 
Publisher: Taylor & Francis.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Spinal manipulations for cervicogenic headaches: a systematic review of randomized clinical trials", "authors": [ { "first": "Paul", "middle": [], "last": "Posadzki", "suffix": "" }, { "first": "Edzard", "middle": [], "last": "Ernst", "suffix": "" } ], "year": 2011, "venue": "Headache", "volume": "51", "issue": "7", "pages": "1132--1139", "other_ids": { "DOI": [ "10.1111/j.1526-4610.2011.01932.x" ] }, "num": null, "urls": [], "raw_text": "Paul Posadzki and Edzard Ernst. 2011. Spinal ma- nipulations for cervicogenic headaches: a system- atic review of randomized clinical trials. Headache, 51(7):1132-1139.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Diwakar Mahajan, Rachita Chandra, and Peter Szolovits. 2021. emrkbqa: A clinical knowledge-base question answering dataset", "authors": [ { "first": "Preethi", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Jennifer", "middle": [ "J" ], "last": "Liang", "suffix": "" } ], "year": null, "venue": "BIONLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preethi Raghavan, Jennifer J. Liang, Diwakar Mahajan, Rachita Chandra, and Peter Szolovits. 2021. emrk- bqa: A clinical knowledge-base question answering dataset. In BIONLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Interactive use of online health resources: a comparison of consumer and professional questions", "authors": [ { "first": "Kirk", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2016, "venue": "American Medical Informatics Association", "volume": "23", "issue": "4", "pages": "802--811", "other_ids": { "DOI": [ "10.1093/jamia/ocw024" ] }, "num": null, "urls": [], "raw_text": "Kirk Roberts and Dina Demner-Fushman. 2016. In- teractive use of online health resources: a compari- son of consumer and professional questions. Jour- nal of the American Medical Informatics Associa- tion, 23(4):802-811.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Clinical epidemiology: a basic science for clinical medicine", "authors": [ { "first": "L", "middle": [], "last": "David", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Sackett", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Haynes", "suffix": "" }, { "first": "", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David L Sackett, R Brian Haynes, Peter Tugwell, et al. 1985. Clinical epidemiology: a basic science for clinical medicine. Little, Brown and Company.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions", "authors": [ { "first": "Mourad", "middle": [], "last": "Sarrouti", "suffix": "" }, { "first": "Said Ouatik El", "middle": [], "last": "Alaoui", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence in Medicine", "volume": "102", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.artmed.2019.101767" ] }, "num": null, "urls": [], "raw_text": "Mourad Sarrouti and Said Ouatik El Alaoui. 2020. SemBioNLQA: A semantic biomedical question an- swering system for retrieving exact and ideal an- swers to natural language questions. 
Artificial In- telligence in Medicine, 102:101767.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Question-driven summarization of answers to consumer health questions", "authors": [ { "first": "Max", "middle": [], "last": "Savery", "suffix": "" }, { "first": "Asma", "middle": [], "last": "Ben Abacha", "suffix": "" }, { "first": "Soumya", "middle": [], "last": "Gayen", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2020, "venue": "Scientific Data", "volume": "7", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1038/s41597-020-00667-z" ] }, "num": null, "urls": [], "raw_text": "Max Savery, Asma Ben Abacha, Soumya Gayen, and Dina Demner-Fushman. 2020. Question-driven summarization of answers to consumer health ques- tions. Scientific Data, 7(1):322.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Speech and language therapy for dysarthria due to nonprogressive brain damage", "authors": [ { "first": "C", "middle": [], "last": "Sellars", "suffix": "" }, { "first": "T", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "P", "middle": [], "last": "Langhorne", "suffix": "" } ], "year": 2002, "venue": "Cochrane Database of Systematic Reviews", "volume": "", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.1002/14651858.CD002088" ] }, "num": null, "urls": [], "raw_text": "C. Sellars, T. Hughes, and P. Langhorne. 2002. Speech and language therapy for dysarthria due to non- progressive brain damage. Cochrane Database of Systematic Reviews, (3). Publisher: John Wiley & Sons, Ltd.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Nutribullets hybrid: Multi-document health summarization", "authors": [ { "first": "J", "middle": [], "last": "Darsh", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "R", "middle": [], "last": "Lei", "suffix": "" }, { "first": "", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2021, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darsh J. Shah, Lili Yu, Tao Lei, and R. Barzilay. 2021. Nutribullets hybrid: Multi-document health summa- rization. ArXiv, abs/2104.03465.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "What clinical information do doctors need?", "authors": [ { "first": "Richard", "middle": [], "last": "Smith", "suffix": "" } ], "year": 1996, "venue": "BMJ", "volume": "313", "issue": "7064", "pages": "", "other_ids": { "DOI": [ "10.1136/bmj.313.7064.1062" ] }, "num": null, "urls": [], "raw_text": "Richard Smith. 1996. What clinical information do doctors need? BMJ, 313(7064):1062.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. 
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition", "authors": [ { "first": "George", "middle": [], "last": "Tsatsaronis", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Balikas", "suffix": "" }, { "first": "Prodromos", "middle": [], "last": "Malakasiotis", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Partalas", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Zschunke", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Alvers", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" } ], "year": null, "venue": "Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Arti\u00e9res, Axel-Cyrille Ngonga Ngomo", "volume": "16", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s12859-015-0564-6" ] }, "num": null, "urls": [], "raw_text": "George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou- los, Yannis Almirantis, John Pavlopoulos, Nico- las Baskiotis, Patrick Gallinari, Thierry Arti\u00e9res, Axel-Cyrille Ngonga Ngomo, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competi- tion. BMC Bioinformatics, 16(1):138.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Frank Soboczenski, and I. Marshall. 2020. Generating (factual?) narrative summaries of rcts: Experiments with neural multi-document summarization", "authors": [ { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "Sayantan", "middle": [], "last": "Saha", "suffix": "" } ], "year": null, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron C. Wallace, Sayantan Saha, Frank Soboczen- ski, and I. Marshall. 2020. Generating (fac- tual?) narrative summaries of rcts: Experiments with neural multi-document summarization. ArXiv, abs/2008.11293.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Development, implementation, and a cognitive evaluation of a definitional question answering system for physicians", "authors": [ { "first": "Hong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Minsuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "David", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "John", "middle": [], "last": "Ely", "suffix": "" }, { "first": "Jerome", "middle": [ "A" ], "last": "Osheroff", "suffix": "" }, { "first": "George", "middle": [], "last": "Hripcsak", "suffix": "" }, { "first": "James", "middle": [], "last": "Cimino", "suffix": "" } ], "year": 2007, "venue": "J. of Biomedical Informatics", "volume": "40", "issue": "3", "pages": "236--251", "other_ids": { "DOI": [ "10.1016/j.jbi.2007.03.002" ] }, "num": null, "urls": [], "raw_text": "Hong Yu, Minsuk Lee, David Kaufman, John Ely, Jerome A. Osheroff, George Hripcsak, and James Cimino. 2007. Development, implementation, and a cognitive evaluation of a definitional question an- swering system for physicians. J. 
of Biomedical Informatics, 40(3):236-251.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "CLINIQA: A Machine Intelligence Based Clinical Question Answering System", "authors": [ { "first": "M", "middle": [], "last": "Zahid", "suffix": "" }, { "first": "Ankush", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "R", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "G", "middle": [], "last": "Atluri", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M Zahid, Ankush Mittal, R. Joshi, and G. Atluri. 2018. CLINIQA: A Machine Intelligence Based Clinical Question Answering System.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A Additional figures of QA interfaces", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Additional figures of QA interfaces", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Yellow box shows text snippet used to answer \"what dose of flucloxacillin should I prescribe for a 5 year old child?\" 3 .", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Web interface for QA system developed using BioASQ.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Web interface for QA system developed using PubMedQA.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Example of a medical QA system output that handles the conflicting conclusions of 3 systematic reviews.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Contradictory source snippets leading to the response presented in figure 5.", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "Web interface for QA system developed using MEDIQA-QA.", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "Web interface for QA system developed using MEDIQA-AnS.", "type_str": "figure", "num": null, "uris": null }, "FIGREF7": { "text": "Web interface for QA system developed using LiveQA-Medical.", "type_str": "figure", "num": null, "uris": null }, "FIGREF8": { "text": "Web interface for MEANS.", "type_str": "figure", "num": null, "uris": null }, "FIGREF9": { "text": "Web interface for AskHERMES.", "type_str": "figure", "num": null, "uris": null }, "FIGREF10": { "text": "Web interface for CLINIQA, which includes figure 5 from (Zahid et al., 2018).", "type_str": "figure", "num": null, "uris": null }, "FIGREF11": { "text": "Web interface for MedQA which includes figure 3 from", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "html": null, "type_str": "table", "num": null, "content": "", "text": "Comparison of how well QA systems and datasets meet the desiderata outlined in \u00a72." } } } }