{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:09.904651Z"
},
"title": "Towards Automating Medical Scribing : Clinic Visit Dialogue2Note Sentence Alignment and Snippet Summarization",
"authors": [
{
"first": "Wen-Wai",
"middle": [],
"last": "Yim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Augmedix Inc",
"location": {}
},
"email": "wenwai.yim@augmedix.com"
},
{
"first": "Meliha",
"middle": [],
"last": "Yetisgen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "melihay@uw.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Medical conversations from patient visits are routinely summarized into clinical notes for documentation of clinical care. The automatic creation of clinical note is particularly challenging given that it requires summarization over spoken language and multiple speaker turns; as well, clinical notes include highly technical semi-structured text. In this paper, we describe our corpus creation method and baseline systems for two NLP tasks, clinical dialogue2note sentence alignment and clinical dialogue2note snippet summarization. These two systems, as well as other models created from such a corpus, may be incorporated as parts of an overall end-to-end clinical note generation system.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Medical conversations from patient visits are routinely summarized into clinical notes for documentation of clinical care. The automatic creation of clinical note is particularly challenging given that it requires summarization over spoken language and multiple speaker turns; as well, clinical notes include highly technical semi-structured text. In this paper, we describe our corpus creation method and baseline systems for two NLP tasks, clinical dialogue2note sentence alignment and clinical dialogue2note snippet summarization. These two systems, as well as other models created from such a corpus, may be incorporated as parts of an overall end-to-end clinical note generation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As a side effect of widespread electronic medical record adoption spurred by the HITECH Act, clinicians have been burdened with increased documentation demands (Tran et al.) . Thus for each visit with a patient, clinicians are required to input order entries and referrals; most importantly, they are charged with the creation of a clinical note. A clinical note summarizes the discussions and plans of a medical visit and ultimately serves as a clinical communication device, as well as a record used for billing and legal purposes. To combat physician burnout, some practices employ medical scribes to assist in documentation tasks. However, hiring such assistants to audit visits and to collaborate with medical staff for electronic medical record documentation completion is costly; thus there is great interest in creating technology to automatically generate clinical notes based on clinic visit conversations.",
"cite_spans": [
{
"start": 160,
"end": 173,
"text": "(Tran et al.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Not only does the task of clinical note creation from medical conversation dialogue include summarizing information over multiple speakers, often the clinical note document is created with clinicianprovided templates; clinical notes are also often injected with structured information, e.g. labs. Finally, parts of clinical notes may be transcribed from dictations; or clinicians may issue commands to adjust changes in the text, e.g. \"change the template\", \"nevermind disregard that.\" In earlier work (Yim et al., 2020) , we introduced a new annotation methodology that aligns clinic visit dialogue sentences to clinical note sentences with labels, thus creating sub-document granular snippet alignments between dialogue and clinical note pairs (e.g. Table 1 , 2). In this paper, we extend this annotation work on a real corpus and provide the first baselines for clinic visit dialogue2note automatic sentence alignments. Much like machine translation (MT) bitext corpora alignment is instrumental to the progress in MT; we believe that dialogue2note sentence alignment will be a critical driver for AI assisted medical scribing. In the dia-logue2note snippet summarization task, we provide our baselines for generating clinical note sentences from transcript snippets. Technology developed from these tasks, as well as other models generated from this annotation, can contribute as part of a larger framework that ingests automatic speech recognition (ASR) output from clinician-patient visits and generates clinical note text end-to-end (Quiroz et al., 2019) . the dialogue snippet), it is important to consider several differences in textual mediums:",
"cite_spans": [
{
"start": 502,
"end": 520,
"text": "(Yim et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 1540,
"end": 1561,
"text": "(Quiroz et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semantic variations between spoken dialogue and written clinical note narrative. Spoken language in clinic visits have vastly different representations than in highly technical clinical note reports. Dialogue may include frequent use of vernacular and verbal expressions, along with disfluencies, filler words, and false starts. In contrast, clinical note text is known to use semi-structured language, e.g. lists, and is known to have a much higher degree of nominalization. Moreover, notes frequently contain medical terminology, acronyms, and abbreviations, often with multiple word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Information density and length. Whereas clinical notes are highly dense technical documents, conversation dialogue are much longer than clinical notes. In fact, in our data, dialogues were on average three times the note length. Key information in conversations are regularly interspersed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Dialogue anaphora across multiple turns is pervasive. Anaphora is the phenomenon in which information can only be understood in conjunction with references to other expressions. Consider in the dialogue example : \"Patient: I have been having swelling and pain in my knee. Doctor: How often does the knee bother you?\" It's understood that the second reference of \"knee\" pertains to the kneerelated swelling and pain. A more complex example is shown in Table 2 note line 6. While anaphora occurs in all naturally generated language, in con-versation, it may appear across multiple turns many sentences apart with contextually inferred subjects.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Order of appearance between source and target are not consistent. The order of information and organization of data in a clinical note may not match the order of discussion in a clinic visit dialogue. This provides additional challenges in the alignment process. Table 2 shows corresponding note and dialogue information with the same color.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Content incongruency. Relationship-building is a critical aspect of clinician-patient visits. Therefore visit conversations may include discussion unrelated to patient health, e.g. politics and social events. Conversely, not all clinical note content necessarily corresponds to a dialogue content. Information may come from a clinical note template or various parts of the electronic medical record.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Clinical note creation from conversation amalgamates interweaving subtasks. Elements in a clinic visit conversation (or accompanying speech introduction) are intended to be recorded or acted upon in different ways. For example, some spoken language may be directly copied to the clinical note with minor pre-determined edits, such as in a dictation, e.g. \"three plus cryptic\" will be converted to \"3+ cryptic\". However some language is meant to express directives, pertaining to adjustments to the note, e.g. \"please insert the risks and benefits template for tonsillectomy.\" Some information is meant to be interpreted, e.g. \"the pe was all normal\" would allow a note sentence \"CV: normal rhythm\" as well as \"skin: intact, no lacerations\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Finally, there are different levels of abstractive summarization over multiple statements, questions and answers as shown in the Table 2 examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Clinical Conversation Language Understanding Language understanding of clinical conversation can be traced to a plethora of historical work in conversation analysis regarding clinician-patient interactions (Byrne and Long, 1977; Raimbault et al., 1975; Drass, 1982; Cerny, 2007; Wang et al., 2018) . More recent work has additionally included classification of dialogue utterances into semantic categories. Examples include classifying dialogue sentences into either the target SOAP section format or by using abstracted labels consistent with conversation analysis (Jeblee et al., 2019; Schloss and Konam, 2020; Wang et al., 2020) . The work of (Lacson et al., 2006) framed identifying relevant parts of hemodialysis 118 nurse-patient phone conversations as an extractive summarization task. There has also been numerous works related to identifying topics, entities, attributes, and relations from clinic visit conversation -using various schemas (Jeblee et al., 2019; Rajkomar et al., 2019; Du et al., 2019) . Though clinic conversation language understanding is not explored in this work, our automatic or manual sentence alignments methods produce the language understanding labels that may to used to (a) model dialogue relevance, (b) cluster dialogue topics, and (c) classify speaking mode, e.g. dictation versus question-answers.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Byrne and Long, 1977;",
"ref_id": "BIBREF2"
},
{
"start": 229,
"end": 252,
"text": "Raimbault et al., 1975;",
"ref_id": "BIBREF20"
},
{
"start": 253,
"end": 265,
"text": "Drass, 1982;",
"ref_id": "BIBREF5"
},
{
"start": 266,
"end": 278,
"text": "Cerny, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 279,
"end": 297,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 566,
"end": 587,
"text": "(Jeblee et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 588,
"end": 612,
"text": "Schloss and Konam, 2020;",
"ref_id": "BIBREF23"
},
{
"start": 613,
"end": 631,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 646,
"end": 667,
"text": "(Lacson et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 949,
"end": 970,
"text": "(Jeblee et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 971,
"end": 993,
"text": "Rajkomar et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 994,
"end": 1010,
"text": "Du et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Creating a corpus of aligned clinic visit conversation dialogue sentences with corresponding clinical note sentences is instrumental for training language generation systems. Early work in this domain includes that of (Finley et al., 2018) , which uses an automated algorithm based on some heuristics, e.g. string matches, and merge conditions, to align dictation parts of clinical notes. In (Yim et al., 2020) , we annotated manual alignments between dialogue sentences and clinical note sentences for the entire visit; however, the dataset was small and artificial (66 visits). Here we utilize this approach on real data and additionally provide an automatic sentence alignment baseline system. To our knowledge, this is the first work to propose an automated sentence alignment system for entire clinic visit dialogue and note pairs.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Finley et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 392,
"end": 410,
"text": "(Yim et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clinic Visit Dialogue2note Sentence Alignment",
"sec_num": null
},
{
"text": "Clinical Language Generation from Conversation (Finley et al., 2018) produced dictation parts of a report, measuring performance both on gold standard transcripts and raw ASR output using statistical MT methods. In (Liu et al., 2019) , the authors labeled a corpus of 101K simulated conversations and 490 nurse-patient dialogues with artificial short semi-structured summaries. They experimented with different LSTM sequence-tosequence methods, various attention mechanisms, pointer generator mechanisms, and topic information additions. (Enarvi et al., 2020) performed similar work with sequence-to-sequence methods on a corpus of 800K orthopaedic ASR generated transcripts and notes; (Krishna et al., 2020 ) on a corpus of 6862 visits of transcripts annotated with clinical note summary sentences. Unlike most of previous works, our task generates clinical note sentences from labeled transcript snippets, which are at times overlapping and discontinuous. (Krishna et al., 2020) 's CLUSTER2SENT oracle system does use gold standard transcript \"clusters\", though different from our setup, outputs entire sections. While this strategy presupposes an upstream conversation topic segmentation system 1 as well as some extractive summarization, generation based on smaller text chunks can lead to more controllable and accurate natural language generation, critical characteristics in health applications.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Finley et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 215,
"end": 233,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 538,
"end": 559,
"text": "(Enarvi et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 686,
"end": 707,
"text": "(Krishna et al., 2020",
"ref_id": "BIBREF14"
},
{
"start": 958,
"end": 980,
"text": "(Krishna et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clinic Visit Dialogue2note Sentence Alignment",
"sec_num": null
},
{
"text": "Data The data set was constructed from clinical encounter visits from 500 visits and 13 providers. The data for each visit consisted of a visit audio and clinical note. For each visit audio, speaker roles (e.g. clinician patient) were segmented and labeled. Automatically generated speech to text for each audio was manually corrected by annotators. Table 3 gives the summary statistics of the extracted visit audio. For all specialties, the average number of turns and sentences for transcript was 175 \u00b1 111 and 341 \u00b1 214, for a total of 87725 turns and 170546 sentences. The number of sentences for clinical note was 47 \u00b1 24, for a total of 23421 sentences. Table 4 shows the number of turns and sentences per different types of speakers. We also combined our data with external data, the mock patient visit (MPV) dataset, from (Yim et al., 2020) Annotations Each annotation is based on a clinical note sentence association with multiple transcript sentences. A note sentence can be associated with zero transcript sentences and an INFERRED-OUTSIDE label for default template values, e.g. \"cv: normal\". One may also be associated with sets of transcript sentences and a set tag, e.g. DICTATION or QA (described below). Finally, when multiple sets have anaphoric references, they may be tied together using a GROUP label. Given this hierarchy, the annotation related to a single note sentence can be represented as a tree as shown in Figure 1 .",
"cite_spans": [
{
"start": 830,
"end": 848,
"text": "(Yim et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 660,
"end": 667,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1435,
"end": 1443,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "Set labels COMMAND: Spoken by the clinician to the scribe to make a change to the clinical note structure, e.g. \"add skin care macro.\" DICTATION: Spoken by the clinician to the scribe where the output text is expected to be almost verbatim, though with understood changes in abbrevations, number expressions, and language formatting commands, e.g. \"return in four to five days period.\" STATEMENT2SCRIBE: Spoken by the clinician to the scribe where information is communicated informally, e.g. \"okay so put down heart and lungs were normal\" STATEMENT: Statements spoken by any participant in a clinic visit in natural conversation, e.g. \"it lasted about a week.\" QA: Questions and answers spoken by any participant in a clinic visit in natural conversation, e.g. \"how long has the runny nose lasted? about a week.\" INFERRED-OUTSIDE: Clinical note sentences for which information comes from a known template's default value rather than the conversation, e.g.\"skin: intact.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "If after applying all possible associations and still there is information in the note sentence not available from the transcript, then an INCOMPLETE tag is added. A note sentence is left unmarked if no information can be found from the transcript. Table 2 shows label annotations with color coding for a full abbreviated transcript-note pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "To measure interannotaor agreement, we calculated the triple, path, and span metrics introduced in (Yim et al., 2020) , briefly described again here. The triple, path, and span metrics were defined based on instances constructed from the annotation tree representation. Specifically, for the triple metric, which measures unlabeled note to dialogue sentence match, instances are defined by note sentence id and transcript sentence id per visit, e.g. 'visitid_01|note_0|3'. The second metric, similar to the leaf-ancestor metric used in parsing, takes into account the full path from one note sentence to one dialogue sentence, e.g. 'visitid_01|note_0|GROUP|QA|3'. The span metric, similar to that of PARSEVAL, measures a node-level labeled span of dialogue sentences, e.g. for the top group node would be 'visitid_01|note_0|GROUP| [10, 12, 13, 14] ' (Sampson and Babarczy, 2003) . When testing agreement, labels for each annotator are decomposed to these instance collections; true positive, false positive, and false negatives may be counted by the matches and mismatches between annotators. F1 score is calculated as usual. The different definitions allow both relaxed (triple) and stricter (path and span) agreement measurements.",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "(Yim et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 831,
"end": 835,
"text": "[10,",
"ref_id": null
},
{
"start": 836,
"end": 839,
"text": "12,",
"ref_id": null
},
{
"start": 840,
"end": 843,
"text": "13,",
"ref_id": null
},
{
"start": 844,
"end": 847,
"text": "14]",
"ref_id": null
},
{
"start": 850,
"end": 878,
"text": "(Sampson and Babarczy, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "Labeling Process A group of 11 annotators were trained for various parts of the processing task. Audio transcription was performed using Elan (archive.mpi.nl/tla/elan) and dialogue2note annotation was performed using an in-house software. Annotators underwent training on sample files for which they received in-depth feedback. They additionally took a training quiz and self-reviewed errors. After training, their interannotator agreement was calculated based on 10 final files. Their average pairwise triple, path, and span F1 scores were 0.754, 0.549, and 0.645 respectively, a reasonable performance given the task difficulty. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "Annotation Statistics On average 58 \u00b1 18 % of the clinical note was marked with an annotation. This suggests that around 40% of the note is structural, e.g. blank sentences or section headers, or from outside sources, e.g. injected labs, medication lists, etc. On average 13 \u00b1 12 % of the transcript sentences were marked. This low number suggests that much of the information from transcripts consisted of repeats or were unused. Table 5 shows that most note sentences were associated with one set type, though still many were associated with multiple. Table 6 shows the frequency of note sentences and the unique label types associated with it. From the spread of percentages for each combination category, it is apparent that understanding the entire conversation context requires combining different types of cognitive listening skills. For each note sentence, the average range of transcript sentences associated with it in the train set was 11, with the 90th percentile at 17; however there were 10% of cases with ranges above 17, which occurred when explicit topic mentions appeared far away from detailed discussion. Crossing annotations occur when content from the note and transcript appeared comparatively out of order. For example, if note sentence 0 is matched with transcript sentence 3 and meanwhile note sentence 3 is matched with transcript sentence 0, these annotations \"cross\", rendering automatic alignment more challenging. To quantify this, we calculate the percentages of annotations which annotates cross one, three, or five other annotations 4 (Table 7) . These high percentages reveal that the order of information in the transcript differ greatly from that of the note -thus 3 These agreement values are consistent with the comparable task of simplification corpus creation, previously measured to be 0.68 kappa (Hwang et al., 2015) .",
"cite_spans": [
{
"start": 1702,
"end": 1703,
"text": "3",
"ref_id": null
},
{
"start": 1839,
"end": 1859,
"text": "(Hwang et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 554,
"end": 561,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 1569,
"end": 1578,
"text": "(Table 7)",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "4 DICTATION, STATEMENT2SCRIBE, and COMMAND labels are not counted, to focus on conversational dialogue. Table 6 (# label-types | freq | %): 1 | 8712 | 37; 2 | 2914 | 12; 3 | 1021 | 4; 4 | 311 | 1; 5 | 20 | -. Thus alignments are said to be non-monotonic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "The full amount of annotations from the dia-logue2note labels may be used to create classifiers in many different types of tasks, e.g. dialogue relevance classification, topic segmentation, command identification, etc. However, in the remaining sections, we focus on two particular system applications : automatic dialogue2note sentence alignment and snippet summarization. For these baselines, the train and test sets were split using stratified random sampling using an 80-20 split. The training and test sets were composed of 400 and 100 of our visits; 53 and 13 for the MPV visits. 91 visits from training was reserved for development testing. As a simplification, the GROUP, INCOMPLETE, and COMMAND labels are ignored for these baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "Table 7 crossing percentages: cross1 33 \u00b1 28; cross3 22 \u00b1 27; cross5 14 \u00b1 22.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "We define the dialogue2note sentence alignment baseline task as the classification of 1-to-1 dialogue sentence and clinical note sentence pairs with set labels. Thus, the candidate space includes all combinations of clinical note sentences paired with all dialogue possible sentences in a visit; only those annotated with labeled associations are considered positive. This is a subset of the full annotation tasks that require 1-to-many multi-label classifications with hierarchical GROUP set labels. However, this feature description match-note Dot product of note and transcript vector divided by the magnitude of the note vector. match-transcript Dot product of note and transcript vector divided by the magnitude of the transcript vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Baselines",
"sec_num": "5"
},
{
"text": "cui-pair UMLS concept pair, as extracted by MetaMap (Aronson and Lang, 2010) , where the first concept unique identifier (cui) is from the clinical note and the second cui is from the transcript sentence. The top_p parameter determines which most significant cui-pair features to keep, using chi-square analysis. prev-sent-quest 1 if the previous sentence has one of sentence has a question feature, e.g. interrogative words such as who, what etc, 0 otherwise. jaccard-sim If set to local, then defaults to jaccard similarity of the note-transcript sentence pair. If set to regional and similarity passes the sim-thresh threshold, instead, the maximum jaccard similarity from candidate regional local matches is returned. These candidate regional matches are created by by heuristically finding the closest length matches by incorporating previous and next sentences. Bitext Corpus Creation Related Work The topic of bitext corpus creation is often used in the context of creating resources for statistical machine translation or as a means to create cross lingual linguistic resources (Koehn, 2005; Tiedemann, 2011) ; it is also used to describe simplification dataset creation (Barzilay and Elhadad, 2003; Hwang et al., 2015; \u0160tajner et al., 2017) . While highly parallel bitext can be aligned using sentence length methods, much like other comparable corpora alignment strategies, multi-form comparable corpora cannot rely on monotonic ordering or correlated bitext sentence length; moreover the different text forms presents additional constraints on exact narrative structure. Like in previous work, we build our baselines for dialogue2note sentence alignment by using similarity features with some adjustment to incorporate similarity over multiple sentences.",
"cite_spans": [
{
"start": 44,
"end": 76,
"text": "MetaMap (Aronson and Lang, 2010)",
"ref_id": null
},
{
"start": 1086,
"end": 1099,
"text": "(Koehn, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 1100,
"end": 1116,
"text": "Tiedemann, 2011)",
"ref_id": "BIBREF26"
},
{
"start": 1179,
"end": 1207,
"text": "(Barzilay and Elhadad, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 1208,
"end": 1227,
"text": "Hwang et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 1228,
"end": 1249,
"text": "\u0160tajner et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Baselines",
"sec_num": "5"
},
{
"text": "System Description Candidate classification instances for every note sentence and transcript sentences pair were created and classified into one of the previously described set labels. For each clinical note, an additional classification instance was created for a match with an empty transcript line. (This occurs with a INFERRED-OUTSIDE label). A single tag was assigned to each classification instance according to annotated labels. If multiple tags existed per sentence pair, we took the first label in the following order: STATEMENT, STATEMENT2SCRIBE, QA, DICTATION. Sentences were tokenized, changed to lemma form using Spacy English model (spacy.io), and vectorized according to a bag of words model. Stop words and punctuation were removed. To balance the uneven data distribution, the number of negative class instances were sampled randomly according to configurable parameter, neg_samp. We experimented with three baseline pairwise classification systems: simple-threshold : A rule-based system that categorizes everything over threshold1 to DICTATION anything between threshold1 and threshold2 to STATEMENT2SCRIBE. These were the two labels in the train set with the highest pairwise similarities; other labels had comparable similarities. system1 : A simple feature-based system using a decision tree classifier (scikit-learn.org). Its features included speaker category, cosine similarity, length of the note and transcript sentence vectors, and the note sentence vector. In order to take into account the match over the length of either the note or the transcript, we included a match-note and match-transcript feature described in Table 8 . system2 : A feature-based system like system1 with additional features, the transcript vector, a previous-question feature, a cui-pair feature, and a jaccard similarity feature described in Table 8 . To avoid erroneous matches to answer sentences, in this system, common answers (e.g. \"no\") were removed from the train set.",
"cite_spans": [],
"ref_spans": [
{
"start": 1647,
"end": 1654,
"text": "Table 8",
"ref_id": "TABREF11"
},
{
"start": 1847,
"end": 1854,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Sentence Alignment Baselines",
"sec_num": "5"
},
{
"text": "Results After tuning, we found optimal performances for the threshold systems at threshold1=0.9 and threshold2=0.6. For system1 and system2, optimized parameters were at neg_samp=50, jaccard-sim=regional, sim-thresh=0.3, top_p=20, for a decision tree classifier. The simple threshold systems achieved performance near that of the more complex systems. Using a simple feature-based system, we see F1 measures between 0.188 and 0.390 for everything but INFERRED-OUTSIDE and UNMARKED. As expected, given the high frequency of UNMARKED, it has the highest performance. Adding additional features and curating training examples gave a minor boost across different labels, as shown by the differences between system1 and system2. Analyzing the results across pairs based on similarity ranges, we see that the higher-similarity pairs have higher performance, likely because the similarity features are more reliable in those ranges (Table 10). Table 11 shows the results of system2 per label. Such results are comparable to those of simplification dataset creation systems, with 0.33 F1 at 0-40% similarity, 0.79 F1 at 40-70%, and 0.95 F1 at 70-100% (Barzilay and Elhadad, 2003). Studying confusions between classes in system2, we found that the overwhelming majority of errors were due to assigning unmarked passages to another label. This may be due to the simple feature representation, where certain note- or transcript-content bag-of-words features may carry higher weights than the similarity features. There are also cases where the dialogue legitimately mentions what is discussed in the clinical note but is not marked in the gold standard (e.g. the same topic may be referred to multiple times but we only annotate the best instance). To a smaller extent, there were confusions among related positive class labels. Confusions between DICTATION and STATEMENT2SCRIBE occurred for high-similarity sentences. Confusions between STATEMENT2SCRIBE and STATEMENT arose for cases in which dialogue may be perceived to be spoken either to a scribe or a patient, e.g. \"looks normal\". Confusions between STATEMENT and QA transpired because we allowed the QA label to encompass both open-ended questions, e.g. \"How are you? I have been having a headache for 2 weeks\", as well as very focused categorical questions, e.g. \"Did you take nasal spray? No.\"; thus answers to open-ended questions can easily be confused with STATEMENTs.",
"cite_spans": [
{
"start": 1085,
"end": 1112,
"text": "(Barzilay and Elhadad, 2003",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 879,
"end": 889,
"text": "(Table 10)",
"ref_id": "TABREF1"
},
{
"start": 892,
"end": 901,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sentence Alignment Baselines",
"sec_num": "5"
},
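The pairwise alignment baseline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: note-dialogue sentence pairs are scored with a token-level Jaccard similarity feature and classified with a decision tree, in the spirit of the threshold and feature-based systems. The toy sentence pairs and the single-feature setup are assumptions for illustration.

```python
# Hedged sketch of pairwise dialogue2note sentence alignment as
# classification over a simple similarity feature (not the paper's code).
from sklearn.tree import DecisionTreeClassifier

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Toy (note_sentence, dialogue_sentence, label) pairs; the binary label
# stands in for classes like STATEMENT vs. UNMARKED.
pairs = [
    ("patient reports headache for two weeks", "i have had a headache for two weeks", 1),
    ("denies fever or chills", "no fever no chills", 1),
    ("patient reports headache for two weeks", "how is your dog doing", 0),
    ("denies fever or chills", "see you next month", 0),
]
X = [[jaccard(note, dia)] for note, dia, _ in pairs]
y = [label for _, _, label in pairs]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[jaccard("denies fever or chills", "no fever or chills here")]]))  # → [1]
```

A real system would add the paper's richer features (regional similarity, top-p ranking, bag-of-words) as extra feature columns alongside the similarity score.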
{
"text": "In the current system, classifications for each note-dialogue sentence pair are made independently. We could improve the system by framing the matching of each clinical note sentence as a sequence labeling problem. More semantic normalization features and surrounding-sentence features would also likely benefit the classification. Finally, in the future we can try more complex sentence vector representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Baselines",
"sec_num": "5"
},
{
"text": "We define the snippet summarization baseline task as follows: given the gold standard dialogue snippet text, generate the corresponding clinical note sentence. The numbers of aligned sets for train, dev, and test were 7129, 1851, and 2085, respectively. The average numbers of input and output tokens were 24 and 13, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
{
"text": "Monolingual Text-to-Text Language Generation Related Work Monolingual monologue text-to-text language generation tasks include summarization (See et al., 2017) , simplification (\u0160tajner et al., 2017) , and paraphrasing (Ma et al., 2018) . The exact manner of transformation between the input and output text depends on comparative lengths, task-specific constraints, and level of abstraction.",
"cite_spans": [
{
"start": 141,
"end": 159,
"text": "(See et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 177,
"end": 199,
"text": "(\u0160tajner et al., 2017)",
"ref_id": null
},
{
"start": 219,
"end": 236,
"text": "(Ma et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
{
"text": "In the area of conversational modeling, e.g. chatbots, the task is to produce appropriate dialogue responses given a prompt. In one simple classic setup, response generation can be modeled as an information retrieval problem (Jurafsky and Martin, 2009; Ji et al., 2014). In such systems, the prompt query is processed and compared to those saved in training data. [Table 12: BLEU, ROUGE-1, ROUGE-2, and ROUGE-L performance by sections.] The system returns the saved response whose prompt is most similar to the query. Although our task is not to respond to a user, we may utilize the same type of system. Specifically, we can instead model the note sentence as the retrieval response to a dialogue input prompt. Our problem most closely resembles meeting conversation summarization, in which the source data is a meeting conversation (dialogue) and the target data is a meeting summary (monologue) (Carenini et al., 2011). Method pipelines include multiple classifiers, such as topic segmentation and action item identification, as well as a language generation module. There is also work on end-to-end pipelines that perform extractive and abstractive neural generation (Zhu et al., 2020; Mehdad et al., 2013). Unlike in a typical summarization task, our source data is of a length comparable to the target, making the task more tractable. For our baselines, in addition to a simple retrieval-based system, we experimented with a classic sequence-to-sequence model with and without a pointer-generator.",
"cite_spans": [
{
"start": 229,
"end": 256,
"text": "(Jurafsky and Martin, 2009;",
"ref_id": "BIBREF12"
},
{
"start": 257,
"end": 273,
"text": "Ji et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 905,
"end": 928,
"text": "(Carenini et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 1179,
"end": 1197,
"text": "(Zhu et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 1198,
"end": 1218,
"text": "Mehdad et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
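The retrieval framing above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: paired (dialogue snippet → note sentence) examples are cached as TF-IDF vectors, and a new snippet retrieves the note sentence of its most cosine-similar cached snippet. The training pairs here are fictitious.

```python
# Hedged sketch of a retrieval-based note-sentence generator:
# return the cached note sentence whose dialogue snippet is most
# cosine-similar to the incoming snippet (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_snippets = [
    "i have been coughing for three days",
    "my knee hurts when i run",
]
train_note_sentences = [
    "Patient reports a three-day history of cough.",
    "Knee pain with running.",
]

vectorizer = TfidfVectorizer().fit(train_snippets)
cache = vectorizer.transform(train_snippets)  # cached snippet vectors

def retrieve(snippet: str) -> str:
    """Return the note sentence paired with the most similar cached snippet."""
    sims = cosine_similarity(vectorizer.transform([snippet]), cache)[0]
    return train_note_sentences[sims.argmax()]

print(retrieve("i keep coughing, started a few days ago"))
```

In the paper's ret baseline the cache would also store each note line's section label, so the same lookup doubles as a section predictor.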
{
"text": "Note Section Identification Clinical notes are typically organized into different sections demarcated by section headers, as shown in Table 2 note lines 0, 2, 26, and 62. In order to report language generation performances grouped by section, and also to experiment with joint section prediction, we automatically labeled note sentences with one of six note sections using a rule-based algorithm. The categories were: History of Present Illness (HPI), Assessment and Plan (AP), Physical Exam (PE), Chief Complaint (CC), Review of Systems (ROS), and Imaging (IM). Section headers were identified using regular expressions created by studying the train set. Subsequently, note sentences were labeled based on their corresponding section header. We modeled section prediction for two of our baseline systems: ret and pg-mt.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
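The rule-based section labeling can be sketched as follows. The header regular expressions below are assumptions for illustration (the paper's exact patterns are not given): each header match sets the current section, and every subsequent note line inherits that label.

```python
# Illustrative sketch of rule-based note section labeling; the header
# patterns are assumed, not the paper's exact regular expressions.
import re

SECTION_PATTERNS = [
    ("HPI", re.compile(r"history of present illness", re.I)),
    ("AP", re.compile(r"assessment and plan", re.I)),
    ("PE", re.compile(r"physical exam", re.I)),
    ("CC", re.compile(r"chief complaint", re.I)),
    ("ROS", re.compile(r"review of systems", re.I)),
    ("IM", re.compile(r"imaging", re.I)),
]

def label_sections(note_lines):
    """Label each note line with the most recently seen section header."""
    current, labeled = None, []
    for line in note_lines:
        for label, pattern in SECTION_PATTERNS:
            if pattern.search(line):
                current = label
                break
        labeled.append((line, current))
    return labeled

note = [
    "CHIEF COMPLAINT:",
    "Headache.",
    "HISTORY OF PRESENT ILLNESS:",
    "Patient reports a two-week headache.",
]
print(label_sections(note))
```

Lines before the first recognized header would stay unlabeled (None); a production system would need a fallback for such sentences.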
{
"text": "System Descriptions Below we describe our baseline systems. We trained and tested our seq-to-seq models using the LeafNATS codebase (Shi et al., 2019). retrieval-based generator (ret): Note sentence suggestion generation is modeled as a retrieval task. Paired transcript snippets and note lines (with associated section) are cached. For a new transcript snippet, the note sentence corresponding to the highest cosine similarity dialogue snippet in the training data is returned. seq2seq baselines: We evaluate the performances of three sequence-to-sequence baselines with an RNN sequence encoder. The base system (vanilla) is a simple sequence-to-sequence system with attention. We also evaluate an option that adds a pointer-generator network (pg). Finally, to model a pointer-generator system that outputs a summary as well as a section designation, we evaluated a final option that treats the two outputs as a multitask system (pg-mt). 5 Experiments were run on an EC2 p2.xlarge instance with an NVIDIA K80 GPU, taking \u223c150 minutes each.",
"cite_spans": [
{
"start": 132,
"end": 150,
"text": "(Shi et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
{
"text": "Results Table 12 shows the BLEU, ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) performances across the different note sections. As shown, the two pointer-generator systems typically outperform the retrieval-based and vanilla baselines. This difference may be due to the pointer-generator's ability to copy items directly from the original input.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
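The n-gram overlap metrics reported in Table 12 can be illustrated with a minimal ROUGE-1 F1 computation. This is a simplified sketch for intuition, not the official ROUGE implementation (which adds stemming, multiple references, and bootstrap aggregation); the example sentences are fictitious.

```python
# Minimal sketch of ROUGE-1 F1 (unigram overlap) between a generated
# note sentence and a reference; simplified, not the official scorer.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Harmonic mean of unigram precision and recall, with clipped counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("patient reports a headache",
                  "patient reports headache for two weeks")
print(round(score, 3))  # → 0.6
```

ROUGE-2 replaces unigrams with bigrams, and ROUGE-L uses longest common subsequence instead of clipped counts.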
{
"text": "Comparatively, (Krishna et al., 2020)'s best CLUSTER2SENT oracle scores yielded R-1, R-2, and R-L performances of 66.5, 39.01, and 52.46, respectively, from 6862 visits. In our low-resource scenario of 566 visits, we achieved 50%, 43%, and 61% of their R-1, R-2, and R-L scores at 12% of the data. This suggests that, given more training data, our system may similarly approach state-of-the-art levels. Table 13 shows the accuracy of the ret and pg-mt systems for note section prediction. Although pg-mt performs better than the ret system on the whole, this is not the case for low-frequency categories. This phenomenon most likely occurs because pg-mt favors higher-frequency labels, which is consistent with its training objective. ret, which classifies note section through intermediate comparisons of input sequence similarities, is less likely to be directly skewed by class imbalance. Human Evaluation We sampled 10 random test snippets from each of the six section categories for evaluation (60 snippets in total). An annotator with a medical degree was asked to rank the four systems relative to each other, where 1 is the best. Additionally, each system was evaluated independently with a score from 1-5 (5=best) for the categories relevancy, factual accuracy, writing style, completeness, and overall. Table 14 shows the average scores for the different baseline systems. The vanilla seq2seq system consistently performed the worst, while the pointer-generator systems consistently performed better.",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "(Krishna et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 394,
"end": 402,
"text": "Table 13",
"ref_id": "TABREF1"
},
{
"start": 1305,
"end": 1313,
"text": "Table 14",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
{
"text": "[Table 14 (columns: ret, vanilla, pg, pg-mt): completeness 2.5, 1.2, 3.1, 2.9; factual accuracy 2.4, 1.3, 3.2, 2.9; relevancy 2.9, 1.5, 3.7, 3.5; writing style 3.2, 1.8, 3.3, 3.3; overall 2.4, 1.2, 3.1, 2.9; rank (1=best) 2.7, 3.4, 1.8, 2.1.] While our sentence generation baselines showed modest performance, this is consistent with low-resource language generation scenarios and may be ameliorated with additional training data. To improve our system, in the future we will apply low-resource machine translation techniques, utilizing unpaired sources of medical dialogue and clinical note corpora. Furthermore, we can experiment with other sequence-to-sequence approaches, e.g. transformers, for better summary generation. Joint section prediction may be extended to model hierarchical sections by adjusting targets to include subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Snippet Summarization Baselines",
"sec_num": "6"
},
{
"text": "In this work, we provided baselines for two tasks that work towards natural language generation of note sentences from medical visit conversations. An automated dialogue2note sentence alignment system can be used to create the realistic training data that is so critical for modern systems. Meanwhile, given properly extracted transcript snippets, dialogue2note snippet summarization could provide a valuable building block for an overall language generation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In future work, additional metadata (e.g. set labels, speaker, specialty) may be incorporated into the network architecture. Although we only explore two systems here, other models, such as topic segmentation, extractive summarization, note sentence ordering, and dialogue command classification, can be trained from this annotated dataset alone. These labels may alternatively be used as additional multitask classification objectives in a full sequence-to-sequence model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Extensions of this labeled dataset may yield further interesting gains. For example, textual entailment labels between paired snippets would allow progress towards understanding and generating semantic variations and detail. Event annotation, which structures text, if performed on paired snippets, would provide training examples for data-to-text or text-to-data generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Together or apart, such systems would enable automation of clinical note generation, whether as a full end-to-end solution or as piecemeal suggestions in a human-augmented solution. Ultimately, this technology may be utilized to reduce the burden on clinicians, allowing them to focus back on patient care.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "All annotators, hired in-house, underwent HIPAA data and security training. Data was stored in dedicated HIPAA compliant compute resources. Data collection and persistence was consistent with terms of use and customer expectations. All content examples in this paper are fictitious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": null
},
{
"text": "A system that divides conversations into segments according to topics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Final experimental hyperparameters were set at, RNN=LSTM, batch_size=50, emb_dim=128, src_hidden_dim=256, trg_hidden_dim=256, src_seq_lens=400, trg_seq_lens=100, attn_method=luong_concat, repetition=vanilla, share_emb_weight=False.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An overview of MetaMap: historical perspective and recent advances",
"authors": [
{
"first": "Alan R",
"middle": [],
"last": "Aronson",
"suffix": ""
},
{
"first": "Fran\u00e7ois-Michel",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association : JAMIA",
"volume": "17",
"issue": "3",
"pages": "229--236",
"other_ids": {
"DOI": [
"10.1136/jamia.2009.002733"
]
},
"num": null,
"urls": [],
"raw_text": "Alan R Aronson and Fran\u00e7ois-Michel Lang. 2010. An overview of MetaMap: historical perspective and recent advances. Journal of the American Medical Informatics Association: JAMIA, 17(3):229-236.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sentence alignment for monolingual comparable corpora",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 25- 32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Doctors talking to patients",
"authors": [
{
"first": "John",
"middle": [
"F"
],
"last": "Byrne",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Long",
"suffix": ""
}
],
"year": 1977,
"venue": "Psychological Medicine",
"volume": "7",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John F Byrne and PS Long. 1977. Doctors talking to patients. Psychological Medicine, 7(4):735.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Methods for mining and summarizing text conversations",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Synthesis Lectures on Data Management",
"volume": "3",
"issue": "3",
"pages": "1--130",
"other_ids": {
"DOI": [
"10.2200/S00363ED1V01Y201105DTM017"
]
},
"num": null,
"urls": [],
"raw_text": "Giuseppe Carenini, Gabriel Murray, and Raymond Ng. 2011. Methods for mining and summarizing text conversations. Synthesis Lectures on Data Manage- ment, 3(3):1-130. Publisher: Morgan & Claypool Publishers.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the function of speech acts in doctor-patient communication",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Cerny",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miroslav Cerny. 2007. On the function of speech acts in doctor-patient communication. Linguistica.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Negotiation and the structure of discourse in medical consultation",
"authors": [
{
"first": "Kriss A",
"middle": [],
"last": "Drass",
"suffix": ""
}
],
"year": 1982,
"venue": "Sociology of Health & Illness",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kriss A Drass. 1982. Negotiation and the structure of discourse in medical consultation. Sociology of Health & Illness.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting symptoms and their status from clinical conversations",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Linh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Izhak",
"middle": [],
"last": "Shafran",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "915--925",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1087"
]
},
"num": null,
"urls": [],
"raw_text": "Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, and Izhak Shafran. 2019. Extracting symp- toms and their status from clinical conversations. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 915- 925. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generating medical reports from patient-doctor conversations using sequence-to-sequence models",
"authors": [
{
"first": "Seppo",
"middle": [],
"last": "Enarvi",
"suffix": ""
},
{
"first": "Marilisa",
"middle": [],
"last": "Amoia",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Del-Agua Teba",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Delaney",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Diehl",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Hahn",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Liam",
"middle": [],
"last": "Mcgrath",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Rubini",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Gagandeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Weiyi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Vozila",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ranjani",
"middle": [],
"last": "Ramamurthy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Natural Language Processing for Medical Conversations",
"volume": "",
"issue": "",
"pages": "22--30",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlpmc-1.4"
]
},
"num": null,
"urls": [],
"raw_text": "Seppo Enarvi, Marilisa Amoia, Miguel Del-Agua Teba, Brian Delaney, Frank Diehl, Stefan Hahn, Kristina Harris, Liam McGrath, Yue Pan, Joel Pinto, Luca Rubini, Miguel Ruiz, Gagandeep Singh, Fabian Stemmer, Weiyi Sun, Paul Vozila, Thomas Lin, and Ranjani Ramamurthy. 2020. Generating medi- cal reports from patient-doctor conversations using sequence-to-sequence models. In Proceedings of the First Workshop on Natural Language Process- ing for Medical Conversations, pages 22-30. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From dictations to clinical reports using machine translation",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": ""
},
{
"first": "Najmeh",
"middle": [],
"last": "Sadoughi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Axtmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Brenndoerfer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Suendermann-Oeft",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "3",
"issue": "",
"pages": "121--128",
"other_ids": {
"DOI": [
"10.18653/v1/N18-3015"
]
},
"num": null,
"urls": [],
"raw_text": "Gregory Finley, Wael Salloum, Najmeh Sadoughi, Erik Edwards, Amanda Robinson, Nico Axtmann, Michael Brenndoerfer, Mark Miller, and David Suendermann-Oeft. 2018. From dictations to clini- cal reports using machine translation. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 3 (Industry Papers), pages 121-128, New Orleans -Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Aligning sentences from standard wikipedia to simple wikipedia",
"authors": [
{
"first": "William",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2015,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1022"
]
},
"num": null,
"urls": [],
"raw_text": "William Hwang, Hannaneh Hajishirzi, Mari Osten- dorf, and Wei Wu. 2015. Aligning sentences from standard wikipedia to simple wikipedia. In HLT- NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extracting relevant information from physicianpatient dialogues for automated clinical note taking",
"authors": [
{
"first": "Serena",
"middle": [],
"last": "Jeblee",
"suffix": ""
},
{
"first": "Faiza Khan",
"middle": [],
"last": "Khattak",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Crampton",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Mamdani",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6209"
]
},
"num": null,
"urls": [],
"raw_text": "Serena Jeblee, Faiza Khan Khattak, Noah Crampton, Muhammad Mamdani, and Frank Rudzicz. 2019. Extracting relevant information from physician- patient dialogues for automated clinical note tak- ing. In Proceedings of the Tenth International Work- shop on Health Text Mining and Information Anal- ysis (LOUHI 2019), pages 65-74. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An information retrieval approach to short text conversation",
"authors": [
{
"first": "Zongcheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zongcheng Ji, Z. Lu, and Hang Li. 2014. An infor- mation retrieval approach to short text conversation. ArXiv, abs/1408.6988.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Speech and Language Processing",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing (2nd Edition). Prentice- Hall, Inc., USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generating soap notes from doctor-patient conversations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Sopan",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"P"
],
"last": "Bigham",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"Chase"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krishna, Sopan Khosla, Jeffrey P. Bigham, and Zachary Chase Lipton. 2020. Generating soap notes from doctor-patient conversations. ArXiv, abs/2005.01795.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic analysis of medical dialogue in the home hemodialysis domain: Structure induction and summarization",
"authors": [
{
"first": "Ronilda",
"middle": [
"C"
],
"last": "Lacson",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "William",
"middle": [
"J"
],
"last": "Long",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Biomedical Informatics",
"volume": "39",
"issue": "5",
"pages": "541--555",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2005.12.009"
]
},
"num": null,
"urls": [],
"raw_text": "Ronilda C. Lacson, Regina Barzilay, and William J. Long. 2006. Automatic analysis of medical dia- logue in the home hemodialysis domain: Structure induction and summarization. Journal of Biomedi- cal Informatics, 39(5):541-555.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Topic-aware pointergenerator networks for summarizing spoken conversations",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sheldon",
"middle": [
"Lee"
],
"last": "Shao Guang",
"suffix": ""
},
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
"volume": "",
"issue": "",
"pages": "814--821",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyuan Liu, A. Ng, Sheldon Lee Shao Guang, AiTi Aw, and Nancy F. Chen. 2019. Topic-aware pointer- generator networks for summarizing spoken conver- sations. 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 814- 821.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Query and output: Generating words by querying distributed word representations for paraphrase generation",
"authors": [
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "196--206",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, and Xuancheng Ren. 2018. Query and output: Gen- erating words by querying distributed word represen- tations for paraphrase generation. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 196-206, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Abstractive meeting summarization with entailment and fusion",
"authors": [
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Tompa",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "ENLG",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yashar Mehdad, G. Carenini, F. Tompa, and R. Ng. 2013. Abstractive meeting summarization with en- tailment and fusion. In ENLG.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Challenges of developing a digital scribe to reduce clinical documentation burden",
"authors": [
{
"first": "Juan",
"middle": [
"C"
],
"last": "Quiroz",
"suffix": ""
},
{
"first": "Liliana",
"middle": [],
"last": "Laranjo",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [
"Baki"
],
"last": "Kocaballi",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Berkovsky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Rezazadegan",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Coiera",
"suffix": ""
}
],
"year": 2019,
"venue": "NPJ Digital Medicine",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41746-019-0190-1"
]
},
"num": null,
"urls": [],
"raw_text": "Juan C. Quiroz, Liliana Laranjo, Ahmet Baki Koca- balli, Shlomo Berkovsky, Dana Rezazadegan, and Enrico Coiera. 2019. Challenges of developing a digital scribe to reduce clinical documentation bur- den. NPJ Digital Medicine, 2.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Aspects of communication between patients and doctors: an analysis of the discourse in medical interviews",
"authors": [
{
"first": "Ginette",
"middle": [],
"last": "Raimbault",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Cachin",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Marie"
],
"last": "Limal",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Eliacheff",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rappaport",
"suffix": ""
}
],
"year": 1975,
"venue": "Pediatrics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ginette Raimbault, Olga Cachin, Jean Marie Limal, Caroline Eliacheff, and Raphael Rappaport. 1975. Aspects of communication between patients and doctors: an analysis of the discourse in medical in- terviews. Pediatrics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatically charting symptoms from patient-physician conversations using machine learning",
"authors": [
{
"first": "Alvin",
"middle": [],
"last": "Rajkomar",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Vardoulakis",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2019,
"venue": "JAMA Internal Medicine",
"volume": "179",
"issue": "6",
"pages": "836--838",
"other_ids": {
"DOI": [
"10.1001/jamainternmed.2018.8558"
]
},
"num": null,
"urls": [],
"raw_text": "Alvin Rajkomar, Anjuli Kannan, Kai Chen, Laura Var- doulakis, Katherine Chou, Claire Cui, and Jeffrey Dean. 2019. Automatically charting symptoms from patient-physician conversations using machine learn- ing. JAMA Internal Medicine, 179(6):836-838.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A test of the leaf-ancestor metric for parse accuracy | natural language engineering | cambridge core",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Sampson",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Babarczy",
"suffix": ""
}
],
"year": 2003,
"venue": "Natural Language Engineering",
"volume": "9",
"issue": "4",
"pages": "365--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Sampson and Anna Babarczy. 2003. A test of the leaf-ancestor metric for parse accuracy | natu- ral language engineering | cambridge core. Natural Language Engineering, 9(4):365-380.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards an automated SOAP note: Classifying utterances from medical conversations",
"authors": [
{
"first": "Benjamin",
"middle": [
"J"
],
"last": "Schloss",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Konam",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin J Schloss and Sandeep Konam. 2020. To- wards an automated SOAP note: Classifying utter- ances from medical conversations. In Proceedings of Machine Learning Research, Machine Learning for Healthcare (MLHC).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "A",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Leafnats: An open-source toolkit and live demo system for neural abstractive text summarization",
"authors": [
{
"first": "Tian",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chandan K",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian Shi, Ping Wang, and Chandan K Reddy. 2019. Leafnats: An open-source toolkit and live demo sys- tem for neural abstractive text summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics (Demonstrations), pages 66-71.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bitext alignment",
"authors": [
{
"first": "Jorg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2011,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "4",
"issue": "2",
"pages": "1--165",
"other_ids": {
"DOI": [
"10.2200/S00367ED1V01Y201106HLT014"
]
},
"num": null,
"urls": [],
"raw_text": "Jorg Tiedemann. 2011. Bitext alignment. Synthesis Lectures on Human Language Technologies, 4(2):1- 165.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "How does medical scribes' work inform development of speech-based clinical documentation technologies? a systematic review",
"authors": [
{
"first": "Brian",
"middle": [
"D"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Yunan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Songzi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "27",
"issue": "",
"pages": "808--817",
"other_ids": {
"DOI": [
"10.1093/jamia/ocaa020"
]
},
"num": null,
"urls": [],
"raw_text": "Brian D. Tran, Yunan Chen, Songzi Liu, and Kai Zheng. How does medical scribes' work inform develop- ment of speech-based clinical documentation tech- nologies? a systematic review. 27(5):808-817.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Constructing a Chinese medical conversation corpus annotated with conversational structures and actions",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Wang, Yan Song, and Fei Xia. 2018. Constructing a Chinese medical conversation corpus annotated with conversational structures and actions. In Pro- ceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC- 2018), Miyazaki, Japan. European Languages Re- sources Association (ELRA).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Studying challenges in medical conversation with structured annotation",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Natural Language Processing for Medical Conversations",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlpmc-1.3"
]
},
"num": null,
"urls": [],
"raw_text": "Nan Wang, Yan Song, and Fei Xia. 2020. Studying challenges in medical conversation with structured annotation. In Proceedings of the First Workshop on Natural Language Processing for Medical Conversa- tions, pages 12-21. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Alignment annotation for clinic visit dialogue to clinical note sentence language generation",
"authors": [
{
"first": "Wen-Wai",
"middle": [],
"last": "Yim",
"suffix": ""
},
{
"first": "Meliha",
"middle": [],
"last": "Yetisgen",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Micah",
"middle": [],
"last": "Grossman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "413--421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-wai Yim, Meliha Yetisgen, Jenny Huang, and Micah Grossman. 2020. Alignment annotation for clinic visit dialogue to clinical note sentence lan- guage generation. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 413-421. European Language Resources Associa- tion.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "End-to-end abstractive summarization for meetings",
"authors": [
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xue- dong Huang. 2020. End-to-end abstractive summa- rization for meetings. ArXiv, abs/2004.02016.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Sentence alignment methods for improving text simplification systems",
"authors": [
{
"first": "Sanja",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Franco-Salvador",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Stuckenschmidt",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "97--102",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2016"
]
},
"num": null,
"urls": [],
"raw_text": "Sanja \u0160tajner, Marc Franco-Salvador, Simone Paolo Ponzetto, Paolo Rosso, and Heiner Stuckenschmidt. 2017. Sentence alignment methods for improving text simplification systems. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 97-102. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Annotation match tree",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Alignment example",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>note[1] \u2192 STATEMENT2SCRIBE[0]</td></tr><tr><td>note[5] \u2192 GROUP</td></tr><tr><td>[ STATEMENT[6],</td></tr><tr><td>STATEMENT[7],</td></tr><tr><td>STATEMENT[9,10] ]</td></tr><tr><td>note[6] \u2192 GROUP</td></tr><tr><td>[ QA[18,19],</td></tr><tr><td>QA[20,21],</td></tr><tr><td>STATEMENT[22,23] ]</td></tr><tr><td>INCOMPLETE</td></tr><tr><td>note[18] \u2192 STATEMENT[11]</td></tr><tr><td>note[19] \u2192 QA[32,33]</td></tr><tr><td>note[29] \u2192 INFFERRED-OUTSIDE</td></tr><tr><td>note[33] \u2192 DICTATION[48]</td></tr><tr><td>note[68] \u2192 COMMAND[147]</td></tr></table>",
"num": null,
"text": "depicts a full abbreviated clinical note with marked associated dialogue transcript sentences. To understand the challenges of alignment (creation of paired transcript-note input-output) and generation (creation of the note sentence from",
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>specialty</td><td colspan=\"4\">providers visits duration speakers</td></tr><tr><td>ENT</td><td>1</td><td>68</td><td>10 \u00b1 4</td><td>4 \u00b1 1</td></tr><tr><td>HAND</td><td>1</td><td>43</td><td>10 \u00b1 4</td><td>3 \u00b1 1</td></tr><tr><td>ORTHO</td><td>1</td><td>27</td><td>11 \u00b1 5</td><td>4 \u00b1 1</td></tr><tr><td>PODIATRY</td><td>4</td><td>174</td><td>7 \u00b1 4</td><td>3 \u00b1 1</td></tr><tr><td>PRIMARY</td><td>6</td><td>188</td><td>17 \u00b1 9</td><td>4 \u00b1 1</td></tr><tr><td>TOTAL</td><td>13</td><td>500</td><td>12 \u00b1 8</td><td>4 \u00b1 1</td></tr></table>",
"num": null,
"text": "to create a total of 566 visits. 2",
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>speaker</td><td colspan=\"2\">sentences turns</td></tr><tr><td>clinician_primary</td><td>99421</td><td>42480</td></tr><tr><td>patient</td><td>56052</td><td>36059</td></tr><tr><td>other</td><td>15073</td><td>9186</td></tr><tr><td>TOTAL</td><td>170546</td><td>87725</td></tr></table>",
"num": null,
"text": "To normalize for annotation differences between the Mock Patient Visits (MPV) and our corpus, we removed INFERRED-DIALOGUE labels, reattached REPEATS to a higher node, and moved all GROUP labels to the highest node.",
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>label-combo</td><td colspan=\"3\">note sents %sent % cum</td></tr><tr><td>{INFERRED-OUTSIDE}</td><td>3731</td><td>16</td><td>16</td></tr><tr><td>{STATEMENT2SCRIBE}</td><td>2664</td><td>11</td><td>27</td></tr><tr><td>{STATEMENT}</td><td>977</td><td>4</td><td>31</td></tr><tr><td>{STATEMENT2SCRIBE,INCOMPLETE}</td><td>898</td><td>4</td><td>35</td></tr><tr><td>{DICTATION}</td><td>742</td><td>3</td><td>38</td></tr><tr><td>{STATEMENT,INCOMPLETE}</td><td>706</td><td>3</td><td>41</td></tr><tr><td>{QA}</td><td>465</td><td>2</td><td>43</td></tr><tr><td>{STATEMENT,GROUP}</td><td>452</td><td>2</td><td>45</td></tr><tr><td>{QA,STATEMENT,GROUP}</td><td>382</td><td>2</td><td>47</td></tr></table>",
"num": null,
"text": "Label frequency per note sentence",
"type_str": "table"
},
"TABREF9": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Note sentence label combination statistics",
"type_str": "table"
},
"TABREF10": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF11": {
"html": null,
"content": "<table><tr><td>: Feature description for non-standard features</td></tr><tr><td>setup is consistent with the comparable simplifi-</td></tr><tr><td>cation dataset creation task. We report the align-</td></tr><tr><td>ment evaluation based on pairwise F1 score. The</td></tr><tr><td>number of positive pairwise instances in train, dev,</td></tr><tr><td>and test sets are 19721, 4770, and 5796; including</td></tr><tr><td>all possible negative instances 6370787, 1303972,</td></tr><tr><td>1706901.</td></tr></table>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF12": {
"html": null,
"content": "<table><tr><td>label</td><td colspan=\"2\">thresh sys1</td><td>sys2</td></tr><tr><td>DICTATION</td><td>0.36</td><td>0.39</td><td>0.43</td></tr><tr><td>STATEMENT2SCRIBE</td><td>0.20</td><td>0.36</td><td>0.36</td></tr><tr><td>STATEMENT</td><td>0.00</td><td>0.12</td><td>0.13</td></tr><tr><td>QA</td><td>0.00</td><td>0.19</td><td>0.20</td></tr><tr><td>INFERRED-OUTSIDE</td><td>0.00</td><td>0.59</td><td>0.66</td></tr><tr><td>UNMARKED</td><td colspan=\"3\">0.998 0.998 0.998</td></tr></table>",
"num": null,
"text": "shows the F1 results per each label. With the simple threshold system, we can see the DICTATION pairs already achieve a",
"type_str": "table"
},
"TABREF13": {
"html": null,
"content": "<table><tr><td colspan=\"4\">similarity composition thresh sys1 sys2</td></tr><tr><td>0-20</td><td>0.66</td><td>0.00</td><td>0.22 0.26</td></tr><tr><td>20-40</td><td>0.20</td><td>0.08</td><td>0.39 0.39</td></tr><tr><td>40-70</td><td>0.09</td><td>0.45</td><td>0.64 0.69</td></tr><tr><td>70-100</td><td>0.05</td><td>0.91</td><td>0.94 0.93</td></tr></table>",
"num": null,
"text": "Pairwise F1 by label",
"type_str": "table"
},
"TABREF14": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Pairwise F1 by jaccard similarity (composition is the percent of annotations within the range)",
"type_str": "table"
},
"TABREF15": {
"html": null,
"content": "<table><tr><td>label</td><td>gold-freq</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>DICTATION</td><td>257</td><td>0.53</td><td>0.35</td><td>0.43</td></tr><tr><td>STATEMENT2SCRIBE</td><td>1248</td><td>0.32</td><td>0.43</td><td>0.36</td></tr><tr><td>STATEMENT</td><td>2140</td><td>0.23</td><td>0.09</td><td>0.13</td></tr><tr><td>QA</td><td>1239</td><td>0.25</td><td>0.16</td><td>0.20</td></tr><tr><td>INFERRED-OUTSIDE</td><td>912</td><td>0.72</td><td>0.61</td><td>0.66</td></tr><tr><td>UNMARKED</td><td colspan=\"4\">1701105 0.998 0.998 0.999</td></tr></table>",
"num": null,
"text": ").",
"type_str": "table"
},
"TABREF16": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Sys2 performance by label",
"type_str": "table"
},
"TABREF19": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Section frequency and accuracy",
"type_str": "table"
},
"TABREF20": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Average human evaluation ratings",
"type_str": "table"
}
}
}
}