{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:27:28.219444Z" }, "title": "Weakly Supervised Medication Regimen Extraction from Medical Conversations", "authors": [ { "first": "Dhruvesh", "middle": [], "last": "Patel", "suffix": "", "affiliation": {}, "email": "dhruveshpate@cs.umass.edu" }, { "first": "Sandeep", "middle": [], "last": "Konam", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sai", "middle": [ "P" ], "last": "Selvaraj", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automated Medication Regimen (MR) extraction from medical conversations can not only improve recall and help patients follow through with their care plan, but also reduce the documentation burden for doctors. In this paper, we focus on extracting spans for frequency, route and change, corresponding to medications discussed in the conversation. We first describe a unique dataset of annotated doctor-patient conversations and then present a weakly supervised model architecture that can perform span extraction using noisy classification data. The model utilizes an attention bottleneck inside a classification model to perform the extraction. We experiment with several variants of attention scoring and projection functions and propose a novel transformer-based attention scoring function (TAScore). The proposed combination of TAScore and Fusedmax projection achieves a 10 point increase in Longest Common Substring F1 compared to the baseline of additive scoring plus softmax projection.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Automated Medication Regimen (MR) extraction from medical conversations can not only improve recall and help patients follow through with their care plan, but also reduce the documentation burden for doctors. In this paper, we focus on extracting spans for frequency, route and change, corresponding to medications discussed in the conversation. We first describe a unique dataset of annotated doctor-patient conversations and then present a weakly supervised model architecture that can perform span extraction using noisy classification data. The model utilizes an attention bottleneck inside a classification model to perform the extraction. We experiment with several variants of attention scoring and projection functions and propose a novel transformer-based attention scoring function (TAScore). The proposed combination of TAScore and Fusedmax projection achieves a 10 point increase in Longest Common Substring F1 compared to the baseline of additive scoring plus softmax projection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Patients forget 40-80% of the medical information provided by healthcare practitioners immediately (Mcguire, 1996) and misconstrue 48% of what they think they remembered (Anderson et al., 1979) , and this adversely affects patient adherence. Automatically extracting information from doctor-patient conversations can help patients correctly recall doctor's instructions and improve compliance with the care plan (Tsulukidze et al., 2014) . 
On the other hand, clinicians spend up to 49.2% of their overall time on EHR and desk work, and only 27.0% of their total time on direct clinical face time with", "cite_spans": [ { "start": 99, "end": 114, "text": "(Mcguire, 1996)", "ref_id": "BIBREF17" }, { "start": 170, "end": 193, "text": "(Anderson et al., 1979)", "ref_id": "BIBREF0" }, { "start": 412, "end": 437, "text": "(Tsulukidze et al., 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "is important, so, and, um, so, you know, I would recommend vitamin D 1 to be taken 1 . Have you had Fosamax 2 before? PT: I think my mum did. DR: Okay, Fosamax 2 , you take 2 one pill 2 on Monday and one on Thursday 2 . DR: Do you use much caffeine? PT: No, none. DR: Okay, this is 3 Actonel 3 and it's one tablet 3 once a month 3 . DR: Do you get a one month or a three months supply in your prescriptions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "Figure 1: An example excerpt from a doctor-patient conversation transcript. Here, three medications are mentioned, indicated by the superscripts. The extracted attributes, change, route and frequency, for each medication are also shown.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "patients (Sinsky et al., 2016) . Increased data management work is also correlated with increased doctor burnout (Kumar, 2016) . Information extracted from medical conversations can also aid doctors in their documentation work (Rajkomar et al., 2019; Schloss and Konam, 2020) , allow them to spend more face time with the patients, and build better relationships.", "cite_spans": [ { "start": 9, "end": 30, "text": "(Sinsky et al., 2016)", "ref_id": "BIBREF23" }, { "start": 113, "end": 126, "text": "(Kumar, 2016)", "ref_id": "BIBREF14" }, { "start": 227, "end": 250, "text": "(Rajkomar et al., 2019;", "ref_id": "BIBREF20" }, { "start": 251, "end": 275, "text": "Schloss and Konam, 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "In this work, we focus on extracting Medication Regimen (MR) information (Du et al., 2019; Selvaraj and Konam, 2019) from doctor-patient conversations. Specifically, we extract three attributes, i.e., frequency, route and change, corresponding to medications discussed in the conversation ( Figure 1 ). Medication Regimen information can help doctors with medication orders and renewals, medication reconciliation, verification of reconciliations for errors, and other medication-centered EHR documentation tasks. 
It can also improve patient engagement and transparency, and promote better compliance with the care plan (Tsulukidze et al., 2014; Grande et al., 2017) .", "cite_spans": [ { "start": 73, "end": 90, "text": "(Du et al., 2019;", "ref_id": "BIBREF7" }, { "start": 91, "end": 116, "text": "Selvaraj and Konam, 2019)", "ref_id": "BIBREF22" }, { "start": 611, "end": 636, "text": "(Tsulukidze et al., 2014;", "ref_id": "BIBREF24" }, { "start": 637, "end": 657, "text": "Grande et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 295, "end": 303, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "MR attribute information present in a conversation can be obtained as spans in text (Figure 1 ) or can be categorized into classification labels (Table 2) . While the classification labels are easy to obtain at scale in an automated manner -for instance, by pairing conversations with billing codes or medication orders -they can be noisy and can result in a prohibitively large number of classes. Classification labels go through normalization and disambiguation, often resulting in label names which are very different from the phrases used in the conversation. This process leads to a loss of granular information present in the text (see, for example, row 2 in Table 2 ). Span extraction, on the other hand, alleviates this issue as the outputs are actual spans in the conversation. However, span extraction annotations are relatively hard to come by and are time-consuming to annotate manually. Hence, in this work, we look at the task of MR attribute span extraction from doctor-patient conversations using weak supervision provided by the noisy classification labels.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 93, "text": "(Figure 1", "ref_id": null }, { "start": 145, "end": 155, "text": "(Table 2)", "ref_id": null }, { "start": 666, "end": 673, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "The main contributions of this work are as follows. We present a way of setting up an MR attribute extraction task from noisy classification data (Section 2). We propose a weakly supervised model architecture which utilizes an attention bottleneck inside a classification model to perform span extraction (Sections 3 & 4) . In order to favor sparse and contiguous extractions, we experiment with two variants of attention projection functions (Section 3.1.2), namely, softmax and Fusedmax (Niculae and Blondel, 2017) . Further, we propose a novel transformer-based attention scoring function TAScore (Section 3.1.1). The combination of TAScore and Fusedmax achieves significant improvements in extraction performance over a phrase-based baseline (22 LCSF1 points) and an additive-softmax attention baseline (10 LCSF1 points).", "cite_spans": [ { "start": 302, "end": 317, "text": "(Section 3 & 4)", "ref_id": null }, { "start": 498, "end": 512, "text": "Blondel, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "DR: Limiting your alcohol consumption", "sec_num": null }, { "text": "Medication Regimen (MR) consists of information about a prescribed medication akin to attributes of an entity. In this work, we specifically focus on the frequency, the route of the medication, and any change in the medication's dosage or frequency, as shown in Figure 1 . 
For example, given the conversation excerpt and the medication \"Fosamax\" as shown in Figure 1 , the model needs to extract the spans \"one pill on Monday and one on Thursday\", \"pill\" and \"you take\" for the attributes frequency, route and change, respectively. The major challenge, however, is to perform the attribute span extraction using noisy classification labels with very few or no span-level labels. The rest of this section describes the dataset used for this task.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 260, "text": "Figure 1", "ref_id": null }, { "start": 348, "end": 356, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Medication Regimen (MR) using Weak Supervision", "sec_num": "2" }, { "text": "The data used in this paper comes from a collection of human transcriptions of 63000 fully-consented and de-identified doctor-patient conversations. A total of 57000 conversations were randomly selected to construct the training (and dev) conversation pool and the remaining 6000 conversations were reserved as the test pool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.1" }, { "text": "The classification dataset: All the conversations are annotated with MR tags by expert human annotators. Each set of MR tags consists of the medication name and its corresponding attributes frequency, route and change, which are normalized free-form instructions in natural language phrases corresponding to each of the three attributes (see Table 8 in A.4). Each set of MR tags is grounded to a contiguous window of utterances' text, 1 around a medication mention as evidence for that set. Hence, each set of grounded MR tags can be written as (text window, medication, frequency, route, change), where the last three entries correspond to the three MR attributes. ", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 349, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "2.1" }, { "text": "Table 2 : Classification examples resulting from the conversation shown in Figure 1 . Normalizing the free-form attribute annotations results in the classes shown in Table 1 . 2 As an illustration, this annotation process when applied to the conversation piece shown in Figure 1 would result in the three data points shown in Table 2 . Using this procedure on both the training and test conversation pools, we obtain 45,059 training, 11,212 validation and 5,458 test classification data points. 3 The extraction dataset: Since the goal is to extract spans related to MR attributes, we would ideally need a dataset with span annotations to perform this task in a fully supervised manner. However, span annotation is laborious and expensive. Hence, we re-purpose the classification dataset (along with its classification labels) to perform the task of span extraction using weak supervision. We also manually annotate a small fraction of the train, validation and test sets (150, 150 and 500 data-points, respectively) for attribute spans to see the effect of supplying a small number of strongly supervised instances on the performance of the model. In order to have a good representation of all the classes in the test set, we increase the sampling weight of data-points which have rare classes. Hence, our test set is relatively more difficult compared to a random sample of 500 data-points. All the results are reported on our test set of 500 difficult data-points annotated for attribute spans. 
For annotating attribute spans, the annotators were given instructions to mark spans which provide minimally sufficient and natural evidence for the already annotated attribute class, as described below. Sufficiency: Given only the annotated span for a particular attribute, one should be able to predict the correct classification label. This aims to encourage the attribute spans to cover all distinguishing information for that attribute.", "cite_spans": [ { "start": 145, "end": 146, "text": "2", "ref_id": null }, { "start": 464, "end": 465, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 23, "end": 30, "text": "Table 2", "ref_id": null }, { "start": 98, "end": 106, "text": "Figure 1", "ref_id": null }, { "start": 135, "end": 142, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 239, "end": 247, "text": "Figure 1", "ref_id": null }, { "start": 295, "end": 302, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Actonel", "sec_num": null }, { "text": "Minimality: Peripheral words which can be replaced with other words without changing the attribute's classification label should not be included in the extracted span. This aims to discourage marking entire utterances as attribute spans. Naturalness: The marked span(s), if presented to a human, should sound like a complete English phrase (if it has multiple tokens) or a meaningful word (if it has only a single token). In essence, this means that the extractions should not drop stop words from within phrases. This requirement aims to reduce the cognitive load on the human who uses the model's extraction output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Actonel", "sec_num": null }, { "text": "Using medical conversations for information extraction is more challenging compared to written doctor notes because the spontaneity of conversation gives rise to a variety of speech patterns with disfluencies and interruptions. Moreover, the vocabulary can range from colloquial to medical jargon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges", "sec_num": "2.2" }, { "text": "In addition, our classification dataset is noisy, the main source of noise being annotators' use of information outside the grounded text window to produce the free-form tags. This happens in two ways. First, when the free-form MR instructions are written using evidence that was discussed elsewhere in the conversation but is not present in the grounded text window. Second, when the annotator uses their domain knowledge instead of using just the information in the grounded text window -for instance, when the route of a medication is not explicitly mentioned, the annotator might use the medication's common route in their free-form instructions. Using manual analysis of the 800 data-points across the train, dev and test sets, we find that 22% of frequency, 36% of route and 15% of change classification labels have this noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges", "sec_num": "2.2" }, { "text": "In this work, our approach to extraction relies on the size of the auxiliary task's (classification) dataset to overcome the above-mentioned challenges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges", "sec_num": "2.2" }, { "text": "There have been several successful attempts to use neural attention (Bahdanau et al., 2015) to extract information from text in an unsupervised manner (He et al., 2017; Lin et al., 2016; Yu et al., 2019) . 
Attention scores provide a good proxy for the importance of a particular token in a model. However, when there are multiple layers of attention, or if the encoder is too complex and trainable, the model no longer provides a way to produce reliable and faithful importance scores (Jain and Wallace, 2019) . We argue that, in order to bring in faithfulness, we need to create an attention bottleneck in our classification + extraction model. The attention bottleneck is achieved by employing an attention function which generates a set of attention weights over the encoded input tokens. The attention bottleneck forces the classifier to only see the portions of the input that pass through it, thereby enabling us to trade classification performance for extraction performance and obtain span extraction with weak supervision from classification labels.", "cite_spans": [ { "start": 68, "end": 91, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 151, "end": 168, "text": "(He et al., 2017;", "ref_id": "BIBREF11" }, { "start": 169, "end": 186, "text": "Lin et al., 2016;", "ref_id": "BIBREF16" }, { "start": 187, "end": 203, "text": "Yu et al., 2019)", "ref_id": "BIBREF28" }, { "start": 481, "end": 505, "text": "(Jain and Wallace, 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "In the rest of this section, we provide general background on neural attention and present its variants employed in this work. This is followed by the presentation of our complete model architecture in the subsequent sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "Given a query q \u2208 R^m and keys K \u2208 R^{l\u00d7n}, the attention function \u03b1 : R^m \u00d7 R^{l\u00d7n} \u2192 \u2206^l is composed of two functions: a scoring function S : R^m \u00d7 R^{l\u00d7n} \u2192 R^l which produces unnormalized importance scores, and a projection function \u03a0 : R^l \u2192 \u2206^l which normalizes these scores by projecting them to the (l \u2212 1)-dimensional probability simplex. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Attention", "sec_num": "3.1" }, { "text": "The purpose of the scoring function is to produce importance scores for each entry in the key K w.r.t. the query q for the task at hand, which in our case is classification. We experiment with two scoring functions: additive and transformer-based. Additive: This is the same as the scoring function used in Bahdanau et al. (2015) , where the scores are produced as follows. 4 Throughout this work, l represents the sequence-length dimension and \u2206^l = {x \u2208 R^l | x \u2265 0, \u2016x\u2016_1 = 1} represents the probability simplex.", "cite_spans": [ { "start": 302, "end": 324, "text": "Bahdanau et al. (2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Scoring Function", "sec_num": "3.1.1" }, { "text": "s_j = v^T tanh(W_q q + W_k k_j) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Function", "sec_num": "3.1.1" }, { "text": "where v \u2208 R^m, W_q \u2208 R^{m\u00d7m} and W_k \u2208 R^{m\u00d7n} are trainable weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Function", "sec_num": "3.1.1" }, { "text": "Transformer-based Attention Score (TAScore):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Function", "sec_num": "3.1.1" }, { "text": "While the additive scoring function is simple and easy to train, it suffers from one major drawback in our setting: since we freeze the weights of our embedder and do not use multiple layers of trainable attention (Section 4.4), the additive attention can struggle to resolve references -finding the correct attribute when there are multiple entities of interest, especially when there are multiple distinct medications (Section 6.4). For this reason, we propose a novel multi-layer transformer-based attention scoring function (TAScore) which can perform this reference resolution while also preserving the attention bottleneck. Figure 2 shows the architecture of TAScore. The query and key vectors are projected to the same space using two separate linear layers, while sinusoidal positional embeddings are added to the key vectors. A special trainable separator vector is added between the query and key vectors and the entire sequence is passed through a multi-layer transformer (Vaswani et al., 2017) . Finally, scalar scores (one corresponding to each vector in the key) are produced from the outputs of the transformer by passing them through a feed-forward layer with dropout.", "cite_spans": [ { "start": 984, "end": 1006, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 630, "end": 638, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Scoring Function", "sec_num": "3.1.1" }, { "text": "A projection function \u03a0 : R^l \u2192 \u2206^l, in the context of attention distributions, normalizes the real-valued importance scores by projecting them to the (l \u2212 1)-dimensional probability simplex \u2206^l . Niculae and Blondel (2017) provide a unified view of the projection function as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "\u03a0_\u2126(s) = arg max_{a \u2208 \u2206^l} a^T s \u2212 \u03b3\u2126(a) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "Here, a \u2208 \u2206^l, \u03b3 is a hyperparameter and \u2126 is a regularization penalty which allows us to introduce problem-specific inductive bias into our attention distribution. When \u2126 is strongly convex, we have a closed-form solution to the projection operation as well as its gradient (Niculae and Blondel, 2017; Blondel et al., 2020 ). Since we use the attention distribution to perform extraction, we experiment with the following instances of projection functions in this work. 
Softmax: \u2126(a) = \u03a3_{i=1}^{l} a_i log a_i . Using the negative entropy as the regularizer results in the usual softmax projection operator", "cite_spans": [ { "start": 276, "end": 303, "text": "(Niculae and Blondel, 2017;", "ref_id": "BIBREF18" }, { "start": 304, "end": 324, "text": "Blondel et al., 2020", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "\u03a0_\u2126(s) = exp(s/\u03b3) / \u03a3_{i=1}^{l} exp(s_i/\u03b3) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "Fusedmax: \u2126(a) = (1/2)\u2016a\u2016_2^2 + \u03a3_{i=1}^{l\u22121} |a_{i+1} \u2212 a_i| . Using the squared loss with a fused-lasso penalty (Niculae and Blondel, 2017) results in a projection operator which produces sparse as well as contiguous attention weights 5 . The fusedmax projection operator can be written as \u03a0_\u2126(s) =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "P_{\u2206^l}(P_TV(s)) , where P_TV(s) = arg min_{y \u2208 R^l} \u2016y \u2212 s\u2016_2^2 + \u03a3_{d=1}^{l\u22121} |y_{d+1} \u2212 y_d|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "is the proximal operator for the 1-d Total Variation Denoising problem, and P_{\u2206^l} is the Euclidean projection operator. Both these operators can be computed non-iteratively as described in Condat (2013) and Duchi et al. (2008) , respectively. The gradient of the Fusedmax operator can be efficiently computed as described in Niculae and Blondel (2017) . 6 Fusedmax*: We observe that while softmax learns to focus on the right region of text, it tends to assign very low attention weights to some tokens of phrases, resulting in multiple discontinuous spans per attribute, while Fusedmax, on the other hand, almost always generates contiguous attention weights. However, Fusedmax makes more mistakes in identifying the overall region that contains the target span (Section 6.3). In order to combine the advantages of softmax and Fusedmax, we first train a model using softmax as the projector and then swap the softmax with Fusedmax in the final few epochs. We call this approach Fusedmax*. 5 Some example outputs of softmax and fusedmax on random inputs are shown in Appendix A.3. 6 The PyTorch implementation of fusedmax used in this work is available at https://github.com/dhruvdcoder/sparse-structured-attention.", "cite_spans": [ { "start": 184, "end": 197, "text": "Condat (2013)", "ref_id": "BIBREF5" }, { "start": 202, "end": 221, "text": "Duchi et al. (2008)", "ref_id": "BIBREF8" }, { "start": 316, "end": 342, "text": "Niculae and Blondel (2017)", "ref_id": "BIBREF18" }, { "start": 345, "end": 346, "text": "6", "ref_id": null }, { "start": 736, "end": 737, "text": "5", "ref_id": null }, { "start": 826, "end": 827, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Projection Function", "sec_num": "3.1.2" }, { "text": "Our classification + extraction model uses MR attribute classification labels to extract MR attributes. The model can be divided into three phases: identify, classify and extract ( Figure 3) . The identify phase encodes the input text and medication name and uses the attention bottleneck to produce attention over the text. The classify phase computes the context vectors using the attention from the identify phase and classifies the context vectors. 
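Referring back to the Fusedmax composition \u03a0_\u2126(s) = P_{\u2206^l}(P_TV(s)) of Section 3.1.2: the sketch below shows only the outer step, the Euclidean projection onto the probability simplex, following the sort-based algorithm of Duchi et al. (2008). The inner total-variation proximal step P_TV is assumed to come from an existing solver (e.g., the repository referenced in footnote 6) and is not reimplemented here.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]                         # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * ks > css - 1.0)[0][-1]  # largest feasible support size
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# fusedmax(s) would then be project_simplex(tv_prox(s)), where tv_prox is the
# 1-D total-variation proximal operator (Condat, 2013) -- assumed given here.
```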
Finally, the extract phase uses the attention from the identify phase to extract spans corresponding to MR attributes. Notation:", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 191, "text": "Figure 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Let the dataset D be {(x^(1), y^(1)), . . . , (x^(N), y^(N))}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Each x consists of a medication m and conversation text t, and each y consists of classification labels for frequency, route and change, i.e., y = (^f y, ^r y, ^c y). The number of classes for each attribute is denoted by ^{(\u2022)}n. As seen from Table 1, ^f n = 12, ^r n = 10 and ^c n = 8. The length of a text excerpt is denoted by l. The extracted span for attribute k \u2208 {f, r, c} is denoted by a binary vector ^k e of length l, such that ^k e_j = 1 if the j-th token is in the extracted span for attribute k.", "cite_spans": [ { "start": 233, "end": 236, "text": "(\u2022)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "As shown in Figure 3 , the identify phase finds the most relevant parts of the text w.r.t. each of the three attributes. For this, we first encode the text as well as the given medication using a contextualized token embedder E. In our case, this is the 1024-dimensional BERT (Devlin et al., 2019) 7 . Since BERT uses WordPiece representations (Wu et al., 2016) , we average these wordpiece representations to form the word embeddings. In order to supply the speaker information, we concatenate a 2-dimensional fixed-vocabulary speaker embedding to every token embedding in the text to obtain speaker-aware word representations.", "cite_spans": [ { "start": 343, "end": 360, "text": "(Wu et al., 2016)", "ref_id": null } ], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Identify", "sec_num": "4.1" }, { "text": "We then perform average pooling of the medication representations to get a single vector representation for the medication 8 . Finally, with the given medication representation as the query and the speaker-aware token representations as the key, we use three separate attention functions (attention bottleneck), one for each attribute (no weight sharing), to produce three sets of normalized attention distributions ^f \u00e2, ^r \u00e2 and ^c \u00e2 over the tokens of the text. The identify phase can be succinctly described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identify", "sec_num": "4.1" }, { "text": "^k \u00e2 = ^k \u03b1(E(m), E(t)) , where k \u2208 {f, r, c}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identify", "sec_num": "4.1" }, { "text": "Here, each ^k \u00e2 is an element of the probability simplex \u2206^l and is used to perform attribute extraction (Section 4.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identify", "sec_num": "4.1" }, { "text": "We obtain the attribute-wise context vectors ^k c as the weighted sum of the encoded tokens (K in Figure 3) , where the weights are given by the attribute-wise attention distributions ^k \u00e2. 
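The context-vector computation just described is a single weighted sum; a minimal NumPy sketch, with random stand-ins for the encoded tokens and the attention distribution, is:

```python
import numpy as np

l, n = 12, 16                   # sequence length and encoder dimension (illustrative)
rng = np.random.default_rng(0)
K = rng.normal(size=(l, n))     # encoded speaker-aware tokens (K in Figure 3)
a = rng.dirichlet(np.ones(l))   # attention distribution for one attribute
c = a @ K                       # attribute-wise context vector, shape (n,)
```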
To perform the classification for each attribute, the attribute-wise context vectors are used as input to feed-forward neural networks F_k (one per attribute), as shown below: 9 ^k p = softmax(F_k(^k c)) , ^k \u0177 = arg max_{j \u2208 {1, 2, ..., ^k n}} ^k p_j , where k \u2208 {f, r, c}.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 107, "text": "Figure 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Classify", "sec_num": "4.2" }, { "text": "The spans are extracted from the attention distribution using a fixed extraction function X : \u2206^l \u2192 {0, 1}^l , defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extract", "sec_num": "4.3" }, { "text": "^k \u00ea_j = X_k(^k \u00e2)_j = 1 if ^k \u00e2_j > ^k \u03b3 , and 0 if ^k \u00e2_j \u2264 ^k \u03b3 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extract", "sec_num": "4.3" }, { "text": "where ^k \u03b3 is the extraction threshold for attribute k. For the softmax projection function, it is important to tune the attribute-wise extraction thresholds \u03b3. We tune these using extraction performance on the extraction validation set. For the fusedmax projection function, which produces sparse weights, the thresholds need not be tuned, and hence are set to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extract", "sec_num": "4.3" }, { "text": "We train the model end-to-end using gradient descent, except the extract module (Figure 3) , which does not have any trainable weights, and the embedder E. Freezing the embedder is vital for the performance, since not doing so results in excessive dispersion of token information to other nearby tokens, resulting in poor extractions.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 90, "text": "(Figure 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "The total loss for the training is divided into two parts as described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "(1) Classification Loss L_c : In order to perform classification with highly class-imbalanced data (see Table 1 ), we use weighted cross-entropy:", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "L_c = \u03a3_{k \u2208 {f,r,c}} \u2212 ^k w_{^k y} log ^k p_{^k y} ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "where the class weights ^k w_{^k y} are obtained by inverting each class' relative proportion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "(2) Identification Loss L_i : If span labels e are present for some subset A of training examples, we first normalize these into ground-truth attention probabilities a:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "^k a_j = ^k e_j / \u03a3_{j'=1}^{l} ^k e_{j'} , for k \u2208 {f, r, c}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "We then use the KL-Divergence between the ground-truth attention probabilities and the ones generated by the model (\u00e2) to compute the identification loss L_i = \u03a3_{k \u2208 {f,r,c}} KL( ^k a \u2016 ^k \u00e2 ) . Note that L_i is zero for data-points that do not have span labels. 
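Before the two terms are combined into the overall loss below, here is a minimal sketch of the extract step and the per-attribute loss terms just defined; the attribute prefixes ^k are dropped, and the threshold, class weights and label index are illustrative stand-ins.

```python
import numpy as np

def extract(a, gamma):
    """X: threshold an attention distribution into a binary span mask."""
    return (a > gamma).astype(int)

def classification_loss(p, y, w):
    """Weighted cross-entropy for one attribute: -w_y * log p_y."""
    return -w[y] * np.log(p[y])

def identification_loss(e, a_hat, eps=1e-12):
    """KL(a || a_hat), where a is the normalized gold span mask e.
    Data-points without span labels are skipped upstream (L_i = 0 for them)."""
    a = e / e.sum()
    support = a > 0  # 0 * log(0) terms contribute nothing
    return float(np.sum(a[support] * np.log(a[support] / (a_hat[support] + eps))))
```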
Using these two loss functions, the overall loss is L = L_c + \u03bbL_i . Table 3 shows the results obtained by various combinations of attention scoring and projection functions on the task of MR attribute extraction in terms of the metrics defined in Section 5. It also shows the classification F1 score to emphasize how the attention bottleneck affects classification performance. The first row shows how a simple phrase-based extraction system would perform on the task. 10", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 315, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Training", "sec_num": "4.4" }, { "text": "In order to see if having a small number of extraction training data-points (containing explicit span labels) helps the extraction performance, we annotate 150 (see Section 2 for how we sampled the data-points) of the training data-points with span labels. As seen from Table 3 , even a small number of examples with span labels (\u2248 0.3%) help a lot with the extraction performance for all models. We think this trend might continue if we add more training span labels. We leave finding the right balance between annotation effort and extraction performance as a future direction to explore.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Effect of Span labels", "sec_num": "6.1" }, { "text": "In order to quantify the effect of performing the auxiliary task of classification along with the main task of extraction, we train the proposed model in three different settings. (1) The Classification Only setting uses the complete dataset (~45k) but only with the classification labels. (2) The Extraction Only setting only uses the 150 training examples that have span labels. (3) Finally, the Classification+Extraction setting uses the 45k examples with classification labels along with the 150 examples with the span labels to train the model. Table 4 (rows 2, 3 and 4) shows the effect of having classification labels and performing extraction and classification jointly using the proposed model. The model structure and the volume of the classification data (~45k examples) make the auxiliary task of classification extremely helpful for the main task of extraction, even in the presence of label noise.", "cite_spans": [], "ref_spans": [ { "start": 543, "end": 550, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Effect of classification labels", "sec_num": "6.2" }, { "text": "It is worth noting that the classification performance of the proposed method is also improved by explicit supervision to the extraction portion of the model (row 2 vs 4, Table 4 ). In order to set a reference for classification performance, we train strong classification-only models, one for each attribute, using pretrained BERT. These BERT Classifiers are implemented as described in Devlin et al. (2019) with input consisting of the text and medication name separated by a [SEP] token (row 1). Based on the improvements achieved in the classification performance using span annotations, we believe that having more span labels can further close the gap between the classification performance of the proposed model and the BERT Classifiers. However, this work focuses on extraction performance; hence, improving the classification performance is left to future work.", "cite_spans": [ { "start": 389, "end": 409, "text": "Devlin et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 171, "end": 178, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Effect of classification labels", "sec_num": "6.2" }, { "text": "While softmax with post-hoc threshold tuning achieves consistently higher TF1 compared to fusedmax (which does not require threshold tuning), the latter achieves better LCSF1. We observe that while the attention function using softmax projection focuses on the correct portion of the text, it drops intermediate words, resulting in multiple discontinuous spans. Fusedmax, on the other hand, almost always produces contiguous spans. Figure 4 further illustrates this point using a test example. The training trick which we call fusedmax* swaps the softmax projection function with fusedmax during the final few epochs to combine the strengths of both softmax and fusedmax. This achieves high LCSF1 as well as TF1. Figure 4 : Difference in extracted spans for MR attributes between models that use Fusedmax* and Softmax, for the medication Actonel. Blue: change, green: route and yellow: frequency. Refer to Figure 1 for ground-truth annotations. Table 5 shows the percent change in the extraction F1 if we use TAScore instead of additive scoring (everything else being the same). As seen, there is a significant improvement irrespective of the projection function being used. The need for TAScore stems from the additive scoring function's difficulty in resolving references between spans when there are multiple medications present. In order to measure the efficacy of TAScore for this problem, we divide the test set into two subsets: data-points which have multiple distinct medications in their text (MM) and data-points that have a single medication only (SM). As seen from the first two columns for both the metrics in Table 5 , using TAScore instead of additive results in more improvement in the MM-subset compared to the SM-subset, showing that using the transformer scorer does help with resolving references when multiple medications are present in the text. Figure 5 shows the distribution of Avg. LCSF1 (average across all three attributes). It can be seen that there are a significant number of data-points in the MM subset which get an LCSF1 of zero, showing that even when the transformer scorer achieves improvement on the MM subset, it gets quite a lot of these data-points completely wrong. This shows that there is still room for improvement.", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 438, "text": "Figure 4", "ref_id": null }, { "start": 711, "end": 719, "text": "Figure 4", "ref_id": null }, { "start": 899, "end": 907, "text": "Figure 1", "ref_id": null }, { "start": 937, "end": 944, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1611, "end": 1618, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1852, "end": 1860, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Effect of projection function", "sec_num": "6.3" }, { "text": "In summary, our analysis reveals that Fusedmax/Fusedmax* favors contiguous extraction spans, which is a necessity for our task. Irrespective of the projection function used, the proposed scoring function TAScore improves the extraction performance when compared to the popular additive scoring function. The proposed model architecture is able to establish a synergy between the classification and span extraction tasks where one improves the performance of the other. 
Overall, the proposed combination of TAScore and Fusedmax* achieves a 22-point LCSF1 improvement over the phrase-based baseline and a 10-point LCSF1 improvement over the naive additive and softmax combination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.5" }, { "text": "Existing literature directly related to our work can be bucketed into two categories -related methods and related tasks. Methods: The recent work on generating rationales/explanations for deep neural network based classification models (Lei et al., 2016; Bastings et al., 2020; Paranjape et al., 2020) is closely related to ours in terms of the methods used. Most of these works use binary latent variables to perform extraction as an intermediate step before classification. Our work is closely related to (Jain et al., 2020; Zhong et al., 2019) , who use attention scores to generate rationales for classification models. These works, however, focus on generating faithful and plausible explanations for classification as opposed to extracting the spans for attributes of an entity, which is the focus of our work. Moreover, our method can be generalized to any number of attributes, while all these methods would require a separate model for each attribute. Tasks: Understanding doctor-patient conversations has recently started to receive attention (Rajkomar et al., 2019; Schloss and Konam, 2020) . Selvaraj and Konam (2019) perform MR extraction by framing the problem as a generative question answering task. This approach is not efficient at inference time -it requires one forward pass for each attribute. Moreover, unlike a span extraction model, the generative model might produce hallucinated facts. Du et al. (2019) obtain MR attributes as spans in text; however, they use a fully supervised approach which requires a large dataset with span-level labels.", "cite_spans": [ { "start": 236, "end": 254, "text": "(Lei et al., 2016;", "ref_id": "BIBREF15" }, { "start": 255, "end": 277, "text": "Bastings et al., 2020;", "ref_id": "BIBREF2" }, { "start": 278, "end": 301, "text": "Paranjape et al., 2020)", "ref_id": "BIBREF19" }, { "start": 507, "end": 526, "text": "(Jain et al., 2020;", "ref_id": "BIBREF13" }, { "start": 527, "end": 546, "text": "Zhong et al., 2019)", "ref_id": "BIBREF29" }, { "start": 1051, "end": 1074, "text": "(Rajkomar et al., 2019;", "ref_id": "BIBREF20" }, { "start": 1075, "end": 1099, "text": "Schloss and Konam, 2020)", "ref_id": "BIBREF21" }, { "start": 1411, "end": 1427, "text": "Du et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We provide a framework to perform MR attribute extraction from medical conversations with weak supervision using noisy classification labels. This is done by creating an attention bottleneck in the classification model and performing extraction using the attention weights. After experimenting with several variants of attention scoring and projection functions, we show that our transformer-based attention scoring function (TAScore) combined with Fusedmax* achieves significantly higher extraction performance compared to the other attention variants and a phrase-based baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "8" }, { "text": "While our proposed method achieves good performance, there is still room for improvement, especially for text with multiple medications. 
Data augmentation by swapping or masking medication names is worth exploring. An alternate direction of future work involves improving the naturalness of extracted spans. Auxiliary supervision using a language modeling objective would be a promising approach for this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "8" }, { "text": "The final classifier for each attribute is a 2-layer feedforward network with hidden sizes (512, \"number of classes for the attribute\") and dropout probability of 0.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers:", "sec_num": "5." }, { "text": "Figures 6 and 7 show examples of outputs of the projection functions softmax and fusedmax on random input scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Examples: Projection Functions", "sec_num": null }, { "text": "We implement a phrase-based extraction system to provide a baseline for the extraction task. A lexicon of relevant phrases is created for each class of each attribute, as shown in Table 8 . We then look for string matches between these phrases and the text of the data-point. If there are matches, the longest match is considered an extraction span for that attribute. Table 8 : Phrases used in the phrase-based baseline. These are also the most frequently occurring phrases in the free-form annotations.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 8", "ref_id": null }, { "start": 376, "end": 383, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "A.4 Phrase-based extraction baseline", "sec_num": null }, { "text": "The text includes both the spoken words and the speaker information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The detailed explanation for each of the classes can be found in Table 7 in Appendix A.1. 3 The dataset statistics are given in Appendix A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The pre-trained weights for BERT are from the HuggingFace library (Wolf et al., 2019). 8 Most medication names are a single word; however, a few medicines have names which are up to 4-5 words. 9 The complete set of hyperparameters used is given in Appendix A.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The details about the phrase-based baseline are presented in Appendix A.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/bert-large-cased", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(a) Positive and negative scores. (b) Positive scores only. (c) More uniformly distributed positive scores. Figure 7: Sample outputs (right column) of the fusedmax function on random input scores (left column).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank University of Pittsburgh Medical Center (UPMC) and Abridge AI Inc. for providing access to the de-identified data corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The complete set of normalized classification labels for all three medication attributes and their meaning is shown in Table 7 . Average statistics about the dataset are shown in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 7", "ref_id": null }, { "start": 178, "end": 185, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "A.1 Data", "sec_num": null }, { "text": "We use AllenNLP (Gardner et al., 2017) to implement our models and Weights&Biases (Biewald, 2020) to manage our experiments. Following is the list of hyperparameters used in our experiments: 1. Contextualized Token Embedder: We use 1024-dimensional 24-layer bert-large-cased obtained as a pre-trained model from HuggingFace 11 . We freeze the weights of the embedder in our training. The max sequence length is set to 256. 2. Speaker embedding: 2-dimensional trainable embedding with vocabulary size of 4, as we only have 4 unique speakers in our dataset: doctor, patient, caregiver and nurse. 3. Softmax and Fusedmax: The temperatures of softmax and fusedmax are set to a default value of 1. The sparsity weight of fusedmax is also set to its default value of 1 for all attributes. 4. TAScore: The transformer used in TAScore is a 2-layer transformer encoder where each layer is implemented as in Vaswani et al. (2017) . Both the hidden dimensions inside the transformer (self-attention and feedforward) are set to 32, and all the dropout probabilities are set to 0.2. The linear layer for the query has input and output dimensions of 1024 and 32, respectively. Due to the concatenation of the speaker embedding, the linear layer for the keys has input and output dimensions of 1026 and 32, respectively. The feedforward layer (which generates scalar scores for each token) on top of the transformer is 2-layered with ReLU activations and hidden sizes (16, 1).", "cite_spans": [ { "start": 16, "end": 38, "text": "(Gardner et al., 2017)", "ref_id": "BIBREF9" }, { "start": 82, "end": 97, "text": "(Biewald, 2020)", "ref_id": "BIBREF3" }, { "start": 893, "end": 914, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Hyperparameters", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Patient Information Recall in a Rheumatology Clinic", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Anderson", "suffix": "" }, { "first": "Sally", "middle": [], "last": "Dodman", "suffix": "" }, { "first": "M", "middle": [], "last": "Kopelman", "suffix": "" }, { "first": "A", "middle": [], "last": "Fleming", "suffix": "" } ], "year": 1979, "venue": "Rheumatology", "volume": "18", "issue": "", "pages": "18--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. L. Anderson, Sally Dodman, M. Kopelman, and A. Fleming. 1979. Patient Information Recall in a Rheumatology Clinic. Rheumatology, 18:18-22.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
CoRR, abs/1409.0473.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Interpretable neural predictions with differentiable binary variables", "authors": [ { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "ACL 2019 -57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference", "volume": "", "issue": "", "pages": "2963--2977", "other_ids": { "DOI": [ "10.18653/v1/p19-1284" ] }, "num": null, "urls": [], "raw_text": "Joost Bastings, Wilker Aziz, and Ivan Titov. 2020. In- terpretable neural predictions with differentiable bi- nary variables. In ACL 2019 -57th Annual Meet- ing of the Association for Computational Linguistics, Proceedings of the Conference, pages 2963-2977. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Experiment tracking with weights and biases. Software available from wandb", "authors": [ { "first": "Lukas", "middle": [], "last": "Biewald", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning with fenchel-young losses", "authors": [ { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andre", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Niculae", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "35", "pages": "1--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathieu Blondel, Andre F.T. Martins, and Vlad Nic- ulae. 2020. Learning with fenchel-young losses. Journal of Machine Learning Research, 21(35):1- 69.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A direct algorithm for 1-D total variation denoising", "authors": [ { "first": "Laurent", "middle": [], "last": "Condat", "suffix": "" } ], "year": 2013, "venue": "IEEE Signal Processing Letters", "volume": "20", "issue": "11", "pages": "1054--1057", "other_ids": { "DOI": [ "10.1109/LSP.2013.2278339" ] }, "num": null, "urls": [], "raw_text": "Laurent Condat. 2013. A direct algorithm for 1-D total variation denoising. IEEE Signal Processing Letters, 20(11):1054-1057.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to infer entities, properties and their relations from clinical conversations", "authors": [ { "first": "Nan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mingqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Linh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Izhak", "middle": [], "last": "Shafran", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.11536" ] }, "num": null, "urls": [], "raw_text": "Nan Du, Mingqiu Wang, Linh Tran, Gang Li, and Izhak Shafran. 2019. Learning to infer entities, proper- ties and their relations from clinical conversations. arXiv preprint arXiv:1908.11536.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficient projections onto the l1-ball for learning in high dimensions", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Chandra", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", "volume": "", "issue": "", "pages": "272--279", "other_ids": { "DOI": [ "10.1145/1390156.1390191" ] }, "num": null, "urls": [], "raw_text": "John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. 2008. Efficient projections onto the l1-ball for learning in high dimensions. In Pro- ceedings of the 25th International Conference on Machine Learning, ICML '08, page 272-279, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Allennlp: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A digital advocate? 
reactions of rural people who experience homelessness to the idea of recording clinical encounters", "authors": [ { "first": "Stuart", "middle": [ "W" ], "last": "Grande", "suffix": "" }, { "first": "Mary", "middle": [ "Ganger" ], "last": "Castaldo", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Carpenter-Song", "suffix": "" }, { "first": "Ida", "middle": [], "last": "Griesemer", "suffix": "" }, { "first": "Glyn", "middle": [], "last": "Elwyn", "suffix": "" } ], "year": 2017, "venue": "Health Expectations", "volume": "20", "issue": "4", "pages": "618--625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart W Grande, Mary Ganger Castaldo, Elizabeth Carpenter-Song, Ida Griesemer, and Glyn Elwyn. 2017. A digital advocate? Reactions of rural people who experience homelessness to the idea of recording clinical encounters. Health Expectations, 20(4):618-625.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An unsupervised neural attention model for aspect extraction", "authors": [ { "first": "Ruidan", "middle": [], "last": "He", "suffix": "" }, { "first": "Wee", "middle": [ "Sun" ], "last": "Lee", "suffix": "" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "388--397", "other_ids": { "DOI": [ "10.18653/v1/P17-1036" ] }, "num": null, "urls": [], "raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 388-397, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Attention is not explanation", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In NAACL-HLT.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to Faithfully Rationalize by Construction", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C Wallace. 2020. Learning to Faithfully Rationalize by Construction.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Burnout and doctors: prevalence, prevention and intervention", "authors": [ { "first": "Shailesh", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2016, "venue": "Healthcare", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shailesh Kumar. 2016. Burnout and doctors: prevalence, prevention and intervention. In Healthcare, volume 4, page 37.
Multidisciplinary Digital Publishing Institute.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Rationalizing neural predictions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2016, "venue": "EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "107--117", "other_ids": { "DOI": [ "10.18653/v1/d16-1011" ] }, "num": null, "urls": [], "raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings, pages 107-117.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural relation extraction with selective attention over instances", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2124--2133", "other_ids": { "DOI": [ "10.18653/v1/P16-1200" ] }, "num": null, "urls": [], "raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124-2133, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Remembering what the doctor said: Organization and adults' memory for medical information", "authors": [ { "first": "Lisa", "middle": [ "C" ], "last": "Mcguire", "suffix": "" } ], "year": 1996, "venue": "Experimental Aging Research", "volume": "22", "issue": "", "pages": "403--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa C. Mcguire. 1996. Remembering what the doctor said: Organization and adults' memory for medical information. Experimental Aging Research, 22:403-428.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A regularized framework for sparse and structured neural attention", "authors": [ { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3338--3348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention.
In Advances in neural information processing systems, pages 3338-3348.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction", "authors": [ { "first": "Bhargavi", "middle": [], "last": "Paranjape", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "John", "middle": [], "last": "Thickstun", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. Technical report.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automatically charting symptoms from patient-physician conversations using machine learning", "authors": [ { "first": "Alvin", "middle": [], "last": "Rajkomar", "suffix": "" }, { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Vardoulakis", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2019, "venue": "JAMA internal medicine", "volume": "179", "issue": "6", "pages": "836--838", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alvin Rajkomar, Anjuli Kannan, Kai Chen, Laura Vardoulakis, Katherine Chou, Claire Cui, and Jeffrey Dean. 2019. Automatically charting symptoms from patient-physician conversations using machine learning. JAMA internal medicine, 179(6):836-838.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Towards an automated SOAP note: Classifying utterances from medical conversations. Machine Learning for Health Care", "authors": [ { "first": "Benjamin", "middle": [], "last": "Schloss", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Konam", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.08749" ] }, "num": null, "urls": [], "raw_text": "Benjamin Schloss and Sandeep Konam. 2020. Towards an automated SOAP note: Classifying utterances from medical conversations. Machine Learning for Health Care, 2020, arXiv:2007.08749. Version 3.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Medication regimen extraction from clinical conversations", "authors": [ { "first": "Sai", "middle": [ "P" ], "last": "Selvaraj", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Konam", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.04961" ] }, "num": null, "urls": [], "raw_text": "Sai P Selvaraj and Sandeep Konam. 2019. Medication regimen extraction from clinical conversations.
arXiv preprint arXiv:1912.04961.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties", "authors": [ { "first": "Christine", "middle": [], "last": "Sinsky", "suffix": "" }, { "first": "Lacey", "middle": [], "last": "Colligan", "suffix": "" }, { "first": "Ling", "middle": [], "last": "Li", "suffix": "" }, { "first": "Mirela", "middle": [], "last": "Prgomet", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Reynolds", "suffix": "" }, { "first": "Lindsey", "middle": [], "last": "Goeders", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Westbrook", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Tutty", "suffix": "" }, { "first": "George", "middle": [], "last": "Blike", "suffix": "" } ], "year": 2016, "venue": "Annals of internal medicine", "volume": "165", "issue": "11", "pages": "753--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christine Sinsky, Lacey Colligan, Ling Li, Mirela Prgomet, Sam Reynolds, Lindsey Goeders, Johanna Westbrook, Michael Tutty, and George Blike. 2016. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of internal medicine, 165(11):753-760.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Providing recording of clinical consultation to patients - a highly valued but underutilized intervention: a scoping review", "authors": [ { "first": "Maka", "middle": [], "last": "Tsulukidze", "suffix": "" }, { "first": "Marie-Anne", "middle": [], "last": "Durand", "suffix": "" }, { "first": "Paul", "middle": [ "J" ], "last": "Barr", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mead", "suffix": "" }, { "first": "Glyn", "middle": [], "last": "Elwyn", "suffix": "" } ], "year": 2014, "venue": "Patient Education and Counseling", "volume": "95", "issue": "3", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maka Tsulukidze, Marie-Anne Durand, Paul J Barr, Thomas Mead, and Glyn Elwyn. 2014. Providing recording of clinical consultation to patients - a highly valued but underutilized intervention: a scoping review. Patient Education and Counseling, 95(3):297-304.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008.
Curran Associates, Inc.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "HuggingFace's Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Beyond word attention: Using segment attention in neural relation extraction", "authors": [ { "first": "Bowen", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zhenyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tingwen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Quangang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "IJCAI", "volume": "", "issue": "", "pages": "5401--5407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowen Yu, Zhenyu Zhang, Tingwen Liu, Bin Wang, Sujian Li, and Quangang Li. 2019. Beyond word attention: Using segment attention in neural relation extraction. In IJCAI, pages 5401-5407.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Fine-grained sentiment analysis with faithful attention", "authors": [ { "first": "Ruiqi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqi Zhong, Steven Shao, and Kathleen McKeown. 2019. Fine-grained sentiment analysis with faithful attention.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Architecture of TAScore.
q and K are input query and keys, respectively, and s are the output scores.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Complete model for weakly supervised MR attribute extraction.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Distribution of the Avg. LCSF1 for the best performing model (BERT+TAScore+Fusedmax*). A significant number (\u2248 10%) of data points with multiple medications in their text get an LCSF1 of zero (1st bar).", "type_str": "figure", "uris": null }, "TABREF1": { "text": "The normalized labels in the classification data.", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "text": ". . . I think my mum did. Okay, Fosamax, you take one pill on Monday and one on Thursday. Do you have much caffine? No, none. . .", "content": "
text | medication | frequency | route | change
. . . I would recommend vitamin D to be taken. Have you had Fosamax before? . . . | vitamin D | none | none | take
. . . I think my mum did. Okay, Fosamax, you take one pill on Monday and one on Thursday. . . . | Fosamax | Twice a week | pill | take
. . . Do you have much caffine? No, none. Okay, this is Actonel and it's, one tablet once a month. . . . | Actonel

The free-form instructions for each attribute in the MR tags are normalized and categorized into a manageable number of classification labels to avoid a long tail and overlapping classes. This process re-
", "html": null, "num": null, "type_str": "table" }, "TABREF3": { "text": "change Avg. freq. route change Avg. freq. route change Avg.", "content": "
ModelSpanToken-wise extraction F1LCSF1Classification F1
labels freq. route Phrase-based baseline Encoder Scorer Projector -41.03 48.57 10.7533.45 36.26 50.41 11.5432.73 ----
BERTAdditive Softmax051.22 46.27 22.8140.10 39.87 46.40 18.9235.06 51.51 54.06 51.6552.40
BERTAdditive Fusedmax047.55 51.31 5.1034.65 46.39 59.10 4.8236.77 43.54 42.91 9.1931.88
BERTTAScore Softmax066.53 48.96 27.6147.70 61.49 47.34 22.4943.77 44.93 51.34 46.4947.58
BERTTAScore Fusedmax056.35 44.04 22.0740.82 61.96 50.27 25.2545.82 51.95 48.37 43.0047.77
BERTAdditive Softmax15061.56 45.08 33.5446.73 57.90 48.14 28.2844.77 55.62 52.42 50.4052.81
BERTAdditive Fusedmax15047.05 52.49 27.6942.41 42.37 57.50 30.6343.50 54.04 48.40 52.2851.57
BERTAdditive Fusedmax* 15065.90 47.30 34.7749.32 67.15 51.12 36.0451.30 56.46 42.63 50.6849.93
BERTTAScore Softmax15066.53 54.35 34.2751.72 62.90 53.05 28.3348.09 50.13 45.86 47.1647.72
BERTTAScore Fusedmax15058.24 58.09 25.0947.32 57.93 64.05 26.7049.56 51.61 53.95 43.5149.69
BERTTAScore Fusedmax* 15066.90 54.85 33.2851.67 70.10 60.05 35.9255.36 64.26 44.50 51.2153.32
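The table contrasts two scoring functions (Additive vs. TAScore) and two projection functions (Softmax vs. Fusedmax). As a rough illustration of the Scorer stage, the sketch below implements an additive (Bahdanau-style) scorer and a small transformer-based scorer over a query vector q and key vectors K. It is a minimal sketch under assumed shapes and module choices (nn.TransformerEncoder, hidden sizes), not the paper's exact TAScore implementation.

```python
# Minimal sketch of the two scorer families compared above; all module
# choices and sizes here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class AdditiveScorer(nn.Module):
    """Additive (Bahdanau-style) scoring: s_i = v^T tanh(W_q q + W_k k_i)."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_q, self.w_k = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, q, K):  # q: (dim,), K: (seq_len, dim) -> (seq_len,)
        return self.v(torch.tanh(self.w_q(q).unsqueeze(0) + self.w_k(K))).squeeze(-1)

class TransformerScorer(nn.Module):
    """Transformer-based scoring in the spirit of TAScore: contextualize the
    query together with the keys, then map each key position to a scalar."""
    def __init__(self, dim: int, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.out = nn.Linear(dim, 1)

    def forward(self, q, K):  # q: (dim,), K: (seq_len, dim) -> (seq_len,)
        seq = torch.cat([q.unsqueeze(0), K], dim=0).unsqueeze(0)  # (1, 1+seq_len, dim)
        h = self.encoder(seq)[0, 1:]  # drop the query position, keep the keys
        return self.out(h).squeeze(-1)

# The Projector then turns scores into attention weights: softmax gives dense
# weights, while Fusedmax (Niculae and Blondel, 2017) yields sparse, contiguous
# weights that better match span extraction.
q, K = torch.randn(64), torch.randn(20, 64)
weights = torch.softmax(TransformerScorer(64)(q, K), dim=-1)
```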
", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "text": "Attribute extraction performance for various combinations of scoring and projection functions. The avg. columns represent the macro average of the corresponding metric across the attributes.", "content": "
Training Type | Model | Token-wise Extraction F1 (freq. route change avg.) | Classification F1 (freq. route change avg.)
Classification only | BERT Classifiers | - - - - | 74.72 40.82 55.76 58.48
Classification only | BERT+TAScore+Fusedmax* | 58.55 45.00 24.43 42.66 | 52.45 46.37 43.00 47.27
Extraction only | BERT+TAScore+Fusedmax* | 53.79 44.44 14.32 37.18 | - - - -
Classification + Extraction | BERT+TAScore+Fusedmax* | 66.90 54.85 33.28 51.67 | 64.26 44.50 51.21 53.32
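The last row suggests that the classification and extraction signals are complementary. Below is a hedged sketch of such a joint objective: every example contributes a classification loss, and the small span-annotated subset additionally supervises the attention weights. The function name, the KL-based span term, and the weighting factor lam are illustrative assumptions, not the paper's exact loss.

```python
# Sketch of a joint classification + extraction objective; the names and the
# weighting scheme are assumptions for illustration only.
import torch
import torch.nn.functional as F

def joint_loss(class_logits, class_labels, attn_weights, gold_span_mask=None,
               lam=1.0):
    # class_logits: (batch, n_classes); attn_weights: (batch, seq_len), rows sum
    # to 1; gold_span_mask: optional (batch, seq_len) 0/1 float mask.
    loss = F.cross_entropy(class_logits, class_labels)
    if gold_span_mask is not None:  # available only for span-annotated examples
        # Turn the 0/1 gold-span mask into a target distribution over tokens
        # and pull the attention weights toward it.
        target = gold_span_mask / gold_span_mask.sum(dim=-1, keepdim=True).clamp(min=1)
        loss = loss + lam * F.kl_div(attn_weights.clamp(min=1e-9).log(),
                                     target, reduction="batchmean")
    return loss
```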
", "html": null, "num": null, "type_str": "table" }, "TABREF5": { "text": "Effect of performing extraction+classification jointly in our proposed model. While the Extraction Only training only uses the 150 examples which are explicitly annotated with span labels, the Classification only training uses the complete training dataset with classification labels.", "content": "
5 Metrics

Token-wise F1 (TF1): Each token in the text is either part of the extracted span (positive class) for an attribute or not (negative class). Token-wise F1 is the F1 score of the positive class obtained by treating every token in the dataset as a separate binary classification data point. TF1 is calculated separately for each attribute.
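A minimal sketch of TF1 for one attribute, assuming predictions and gold annotations are given as 0/1 token masks (the mask representation is an assumption for illustration):

```python
def token_wise_f1(pred_masks, gold_masks):
    """F1 of the positive (in-span) class over all tokens in the dataset."""
    tp = fp = fn = 0
    for pred, gold in zip(pred_masks, gold_masks):
        for p, g in zip(pred, gold):
            if p and g:
                tp += 1
            elif p:
                fp += 1
            elif g:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```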
Longest Common Substring F1 (LCSF1): LCSF1 measures whether the extracted spans, in addition to overlapping the gold spans, are contiguous. The Longest Common Substring (LCS) is the longest overlapping contiguous span of tokens between the predicted and gold spans. LCSF1 is defined as the harmonic mean of LCS-Recall and LCS-Precision, which are defined per extraction as:

LCS-Recall = (#tokens in LCS) / (#tokens in gold span)
LCS-Precision = (#tokens in LCS) / (#tokens in predicted span)
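The per-extraction computation can be sketched as follows, assuming predicted and gold spans are given as sets of token indices over the same text (the set representation is an illustrative assumption):

```python
def lcs_f1(pred_tokens, gold_tokens):
    """Harmonic mean of LCS-Recall and LCS-Precision for one extraction."""
    # The LCS is the longest run of consecutive token indices in both spans.
    common = sorted(pred_tokens & gold_tokens)
    lcs = run = 0
    for i, tok in enumerate(common):
        run = run + 1 if i and tok == common[i - 1] + 1 else 1
        lcs = max(lcs, run)
    if not (pred_tokens and gold_tokens and lcs):
        return 0.0
    recall = lcs / len(gold_tokens)      # #tokens in LCS / #tokens in gold span
    precision = lcs / len(pred_tokens)   # #tokens in LCS / #tokens in predicted span
    return 2 * precision * recall / (precision + recall)

# e.g. gold = {3, 4, 5, 6}, pred = {4, 5, 9}: LCS = {4, 5},
# so LCS-Recall = 2/4 and LCS-Precision = 2/3.
```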
6 Results and Analysis
", "html": null, "num": null, "type_str": "table" }, "TABREF7": { "text": "", "content": "
MR extraction improvement (%) brought by TAScore over the additive scorer on the full test set (All = 100%), and on the test subsets with a single medication (SM = 22.7%) or multiple medications (MM = 77.3%) in the text.
", "html": null, "num": null, "type_str": "table" }, "TABREF9": { "text": "Complete set of normalized classification labels for all three medication attributes and their explanation", "content": "
Attribute: Class: Phrases

freq:
  Every Morning: everyday in the morning | every morning | morning
  At Bedtime: everyday before sleeping | everyday after dinner | every night | after dinner | at bedtime | before sleeping
  Twice a day: twice a day | 2 times a day | two times a day | 2 times per day | two times per day
  Three times a day: 3 times a day | 3 times per day | 3 times every day
  Every six hours: every 6 hours | every six hours
  Every week: every week | weekly | once a week
  Twice a week: twice a week | two times a week | 2 times a week | twice per week | two times per week | 2 times per week
  Three times a week: 3 times a week | 3 times per week
  Every month: every month | monthly | once a month
  Other
  None

route:
  Pill: tablet | pill | capsule | mg
  Injection: pen | shot | injector | injection | inject
  Topical cream: cream | gel | ointment | lotion
  Nasal spray: spray | nasal
  Medicated patch: patch
  Ophthalmic solution: ophthalmic | drops | drop
  Oral solution: oral solution
  Other
  None

change:
  Take: take | start | put you on | continue
  Stop: stop | off
  Increase: increase
  Decrease: reduce | decrease
  Other
  None
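A minimal sketch of how such a phrase table can drive the normalization, using a small excerpt of the classes above (the substring-lookup strategy and the fallback to Other/None are illustrative assumptions, not the paper's exact procedure):

```python
# Excerpt of the phrase table above; matching here is simple substring lookup.
PHRASES = {
    "freq": {
        "Every month": ["every month", "monthly", "once a month"],
        "Twice a week": ["twice a week", "two times a week", "2 times a week",
                         "twice per week", "two times per week", "2 times per week"],
    },
    "route": {"Pill": ["tablet", "pill", "capsule", "mg"]},
    "change": {"Take": ["take", "start", "put you on", "continue"]},
}

def normalize(attribute, instruction):
    """Map a free-form MR instruction to a normalized class label."""
    text = instruction.lower().strip()
    if not text:
        return "None"
    for label, phrases in PHRASES[attribute].items():
        if any(p in text for p in phrases):
            return label
    return "Other"

assert normalize("freq", "one tablet once a month") == "Every month"
assert normalize("route", "you take one pill on Monday") == "Pill"
```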
", "html": null, "num": null, "type_str": "table" } } } }