{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:27:34.910480Z" }, "title": "What Do You See in this Patient? Behavioral Testing of Clinical NLP Models", "authors": [ { "first": "Betty", "middle": [], "last": "Van Aken", "suffix": "", "affiliation": {}, "email": "bvanaken@bht-berlin.de" }, { "first": "Sebastian", "middle": [], "last": "Herrmann", "suffix": "", "affiliation": {}, "email": "sebastianhe93@gmail.com" }, { "first": "Alexander", "middle": [], "last": "L\u00f6ser", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Decision support systems based on clinical notes have the potential to improve patient care by pointing doctors towards overlooked risks. Predicting a patient's outcome is an essential part of such systems, for which the use of deep neural networks has shown promising results. However, the patterns learned by these networks are mostly opaque and previous work revealed both reproduction of systemic biases and unexpected behavior for out-of-distribution patients. For application in clinical practice it is crucial to be aware of such behavior. We thus introduce a testing framework that evaluates clinical models regarding certain changes in the input. The framework helps to understand learned patterns and their influence on model decisions. In this work, we apply it to analyse the change in behavior with regard to the patient characteristics gender, age and ethnicity. Our evaluation of three current clinical NLP models demonstrates the concrete effects of these characteristics on the models' decisions. The results show that model behavior varies drastically even between models fine-tuned on the same data with similar AUROC scores. These results exemplify the need for a broader communication of model behavior in the clinical domain. 
", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Decision support systems based on clinical notes have the potential to improve patient care by pointing doctors towards overlooked risks. Predicting a patient's outcome is an essential part of such systems, for which the use of deep neural networks has shown promising results. However, the patterns learned by these networks are mostly opaque and previous work revealed both reproduction of systemic biases and unexpected behavior for out-of-distribution patients. For application in clinical practice it is crucial to be aware of such behavior. We thus introduce a testing framework that evaluates clinical models regarding certain changes in the input. The framework helps to understand learned patterns and their influence on model decisions. In this work, we apply it to analyse the change in behavior with regard to the patient characteristics gender, age and ethnicity. Our evaluation of three current clinical NLP models demonstrates the concrete effects of these characteristics on the models' decisions. The results show that model behavior varies drastically even between models fine-tuned on the same data with similar AUROC scores. These results exemplify the need for a broader communication of model behavior in the clinical domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Outcome prediction from clinical notes. The use of automatic systems in the medical domain is promising due to their potential exposure to large amounts of data from earlier patients. 
This data can include information that helps doctors make better decisions regarding diagnoses and treatments of a patient at hand. Outcome prediction models take patient information as input and then output probabilities for all considered outcomes (Choi et al., 2018; Khadanga et al., 2019) . In this work, we focus on outcome models that use natural language in the form of clinical notes as input, since they are a common source of patient information and contain a multitude of possible variables. Figure 1: Minimal alterations to the patient description can have a large impact on outcome predictions of clinical NLP models. We introduce behavioral testing for the clinical domain to expose these impacts.", "cite_spans": [ { "start": 434, "end": 453, "text": "(Choi et al., 2018;", "ref_id": "BIBREF4" }, { "start": 454, "end": 476, "text": "Khadanga et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem of black box models for clinical predictions. Recent models show promising results on tasks such as mortality (Si and Roberts, 2019) and diagnosis prediction (Liu et al., 2018; Choi et al., 2018) . However, since most of these models work as black boxes, it is unclear which features they consider important and how they interpret certain patient characteristics. From earlier work we know that highly parameterized models are prone to emphasize systemic biases in the data (Sun et al., 2019) . Further, these models have a high potential to disadvantage minority groups, as their behavior towards out-of-distribution samples is often unpredictable. This behavior is especially dangerous in the clinical domain, since it can lead to underdiagnosis or inappropriate treatment (Straw, 2020) . Thus, understanding these models and the allocative harms they might cause (Barocas et al., 2017) is an essential prerequisite for their application in clinical practice. 
We argue that more in-depth evaluations are needed to determine whether models have learned medically meaningful patterns.", "cite_spans": [ { "start": 122, "end": 144, "text": "(Si and Roberts, 2019)", "ref_id": "BIBREF30" }, { "start": 170, "end": 188, "text": "(Liu et al., 2018;", "ref_id": "BIBREF19" }, { "start": 189, "end": 207, "text": "Choi et al., 2018)", "ref_id": "BIBREF4" }, { "start": 486, "end": 504, "text": "(Sun et al., 2019)", "ref_id": "BIBREF34" }, { "start": 784, "end": 797, "text": "(Straw, 2020)", "ref_id": "BIBREF33" }, { "start": 865, "end": 887, "text": "(Barocas et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Behavioral testing for the clinical domain. As a step towards this goal, we introduce a novel testing framework specifically for the clinical domain that enables us to examine the influence of certain patient characteristics on the model predictions. Our work is motivated by behavioral testing frameworks for general Natural Language Processing (NLP) tasks (Ribeiro et al., 2020) in which model behavior is observed under changing input data. Our framework incorporates a number of test cases and is further extendable to the needs of individual data sets and clinical tasks.", "cite_spans": [ { "start": 359, "end": 381, "text": "(Ribeiro et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Influence of patient characteristics. As an initial case study, we apply the framework to analyse the behavior of models trained on the widely used MIMIC-III database (Johnson et al., 2016) . We analyse how sensitive these models are towards textual indicators of patient characteristics, such as age, gender and ethnicity, in English clinical notes. 
These characteristics are known to be affected by discrimination in health care (Stangl et al., 2019); on the other hand, they can also represent important risk factors for certain diseases or conditions. That is why we consider it especially important to understand how these mentions affect model decisions.", "cite_spans": [ { "start": 166, "end": 188, "text": "(Johnson et al., 2016)", "ref_id": "BIBREF14" }, { "start": 430, "end": 451, "text": "(Stangl et al., 2019)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions. In summary, we present the following contributions in this work: 1) We introduce a behavioral testing framework specifically for clinical NLP models. We release the code for applying and extending the framework 1 to enable in-depth evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2) We present an analysis of the patient characteristics gender, age and ethnicity to understand the sensitivity of models towards textual cues regarding these groups and whether their predictions are medically plausible. 3) We show results of three state-of-the-art clinical NLP models and find that model behavior strongly varies depending on the applied pre-training. We further show that highly optimised models tend to overestimate the effect of certain patient characteristics, leading to potentially harmful behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Outcome prediction from clinical text has been studied regarding a variety of outcomes. 
The most prevalent are in-hospital mortality (Ghassemi et al., 2014; Jo et al., 2017; Suresh et al., 2018; Si and Roberts, 2019) , diagnosis prediction (Tao et al., 1 URL:", "cite_spans": [ { "start": 135, "end": 158, "text": "(Ghassemi et al., 2014;", "ref_id": "BIBREF8" }, { "start": 159, "end": 175, "text": "Jo et al., 2017;", "ref_id": "BIBREF13" }, { "start": 176, "end": 196, "text": "Suresh et al., 2018;", "ref_id": "BIBREF35" }, { "start": 197, "end": 218, "text": "Si and Roberts, 2019)", "ref_id": "BIBREF30" }, { "start": 242, "end": 254, "text": "(Tao et al.,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Clinical Outcome Prediction", "sec_num": "2.1" }, { "text": "https://github.com/bvanaken/clinical-behavioral-testing 2019; Liu et al., 2018 Liu et al., , 2019a and phenotyping (Liu et al., 2019b; Jain et al., 2019; Oleynik et al., 2019; Pfaff et al., 2020) . In recent years, most approaches are based on deep neural networks due to their ability to outperform earlier methods in most settings. Most recently, Transformer-based models have been applied for prediction of patient outcomes with reported increases in performance (Zhang et al., 2020a; Tuzhilin, 2020; Zhao et al., 2021; van Aken et al., 2021; Rasmy et al., 2021) . 
In this work, we analyse three Transformer-based models due to their growing prevalence in NLP applications in health care.", "cite_spans": [ { "start": 63, "end": 79, "text": "Liu et al., 2018", "ref_id": "BIBREF19" }, { "start": 80, "end": 99, "text": "Liu et al., , 2019a", "ref_id": "BIBREF17" }, { "start": 116, "end": 135, "text": "(Liu et al., 2019b;", "ref_id": "BIBREF18" }, { "start": 136, "end": 154, "text": "Jain et al., 2019;", "ref_id": "BIBREF12" }, { "start": 155, "end": 176, "text": "Oleynik et al., 2019;", "ref_id": "BIBREF24" }, { "start": 177, "end": 196, "text": "Pfaff et al., 2020)", "ref_id": "BIBREF25" }, { "start": 467, "end": 487, "text": "Zhang et al., 2020a;", "ref_id": "BIBREF41" }, { "start": 488, "end": 503, "text": "Tuzhilin, 2020;", "ref_id": "BIBREF38" }, { "start": 504, "end": 522, "text": "Zhao et al., 2021;", "ref_id": "BIBREF43" }, { "start": 523, "end": 545, "text": "van Aken et al., 2021;", "ref_id": "BIBREF39" }, { "start": 546, "end": 565, "text": "Rasmy et al., 2021)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Clinical Outcome Prediction", "sec_num": "2.1" }, { "text": "Ribeiro et al. (2020) identify shortcomings of common model evaluation on held-out datasets, such as the occurrence of the same biases in both the training and the test set and the lack of broad testing scenarios in the held-out set. To mitigate these problems, they introduce CHECKLIST, a behavioral testing framework for general NLP abilities. In particular, they highlight that such frameworks evaluate input-output behavior without any knowledge of the internal structures of a system (Beizer, 1995) . Building upon CHECKLIST, R\u00f6ttger et al. (2021) introduce a behavioral testing suite for the domain of hate speech detection to address the individual challenges of the task. 
Following their work, we create a behavioral testing framework for the domain of clinical outcome prediction, which comprises idiosyncratic data and its own challenges.", "cite_spans": [ { "start": 477, "end": 491, "text": "(Beizer, 1995)", "ref_id": "BIBREF2" }, { "start": 519, "end": 540, "text": "R\u00f6ttger et al. (2021)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Behavioral Testing in NLP", "sec_num": "2.2" }, { "text": "Zhang et al. (2020b) highlight the reproduction of systemic biases in clinical NLP models. They quantify such biases with the recall gap among patient groups and show that models trained on data from MIMIC-III inherit biases regarding gender, ethnicity, and insurance status, leading to higher recall values for majority groups. Log\u00e9 et al. (2021) further find disparities in pain treatment suggestions by language models for different races and genders. We take these findings as motivation to directly analyse the sensitivity of large pre-trained models with regard to patient characteristics. In contrast to earlier work and following Ribeiro et al. (2020), we want to eliminate the influence of existing data labels on our evaluation. Further, our approach simulates patient cases that are similar to real-life occurrences. It thus displays the actual impact of learned patterns on all analysed patient groups. From an existing test set we create test groups by altering specific tokens in the clinical note. We then analyse the change in predictions, which reveals the impact of the mention on the clinical NLP model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysing Clinical NLP Models", "sec_num": "2.3" }, { "text": "Sample alterations. Our goal is to examine how clinical NLP models react to mentions of certain patient characteristics in text. 
As in earlier approaches to behavioral testing, we use sample alterations to artificially create different test groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Behavioral Testing of Clinical NLP Models", "sec_num": "3" }, { "text": "In our case, a test group is defined by one manifestation of a patient characteristic, such as female as the patient's gender. To ensure that we only measure the influence of this specific characteristic, we keep the rest of the patient case unchanged and apply the alterations to all samples in our test dataset. Depending on the original sample, the operations to create a certain test group thus include 1) changing a mention, 2) adding a mention or 3) keeping a mention unchanged (in case of a patient case that is already part of the test group at hand). This results in one newly created dataset per test group, all based on the same patient cases and only different in the patient characteristic under investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Behavioral Testing of Clinical NLP Models", "sec_num": "3" }, { "text": "Prediction analysis. After creating the test groups, we collect the models' predictions for all cases in each test group. Different from earlier approaches to behavioral testing, we do not test whether predictions on the altered samples are true or false with regard to the ground truth. As van Aken et al. (2021) pointed out, clinical ground truth must be viewed critically, because the collected data shows only one possible pathway for a patient out of many. Further, existing biases in treatments and diagnoses are likely included in our testing data, potentially leading to meaningless results. To prevent that, we instead focus on detecting how the model outputs change regardless of the original annotations. This way we can also evaluate very rare mentions (e.g. transgender) and observe their impact on the model predictions reliably. 
Figure 2 shows a schematic overview of how the framework operates.", "cite_spans": [], "ref_spans": [ { "start": 847, "end": 855, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Behavioral Testing of Clinical NLP Models", "sec_num": "3" }, { "text": "Extensibility. In this study, we use the introduced framework to analyse model behavior with regard to patient characteristics as described in 4.2. However, it can also be used to test other model behavior, like the ability to detect diagnoses when certain indicators are present in the text or the influence of stigmatizing language (cf. Goddu et al. (2018) ). It is further possible to combine certain patient groups to test model behavior regarding intersectionality. While such analyses are beyond the scope of this paper, we include them in the published codebase as an example for further extensions.", "cite_spans": [ { "start": 338, "end": 357, "text": "Goddu et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Behavioral Testing of Clinical NLP Models", "sec_num": "3" }, { "text": "We conduct our analysis on data from the MIMIC-III database (Johnson et al., 2016) . In particular, we use the outcome prediction task setup by van Aken et al. (2021). The classification task includes 48,745 English admission notes annotated with the patients' clinical outcomes at discharge. We select the outcomes diagnoses at discharge and in-hospital mortality for this analysis, since they have the highest impact on patient care and present a high potential to disadvantage certain patient groups. 
We use three models (see 4.3) trained on the two admission-to-discharge tasks and conduct our analysis on the test set defined by the authors with 9,829 samples.", "cite_spans": [ { "start": 60, "end": 82, "text": "(Johnson et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "We choose three characteristics for the analysis in this work: Age, gender and ethnicity. While these characteristics differ in their importance as clinical risk factors, all of them are known to be subject to biases and stigmas in health care (Stangl et al., 2019) . Therefore, we want to test whether the analysed models have learned medically plausible patterns or ones that might be harmful to certain patient groups. We deliberately also include groups that occur very rarely in the original dataset. We want to understand the impact of imbalanced input data especially on minority groups, since they are already disadvantaged by the health care system (Riley, 2012; Bulatao and Anderson, 2004) . When altering the samples in our test set, we utilize the fact that patients are described in a mostly consistent way in clinical notes. We collect all mention variations from the training set used to describe the different patient characteristics and alter the samples accordingly in an automated setup. Details regarding all applied variations can be found in the public repository linked in footnote 1.", "cite_spans": [ { "start": 244, "end": 265, "text": "(Stangl et al., 2019)", "ref_id": "BIBREF32" }, { "start": 685, "end": 700, "text": "Anderson, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Considered Patient Characteristics", "sec_num": "4.2" }, { "text": "Age. The age of a patient is a significant risk factor for a number of clinical outcomes. Our test includes all ages between 18 and 89 and the [** Age over 90**] de-identification label from the MIMIC-III database. 
By analysing the model behavior under changing age mentions, we can gain insights into how the models interpret numbers, which is considered challenging for current NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Patient Characteristics", "sec_num": "4.2" }, { "text": "Gender. A patient's gender is both a risk factor for certain diseases and subject to unintended biases in healthcare. We test the models' behavior regarding gender by altering the gender mention and by changing all pronouns in the clinical note. In addition to female and male, we also consider transgender as a gender test group in our study. This group is extremely rare in clinical datasets like MIMIC-III, but since approximately 1.4 million people in the U.S. identify as transgender (Flores et al., 2016) , it is important to understand how model predictions change when the characteristic is present in a clinical note.", "cite_spans": [ { "start": 494, "end": 515, "text": "(Flores et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Considered Patient Characteristics", "sec_num": "4.2" }, { "text": "Ethnicity. The ethnicity of a patient is only occasionally mentioned in clinical notes and its role in medical decision-making is controversial, since it can lead to disadvantages in patient care (Anderson et al., 2001; Snipes et al., 2011) . Earlier studies have also shown that ethnicity in clinical notes is often incorrectly assigned (Moscou et al., 2003) . We want to know how clinical NLP models interpret the mention of ethnicity in a clinical note and whether their behavior can cause unfair treatment. We choose White, African American, Hispanic and Table 1 : Performance of three state-of-the-art models on the tasks diagnoses (multi-label) and mortality prediction (binary task) in % AUROC. 
PubMedBERT outperforms the other models in both tasks by a small margin.", "cite_spans": [ { "start": 196, "end": 219, "text": "(Anderson et al., 2001;", "ref_id": "BIBREF0" }, { "start": 220, "end": 240, "text": "Snipes et al., 2011)", "ref_id": "BIBREF31" }, { "start": 338, "end": 359, "text": "(Moscou et al., 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 559, "end": 566, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Considered Patient Characteristics", "sec_num": "4.2" }, { "text": "Asian as ethnicity groups for our evaluation, as they are the most frequent ethnicities in MIMIC-III.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Patient Characteristics", "sec_num": "4.2" }, { "text": "In this study, we apply the introduced testing framework to three existing clinical models, which we fine-tune on the tasks of diagnosis and mortality prediction. We use public pre-trained model checkpoints and fine-tune all models on the same training data with the same hyperparameter setup 2 . The models are based on the BERT architecture (Devlin et al., 2019) as it presents the current state-of-the-art in predicting patient outcomes. Their performance on the two tasks is shown in Table 1 . We deliberately choose three models based on the same architecture to investigate the impact of pre-training data while setting architectural considerations aside. In general, the proposed testing framework is model-agnostic and works with any type of text-based outcome prediction model.", "cite_spans": [ { "start": 344, "end": 365, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 488, "end": 496, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Clinical NLP Models", "sec_num": "4.3" }, { "text": "BioBERT. Lee et al. (2020) introduced BioBERT, which is based on a pre-trained BERT Base (Devlin et al., 2019) checkpoint. 
They applied another language model fine-tuning step using PubMed abstracts and full-text articles. BioBERT has shown improved performance on both medical and clinical downstream tasks. PubMedBERT. Gu et al. (2021) train PubMedBERT from scratch. The tokenization is adjusted to the medical domain accordingly. The model reaches state-of-the-art results on multiple medical NLP tasks and outperforms the other analysed models on the outcome prediction tasks.", "cite_spans": [ { "start": 9, "end": 26, "text": "Lee et al. (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Clinical NLP Models", "sec_num": "4.3" }, { "text": "We present the results on all test cases by averaging the probabilities that a model assigns to each test sample. We then compare the averaged probabilities across test cases to identify which characteristics have a large impact on the model's prediction over the whole test set. The values per diagnosis in the heatmaps shown in Figure 3 , 4, 7 and 8 are defined using the following formula:", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_i = p_i \u2212 \\frac{\\sum_{j}^{N} p_j}{N}", "eq_num": "(1)" } ], "section": "Results", "sec_num": "5" }, { "text": "where c_i is the value assigned to test group i, p is the (predicted) probability for a given diagnosis and N is the number of all test groups except i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We choose this illustration based on the concept of partial dependence plots (Friedman, 2001) to highlight both positive and negative influences of a characteristic on model behavior. 
Since all test groups are based on the same patients and only differ regarding the characteristic at hand, even small differences in the averaged predictions can point towards general patterns that the model learned to associate with a characteristic.", "cite_spans": [ { "start": 77, "end": 93, "text": "(Friedman, 2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Transgender mention leads to lower mortality and diagnoses predictions. Table 2 shows the mortality predictions of the three analysed models with regard to the gender assigned in the text. While the predicted mortality risk for female and male patients lies within a small range, all models predict the mortality risk of patients that are described as transgender as lower than for non-transgender patients. This is probably due to the relatively young age of most transgender patients in the MIMIC-III training data, but can be harmful to older patients identifying as transgender at inference time. (Table 2: PubMedBERT assigns the highest risk to female, the other models to male patients. Notably, all models decrease their mortality prediction for transgender patients.)", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Influence of Gender", "sec_num": "5.1" }, { "text": "Sensitivity to gender mention varies per model. Figure 3 shows the change in model prediction for each diagnosis with regard to the gender mention. The cells of the heatmap are the deviations from the average score of the other test cases. Thus, a red cell indicates that the model assigns a higher probability to a diagnosis for this gender group. We see that PubMedBERT is highly sensitive to the change of the patient gender, especially regarding transgender patients. 
Apart from a few diagnoses such as Cardiac dysrhythmias and Drug Use / Abuse, the model assigns a lower probability to diseases if the patient letter contains a transgender mention. The CORe and BioBERT models are less sensitive in this regard. The most salient deviation of the BioBERT model is a drop in probability of Urinary tract disorders for male patients, which is medically plausible due to anatomic differences (Tan and Chlebicki, 2016) .", "cite_spans": [ { "start": 895, "end": 920, "text": "(Tan and Chlebicki, 2016)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Influence of Gender", "sec_num": "5.1" }, { "text": "Patterns in MIMIC-III training data are partially inherited. In Figure 4 we show the original distribution of diagnoses per gender in the training data. Note that the deviations are about 10 times larger than the ones produced by the model predictions in Figure 3 . This indicates that the models take gender into account as a decision factor, but only as one among others. Due to the very rare occurrence of transgender mentions (only seven cases in the training data), most diagnoses are underrepresented for this group. This is partially reflected by the model predictions, especially by PubMedBERT, as described above. Other salient patterns such as the prevalence of Chronic ischemic heart disease in male patients are only reproduced faintly by the models.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 72, "text": "Figure 4", "ref_id": "FIGREF5" }, { "start": 255, "end": 263, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Influence of Gender", "sec_num": "5.1" }, { "text": "Mortality risk is influenced differently by age. Figure 6 shows the averaged predicted mortality per age for all models and the actual distribution from the training data (dotted line). 
We see that BioBERT does not take age into account when predicting mortality risk except for patients over 90. PubMedBERT assigns a higher mortality risk to all age groups with a small increase for patients over 60 and an even steeper increase for patients over 90. CORe follows the training data the most while also inheriting peaks and troughs in the data.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "Figure 6", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Influence of Age", "sec_num": "5.2" }, { "text": "Models are equally affected by age when predicting diagnoses. We exemplify the impact of age on diagnosis prediction using eight outcome diagnoses in Figure 5 . The dotted lines show the distribution of the diagnosis within an age group in the training data. The changes in predictions regarding age are similar throughout the analysed models with only small variations such as for Cardiac dysrhythmias. Some diagnoses are regarded as more probable in older patients (e.g. Acute Kidney Failure) and others in younger patients (e.g. Abuse of drugs). The distributions per age group in the training data are more extreme, but follow the same tendencies as predicted by the models.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 5", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Influence of Age", "sec_num": "5.2" }, { "text": "Peaks indicate lack of number understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Age", "sec_num": "5.2" }, { "text": "From earlier studies we know that BERT-based models have difficulties dealing with numbers in text. The peaks that we observe in some predictions support this finding. For instance, the models assign a higher risk of Cardiac dysrhythmias to patients aged 73 than to patients aged 74, because they do not capture that these are consecutive ages. 
Therefore, the influence of age on the predictions might solely be based on the individual age tokens observed in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Age", "sec_num": "5.2" }, { "text": "Mention of any ethnicity decreases prediction of mortality risk. Table 3 shows the mortality predictions when different ethnicities are mentioned and when there is no mention. We observe that the mention of any of the ethnicities leads to a decrease in mortality risk prediction in all models, with White and African American patients receiving the lowest probabilities.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Influence of Ethnicity", "sec_num": "5.3" }, { "text": "Diagnoses predicted by PubMedBERT are highly sensitive to ethnicity mentions. Figure 7 depicts the influence of ethnicity mentions on the three models. Notably, the predictions of PubMedBERT are strongly influenced by ethnicity mentions. Multiple diagnoses such as Chronic kidney disease are more often predicted when there is no mention of ethnicity, while diagnoses like Hypertension and Abuse of drugs are regarded as more likely in African American patients and Unspecified anemias in Hispanic patients. While the original training data in Figure 8 shows the same strong variance among ethnicities, this is not inherited the same way in the CORe and BioBERT models. 
However, we can also observe deviations with regard to ethnicity in these models.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 86, "text": "Figure 7", "ref_id": "FIGREF8" }, { "start": 542, "end": 550, "text": "Figure 8", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Influence of Ethnicity", "sec_num": "5.3" }, { "text": "African American patients are assigned lower risk of diagnoses by CORe and BioBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Ethnicity", "sec_num": "5.3" }, { "text": "The heatmaps showing predictions of CORe and BioBERT reveal a potentially harmful pattern in which the mention of African American in a clinical note decreases the predictions for a large number of diagnoses. This pattern is found more prominently in the CORe model, but also in BioBERT. Putting these models into clinical application could result in fewer diagnostic tests being ordered by physicians and therefore lead to disadvantages in the treatment of African American patients. This is particularly critical as it would reinforce existing biases in health care (Nelson, 2002).", "cite_spans": [ { "start": 568, "end": 582, "text": "(Nelson, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Influence of Ethnicity", "sec_num": "5.3" }, { "text": "Model behaviors show large variance. The results described in Section 5 reveal large differences in the influence of patient characteristics throughout models. The analysis shows that there is no overall best model, but each model has learned both useful patterns (e.g. age as a medically plausible risk factor) and potentially dangerous ones (e.g. decreases in diagnosis risks for minority groups). The large variance is surprising since the models have a shared architecture and are fine-tuned on the same data; they only differ in their pre-training.
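The ethnicity tests underlying these heatmaps can be sketched in the same template style. This is a hedged illustration under assumed interfaces: `predict_diagnosis_probs` is a hypothetical stand-in for the fine-tuned multi-label diagnosis model, and prepending the descriptor is only one simple way of introducing the mention.

```python
def perturb_ethnicity(note, ethnicities):
    """Insert an ethnicity descriptor before the note text; the 'none'
    variant is the unaltered baseline without any mention."""
    variants = {"none": note}
    for eth in ethnicities:
        variants[eth] = f"{eth} {note}"
    return variants

def predict_diagnosis_probs(note):
    """Hypothetical stand-in for the multi-label diagnosis model;
    returns a probability per outcome diagnosis."""
    return {"Hypertension": 0.40, "Chronic kidney disease": 0.20}

note = "86yo man presents with stomach pain and shortness of breath"
baseline = predict_diagnosis_probs(note)

# Per-diagnosis probability shift caused by each ethnicity mention;
# large shifts indicate the kind of sensitivity observed for PubMedBERT.
shifts = {}
for eth, text in perturb_ethnicity(
        note, ["white", "african american", "hispanic", "asian"]).items():
    probs = predict_diagnosis_probs(text)
    shifts[eth] = {dx: probs[dx] - baseline[dx] for dx in probs}
```

Rendering `shifts` as a diagnoses-by-ethnicity matrix gives exactly the kind of heatmap discussed above, with the "none" row as the reference.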
While the reported AUROC scores for the models (Table 1) are close to each other, the variance in learned behavior shows that we should consider in-depth analyses a crucial part of model evaluation in the clinical domain. This is especially important since harmful patterns in clinical NLP models are often fine-grained and difficult to detect.", "cite_spans": [], "ref_spans": [ { "start": 593, "end": 603, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Model scoring can obfuscate critical behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The analysis has shown that PubMedBERT, which outperforms the other models in both mortality and diagnosis prediction by AUROC, shows greater sensitivity to mentions of gender and ethnicity in the text. Many of these patterns, such as the lower diagnosis risk assigned to African American patients, might lead to undertreatment. This is alarming since it particularly affects minority groups, which are already disadvantaged by the health care system. It also shows that instead of measuring clinical models with rather abstract scores, looking at their potential impact on patients should be further emphasized. To communicate model behavior to medical professionals, one possible direction could be to use behavioral analysis results as part of clinical model cards as proposed by Mitchell et al. (2019).", "cite_spans": [ { "start": 767, "end": 789, "text": "Mitchell et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Limitations of the proposed framework. Unlike other behavioral testing setups (see Section 2.2), the results of our framework cannot be easily categorized into correct and incorrect behavior.
While increased risk allocations can be beneficial to a patient group due to doctors running additional tests, they can also lead to mistreatment or to other diagnoses being overlooked. The same holds for the influence of rare mentions, such as transgender: one could argue that, based on only seven occurrences in the training set, the characteristic should have less impact on model decisions overall. However, some features, e.g. those regarding rare diseases, should be recognized as important even if very infrequent. Since our models often lack such judgement, the decision about which patient characteristics to consider risk factors, and what impact they should have on outcome predictions, is still best made by medical professionals. Nevertheless, decision support systems can be beneficial if their behavior is transparently communicated. With this framework we want to take a step towards improving this communication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In this work, we introduced a behavioral testing framework for the clinical domain to understand the effects of textual variations on model predictions. We apply this framework to three current clinical NLP models to examine the impact of certain patient characteristics. Our results show that the models, even with very close AUROC scores, have learned very different behavioral patterns, some of them with high potential to disadvantage minority groups. With this work, we want to emphasize the importance of model evaluation beyond common metrics, especially in sensitive areas like health care. We recommend using the results of these evaluations for discussions with medical professionals. Being aware of specific model behavior and incorporating this knowledge into clinical decision making is a crucial step towards safe deployment of such models.
For future work we consider iterative model fine-tuning with medical professionals in the loop a promising direction to teach models which patterns to stick to and which ones to discard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Batch size: 20; learning rate: 5e-05; dropout: 0.1; warmup steps: 1000; early stopping patience: 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Dr. med. Simon Ronicke for the valuable input. Our work is funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) under grant agreement 01MD19003B (PLASS) and 01MK2008MD (Servicemeister).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The role of race in the clinical presentation", "authors": [ { "first": "Matthew", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Moscou", "suffix": "" }, { "first": "Celestine", "middle": [], "last": "Fulchon", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Neuspiel", "suffix": "" } ], "year": 2001, "venue": "Family medicine", "volume": "33", "issue": "", "pages": "430--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Anderson, Susan Moscou, Celestine Fulchon, and Daniel Neuspiel. 2001. The role of race in the clinical presentation. 
Family medicine, 33:430-4.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The problem with bias: Allocative versus representational harms in machine learning", "authors": [ { "first": "Solon", "middle": [], "last": "Barocas", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Crawford", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Shapiro", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2017, "venue": "SIGCIS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Al- locative versus representational harms in machine learning. In SIGCIS, Philadelphia, PA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Black-box testing: techniques for functional testing of software and systems", "authors": [ { "first": "Boris", "middle": [], "last": "Beizer", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boris Beizer. 1995. Black-box testing: techniques for functional testing of software and systems. John Wi- ley & Sons, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Understanding racial and ethnic differences in health in late life: A research agenda", "authors": [ { "first": "A", "middle": [], "last": "Rodolfo", "suffix": "" }, { "first": "Norman B", "middle": [], "last": "Bulatao", "suffix": "" }, { "first": "", "middle": [], "last": "Anderson", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodolfo A Bulatao and Norman B Anderson. 2004. Un- derstanding racial and ethnic differences in health in late life: A research agenda. 
National Academies Press (US).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mime: Multilevel medical embedding of electronic health records for predictive healthcare", "authors": [ { "first": "Edward", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Cao", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Walter", "middle": [ "F" ], "last": "Stewart", "suffix": "" }, { "first": "Jimeng", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4552--4562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Choi, Cao Xiao, Walter F. Stewart, and Jimeng Sun. 2018. Mime: Multilevel medical embedding of electronic health records for predictive healthcare. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Pro- cessing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, pages 4552-4562.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "How many adults identify as transgender in the united states", "authors": [ { "first": "A", "middle": [ "R" ], "last": "Flores", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Herman", "suffix": "" }, { "first": "G", "middle": [ "J" ], "last": "Gates", "suffix": "" }, { "first": "T", "middle": [ "N T" ], "last": "Brown", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.R. Flores, J.L. Herman, G.J. Gates, and T.N.T. Brown. 2016. How many adults identify as transgender in the united states? Los Angeles, CA: The Williams Institute.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Greedy function approximation: a gradient boosting machine", "authors": [ { "first": "H", "middle": [], "last": "Jerome", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2001, "venue": "Annals of statistics", "volume": "", "issue": "", "pages": "1189--1232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome H Friedman. 2001. Greedy function approx- imation: a gradient boosting machine. 
Annals of statistics, pages 1189-1232.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unfolding physiological state: mortality modelling in intensive care units", "authors": [ { "first": "Marzyeh", "middle": [], "last": "Ghassemi", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Finale", "middle": [], "last": "Doshi-Velez", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Brimmer", "suffix": "" }, { "first": "Rohit", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2014, "venue": "The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14", "volume": "", "issue": "", "pages": "75--84", "other_ids": { "DOI": [ "10.1145/2623330.2623742" ] }, "num": null, "urls": [], "raw_text": "Marzyeh Ghassemi, Tristan Naumann, Finale Doshi- Velez, Nicole Brimmer, Rohit Joshi, Anna Rumshisky, and Peter Szolovits. 2014. Unfolding physiological state: mortality modelling in intensive care units. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, KDD '14, New York, NY, USA -August 24 -27, 2014, pages 75-84. ACM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Do words matter? 
stigmatizing language and the transmission of bias in the medical record", "authors": [ { "first": "Anna", "middle": [], "last": "Goddu", "suffix": "" }, { "first": "O'", "middle": [], "last": "Katie", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Conor", "suffix": "" }, { "first": "Mustapha", "middle": [], "last": "Lanzkron", "suffix": "" }, { "first": "Somnath", "middle": [], "last": "Saheed", "suffix": "" }, { "first": "Carlton", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Mary", "middle": [ "Catherine" ], "last": "Haywood", "suffix": "" }, { "first": "", "middle": [], "last": "Beach", "suffix": "" } ], "year": 2018, "venue": "Journal of General Internal Medicine", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Goddu, Katie O'Conor, Sophie Lanzkron, Mustapha Saheed, Somnath Saha, Carlton Haywood, and Mary Catherine Beach. 2018. Do words matter? stigmatizing language and the transmission of bias in the medical record. Journal of General Internal Medicine, 33.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing", "authors": [ { "first": "Yu", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Tinn", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Lucas", "suffix": "" }, { "first": "Naoto", "middle": [], "last": "Usuyama", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. 
Domain-specific lan- guage model pretraining for biomedical natural lan- guage processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission", "authors": [ { "first": "Kexin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jaan", "middle": [], "last": "Altosaar", "suffix": "" }, { "first": "Rajesh", "middle": [], "last": "Ranganath", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.05342" ] }, "num": null, "urls": [], "raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An analysis of attention over clinical notes for predictive tasks", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Ramin", "middle": [], "last": "Mohammadi", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "15--21", "other_ids": { "DOI": [ "10.18653/v1/W19-1902" ] }, "num": null, "urls": [], "raw_text": "Sarthak Jain, Ramin Mohammadi, and Byron C. Wal- lace. 2019. An analysis of attention over clinical notes for predictive tasks. In Proceedings of the 2nd Clinical Natural Language Processing Work- shop, pages 15-21, Minneapolis, Minnesota, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining lstm and latent topic modeling for mortality prediction", "authors": [ { "first": "Yohan", "middle": [], "last": "Jo", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Palaskar", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.02842" ] }, "num": null, "urls": [], "raw_text": "Yohan Jo, Lisa Lee, and Shruti Palaskar. 2017. Com- bining lstm and latent topic modeling for mortality prediction. arXiv preprint arXiv:1709.02842.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "MIMIC-III, a Freely Accessible Critical Care Database", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Alistair", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "J", "middle": [], "last": "Tom", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "H Lehman", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Mengling", "middle": [], "last": "Li-Wei", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Ghassemi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Moody", "suffix": "" }, { "first": "Leo", "middle": [ "Anthony" ], "last": "Szolovits", "suffix": "" }, { "first": "Roger G", "middle": [], "last": "Celi", "suffix": "" }, { "first": "", "middle": [], "last": "Mark", "suffix": "" } ], "year": 2016, "venue": "Scientific Data", "volume": "3", "issue": "1", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-Wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. 
MIMIC-III, a Freely Accessible Critical Care Database. Scientific Data, 3(1):1-9.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Using clinical notes with time series data for ICU management", "authors": [ { "first": "Swaraj", "middle": [], "last": "Khadanga", "suffix": "" }, { "first": "Karan", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Jaideep", "middle": [], "last": "Srivastava", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6432--6437", "other_ids": { "DOI": [ "10.18653/v1/D19-1678" ] }, "num": null, "urls": [], "raw_text": "Swaraj Khadanga, Karan Aggarwal, Shafiq Joty, and Jaideep Srivastava. 2019. Using clinical notes with time series data for ICU management. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 6432-6437, Hong Kong, China. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Two-stage federated phenotyping and patient representation learning", "authors": [ { "first": "Dianbo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", "volume": "", "issue": "", "pages": "283--291", "other_ids": { "DOI": [ "10.18653/v1/W19-5030" ] }, "num": null, "urls": [], "raw_text": "Dianbo Liu, Dmitriy Dligach, and Timothy Miller. 2019a. Two-stage federated phenotyping and pa- tient representation learning. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 283- 291, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Two-stage federated phenotyping and patient representation learning", "authors": [ { "first": "Dianbo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", "volume": "", "issue": "", "pages": "283--291", "other_ids": { "DOI": [ "10.18653/v1/W19-5030" ] }, "num": null, "urls": [], "raw_text": "Dianbo Liu, Dmitriy Dligach, and Timothy Miller. 2019b. Two-stage federated phenotyping and pa- tient representation learning. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 283- 291, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep EHR: chronic disease prediction using medical notes", "authors": [ { "first": "Jingshu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zachariah", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Narges", "middle": [], "last": "Razavian", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Machine Learning for Healthcare Conference", "volume": "2018", "issue": "", "pages": "440--464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingshu Liu, Zachariah Zhang, and Narges Razavian. 2018. Deep EHR: chronic disease prediction using medical notes. In Proceedings of the Machine Learn- ing for Healthcare Conference, MLHC 2018, 17-18 August 2018, Palo Alto, California, volume 85 of Proceedings of Machine Learning Research, pages 440-464. 
PMLR.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Q-pain: A question answering dataset to measure social bias in pain management", "authors": [ { "first": "Emily", "middle": [ "L" ], "last": "C'ecile Log'e", "suffix": "" }, { "first": "David", "middle": [], "last": "Ross", "suffix": "" }, { "first": "", "middle": [], "last": "Yaw Amoah", "suffix": "" }, { "first": "Saahil", "middle": [], "last": "Dadey", "suffix": "" }, { "first": "Adriel", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Saporta", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Rajpurkar", "suffix": "" } ], "year": 2021, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C'ecile Log'e, Emily L. Ross, David Yaw Amoah Dadey, Saahil Jain, Adriel Saporta, Andrew Y. Ng, and Pranav Rajpurkar. 2021. Q-pain: A question answering dataset to measure social bias in pain man- agement. 
ArXiv, abs/2108.01764.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Model cards for model reporting", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zaldivar", "suffix": "" }, { "first": "Parker", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Spitzer", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Raji", "suffix": "" }, { "first": "", "middle": [], "last": "Gebru", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19", "volume": "", "issue": "", "pages": "220--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Account- ability, and Transparency, FAT* '19, page 220-229, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Validity of racial/ethnic classifications in medical records data: an exploratory study", "authors": [ { "first": "Susan", "middle": [], "last": "Moscou", "suffix": "" }, { "first": "Judith", "middle": [ "B" ], "last": "Matthew R Anderson", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "", "middle": [], "last": "Valencia", "suffix": "" } ], "year": 2003, "venue": "American journal of public health", "volume": "93", "issue": "7", "pages": "1084--1086", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan Moscou, Matthew R Anderson, Judith B Kaplan, and Lisa Valencia. 2003. Validity of racial/ethnic classifications in medical records data: an exploratory study. American journal of public health, 93(7):1084- 1086.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Unequal treatment: confronting racial and ethnic disparities in health care", "authors": [ { "first": "Alan", "middle": [], "last": "Nelson", "suffix": "" } ], "year": 2002, "venue": "Journal of the national medical association", "volume": "94", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Nelson. 2002. Unequal treatment: confronting racial and ethnic disparities in health care. 
Journal of the national medical association, 94(8):666.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Evaluating shallow and deep learning strategies for the 2018 n2c2 shared task on clinical text classification", "authors": [ { "first": "Michel", "middle": [], "last": "Oleynik", "suffix": "" }, { "first": "Amila", "middle": [], "last": "Kugic", "suffix": "" }, { "first": "Zdenko", "middle": [], "last": "Kas\u00e1\u010d", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kreuzthaler", "suffix": "" } ], "year": 2019, "venue": "Journal of the American Medical Informatics Association", "volume": "26", "issue": "11", "pages": "1247--1254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Oleynik, Amila Kugic, Zdenko Kas\u00e1\u010d, and Markus Kreuzthaler. 2019. Evaluating shallow and deep learning strategies for the 2018 n2c2 shared task on clinical text classification. Journal of the Ameri- can Medical Informatics Association, 26(11):1247- 1254.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Clinical annotation research kit (clark): Computable phenotyping using machine learning", "authors": [ { "first": "Miles", "middle": [], "last": "Emily R Pfaff", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Crosskey", "suffix": "" }, { "first": "Ashok", "middle": [], "last": "Morton", "suffix": "" }, { "first": "", "middle": [], "last": "Krishnamurthy", "suffix": "" } ], "year": 2020, "venue": "JMIR medical informatics", "volume": "8", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily R Pfaff, Miles Crosskey, Kenneth Morton, and Ashok Krishnamurthy. 2020. Clinical annotation research kit (clark): Computable phenotyping us- ing machine learning. 
JMIR medical informatics, 8(1):e16042.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Med-bert: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction", "authors": [ { "first": "Laila", "middle": [], "last": "Rasmy", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Ziqian", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Cui", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Degui", "middle": [], "last": "Zhi", "suffix": "" } ], "year": 2021, "venue": "NPJ digital medicine", "volume": "4", "issue": "1", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. 2021. Med-bert: pretrained contextual- ized embeddings on large-scale structured electronic health records for disease prediction. NPJ digital medicine, 4(1):1-13.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", "authors": [ { "first": "Tongshuang", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4902--4912", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.442" ] }, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Health disparities: gaps in access, quality and affordability of medical care. Transactions of the American Clinical and Climatological Association", "authors": [ { "first": "J", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "", "middle": [], "last": "Riley", "suffix": "" } ], "year": 2012, "venue": "", "volume": "123", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wayne J Riley. 2012. Health disparities: gaps in access, quality and affordability of medical care. Transac- tions of the American Clinical and Climatological Association, 123:167.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "HateCheck: Functional tests for hate speech detection models", "authors": [ { "first": "Paul", "middle": [], "last": "R\u00f6ttger", "suffix": "" }, { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Pierrehumbert", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "41--58", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.4" ] }, "num": null, "urls": [], "raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 41-58, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Deep patient representation of clinical notes via multi-task learning for mortality prediction", "authors": [ { "first": "Yuqi", "middle": [], "last": "Si", "suffix": "" }, { "first": "Kirk", "middle": [], "last": "Roberts", "suffix": "" } ], "year": 2019, "venue": "AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science", "volume": "", "issue": "", "pages": "779--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuqi Si and Kirk Roberts. 2019. Deep patient repre- sentation of clinical notes via multi-task learning for mortality prediction. AMIA Joint Summits on Trans- lational Science proceedings. AMIA Joint Summits on Translational Science, 2019:779-788.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Is race medically relevant? a qualitative study of physicians' attitudes about the role of race in treatment decision-making", "authors": [ { "first": "Shedra", "middle": [], "last": "Snipes", "suffix": "" }, { "first": "Sherrill", "middle": [], "last": "Sellers", "suffix": "" }, { "first": "Adebola", "middle": [], "last": "Tafawa", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Cooper", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Fields", "suffix": "" }, { "first": "Vence", "middle": [], "last": "Bonham", "suffix": "" } ], "year": 2011, "venue": "BMC health services research", "volume": "11", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shedra Snipes, Sherrill Sellers, Adebola Tafawa, Lisa Cooper, Julie Fields, and Vence Bonham. 2011. Is race medically relevant? a qualitative study of physi- cians' attitudes about the role of race in treatment decision-making. 
BMC health services research, 11:183.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The health stigma and discrimination framework: a global, crosscutting framework to inform research, intervention development, and policy on health-related stigmas", "authors": [ { "first": "L", "middle": [], "last": "Anne", "suffix": "" }, { "first": "Valerie", "middle": [ "A" ], "last": "Stangl", "suffix": "" }, { "first": "Carmen", "middle": [ "H" ], "last": "Earnshaw", "suffix": "" }, { "first": "Wim", "middle": [], "last": "Logie", "suffix": "" }, { "first": "", "middle": [], "last": "Van Brakel", "suffix": "" }, { "first": "C", "middle": [], "last": "Leickness", "suffix": "" }, { "first": "Iman", "middle": [], "last": "Simbayi", "suffix": "" }, { "first": "John", "middle": [ "F" ], "last": "Barr\u00e9", "suffix": "" }, { "first": "", "middle": [], "last": "Dovidio", "suffix": "" } ], "year": 2019, "venue": "BMC medicine", "volume": "17", "issue": "1", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne L Stangl, Valerie A Earnshaw, Carmen H Lo- gie, Wim van Brakel, Leickness C Simbayi, Iman Barr\u00e9, and John F Dovidio. 2019. The health stigma and discrimination framework: a global, crosscutting framework to inform research, intervention develop- ment, and policy on health-related stigmas. BMC medicine, 17(1):1-13.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The automation of bias in medical artificial intelligence (AI): decoding the past to create a better future", "authors": [ { "first": "Isabel", "middle": [], "last": "Straw", "suffix": "" } ], "year": 2020, "venue": "Artif. Intell. Medicine", "volume": "110", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabel Straw. 2020. The automation of bias in medical artificial intelligence (AI): decoding the past to create a better future. Artif. Intell. 
Medicine, 110:101965.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Mitigating gender bias in natural language processing: Literature review", "authors": [ { "first": "Tony", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Gaut", "suffix": "" }, { "first": "Shirlyn", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Yuxin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mai", "middle": [], "last": "Elsherief", "suffix": "" }, { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Diba", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Belding", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1630--1640", "other_ids": { "DOI": [ "10.18653/v1/P19-1159" ] }, "num": null, "urls": [], "raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1630-1640, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning tasks for multitask learning: Heterogenous patient populations in the ICU", "authors": [ { "first": "Harini", "middle": [], "last": "Suresh", "suffix": "" }, { "first": "Jen", "middle": [ "J" ], "last": "Gong", "suffix": "" }, { "first": "John", "middle": [ "V" ], "last": "Guttag", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "802--810", "other_ids": { "DOI": [ "10.1145/3219819.3219930" ] }, "num": null, "urls": [], "raw_text": "Harini Suresh, Jen J. Gong, and John V. Guttag. 2018. Learning tasks for multitask learning: Heterogenous patient populations in the ICU. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 802-810. ACM.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Urinary tract infections in adults", "authors": [ { "first": "Chee", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Maciej", "middle": [], "last": "Chlebicki", "suffix": "" } ], "year": 2016, "venue": "Singapore Medical Journal", "volume": "57", "issue": "", "pages": "485--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chee Tan and Maciej Chlebicki. 2016. Urinary tract infections in adults. 
Singapore Medical Journal, 57:485-490.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Effective feature representation for clinical text concept extraction", "authors": [ { "first": "Yifeng", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Godefroy", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Genthial", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.18653/v1/W19-1901" ] }, "num": null, "urls": [], "raw_text": "Yifeng Tao, Bruno Godefroy, Guillaume Genthial, and Christopher Potts. 2019. Effective feature representa- tion for clinical text concept extraction. In Proceed- ings of the 2nd Clinical Natural Language Process- ing Workshop, pages 1-14, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Predicting clinical diagnosis from patients electronic health records using bertbased neural networks", "authors": [ { "first": "Alexander", "middle": [], "last": "Tuzhilin", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence in Medicine: 18th International Conference on Artificial Intelligence in Medicine", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Tuzhilin. 2020. Predicting clinical diagnosis from patients electronic health records using bert- based neural networks. In Artificial Intelligence in Medicine: 18th International Conference on Artifi- cial Intelligence in Medicine, AIME 2020, Minneapo- lis, MN, USA, August 25-28, 2020, Proceedings, vol- ume 12299, page 111. 
Springer Nature.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Clinical outcome prediction from admission notes using self-supervised knowledge integration", "authors": [ { "first": "Jens-Michalis", "middle": [], "last": "Betty Van Aken", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Papaioannou", "suffix": "" }, { "first": "Klemens", "middle": [], "last": "Mayrdorfer", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Budde", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gers", "suffix": "" }, { "first": "", "middle": [], "last": "Loeser", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Betty van Aken, Jens-Michalis Papaioannou, Manuel Mayrdorfer, Klemens Budde, Felix Gers, and Alexan- der Loeser. 2021. Clinical outcome prediction from admission notes using self-supervised knowledge in- tegration. In Proceedings of the 16th Conference of the European Chapter of the Association for Compu- tational Linguistics: Main Volume, pages 881-893, Online. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Do NLP models know numbers? 
probing numeracy in embeddings", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5307--5315", "other_ids": { "DOI": [ "10.18653/v1/D19-1534" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know num- bers? probing numeracy in embeddings. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguis- tics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Time-aware transformerbased network for clinical notes series prediction", "authors": [ { "first": "Dongyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jidapa", "middle": [], "last": "Thadajarassiri", "suffix": "" }, { "first": "Cansu", "middle": [], "last": "Sen", "suffix": "" }, { "first": "Elke", "middle": [], "last": "Rundensteiner", "suffix": "" } ], "year": 2020, "venue": "Machine Learning for Healthcare Conference", "volume": "", "issue": "", "pages": "566--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongyu Zhang, Jidapa Thadajarassiri, Cansu Sen, and Elke Rundensteiner. 2020a. Time-aware transformer- based network for clinical notes series prediction. In Machine Learning for Healthcare Conference, pages 566-588. 
PMLR.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Hurtful words: quantifying biases in clinical contextual word embeddings", "authors": [ { "first": "Haoran", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Amy", "middle": [ "X" ], "last": "Lu", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Abdalla", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Mcdermott", "suffix": "" }, { "first": "Marzyeh", "middle": [], "last": "Ghassemi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACM Conference on Health, Inference, and Learning", "volume": "", "issue": "", "pages": "110--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoran Zhang, Amy X Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020b. Hurtful words: quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning, pages 110-120.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Bertsurv: Bert-based survival models for predicting outcomes of trauma patients", "authors": [ { "first": "Yun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Qinghang", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Xinlu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Petzold", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.10928" ] }, "num": null, "urls": [], "raw_text": "Yun Zhao, Qinghang Hong, Xinlu Zhang, Yu Deng, Yuqing Wang, and Linda Petzold. 2021. Bertsurv: Bert-based survival models for predicting outcomes of trauma patients. 
arXiv preprint arXiv:2103.10928.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "58yo man presents with stomach pain and acute shortness of breath 58yo woman presents with stomach pain and acute shortness of breath 58yo afro american man presents with stomach pain and shortness of breath 58yo obese man presents with stomach pain and shortness of breath", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Figure 1: Minimal alterations to the patient description can have a large impact on outcome predictions of clinical NLP models. We introduce behavioral testing for the clinical domain to expose these impacts.", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Behavioral testing framework for the clinical domain. Schematic overview of the introduced framework.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Clinical Outcome Representations (CORe) by van Aken et al. (2021) are based on BioBERT and extended with a pre-training step that focuses on the prediction of patient outcomes. The pre-training data includes clinical notes, Wikipedia articles and case studies from PubMed. The tokenization is similar to the BioBERT model. PubMedBERT. Gu et al. (2020) recently introduced PubMedBERT based on similar data as BioBERT. They use PubMed articles and abstracts but instead of extending a BERT Base model, they", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Influence of gender on predicted diagnoses. Blue: Predicted probability for diagnosis is below-average; red: predicted probability above-average. PubMedBERT shows highest sensitivity to gender mention and regards many diagnoses less likely if transgender is mentioned in the text. Graph shows deviation of probabilities on 24 most common diagnoses in test set.", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "Original distribution of diagnoses per gender in MIMIC-III. 
Cell colors: Deviation from average probability. Numbers in parenthesis: Occurrences in the training set. Most diagnoses occur less often in transgender patients due to their very low sample count.", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "Influence of age on diagnosis predictions. The x-axis is the simulated age and the y-axis is the predicted probability of a diagnosis. All models follow similar patterns with some diagnosis risks increasing with age and some decreasing. The original training distributions (black dotted line) are mostly followed but attenuated.", "type_str": "figure", "uris": null }, "FIGREF7": { "num": null, "text": "Influence of age on mortality predictions. Xaxis: Simulated age; y-axis: predicted mortality risk. The three models are differently calibrated and only CORe is highly influenced by age.", "type_str": "figure", "uris": null }, "FIGREF8": { "num": null, "text": "Influence of ethnicity on diagnosis predictions. Blue: Predicted probability for diagnosis is below-average; red: predicted probability above-average. PubMedBERT's predictions are highly influenced by ethnicity mentions, while CORe and BioBERT show smaller deviations, but also disparities on specific groups.", "type_str": "figure", "uris": null }, "FIGREF9": { "num": null, "text": "Original distribution of diagnoses per ethnicity in MIMIC-III. Cell colors: Deviation from average probability. Numbers in parenthesis: Occurrences in the training set. Both the distribution of samples and the occurrences of diagnoses are highly unbalanced in the training set.", "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "html": null, "type_str": "table", "text": "TEST SET...year old characteristic B patient ...", "content": "
Schematic of the framework (cf. Figure 2): every test-set sample is modified into test groups by patient characteristic (A, B, C); the model produces a prediction per modified sample (e.g. 0.1, 0.4, 0.2), and the change in predictions per test group is analysed.
" }, "TABREF3": { "num": null, "html": null, "type_str": "table", "text": "", "content": "" }, "TABREF5": { "num": null, "html": null, "type_str": "table", "text": "Influence of ethnicity on mortality predictions. The mention of an ethnicity decreases the predicted mortality risk. White and African American patients are assigned with the lowest mortality risk (gray-shaded).", "content": "
" } } } }