{ "paper_id": "U16-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:10:52.982149Z" }, "title": "The Benefits of Word Embeddings Features for Active Learning in Clinical Information Extraction", "authors": [ { "first": "Mahnoosh", "middle": [], "last": "Kholghi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": {} }, "email": "m1.kholghi@qut.edu.au" }, { "first": "Lance", "middle": [], "last": "De Vine", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": {} }, "email": "l.devine@qut.edu.au" }, { "first": "Laurianne", "middle": [], "last": "Sitbon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": {} }, "email": "laurianne.sitbon@qut.edu.au" }, { "first": "Guido", "middle": [], "last": "Zuccon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": {} }, "email": "g.zuccon@qut.edu.au" }, { "first": "Anthony", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": {}, "email": "anthony.nguyen@csiro.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This study investigates the use of unsupervised word embeddings and sequence features for sample representation in an active learning framework built to extract clinical concepts from clinical free text. The objective is to further reduce the manual annotation effort while achieving higher effectiveness compared to a set of baseline features. Unsupervised features are derived from skip-gram word embeddings and a sequence representation approach. 
The comparative performance of unsupervised features and baseline handcrafted features in an active learning framework is investigated using a wide range of selection criteria including least confidence, information diversity, information density and diversity, and domain knowledge informativeness. Two clinical datasets are used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Our results demonstrate significant improvements in terms of effectiveness as well as annotation effort savings across both datasets. Using unsupervised features along with baseline features for sample representation leads to further savings of up to 9% and 10% of the token and concept annotation rates, respectively.", "pdf_parse": { "paper_id": "U16-1003", "_pdf_hash": "", "abstract": [ { "text": "This study investigates the use of unsupervised word embeddings and sequence features for sample representation in an active learning framework built to extract clinical concepts from clinical free text. The objective is to further reduce the manual annotation effort while achieving higher effectiveness compared to a set of baseline features. Unsupervised features are derived from skip-gram word embeddings and a sequence representation approach. The comparative performance of unsupervised features and baseline handcrafted features in an active learning framework is investigated using a wide range of selection criteria including least confidence, information diversity, information density and diversity, and domain knowledge informativeness. Two clinical datasets are used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Our results demonstrate significant improvements in terms of effectiveness as well as annotation effort savings across both datasets. 
Using unsupervised features along with baseline features for sample representation leads to further savings of up to 9% and 10% of the token and concept annotation rates, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Active learning (AL) has recently received considerable attention in clinical information extraction, as it promises to automatically annotate clinical free text with less manual annotation effort than supervised learning approaches, while achieving the same effectiveness (Bostr\u00f6m & Dalianis, 2012; Chen et al., 2015; Chen et al., 2012; Figueroa et al., 2012; Kholghi et al., 2015 Kholghi et al., , 2016 Ohno-Machado et al., 2013) . Active learning is particularly important in the clinical domain because of the costs incurred in preparing high quality annotated data as required by supervised machine learning approaches for a wide range of data analysis applications such as retrieving, reasoning, and reporting. Active learning is a human-in-the-loop process in which at each iteration, a set of informative instances is automatically selected by a query strategy (Settles, 2012) and annotated in order to re-train or update the supervised model (see Figure 1) . The query strategy, as a key component of the AL process, plays an important role in the performance of AL approaches. The learning models at each iteration are typically built using supervised learning algorithms. The associated output of the learning model (i.e. the posterior probability) is usually leveraged in identifying and selecting the next set of informative instances. Hence, it is important to build accurate statistical models early in the process, and at each iteration. 
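The iterative loop described above can be outlined as follows. This is an illustrative sketch only, not the authors' implementation; 'train_model', 'query_strategy', and 'oracle_annotate' are hypothetical placeholders for the supervised learner, the selection criterion, and the human annotator.

```python
# Illustrative sketch of the pool-based active learning loop described above.
# The three callables are hypothetical placeholders, not the authors' code.
def active_learning_loop(labeled, unlabeled, batch_size, n_iterations,
                         train_model, query_strategy, oracle_annotate):
    model = train_model(labeled)
    for _ in range(n_iterations):
        if not unlabeled:
            break
        # Rank the pool by informativeness under the current model and
        # take the next batch (e.g., a least-confidence ranking).
        batch = query_strategy(model, unlabeled)[:batch_size]
        for sample in batch:
            unlabeled.remove(sample)
            labeled.append(oracle_annotate(sample))
        # Re-train (or update) the supervised model on the enlarged set.
        model = train_model(labeled)
    return model, labeled
```

At each iteration the query strategy sees a model retrained on the enlarged labeled set, which is why more accurate early models can improve the selected batches.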
Previous studies have highlighted that the feature set, which is used to represent data instances, is an important factor that affects the stability, robustness, and effectiveness of the learning models built across the AL iterations (Kholghi et al., 2014) .", "cite_spans": [ { "start": 274, "end": 300, "text": "(Bostr\u00f6m & Dalianis, 2012;", "ref_id": "BIBREF0" }, { "start": 301, "end": 319, "text": "Chen et al., 2015;", "ref_id": "BIBREF2" }, { "start": 320, "end": 338, "text": "Chen et al., 2012;", "ref_id": "BIBREF3" }, { "start": 339, "end": 361, "text": "Figueroa et al., 2012;", "ref_id": "BIBREF8" }, { "start": 362, "end": 382, "text": "Kholghi et al., 2015", "ref_id": "BIBREF12" }, { "start": 383, "end": 405, "text": "Kholghi et al., , 2016", "ref_id": "BIBREF13" }, { "start": 406, "end": 432, "text": "Ohno-Machado et al., 2013)", "ref_id": "BIBREF18" }, { "start": 870, "end": 885, "text": "(Settles, 2012)", "ref_id": "BIBREF20" }, { "start": 1690, "end": 1712, "text": "(Kholghi et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 957, "end": 966, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In previous studies of AL for clinical information extraction, a set of common hand-crafted features, such as orthographical and morphological features, was used to build supervised models across AL iterations; these studies suggested that more effective models would lead to reduced annotation rates in addition to improved effectiveness (Kholghi, et al., 2014 (Kholghi, et al., , 2015 (Kholghi, et al., , 2016 . On the other hand, the application of unsupervised features, such as clustering-based representations, distributional word representations, and skip-gram word embeddings has been shown to improve fully supervised clinical information extraction systems (De Vine et al., 2015; Jonnalagadda et al., 2012; Nikfarjam et al., 2015; . 
We can therefore hypothesize that their use within an active learning framework may result in further reduction of manual annotation effort; however, no previous study has formally evaluated this in the clinical information extraction context.", "cite_spans": [ { "start": 328, "end": 350, "text": "(Kholghi, et al., 2014", "ref_id": "BIBREF11" }, { "start": 351, "end": 375, "text": "(Kholghi, et al., , 2015", "ref_id": "BIBREF12" }, { "start": 376, "end": 400, "text": "(Kholghi, et al., , 2016", "ref_id": "BIBREF13" }, { "start": 656, "end": 678, "text": "(De Vine et al., 2015;", "ref_id": "BIBREF6" }, { "start": 679, "end": 705, "text": "Jonnalagadda et al., 2012;", "ref_id": "BIBREF9" }, { "start": 706, "end": 729, "text": "Nikfarjam et al., 2015;", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate the effects of an improved sample representation using word embeddings and sequence features on an active learning framework built for clinical concept extraction. Concept extraction is a significant primary step in extracting meaningful information from clinical free text. It is a type of sequence labeling task in which sequences of terms that express meaningful concepts within a clinical setting, such as medication name, frequency, and dosage, are identified. We examine a wide range of hand-crafted and automatically generated unsupervised features to improve supervised and AL-based concept extraction systems. Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) We validate the impact of word embeddings and sequence features on improving clinical concept extraction systems, as previously studied by De Vine, et al. (2015) , by using an additional dataset (ShARe/CLEF 2013 dataset) for evaluation. 
We generate unsupervised features using a different corpus and then investigate the combinations of features that lead to the most significant improvements on supervised models across the datasets.", "cite_spans": [ { "start": 147, "end": 169, "text": "De Vine, et al. (2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) We demonstrate that selected combinations of unsupervised features lead to more effective models across the AL batches and also less annotation effort compared to common hand-crafted features. We do this across a selected set of query strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The two primary areas that relate to this work are: (i) the use of unsupervised sample representations in clinical information extraction, and (ii) active learning approaches for clinical information extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The recent development of shared datasets, such as i2b2 challenges (Uzuner et al., 2010; Uzuner et al., 2011) and the ShARe/CLEF eHealth Evaluation Lab (Suominen et al., 2013) has stimulated research into new approaches to improve the current clinical information extraction systems. Unsupervised approaches to extract new features for representing data instances have proven to be key to more effective clinical information extraction systems (De Bruijn et al., 2011; De Vine, et al., 2015; Jonnalagadda, et al., 2012; Tang, Cao, et al., 2013) . 
Three main categories of unsupervised word representation approaches have been used in clinical information extraction systems: (1) clustering-based representations using Brown clustering (Brown et al., 1992) , (2) distributional word representation using random indexing (Kanerva et al., 2000) , and (3) word embeddings from neural language models, such as skip-gram word embeddings (Mikolov et al., 2013) .", "cite_spans": [ { "start": 67, "end": 88, "text": "(Uzuner et al., 2010;", "ref_id": "BIBREF26" }, { "start": 89, "end": 109, "text": "Uzuner et al., 2011)", "ref_id": "BIBREF27" }, { "start": 152, "end": 175, "text": "(Suominen et al., 2013)", "ref_id": "BIBREF21" }, { "start": 444, "end": 468, "text": "(De Bruijn et al., 2011;", "ref_id": "BIBREF5" }, { "start": 469, "end": 491, "text": "De Vine, et al., 2015;", "ref_id": "BIBREF6" }, { "start": 492, "end": 519, "text": "Jonnalagadda, et al., 2012;", "ref_id": "BIBREF9" }, { "start": 520, "end": 544, "text": "Tang, Cao, et al., 2013)", "ref_id": "BIBREF23" }, { "start": 735, "end": 755, "text": "(Brown et al., 1992)", "ref_id": "BIBREF1" }, { "start": 819, "end": 841, "text": "(Kanerva et al., 2000)", "ref_id": "BIBREF10" }, { "start": 931, "end": 953, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Sample Representations in Clinical Information Extraction", "sec_num": "2.1" }, { "text": "De Bruijn, et al. (2011) extracted clusteringbased word representation features using Brown clustering and used them along with a set of hand-crafted features in developing their systems for the i2b2/VA 2010 NLP challenge. Their system achieved the highest effectiveness amongst systems in the challenge. In the same challenge, Jonnalagadda, et al. (2012) significantly improved the effectiveness of their system by adding distributional semantic features (using random indexing) to their feature set. Tang, et al. 
(2015) developed a novel approach to generate sequence level features by concatenating the accumulated and normalized word and lexical vectors of each token in a phrase or sentence. Their results demonstrated that unsupervised features generated using word embeddings and sequence level representations led to supervised models of significantly higher effectiveness compared to those built with baseline handcrafted features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Sample Representations in Clinical Information Extraction", "sec_num": "2.1" }, { "text": "Active learning aims to significantly reduce the high costs of manual annotation required to build high-quality annotated data for the training phase of supervised approaches. Kholghi, et al. (2016) developed an active learning based framework to investigate the effect of AL in reducing the burden of manual annotation for clinical information extraction systems. In their framework, they apply state-of-the-art AL query strategies for sequence labelling tasks (i.e., Least Confidence (LC) and information density) to the extraction of clinical concepts. They found that AL achieves the same effectiveness as supervised learning while saving up to 77% of the total number of sequences that require manual annotation. Chen, et al. (2015) proposed new AL query strategies under groupings of uncertainty-based and diversity-based approaches. They conducted a comprehensive empirical evaluation of existing and their proposed AL approaches on the clinical concept extraction task and found that uncertainty sampling-based approaches, such as LC, resulted in a significant reduction of annotation effort compared to diversity-based approaches. 
Kholghi, et al. (2015) also conducted a comprehensive empirical comparison of a wide range of AL query strategies and found that least confidence, an informativeness-based selection criterion, is a better choice for clinical data in terms of effectiveness and annotation effort reduction. They also developed a new query strategy, called Domain Knowledge Informativeness (DKI), which makes use of external clinical resources. They showed that DKI led to a further 14% reduction in token and concept annotation rates compared to LC.", "cite_spans": [ { "start": 174, "end": 196, "text": "Kholghi, et al. (2016)", "ref_id": "BIBREF13" }, { "start": 714, "end": 733, "text": "Chen, et al. (2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning in Clinical Information extraction", "sec_num": "2.2" }, { "text": "We follow the same approach as described by De Vine, et al. (2015) to generate the unsupervised features. Figure 2 depicts our pipeline for generating the unsupervised features; these will be used to augment the supervised hand-crafted features of our classifier.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Unsupervised Sample Representation", "sec_num": "3.1" }, { "text": "The pre-processing step includes lower-casing, substitution of matching regular expressions, and removing punctuation from the training corpus. We then generate word embeddings from the pre-processed corpus using the Skip-gram model (Mikolov, et al., 2013) . We also generate lower-dimensional \"lexical\" vectors from the preprocessed corpus, which encode character n-grams (i.e., uni-grams, bi-grams, tri-grams, tetragrams, and skip-grams). These vectors are used to capture lexicographic patterns. 
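As a minimal sketch of the character n-gram inventory just mentioned (the exact n-gram and skip-gram definitions behind the lexical vectors are an assumption here), the n-grams of a token can be enumerated as:

```python
# Minimal sketch of a character n-gram inventory for a token: uni- to
# tetra-grams plus simple one-skip bigrams. The exact inventory used to
# build the lexical vectors may differ; this is illustrative only.
def char_ngrams(token, max_n=4):
    grams = set()
    for n in range(1, max_n + 1):
        for i in range(len(token) - n + 1):
            grams.add(token[i:i + n])
    # One-skip bigrams: two characters with one position skipped between them.
    for i in range(len(token) - 2):
        grams.add(token[i] + '_' + token[i + 2])
    return grams
```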
A lexical vector is generated for each token by accumulating and normalizing all the n-gram vectors comprising the token.", "cite_spans": [ { "start": 232, "end": 255, "text": "(Mikolov, et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Sample Representation", "sec_num": "3.1" }, { "text": "We then use the word embeddings and the lexical vectors to construct representations for both bi-grams and sentences. First, all the lexical vectors associated with the tokens within a bigram or sentence are accumulated and normalized. The word embeddings for those tokens are also accumulated and normalized. Then, the resulting lexical and word vectors are concatenated and normalized to form a sequence representation for the corresponding bi-gram or sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Sample Representation", "sec_num": "3.1" }, { "text": "We further cluster the word vectors, bi-gram vectors and sentence vectors to generate feature identifiers which are then used in our classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Sample Representation", "sec_num": "3.1" }, { "text": "A key element of the AL process (Figure 1) is the query strategy, which, at each iteration, selects the instances that contain the most useful information (i.e., informative samples) for the learning model. 
We now outline the state-of-the-art AL query strategies for clinical concept extraction (Chen, et al., 2015; Kholghi, et al., 2015) .", "cite_spans": [ { "start": 294, "end": 314, "text": "(Chen, et al., 2015;", "ref_id": "BIBREF2" }, { "start": 315, "end": 337, "text": "Kholghi, et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 32, "end": 42, "text": "(Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Active Learning Query Strategies", "sec_num": "3.2" }, { "text": "Least Confidence (LC) (Culotta & McCallum, 2005) is an uncertainty-based approach in which the model's confidence (certainty) in predicting the label of a sample is the criterion to measure the informativeness of that sample. The model's confidence is estimated based on the posterior probability of the model. The lower the posterior probability, the less confident the model is about the sample's label. The samples for which the model's uncertainty is the highest are the most informative for the AL model. Information Diversity (IDiv) (Kholghi, et al., 2015) is based on the idea that in addition to an informativeness measure, the similarity between samples can be useful to inform the model. 
IDiv selects samples that are informative and diverse (i.e., those that are less similar to the labeled set).", "cite_spans": [ { "start": 22, "end": 48, "text": "(Culotta & McCallum, 2005)", "ref_id": "BIBREF4" }, { "start": 538, "end": 561, "text": "(Kholghi, et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning Query Strategies", "sec_num": "3.2" }, { "text": "Information Density and Diversity (IDD) (Kholghi, et al., 2015) is similar to IDiv with the difference that, to avoid choosing outliers, it also considers the similarity between the samples in the unlabeled set.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Kholghi, et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning Query Strategies", "sec_num": "3.2" }, { "text": "Domain Knowledge Informativeness (DKI) (Kholghi, et al., 2015) leverages the domain knowledge extracted from an external resource such as SNOMED CT, in addition to an informativeness measure, to better inform the model. The domain knowledge in DKI is estimated based on the longest span of a concept that each token belongs to, according to a pre-defined set of semantic types in the external resource. Figure 3 shows a short description of all the features used in this study. 
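As an illustration of the LC criterion outlined above (a hedged sketch; 'predict_confidence' is a hypothetical stand-in for the CRF posterior of the most likely label sequence):

```python
# Sketch of Least Confidence (LC) batch selection: rank unlabeled samples
# by 1 - P(y* | x), i.e., one minus the model's confidence in its most
# likely labeling, and take the most uncertain samples first.
def least_confidence_batch(unlabeled, predict_confidence, batch_size):
    scored = [(1.0 - predict_confidence(x), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:batch_size]]
```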
The baseline feature groups (A, B, C) include orthographical (regular expression patterns), lexical and morphological (suffixes/prefixes and character n-grams), contextual (window of k words), linguistic (POS tags (Toutanova et al., 2003) ), and external semantic features (UMLS and SNOMED CT semantic groups as described in (Kholghi, et al., 2015) ).", "cite_spans": [ { "start": 39, "end": 62, "text": "(Kholghi, et al., 2015)", "ref_id": "BIBREF12" }, { "start": 692, "end": 716, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF25" }, { "start": 803, "end": 826, "text": "(Kholghi, et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 403, "end": 411, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Active Learning Query Strategies", "sec_num": "3.2" }, { "text": "As the previous work demonstrated, learning word embeddings and sequence features from a clinical corpus with an adequate amount of data, and a good coverage of the target data, results in higher effectiveness compared to a general or relatively small clinical corpus (De Vine, et al., 2015) . In this study, we use a clinical corpus composed of the concatenation of the i2b2/VA 2010 train set (Uzuner, et al., 2011) , the Med-Track collection (Voorhees & Tong, 2011) , and the ShARe/CLEF 2013 train set to generate word embeddings.", "cite_spans": [ { "start": 268, "end": 291, "text": "(De Vine, et al., 2015)", "ref_id": "BIBREF6" }, { "start": 394, "end": 416, "text": "(Uzuner, et al., 2011)", "ref_id": "BIBREF27" }, { "start": 444, "end": 467, "text": "(Voorhees & Tong, 2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Groups", "sec_num": "4.1" }, { "text": "In this study, we use an incremental, pool-based, active learning framework (Kholghi, et al., 2014 (Kholghi, et al., , 2016 . 
We build models across AL batches using tuned linear chain Conditional Random Fields (CRFs) (Kholghi, et al., 2014; Lafferty et al., 2001 ) with different feature groups. The implementation of CRFs for both supervised and active learning is based on the MALLET toolkit (McCallum, 2002) . In this study, Random Sampling (RS) is used as a baseline for the AL framework. RS randomly selects samples at each iteration. All active learning and random sampling baseline setups including the initial labeled set and batch size (i.e., both less than 1% of the size of the train set) are based on previous findings (Kholghi, et al., 2015 (Kholghi, et al., , 2016 .", "cite_spans": [ { "start": 76, "end": 98, "text": "(Kholghi, et al., 2014", "ref_id": "BIBREF11" }, { "start": 99, "end": 123, "text": "(Kholghi, et al., , 2016", "ref_id": "BIBREF13" }, { "start": 218, "end": 241, "text": "(Kholghi, et al., 2014;", "ref_id": "BIBREF11" }, { "start": 242, "end": 263, "text": "Lafferty et al., 2001", "ref_id": "BIBREF14" }, { "start": 395, "end": 411, "text": "(McCallum, 2002)", "ref_id": "BIBREF15" }, { "start": 732, "end": 754, "text": "(Kholghi, et al., 2015", "ref_id": "BIBREF12" }, { "start": 755, "end": 779, "text": "(Kholghi, et al., , 2016", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised and Active Learning Settings", "sec_num": "4.2" }, { "text": "We use the annotated train sets developed for the concept extraction task in the i2b2/VA 2010 NLP challenge (Uzuner, et al., 2011) and ShARe/CLEF 2013 eHealth Evaluation Lab (Task 1) (Pradhan et al., 2013) to build learning models across AL batches using different feature groups. The corresponding test set of each dataset is used to evaluate the effect of feature groups on the performance of models built across AL batches (see Table 1 ). 
The i2b2/VA 2010 task comprises the extraction of clinical problems, tests and treatments from clinical reports, while the ShARe/CLEF 2013 eHealth Evaluation Lab (task 1) requires the identification of mentions of disorders.", "cite_spans": [ { "start": 108, "end": 130, "text": "(Uzuner, et al., 2011)", "ref_id": "BIBREF27" }, { "start": 183, "end": 205, "text": "(Pradhan et al., 2013)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 431, "end": 438, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.3" }, { "text": "In our evaluation, the learning model effectiveness is measured by Precision, Recall and F1-measure. The evaluation measures are computed on the test set using MALLET's multi-segmentation evaluator (McCallum, 2002) . To demonstrate statistically significant improvements in F1-measures, we perform a 5x2 cross-validated paired t-test (Dietterich, 1998) . The performance of the AL framework is evaluated using Annotation Rate (AR), which measures the number of Sequences (SAR), Tokens (TAR), and Concepts (CAR) required by the AL framework to reach the target supervised effectiveness. The lower the annotation rate, the better the AL framework is considered to be. AR = # labeled annotation units used by AL / # total labeled annotation units in the train set. Table 2 presents the effectiveness of the supervised CRF models, which employ all the labeled instances in the train sets of the considered datasets, using the different combinations of features described in Figure 3 . 
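As a sketch, the annotation rates defined above (SAR, TAR, CAR) reduce to simple ratios over the corresponding unit counts; the dictionary keys below are illustrative names, not the authors' API:

```python
# Annotation rate per unit type: the fraction of annotation units
# (sequences, tokens, concepts) labeled by AL before reaching the target
# supervised effectiveness. Key names are illustrative.
def annotation_rates(labeled_counts, train_counts):
    return {unit: labeled_counts[unit] / train_counts[unit]
            for unit in ('sequences', 'tokens', 'concepts')}
```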
The highest effectiveness obtained in each feature group is highlighted in bold. Table 2 shows that the inclusion of the unsupervised word and sequence level features improves the effectiveness of the supervised model compared to the best baseline feature set ABC. The effectiveness of the models built using feature groups ABCD, ABCDGH, ABCDGHK, and ABCDGHJKM is selected as the target supervised effectiveness for the subsequent active learning experiments, because these feature groups result in considerable improvements in the supervised models' effectiveness across both datasets.", "cite_spans": [ { "start": 196, "end": 212, "text": "(McCallum, 2002)", "ref_id": "BIBREF15" }, { "start": 332, "end": 350, "text": "(Dietterich, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 770, "end": 777, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 979, "end": 987, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 1071, "end": 1078, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation measures", "sec_num": "4.4" }, { "text": "We now consider the performance of the active learning framework in terms of annotation rates. It is important to note that in these experiments, the models built across AL batches, using selected feature sets, are required to reach the target supervised effectiveness achieved using the corresponding feature set (F1-measures in Table 2 ). Table 3 presents SAR, TAR and CAR for different AL query strategies and for the Random Sampling baseline. 
The most effective feature sets, compared to the baseline feature set ABC (highlighted in gray), for the models built across AL batches using different query strategies are highlighted in bold.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 337, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 341, "end": 348, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Active Learning Performance", "sec_num": "5.2" }, { "text": "Word and sequence representations result in less annotation effort across all query strategies in both datasets compared to the hand-crafted feature set. We observe 9% and 10% reduction in token (TAR) and concept (CAR) annotation rates for the IDiv query strategy (highlighted in orange) when using ABCDGH feature set com-pared to the baseline ABC feature set in ShARe/CLEF 2013 dataset. The same feature set (ABCDGH) results in 4% and 6% less TAR and CAR in i2b2/VA 2010 dataset (highlighted in green) compared to the baseline ABC feature set when using LC as the query strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Learning Performance", "sec_num": "5.2" }, { "text": "Generally, the addition of word level features (D, G, and H) gives the best results. Also, on occasions, the addition of sequence level features (J, K, and M) gives further improvements, although not consistently. 
The previous study also showed that the addition of sequence level features results in smaller improvements in supervised models' effectiveness than the addition of word level features (De Vine, et al., 2015) .", "cite_spans": [ { "start": 397, "end": 420, "text": "(De Vine, et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning Performance", "sec_num": "5.2" }, { "text": "The results from our empirical evaluation confirm the previous findings suggesting that the use of unsupervised features significantly improves clinical information extraction systems in a supervised learning setting (De Vine, et al., 2015) . Here we have further studied the use of these features within an active learning framework.", "cite_spans": [ { "start": 217, "end": 240, "text": "(De Vine, et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our results highlight that the use of unsupervised word and sequence level features not only increases the effectiveness of the models built across AL batches, but also leads to lower manual annotation efforts in the active learning framework compared to the baseline feature set ABC (no unsupervised features). (Table 3 reports annotation rates for all active learning query strategies and the baseline RS using different sample representations (feature groups); results for the baseline feature set (ABC) are highlighted in gray.) We can assume that the reason is that the better the sample representation, the stronger the updated model is in terms of effectiveness at each iteration of active learning. This means that AL query strategies use a better updated model at each iteration and therefore choose a better set of informative instances. Hence, by using these data representations, AL requires a smaller number of sequences, tokens, and concepts to reach the target supervised effectiveness. This, in turn, translates into lower annotation rates. 
However, not all combinations of different features always lead to lower annotation rates in the AL framework (Kholghi, et al., 2014) . We thus next study the trade-off between effectiveness (F1 measure from Table 2 ) and annotation rate (CAR from Table 3) to better understand the performance of four selected feature groups (ABC, ABCD, ABCDGH, and ABCDGHK). Figure 4 demonstrates the concept annotation rate (CAR) values (horizontal axis) for the best performing query strategy, in each dataset, when reaching: (1) the corresponding target supervised effectiveness for each feature set shown by , and (2) a fixed effectiveness for all feature sets shown by . These values are depicted against the effectiveness when training on the full train set (vertical axis) for each feature set (F1 measure from Table 2 ). We present these for LC and IDiv for the i2b2/VA 2010 and ShARe/CLEF datasets, respectively, as they achieved the lowest concept annotation rates as discussed in section 5.2. The fixed effectiveness for all feature sets is determined as follows: F1 measure = 0.80 for i2b2/VA 2010 and F1 measure = 0.70 for ShARe/CLEF 2013. The aim of this analysis is to verify whether improvements in terms of supervised effectiveness when using different feature sets (F1 measure from Table 2 ) necessarily scale into improvements in CAR (i.e., lower annotation effort) and whether the same behavior is observed in terms of annotation effort reduction when a fixed F1 measure value is considered for all feature groups. It is Figure 4 . Analysis of concept annotation rates (CAR) (horizontal axis) at (1) target supervised effectiveness for each feature set ( ), and (2) a fixed effectiveness for all feature sets ( ) with respect to the corresponding F1 measure for each feature set from Table 2 (vertical axis). 
(a) i2b2/VA 2010; (b) ShARe/CLEF 2013.", "cite_spans": [ { "start": 1159, "end": 1182, "text": "(Kholghi, et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 3", "ref_id": null }, { "start": 1257, "end": 1264, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1409, "end": 1417, "text": "Figure 4", "ref_id": null }, { "start": 1854, "end": 1861, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 2337, "end": 2344, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 2578, "end": 2586, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "important to note that the higher the F1 measure and the lower the CAR, the better the feature set. Hence, those points towards the left upper corner of both plots in Figure 4 perform better both in terms of effectiveness and annotation rate. Points marked with the same symbol should be compared to each other. In terms of target supervised effectiveness ( ), Figure 4 shows that feature groups ABCDGH and ABCDGHK outperform the other feature groups in i2b2/VA 2010 dataset, both in terms of effectiveness (F1 measure) and annotation rate (CAR). While ABCDGH achieves the best CAR (i.e., the lowest) in ShARe/CLEF 2013 dataset, it is not the best performing feature group in terms of supervised effectiveness. The highest F1 measure was achieved by feature group ABCDGHK in this dataset. The same pattern is observed when considering a fixed F1 measure value ( ). Hence, the feature set that leads to a supervised model with the highest effectiveness (F1 measure) does not always lead to an AL model with the lowest annotation rate. 
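To make the CAR trade-off concrete, here is a toy computation with hypothetical counts (not taken from Table 3): CAR is the fraction of the concepts in the training pool that must be manually annotated before AL reaches the target supervised F1.

```python
# Concept annotation rate (CAR) as a fraction of the training pool.
# The counts below are hypothetical, chosen only to illustrate the metric.

def concept_annotation_rate(concepts_annotated, total_concepts):
    """Fraction of pool concepts annotated when the target F1 is reached."""
    return concepts_annotated / total_concepts

# Suppose one feature set reaches the target F1 after 5,600 of 20,000
# concepts have been annotated, and a richer feature set after 4,200:
baseline_car = concept_annotation_rate(5_600, 20_000)
richer_car = concept_annotation_rate(4_200, 20_000)
print(f"{baseline_car:.0%} vs {richer_car:.0%}")  # 28% vs 21%
```

A lower CAR means lower annotation effort at the same target effectiveness, which is why a point further to the left in Figure 4 is better even when its F1 measure is not the highest.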
These results demonstrate that improving the supervised models built across the AL batches does not necessarily guarantee a reduction in annotation rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Interestingly, the updated model plays no role in selecting the next batch of instances under the Random Sampling baseline, as RS selects instances at random at each iteration. Yet, a better feature set (e.g., ABCDGHJKM) still helps RS to reduce the annotation rate. If we compare the updated models at the same batch of RS under different data representations, for instance ABC vs. ABCDGHJKM, we observe that even when random instances are added to the labeled set, more information is injected into the updated model with the feature set ABCDGHJKM than with ABC. This suggests that RS with unsupervised features achieves a lower annotation rate than with the ABC feature set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "These results can be summarized into the following observations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "\u2022 A better sample representation using unsupervised features leads to higher effectiveness and less manual annotation effort not only in an AL framework, but also under a Random Sampling approach. \u2022 Although high effectiveness and low annotation effort are related, the feature combinations that yield the highest effectiveness do not necessarily lead to the lowest annotation effort.
\u2022 The combination of word level features (D, G, and H) with the baseline handcrafted features, i.e., ABCDGH, generally performs better than the other feature combinations across all AL query strategies and datasets, in terms of both effectiveness and annotation rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "This paper presented an analysis of different data representations using a wide range of feature sets and investigated their impact on active learning performance in terms of both model effectiveness and annotation effort reduction. We believe this is the first study to analyze the effect of unsupervised sample representations based on word embeddings and sequence level features in an active learning framework built for clinical information extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The empirical results highlighted the benefits of unsupervised features in achieving higher effectiveness and lower manual annotation effort in our AL framework. Word and sequence level features significantly increase the effectiveness of the models built across AL batches. In addition, compared to the baseline feature set, they reduce the manual annotation effort, requiring a smaller number of sequences, tokens, and concepts to reach the target supervised performance. Hence, the manual annotation of clinical free text for information extraction applications can be accelerated using an improved sample representation in an active learning framework.
While this could seem intuitive, we have also shown that improvements demonstrated in a fully supervised framework do not necessarily translate into improvements in an active learning framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "De-identifying health records by means of active learning", "authors": [ { "first": "H", "middle": [], "last": "Bostr\u00f6m", "suffix": "" }, { "first": "H", "middle": [], "last": "Dalianis", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "90--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bostr\u00f6m, H., & Dalianis, H. (2012). De-identifying health records by means of active learning. Recall (micro), 97(97.55), 90-97.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Class-based ngram models of natural language", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Desouza", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Comput. Linguist", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., deSouza, P. V., Mercer, R. L., Pietra, V. J. D., & Lai, J. C. (1992). Class-based n- gram models of natural language. Comput. 
Linguist., 18(4), 467-479.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A study of active learning methods for named entity recognition in clinical text", "authors": [ { "first": "Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "T", "middle": [ "A" ], "last": "Lasko", "suffix": "" }, { "first": "Q", "middle": [], "last": "Mei", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Denny", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Journal of Biomedical Informatics", "volume": "58", "issue": "", "pages": "11--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y., Lasko, T. A., Mei, Q., Denny, J. C., & Xu, H. (2015). A study of active learning methods for named entity recognition in clinical text. Journal of Biomedical Informatics, 58, 11-18.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Applying active learning to assertion classification of concepts in clinical text", "authors": [ { "first": "Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [], "last": "Mani", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2012, "venue": "Journal of Biomedical Informatics", "volume": "45", "issue": "2", "pages": "265--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y., Mani, S., & Xu, H. (2012). Applying active learning to assertion classification of concepts in clinical text. 
Journal of Biomedical Informatics, 45(2), 265-272.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reducing labeling effort for structured prediction tasks", "authors": [ { "first": "A", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Culotta, A., & McCallum, A. (2005). Reducing labeling effort for structured prediction tasks. Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI) (pp. 746-751): AAAI Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010", "authors": [ { "first": "B", "middle": [], "last": "De Bruijn", "suffix": "" }, { "first": "C", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "S", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2011, "venue": "Journal of the American Medical Informatics Association", "volume": "18", "issue": "5", "pages": "557--562", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Bruijn, B., Cherry, C., Kiritchenko, S., Martin, J., & Zhu, X. (2011). Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010. 
Journal of the American Medical Informatics Association, 18(5), 557-562.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analysis of word embeddings and sequence features for clinical information extraction", "authors": [ { "first": "L", "middle": [], "last": "De Vine", "suffix": "" }, { "first": "M", "middle": [], "last": "Kholghi", "suffix": "" }, { "first": "G", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "L", "middle": [], "last": "Sitbon", "suffix": "" }, { "first": "A", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Vine, L., Kholghi, M., Zuccon, G., Sitbon, L., & Nguyen, A. (2015). Analysis of word embeddings and sequence features for clinical information extraction. Proceedings of Australasian Language Technology Association Workshop (pp. 21-30).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Approximate statistical tests for comparing supervised classification learning algorithms", "authors": [ { "first": "T", "middle": [ "G" ], "last": "Dietterich", "suffix": "" } ], "year": 1998, "venue": "Neural computation", "volume": "10", "issue": "7", "pages": "1895--1923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. 
Neural computation, 10(7), 1895-1923.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Active learning for clinical text classification: is it better than random sampling", "authors": [ { "first": "R", "middle": [ "L" ], "last": "Figueroa", "suffix": "" }, { "first": "Q", "middle": [], "last": "Zeng-Treitler", "suffix": "" }, { "first": "L", "middle": [ "H" ], "last": "Ngo", "suffix": "" }, { "first": "S", "middle": [], "last": "Goryachev", "suffix": "" }, { "first": "E", "middle": [ "P" ], "last": "Wiechmann", "suffix": "" } ], "year": 2012, "venue": "Journal of the American Medical Informatics Association", "volume": "19", "issue": "5", "pages": "809--816", "other_ids": {}, "num": null, "urls": [], "raw_text": "Figueroa, R. L., Zeng-Treitler, Q., Ngo, L. H., Goryachev, S., & Wiechmann, E. P. (2012). Active learning for clinical text classification: is it better than random sampling? Journal of the American Medical Informatics Association, 19(5), 809-816.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Enhancing clinical concept extraction with distributional semantics", "authors": [ { "first": "S", "middle": [], "last": "Jonnalagadda", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "S", "middle": [], "last": "Wu", "suffix": "" }, { "first": "G", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2012, "venue": "Journal of Biomedical Informatics", "volume": "45", "issue": "1", "pages": "129--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonnalagadda, S., Cohen, T., Wu, S., & Gonzalez, G. (2012). Enhancing clinical concept extraction with distributional semantics. 
Journal of Biomedical Informatics, 45(1), 129-140.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Random indexing of text samples for latent semantic analysis", "authors": [ { "first": "P", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "J", "middle": [], "last": "Kristofersson", "suffix": "" }, { "first": "A", "middle": [], "last": "Holst", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 22nd annual conference of the cognitive science society", "volume": "1036", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanerva, P., Kristofersson, J., & Holst, A. (2000). Random indexing of text samples for latent semantic analysis. Proceedings of the 22nd annual conference of the cognitive science society (Vol. 1036).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Factors influencing robustness and effectiveness of conditional random fields in active learning frameworks", "authors": [ { "first": "M", "middle": [], "last": "Kholghi", "suffix": "" }, { "first": "L", "middle": [], "last": "Sitbon", "suffix": "" }, { "first": "G", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "A", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2014, "venue": "Conferences in Research and Practice in Information Technology", "volume": "158", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kholghi, M., Sitbon, L., Zuccon, G., & Nguyen, A. (2014). Factors influencing robustness and effectiveness of conditional random fields in active learning frameworks. Proceedings of the 12th Australasian Data Mining Conference (AusDM 2014) (Vol. 
158): Conferences in Research and Practice in Information Technology, Australian Computer Society Inc.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "External knowledge and query strategies in active learning: a study in clinical information extraction", "authors": [ { "first": "M", "middle": [], "last": "Kholghi", "suffix": "" }, { "first": "L", "middle": [], "last": "Sitbon", "suffix": "" }, { "first": "G", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "A", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "143--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kholghi, M., Sitbon, L., Zuccon, G., & Nguyen, A. (2015). External knowledge and query strategies in active learning: a study in clinical information extraction. Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (pp. 143-152): ACM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Active learning: a step towards automating medical concept extraction", "authors": [ { "first": "M", "middle": [], "last": "Kholghi", "suffix": "" }, { "first": "L", "middle": [], "last": "Sitbon", "suffix": "" }, { "first": "G", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "A", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2016, "venue": "Journal of the American Medical Informatics Association", "volume": "23", "issue": "2", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kholghi, M., Sitbon, L., Zuccon, G., & Nguyen, A. (2016). Active learning: a step towards automating medical concept extraction. 
Journal of the American Medical Informatics Association, 23(2), 289-296.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, J. D., McCallum, A., & Pereira, F. C. N. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Proceedings of the Eighteenth International Conference on Machine Learning (ICML) (pp. 282-289).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MALLET: A Machine Learning for Language Toolkit", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCallum, A. K. (2002). MALLET: A Machine Learning for Language Toolkit. Retrieved from http://mallet.cs.umass.edu", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). 
Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features", "authors": [ { "first": "A", "middle": [], "last": "Nikfarjam", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "K", "middle": [], "last": "O'connor", "suffix": "" }, { "first": "R", "middle": [], "last": "Ginn", "suffix": "" }, { "first": "G", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2015, "venue": "Journal of the American Medical Informatics Association", "volume": "22", "issue": "3", "pages": "671--681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikfarjam, A., Sarker, A., O'Connor, K., Ginn, R., & Gonzalez, G. (2015). Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, 22(3), 671-681.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Natural language processing: algorithms and tools to extract computable information from EHRs and from the biomedical literature", "authors": [ { "first": "L", "middle": [], "last": "Ohno-Machado", "suffix": "" }, { "first": "P", "middle": [], "last": "Nadkarni", "suffix": "" }, { "first": "K", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2013, "venue": "Journal of the American Medical Informatics Association", "volume": "20", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ohno-Machado, L., Nadkarni, P., & Johnson, K. (2013). Natural language processing: algorithms and tools to extract computable information from EHRs and from the biomedical literature. 
Journal of the American Medical Informatics Association, 20(5), 805.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Task 1: ShARe/CLEF ehealth evaluation lab 2013", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "N", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "B", "middle": [], "last": "South", "suffix": "" }, { "first": "D", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "L", "middle": [], "last": "Christensen", "suffix": "" }, { "first": "A", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "W", "middle": [], "last": "Chapman", "suffix": "" }, { "first": "G", "middle": [], "last": "Savova", "suffix": "" } ], "year": 2013, "venue": "CLEF 2013 Evaluation Labs and Workshops: Working Notes: CLEF", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S., Elhadad, N., South, B., Martinez, D., Christensen, L., Vogel, A., Suominen, H., Chapman, W., & Savova, G. (2013). Task 1: ShARe/CLEF ehealth evaluation lab 2013. CLEF 2013 Evaluation Labs and Workshops: Working Notes: CLEF.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Active learning", "authors": [ { "first": "B", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2012, "venue": "", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Settles, B. (2012). Active learning (Vol. 
6): Morgan & Claypool Publishers.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Overview of the ShARe/CLEF eHealth Evaluation Lab", "authors": [ { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "S", "middle": [], "last": "Salanter\u00e4", "suffix": "" }, { "first": "S", "middle": [], "last": "Velupillai", "suffix": "" }, { "first": "W", "middle": [], "last": "Chapman", "suffix": "" }, { "first": "G", "middle": [], "last": "Savova", "suffix": "" }, { "first": "N", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "B", "middle": [], "last": "South", "suffix": "" }, { "first": "D", "middle": [], "last": "Mowery", "suffix": "" }, { "first": "G", "middle": [ "F" ], "last": "Jones", "suffix": "" }, { "first": "J", "middle": [], "last": "Leveling", "suffix": "" }, { "first": "L", "middle": [], "last": "Kelly", "suffix": "" }, { "first": "L", "middle": [], "last": "Goeuriot", "suffix": "" }, { "first": "D", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "G", "middle": [], "last": "Zuccon", "suffix": "" } ], "year": 2013, "venue": "Information Access Evaluation. Multilinguality, Multimodality, and Visualization", "volume": "8138", "issue": "", "pages": "212--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suominen, H., Salanter\u00e4, S., Velupillai, S., Chapman, W., Savova, G., Elhadad, N., Pradhan, S., South, B., Mowery, D., Jones, G. F., Leveling, J., Kelly, L., Goeuriot, L., Martinez, D., & Zuccon, G. (2013). Overview of the ShARe/CLEF eHealth Evaluation Lab 2013. In P. Forner, H. M\u00fcller, R. Paredes, P. Rosso & B. Stein (Eds.), Information Access Evaluation. Multilinguality, Multimodality, and Visualization (Vol. 8138, pp. 
212-231):", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recognizing clinical entities in hospital discharge summaries using structural support vector machines with word representation features", "authors": [ { "first": "B", "middle": [], "last": "Tang", "suffix": "" }, { "first": "H", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "BMC Medical Informatics and Decision Making", "volume": "13", "issue": "1", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tang, B., Cao, H., Wu, Y., Jiang, M., & Xu, H. (2013). Recognizing clinical entities in hospital discharge summaries using structural support vector machines with word representation features. BMC Medical Informatics and Decision Making, 13(1), 1- 10.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Recognizing and Encoding Discorder Concepts in Clinical Text using Machine Learning and Vector Space Model", "authors": [ { "first": "B", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Denny", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "Workshop of ShARe/CLEF eHealth Evaluation Lab", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tang, B., Wu, Y., Jiang, M., Denny, J. C., & Xu, H. (2013). Recognizing and Encoding Discorder Concepts in Clinical Text using Machine Learning and Vector Space Model. 
Workshop of ShARe/CLEF eHealth Evaluation Lab 2013.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toutanova, K., Klein, D., Manning, C. D., & Singer, Y. (2003). Feature-rich part-of-speech tagging with a cyclic dependency network. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (Vol. 1, pp. 173-180): Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Extracting medication information from clinical text", "authors": [ { "first": "\u00d6", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "I", "middle": [], "last": "Solti", "suffix": "" }, { "first": "E", "middle": [], "last": "Cadag", "suffix": "" } ], "year": 2010, "venue": "Journal of the American Medical Informatics Association", "volume": "17", "issue": "5", "pages": "514--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uzuner, \u00d6., Solti, I., & Cadag, E. (2010). Extracting medication information from clinical text. 
Journal of the American Medical Informatics Association, 17(5), 514-518.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text", "authors": [ { "first": "\u00d6", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "B", "middle": [ "R" ], "last": "South", "suffix": "" }, { "first": "S", "middle": [], "last": "Shen", "suffix": "" }, { "first": "S", "middle": [ "L" ], "last": "Duvall", "suffix": "" } ], "year": 2011, "venue": "Journal of the American Medical Informatics Association", "volume": "18", "issue": "5", "pages": "552--556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uzuner, \u00d6., South, B. R., Shen, S., & DuVall, S. L. (2011). 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5), 552-556.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Overview of the TREC 2011 medical records track", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Voorhees", "suffix": "" }, { "first": "R", "middle": [], "last": "Tong", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Text REtrieval Conference (TREC)", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voorhees, E. M., & Tong, R. (2011). Overview of the TREC 2011 medical records track. Proceedings of Text REtrieval Conference (TREC) (Vol. 
4).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Active learning process.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "(2013) compared different word representation features extracted from Brown clustering and random indexing and found that they are complementary and when combined with common basic features the effectiveness of clinical", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Word and sequence level feature generation process. information extraction systems increased. De Vine, et al.", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Description of the features used in this study.", "num": null, "uris": null }, "TABREF0": { "content": "
 | Train Set |  | Test Set | 
 | #doc | #seq | #doc | #seq
i2b2/VA 2010 | 349 | 30,673 | 477 | 45,025
ShARe/CLEF 2013 | 200 | 10,171 | 100 | 9,273
", "text": "Number of documents (#doc) and sequences (#seq) in the train and test sets of the two considered datasets.", "num": null, "type_str": "table", "html": null }, "TABREF1": { "content": "
 | i2b2/VA 2010 | ShARe/CLEF 2013
Features | Precision | Recall | F1 measure | Precision | Recall | F1 measure
Word | 0.6571 | 0.6011 | 0.6279 | 0.2225 | 0.4317 | 0.2936
Baseline:
A | 0.8404 | 0.8031 | 0.8213 | 0.7858 | 0.6461 | 0.7091
B | 0.6167 | 0.6006 | 0.6085 | 0.5157 | 0.4027 | 0.4523
C | 0.7691 | 0.6726 | 0.7192 | 0.7022 | 0.5118 | 0.5921
BC | 0.7269 | 0.7120 | 0.7194 | 0.7163 | 0.5180 | 0.6012
AB | 0.8368 | 0.8038 | 0.8200 | 0.7832 | 0.6472 | 0.7087
AC | 0.8378 | 0.8059 | 0.8216 | 0.8035 | 0.6808 | 0.7371
ABC | 0.8409 | 0.8066 | 0.8234 | 0.8095 | 0.6804 | 0.7394
Word Level:
D | 0.7773 | 0.7393 | 0.7578 | 0.6815 | 0.5581 | 0.6137
GH | 0.8056 | 0.7547 | 0.7793 | 0.7225 | 0.5625 | 0.6325
ABCD | 0.8424 | 0.8127 | 0.8273 | 0.8042 | 0.6916 | 0.7436
ABCDGH | 0.8502 | 0.8124 | 0.8309* | 0.8092 | 0.6898 | 0.7448*
Sequence Level:
J | 0.6551 | 0.6242 | 0.6393 | 0.6564 | 0.4054 | 0.5012
K | 0.6852 | 0.6433 | 0.6636 | 0.6305 | 0.4189 | 0.5033
ABCDGHJ | 0.8488 | 0.8126 | 0.8303* | 0.7992 | 0.6916 | 0.7415
ABCDGHK | 0.8495 | 0.8132 | 0.8309* | 0.8111 | 0.6900 | 0.7457*
ABCDGHJK | 0.8449 | 0.8116 | 0.8279 | 0.8093 | 0.6889 | 0.7443
L | 0.7361 | 0.6169 | 0.6713 | 0.7015 | 0.3854 | 0.4975
M | 0.7531 | 0.6358 | 0.6895 | 0.6720 | 0.3924 | 0.4955
ABCDGHJKL | 0.8458 | 0.8086 | 0.8268 | 0.8068 | 0.6881 | 0.7427
ABCDGHJKM | 0.8488 | 0.8113 | 0.8296* | 0.8105 | 0.6900 | 0.7454*
ABCDGHJKLM | 0.8447 | 0.8062 | 0.8250 | 0.8117 | 0.6873 | 0.7444
", "text": "Supervised target performance for all sets of features. Statistically significant improvements (p<0.05) for F1 when compared with ABC are indicated by *.", "num": null, "type_str": "table", "html": null } } } }