{ "paper_id": "P19-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:21:59.365807Z" }, "title": "A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag Hierarchy", "authors": [ { "first": "Genady", "middle": [], "last": "Beryozkin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Tel Aviv", "location": { "country": "Israel" } }, "email": "genady@google.com" }, { "first": "Yoel", "middle": [], "last": "Drori", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Tel Aviv", "location": { "country": "Israel" } }, "email": "" }, { "first": "Oren", "middle": [], "last": "Gilon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Tel Aviv", "location": { "country": "Israel" } }, "email": "ogilon@google.com" }, { "first": "Tzvika", "middle": [], "last": "Hartman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Tel Aviv", "location": { "country": "Israel" } }, "email": "tzvika@google.com" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Tel Aviv", "location": { "country": "Israel" } }, "email": "szpektor@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets.", "pdf_parse": { "paper_id": "P19-1014", "_pdf_hash": "", "abstract": [ { "text": "We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Named Entity Recognition (NER) has seen significant progress in the last couple of years with the application of Neural Networks to the task. 
Such models achieve state-of-the-art performance with little or no manual feature engineering (Collobert et al., 2011; Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Dernoncourt et al., 2017) . Following this success, more complex NER setups are approached with neural models, among them domain adaptation (Qu et al., 2016; He and Sun, 2017; Dong et al., 2017) .", "cite_spans": [ { "start": 236, "end": 260, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF7" }, { "start": 261, "end": 280, "text": "Huang et al., 2015;", "ref_id": "BIBREF16" }, { "start": 281, "end": 301, "text": "Lample et al., 2016;", "ref_id": "BIBREF19" }, { "start": 302, "end": 320, "text": "Ma and Hovy, 2016;", "ref_id": "BIBREF25" }, { "start": 321, "end": 346, "text": "Dernoncourt et al., 2017)", "ref_id": "BIBREF8" }, { "start": 461, "end": 478, "text": "(Qu et al., 2016;", "ref_id": "BIBREF29" }, { "start": 479, "end": 496, "text": "He and Sun, 2017;", "ref_id": "BIBREF14" }, { "start": 497, "end": 515, "text": "Dong et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we study one type of domain adaptation for NER, denoted here heterogeneous tagsets. In this variant, samples from the test set are not available at training time. Furthermore, the test tag-set differs from each training tag-set. However every test tag can be represented either as a single training tag or as a combination of several training tags. This information is given in the form of a hypernym hierarchy over all tags, training and test (see Fig. 1 ).", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 468, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This setting arises when different schemes are used for annotating multiple datasets for the same task. This often occurs in the medical domain, where healthcare providers use customized tagsets to create their own private test sets (Shickel et al., 2017; Lee et al., 2018) . Another scenario is selective annotation, as in the case of extending an existing tag-set, e.g. {'Name', 'Location'}, with another tag, e.g. 'Date'. To save annotation effort, new training data is labeled only with the new tag. This case of disjoint tag-sets is also discussed in the work of Greenberg et al. (2018) . A similar case is extending a training-set with new examples in which only rare tags are annotated. In domains where training data is scarce, out-ofdomain datasets annotated with infrequent tags may be very valuable.", "cite_spans": [ { "start": 233, "end": 255, "text": "(Shickel et al., 2017;", "ref_id": "BIBREF32" }, { "start": 256, "end": 273, "text": "Lee et al., 2018)", "ref_id": "BIBREF20" }, { "start": 568, "end": 591, "text": "Greenberg et al. (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A naive approach concatenates all trainingsets, ignoring the differences between the tagging schemes in each example. A different approach would be to learn to tag with multiple training tagsets. Then, in a post-processing step, the predictions from the different tag-sets need to be consolidated into a single test tag sequence, resolving tagging differences along the way. We study two such models. The first model learns an independent NER model for each training tag-set. 
The second model applies the multitasking (MTL) (Collobert et al., 2011; Ruder, 2017) paradigm, in which a shared latent representation of the input text is fed into separate tagging layers.", "cite_spans": [ { "start": 524, "end": 548, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF7" }, { "start": 549, "end": 561, "text": "Ruder, 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The above models require heuristic postprocessing to consolidate the different predicted tag sequences. To overcome this limitation, we propose a model that incorporates the given tag hierarchy within the neural NER model. Specifically, this model learns to predict a tag sequence only over the fine-grained tags in the hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Tag-set 1 (T 1 ): Name, Street, City, Hospital, Age>90 Tag-set 2 (T 2 ): First Name, Last Name, Address, Age Tag-set 3 (T 3 ) Figure 1 : A tag hierarchy for three tag-sets.", "cite_spans": [ { "start": 24, "end": 31, "text": "Street,", "ref_id": null }, { "start": 32, "end": 37, "text": "City,", "ref_id": null }, { "start": 38, "end": 47, "text": "Hospital,", "ref_id": null }, { "start": 48, "end": 54, "text": "Age>90", "ref_id": null } ], "ref_spans": [ { "start": 109, "end": 125, "text": "Tag-set 3 (T 3 )", "ref_id": "TABREF5" }, { "start": 126, "end": 134, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At training time, gradients on each dataset-specific labeled examples are propagated as gradients on plausible fine-grained tags. At inference time the model predicts a single sequence of fine-grained tags, which are then mapped to the test tag-set by traversing the tag hierarchy. Importantly, all tagging decisions are performed in the model without the need for a post-processing consolidation step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conducted two experiments. The first evaluated the extension of a tag-set with a new tag via selective annotation of a new dataset with only the extending tag, using datasets from the medical and news domains. In the second experiment we integrated two full tag-sets from the medical domain with their training data while evaluating on a third test tag-set. The results show that the model which incorporates the tag-hierarchy is more robust compared to a combination of independent models or MTL, and typically outperforms them. This is especially evident when many tagging collisions need to be settled at post-processing. In these cases, the performance gap in favor of the tag-hierarchy model is large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal in the heterogeneous tag-sets domain adaptation task is to learn an NER model M that given an input token sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "x = {x i } n 1 infers a tag sequence y = {y i } n 1 = M (x) over a test tag-set T s , \u2200 i y i \u2208T s . To learn the model, K train- ing datasets {DS r k } K k=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "are provided, each labeled with its own tag-set T r k . 
Superscripts 's' and 'r' stand for 'test' and 'training', respectively. In this task, no training tag-set is identical to the test tagset T s by itself. However, all tags in T s can be covered by combining the training tag-sets {T r k } K k=1 . This information is provided in the form of a directed acyclic graph (DAG) representing hyper- nymy relations between all training and test tags. Fig. 1 illustrates such a hierarchy.", "cite_spans": [], "ref_spans": [ { "start": 447, "end": 453, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "As mentioned above, an example scenario is selective annotation, in which an original tag-set is extended with a new tag t, each with its own training data, and the test tag-set is their union. But, some setups require combinations other than a simple union, e.g. covering the test tag 'Address' with the finer training tags 'Street' and 'City', each from a different tag-set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "This task is different from inductive domain adaptation (Pan and Yang, 2010; Ruder, 2017) , in which the tag-sets are different but the tasks differ as well (e.g. NER and parsing), with no need to map the outcomes to a single tag-set at test time.", "cite_spans": [ { "start": 56, "end": 76, "text": "(Pan and Yang, 2010;", "ref_id": "BIBREF26" }, { "start": 77, "end": 89, "text": "Ruder, 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "As the underlying architecture shared by all models in this paper, we follow the neural network proposed by Lample et al. (2016) , which achieved state-of-the-art results on NER. In this model, depicted in Fig. 2 , each input token x i is represented as a combination of: (a) a one-hot vector x w i , mapping the input to a fixed word vocabulary, and (b) a sequence of one-hot vectors {x c i,j } n i j=1 , representing the input word's character sequence.", "cite_spans": [ { "start": 108, "end": 128, "text": "Lample et al. (2016)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 206, "end": 212, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural network for NER", "sec_num": "2.2" }, { "text": "Each input token x i is first embedded in latent space by applying both a word-embedding matrix, we i = E x w i , and a character-based embedding layer ce i = CharBiRNN({x c i,j }) (Ling et al., 2015) . This output of this step is e i = ce i \u2295 we i , where \u2295 stands for vector concatenation. Then, the embedding vector sequence {e i } n is re-encoded in context using a bidirectional RNN layer {r i } n 1 = BiRNN({e i } n 1 ) (Schuster and Paliwal, 1997) . The sequence {r i } n 1 constitutes the latent representation of the input text.", "cite_spans": [ { "start": 181, "end": 200, "text": "(Ling et al., 2015)", "ref_id": "BIBREF22" }, { "start": 426, "end": 454, "text": "(Schuster and Paliwal, 1997)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Neural network for NER", "sec_num": "2.2" }, { "text": "Finally, each re-encoded vector r i is projected to tag space for the target tag-set T ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural network for NER", "sec_num": "2.2" }, { "text": "t i = P r i , where |t i | = |T |. 
The sequence {t i } n 1 is then taken as input to a CRF layer (Lafferty et al., 2001 ), which maintains a global tag transition matrix. At inference time, the model output is y = M (x), the most probable CRF tag sequence for input x.", "cite_spans": [ { "start": 40, "end": 62, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Neural network for NER", "sec_num": "2.2" }, { "text": "One way to learn a model for the heterogeneous tag-sets setting is to train a base NER (Sec. 2.2) on the concatenation of all training-sets, predicting tags from the union of all training tag-sets. In our experiments, this model underperformed because it treats each training example as fully tagged despite being tagged only with the tags belonging to the training-set from which the example is taken (see Sec. 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models for Multiple Tagging Layers", "sec_num": "3" }, { "text": "We next present two models that instead learn to tag each training tag-set separately. In the first model the outputs from independent base models, each trained on a different tag-set, are merged. The second model utilizes the multitasking approach to train separate tagging layers that share a single text representation layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models for Multiple Tagging Layers", "sec_num": "3" }, { "text": "In this model, we train a separate NER model for each training set, resulting in K models {M k } K k=1 . At test time, each model predicts a sequence y k = M k (x) over the corresponding tag-set T r k . The sequences {y k } K k=1 are consolidated into a single sequence y s over the test tag-set T s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining independent models", "sec_num": "3.1" }, { "text": "We perform this consolidation in a post-processing step. First, each predicted tag y k,i is mapped to the test tag-set as y s k,i . We employ the provided tag hierarchy for this mapping by traversing it starting from y k,i until a test tag is reached. Then, for every token x i , we consider the test tags predicted at position i by the different models M (x i ) = {y s k,i |y s k,i \u2260 'Other'}. Cases where M (x i ) contains more than one tag are called collisions. Models must consolidate collisions, selecting a single predicted tag for x i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining independent models", "sec_num": "3.1" }, { "text": "We introduce three different consolidation methods. The first is to randomly select a tag from M (x i ). The second chooses the tag that originates from the tag sequence y k with the highest CRF probability score. The third computes the marginal CRF tag probability for each tag and selects the one with the highest probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining independent models", "sec_num": "3.1" }, { "text": "Lately, several works explored using multitasking (MTL) for inductive transfer learning within a neural architecture (Collobert and Weston, 2008; Chen et al., 2016; Peng and Dredze, 2017) . Such algorithms jointly train a single model to solve different NLP tasks, such as NER, sentiment analysis and text classification. 
The various tasks share the same text representation layer in the model but maintain a separate tagging layer per task.", "cite_spans": [ { "start": 117, "end": 145, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF6" }, { "start": 146, "end": 164, "text": "Chen et al., 2016;", "ref_id": "BIBREF4" }, { "start": 165, "end": 187, "text": "Peng and Dredze, 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Multitasking for heterogeneous tag-sets", "sec_num": "3.2" }, { "text": "We adapt multitasking to heterogeneous tag-sets by considering each training dataset, which has a different tag-set T r k , as a separate NER task. Thus, a single model is trained, in which the latent text representation {r i } n 1 (see Sec. 2.2) is shared between NER tasks. As mentioned above, the tagging layers (projection and CRF) are kept separate for each tag-set. Fig. 3 illustrates this architecture.", "cite_spans": [], "ref_spans": [ { "start": 371, "end": 377, "text": "Fig. 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Multitasking for heterogeneous tag-sets", "sec_num": "3.2" }, { "text": "We emphasize that the output of the MTL model still consists of {y k } K k=1 different tag sequence predictions. They are consolidated into a final single sequence y s using the same post-processing step described in Sec. 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitasking for heterogeneous tag-sets", "sec_num": "3.2" }, { "text": "The models introduced in Sec. 3.1 and 3.2 learn to predict a tag sequence for each training tag-set separately and they do not share parameters between tagging layers. In addition, they require a post-processing step, outside of the model, for merging the tag sequences inferred for the different tag-sets. A simple concatenation of all training data is also not enough to accommodate the differences between the tag-sets within the model (see Sec. 3). Moreover, none of these models utilizes the relations between tags, which are provided as input in the form of a tag hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag Hierarchy Model", "sec_num": "4" }, { "text": "Figure 4 : The tag hierarchy in Fig. 1 for three tag-sets after closure extension. Green nodes and edges were automatically added in this process. Fine-grained tags are surrounded by a dotted box. Tag-set 1 (T 1 ): Name, Street, City, Hospital, Age>90, T1-Other; Tag-set 2 (T 2 ): First Name, Last Name, Address, Age, T 2 -Other; Tag-set 3 (T 3 ): Name, Location,", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 4", "ref_id": null }, { "start": 32, "end": 38, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Tag Hierarchy Model", "sec_num": "4" }, { "text": "In this section, we propose a model that addresses these limitations. This model utilizes the given tag hierarchy at training time to learn a single, shared tagging layer that predicts only fine-grained tags. The hierarchy is then used during inference to map fine-grained tags onto a target tag-set. 
Consequently, all tagging decisions are made in the model, without the need for a post-processing step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag Hierarchy Model", "sec_num": "4" }, { "text": "In the input hierarchy DAG, each node represents some semantic role of words in sentences (e.g. 'Name'). A directed edge c \u2192 d implies that c is a hyponym of d, meaning c captures a subset of the semantics of d. Examples include 'LastName' \u2192 'Name', and 'Street' \u2192 'Location' in Fig. 1 . We denote the set of all tags that capture some subset of the semantics of d by Sem(d) = {d} \u222a {c | c R \u2212 \u2192 d}, where R \u2212 \u2192 indicates that there is a directed path from c to d. If a node d has no hyponyms (Sem(d) = {d}), it represents some fine-grained tag semantics. We denote the set of all fine-grained tags by T F G . We also denote all fine-grained tags that are hyponyms of d by Fine(d) = T F G \u2229 Sem(d), e.g. Fine(Name) = {LastName, FirstName}. As mentioned above, our hierarchical model predicts tag sequences only from T F G and then maps them onto a target tag-set.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 286, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Notations", "sec_num": "4.1" }, { "text": "For each tag d we would like the semantics captured by the union of semantics of all tags in Fine(d) to be exactly the semantics of d, making sure we will not miss any aspect of d when predicting only over T F G . Yet, this semantics-equality property does not hold in general. One such example in Fig. 4 is 'Age>90' \u2192 'Age', because there may be age mentions below 90 annotated in T 2 's dataset.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 304, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Hierarchy extension with 'Other' tags", "sec_num": "4.2" }, { "text": "To fix the semantics-equality above, we use the notion of the 'Other' tag in NER, which has the semantics of \"all the rest\". Specifically, for every d \u2209 T F G , a fine-grained tag 'd-Other' \u2208 T F G and an edge 'd-Other'\u2192'd' are automatically added to the graph, hence 'd-Other'\u2208 Fine(d). For instance, 'Age-Other'\u2192'Age'. These new tags represent the aspects of d not captured by the other tags in Fine(d).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy extension with 'Other' tags", "sec_num": "4.2" }, { "text": "Next, a tag 'T i -Other' is automatically added to each tag-set T i , explicitly representing the \"all the rest\" semantics of T i . The labels for 'T i -Other' are induced automatically from unlabeled tokens in the original DS r i dataset. To make sure that the semantics-equality property above also holds for 'T i -Other', a fine-grained tag 'FG-Other' is also added, which captures the \"all the rest\" semantics at the fine-grained level. 
Then, each 'T i -Other' is connected to all fine-grained tags that do not capture some semantics of the tags in T i , defining:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy extension with 'Other' tags", "sec_num": "4.2" }, { "text": "Fine(T_i\\text{-Other}) = T_{FG} \\setminus \\bigcup_{d \\in T_i \\setminus \\{T_i\\text{-Other}\\}} Sem(d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy extension with 'Other' tags", "sec_num": "4.2" }, { "text": "This mapping is important at training time, where 'T i -Other' labels are used as distant supervision over their related fine-grained tags (Sec. 4.3). Fig. 4 depicts our hierarchy example after this step. We emphasize that all extensions in this step are done automatically as part of the model's algorithm.", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 158, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Hierarchy extension with 'Other' tags", "sec_num": "4.2" }, { "text": "One outcome of the extension step is that the set of fine-grained tags T F G covers all distinct fine-grained semantics across all tag-sets. In the following, we train a single NER model (Sec. 2.2) that predicts sequences of tags from the T F G tag-set. As there is only one tagging layer, model parameters are shared across all training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "At inference time, this model predicts the most likely fine-grained tag sequence y f g for the input x. As the model outputs only a single sequence, post-processing consolidation is not needed. The tag hierarchy is used to map each predicted fine-grained tag y f g i to a tag in a test tag-set T s by traversing the out-edges of y f g i until a tag in T s is reached. This procedure is also used in the baseline models (see Sec. 3.1) for mapping their predictions onto the test tag-set. However, unlike the baselines, which end with multiple candidate predictions in the test tag-set and need to consolidate between them, here, only a single fine-grained tag sequence is mapped, so no further consolidation is needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "At training time, each example x that belongs to some training dataset DS r i is labeled with a gold-standard tag sequence y, where the tags are taken only from the corresponding tag-set T r i . This means that tags {y i } are not necessarily fine-grained tags, so there is no direct supervision for predicting fine-grained tag sequences. However, each gold label y i provides distant supervision over its related fine-grained tags, Fine(y i ). It indicates that one of them is the correct fine-grained label without explicitly stating which one, so we consider all possibilities in a probabilistic manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "Henceforth, we say that a fine-grained tag se- We denote all fine-grained tag sequences that agree with y by AgreeWith(y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "quence y f g agrees with y if \u2200 i y f g i \u2208 Fine(y i ), i.e. 
y f g i captures a subset of the semantics of y i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "Using this definition, the tag-hierarchy model is trained with the loss function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "loss(y) = -\\log\\left(\\frac{Z_y}{Z}\\right) \\quad (1) \\qquad Z_y = \\sum_{y^{fg} \\in AgreeWith(y)} \\phi(y^{fg})", "eq_num": "(2)" } ], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z = \\sum_{y^{fg}} \\phi(y^{fg})", "eq_num": "(3)" } ], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "where \u03c6(y) stands for the model's score for sequence y, viewed as an unnormalized probability. Z is the standard CRF partition function over all possible fine-grained tag sequences. Z y , on the other hand, accumulates scores only of fine-grained tag sequences that agree with y. Thus, this loss function aims at increasing the summed probability of all fine-grained sequences agreeing with y. Both Z y and Z can be computed efficiently using the Forward-Backward algorithm (Lafferty et al., 2001) . We note that we also considered finding the most likely tag sequence over a test tag-set at inference time by summing the probabilities of all fine-grained tag sequences that agree with each candidate sequence y: max_y \u2211_{y^fg \u2208 AgreeWith(y)} \u03c6(y^fg). However, this problem is NP-hard (Lyngs\u00f8 and Pedersen, 2002) . We plan to explore other alternatives in future work.", "cite_spans": [ { "start": 471, "end": 494, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF18" }, { "start": 779, "end": 806, "text": "(Lyngs\u00f8 and Pedersen, 2002)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "NER model with tag hierarchy", "sec_num": "4.3" }, { "text": "To test the tag-hierarchy model under heterogeneous tag-set scenarios, we conducted experiments using datasets from two domains. We next describe these datasets as well as implementation details for the tested models. Sec. 6 then details the experiments and their results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5" }, { "text": "Five datasets from two domains, medical and news, were used in our experiments. Table 1 summarizes their main statistics.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "For the medical domain we used the datasets I2B2-2006 (denoted I2B2'06) (Uzuner et al., 2007) , I2B2-2014 (denoted I2B2'14) (Stubbs and Uzuner, 2015) and the PhysioNet golden set (denoted Physio) (Goldberger et al., 2000) . These datasets are all annotated for the NER task of de-identification (a.k.a. text anonymization) (Dernoncourt et al., 2017). Still, as seen in Table 1 , each dataset is annotated with a different tag-set. 
Both I2B2'06 and I2B2'14 include train and test sets, while Physio contains only a test set.", "cite_spans": [ { "start": 72, "end": 93, "text": "(Uzuner et al., 2007)", "ref_id": "BIBREF38" }, { "start": 124, "end": 149, "text": "(Stubbs and Uzuner, 2015)", "ref_id": "BIBREF36" }, { "start": 196, "end": 221, "text": "(Goldberger et al., 2000)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 367, "end": 374, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "For the news domain we used the English part of CONLL-2003 (denoted Conll) (Tjong Kim Sang and De Meulder, 2003) and OntoNotes-v5 (denoted Onto) (Weischedel et al., 2013) , both with train and test sets. We note that I2B2'14, Conll and Onto also contain a dev-set, which is used for hyper-param tuning (see below).", "cite_spans": [ { "start": 82, "end": 112, "text": "Kim Sang and De Meulder, 2003)", "ref_id": "BIBREF37" }, { "start": 145, "end": 170, "text": "(Weischedel et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "In all experiments, each example is a full document. Each document is split into tokens on whitespaces and punctuation. A tag-hierarchy covering the 57 tags from all five datasets was given as input to all models in all experiments. We constructed this hierarchy manually. The only non-trivial tag was 'Location', which in I2B2'14 is split into finer tags ('City', 'Street' etc.) and includes also hospital mentions in Conll and Onto. We resolved these relations similarly to the graph in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 489, "end": 497, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Four models were compared in our experiments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "M Concat A single NER model on the concatenation of datasets and tag-sets (Sec. 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "M Indep Combining predictions of independent NER models, one per tag-set (Sec. 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "M MTL Multitasking over training tag-sets (Sec. 3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "M Hier A tag hierarchy employed within a single base model (Sec. 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "All models are based on the neural network described in Sec. 2.2. We tuned the hyper-params in the base model to achieve state-of-the-art results for a single NER model on Conll and I2B2'14 when trained and tested on the same dataset (Strubell et al., 2017; Dernoncourt et al., 2017) (see Table 2 ). This is done to maintain a constant baseline, and is also due to the fact that I2B2'06 does not have a standard dev-set.", "cite_spans": [ { "start": 234, "end": 257, "text": "(Strubell et al., 2017;", "ref_id": "BIBREF35" }, { "start": 258, "end": 283, "text": "Dernoncourt et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "We tuned hyper-params over the dev-sets of Conll and I2B2'14. 
For character-based embedding we used a single bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with hidden state size of 25. For word embeddings we used pre-trained GloVe embeddings 1 (Pennington et al., 2014) , without further training. For token recoding we used a two-level stacked bidirectional LSTM (Graves et al., 2013) with both output and hidden state of size 100.", "cite_spans": [ { "start": 128, "end": 162, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF15" }, { "start": 252, "end": 277, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF28" }, { "start": 372, "end": 393, "text": "(Graves et al., 2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "Once these hyper-params were set, no further tuning was made in our experiments, which means all models for heterogeneous tag-sets were tested under the above fixed hyper-param set. In each experiment, each model was trained until convergence on the respective training set. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Models", "sec_num": "5.2" }, { "text": "We performed two experiments. The first refers to selective annotation, in which an existing tag-set is extended with a new tag by annotating a new dataset only with the new tag. The second experiment tests the ability of each model to integrate two full tag-sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "In all experiments we assess model performance via micro-averaged tag F1, in accordance with CoNLL evaluation (Tjong Kim Sang and De Meulder, 2003) . Statistical significance was computed using the Wilcoxon two-sided signed ranks test at p = 0.01 (Wilcoxon, 1945) . We next detail each experiment and its results.", "cite_spans": [ { "start": 117, "end": 147, "text": "Kim Sang and De Meulder, 2003)", "ref_id": "BIBREF37" }, { "start": 247, "end": 263, "text": "(Wilcoxon, 1945)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "In all our experiments, we found the performance of the different consolidation methods (Sec. 3.1) to be on par. One reason that using model scores does not beat random selection may be due to the overconfidence of the tagging models -their prediction probabilities are close to 0 or 1. We report figures for random selection as representative of all consolidation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "In this experiment, we considered the 4 most frequent tags that occur in at least two of our datasets: 'Name', 'Date', 'Location' and 'Hospital' (Table 3 summarizes their statistics). For each frequent tag t and an ordered pair of datasets in which t occurs, we constructed new training sets by removing t from the first training set (termed base dataset) and remove all tags but t from the second training set (termed extending dataset). For example, for the triplet of { 'Name', I2B2'14, I2B2'06}, we constructed a version of I2B2'14 without 'Name' annotations and a version of I2B2'06 containing only annotations for 'Name'. This process yielded 32 such triplets. For every triplet, we train all tested models on the two modified training sets and test them on the test-set of the base dataset (I2B2'14 in the example above). 
Each test-set was not altered and contains all tags of the base tag-set, including t.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "(Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Tag-set extension experiment", "sec_num": "6.1" }, { "text": "M Concat performed poorly in this experiment. For example, on the dataset extending I2B2'14 with 'Name' from I2B2'06, M Concat tagged only one 'Name' out of over 4000 'Name' mentions in the test set. Given this, we do not provide further details of the results of M Concat in this experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag-set extension experiment", "sec_num": "6.1" }, { "text": "For the three models tested, this experiment yields 96 results. The main results 2 of this experiment are shown in Table 4 . Surprisingly, in more tests M Indep outperformed M MTL than vice versa, adding to prior observations that multitasking can hurt performance instead of improving it (Bingel and S\u00f8gaard, 2017; Alonso and Plank, 2017; Bjerva, 2017) . But, applying a shared tagging layer on top of a shared text representation boosts the model's capability and stability. Indeed, overall, M Hier outperforms the other models in most tests, and in the rest it is similar to the best performing model.", "cite_spans": [ { "start": 289, "end": 315, "text": "(Bingel and S\u00f8gaard, 2017;", "ref_id": "BIBREF2" }, { "start": 316, "end": 339, "text": "Alonso and Plank, 2017;", "ref_id": "BIBREF0" }, { "start": 340, "end": 353, "text": "Bjerva, 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 115, "end": 122, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Tag-set extension experiment", "sec_num": "6.1" }, { "text": "Analyzing the results, we noticed that the gap between model performance increases when more collisions are encountered for M MTL and M Indep at post-processing time (see Sec. 3.1). The amount of collisions may be viewed as a predictor for the baselines' difficulty to handle a specific heterogeneous tag-sets setting. Table 5 presents the tests in which more than 100 collisions were detected for either M Indep or M MTL , constituting 66% of all 2 Detailed results for all 96 tests are given in the Appendix. test triplets. In these tests, M Hier is a clear winner, outperforming the compared models in all but two comparisons, often by a significant margin. Finally, we compared the models trained with selective annotation to an \"upper-bound\" of training and testing a single NER model on the same dataset with all tags annotated (Table 2) . As expected, performance is usually lower with selective annotation. But, the drop intensifies when the base and extending datasets are from different domains -medical and news. In these cases, we observed that M Hier is more robust. Its drop compared to combining datasets from the same domain is the least in almost all such combinations. Table 6 provides some illustrative examples.", "cite_spans": [ { "start": 448, "end": 449, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 834, "end": 843, "text": "(Table 2)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Tag-set extension experiment", "sec_num": "6.1" }, { "text": "A scenario distinct from selective annotation is the integration of full tag-sets. On one hand, more training data is available for similar tags. On the other hand, more tags need to be consolidated among the tag-sets. 
To test this scenario, we trained the tested model types on the training sets of I2B2'06 and I2B2'14, which have different tag-sets. The models were evaluated both on the test sets of these datasets and on Physio, an unseen test-set that requires the combination of the two training tag-sets for full coverage of its tag-set. We also compared the models to single models trained on each of the training sets alone. Table 7 displays the results.", "cite_spans": [], "ref_spans": [ { "start": 634, "end": 641, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Full tag-set integration experiment", "sec_num": "6.2" }, { "text": "As expected, single models do well on the test-set companion of their training-set but they underperform on the other test-sets. This is because the tag-set on which they were trained does not cover well the tag-sets in the other test-sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full tag-set integration experiment", "sec_num": "6.2" }, { "text": "When compared with the best-performing single model, using M Concat shows reduced results on all 3 test sets. This can be attributed to reduced performance for types that are semantically different between datasets (e.g. 'Date'), while performance on similar tags (e.g. 'Name') does not drop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full tag-set integration experiment", "sec_num": "6.2" }, { "text": "Combining the two training sets using either M Indep or M MTL leads to a substantial performance drop in 5 out of 6 test-sets compared to the best-performing single model. This is strongly correlated with the number of collisions encountered (see Table 7 ). Indeed, the only competitive result, M MTL tested on Physio, had fewer than 100 collisions. This demonstrates the non-triviality of real-world tag-set integration, and the difficulty of resolving tagging decisions across tag-sets.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Full tag-set integration experiment", "sec_num": "6.2" }, { "text": "By contrast, M Hier has no performance drop compared to the single models trained and tested on the same dataset. Moreover, it is the best-performing model on the unseen Physio test-set, with a 6% relative improvement in F1 over the best single model. This experiment highlights the robustness of the tag hierarchy approach when applied to this heterogeneous tag-set scenario. Collobert et al. (2011) introduced the first competitive NN-based NER that required little or no feature engineering. Huang et al. (2015) combined LSTM with CRF, showing performance similar to non-NN models. Lample et al. (2016) extended this model with character-based embeddings in addition to word embeddings, achieving state-of-the-art results. Similar architectures, such as combinations of convolutional networks as replacements for RNNs, were shown to outperform previous NER models (Ma and Hovy, 2016; Chiu and Nichols, 2016; Strubell et al., 2017) . Dernoncourt et al. (2017) and Liu et al. (2017) showed that the LSTM-CRF model achieves state-of-the-art results also for de-identification in the medical domain. Lee et al. (2018) demonstrated how performance drops significantly when the LSTM-CRF model is tested under transfer learning within the same domain in this task. 
Collobert and Weston (2008) introduced MTL for NN, and other works followed, showing it helps in various NLP tasks (Chen et al., 2016; Peng and Dredze, 2017) . S\u00f8gaard and Goldberg (2016) and Hashimoto et al. (2017) argue that cascading architectures can improve MTL performance. Several works have explored conditions for successful application of MTL (Bingel and S\u00f8gaard, 2017; Bjerva, 2017; Alonso and Plank, 2017) .", "cite_spans": [ { "start": 374, "end": 397, "text": "Collobert et al. (2011)", "ref_id": "BIBREF7" }, { "start": 492, "end": 511, "text": "Huang et al. (2015)", "ref_id": "BIBREF16" }, { "start": 582, "end": 602, "text": "Lample et al. (2016)", "ref_id": "BIBREF19" }, { "start": 861, "end": 880, "text": "(Ma and Hovy, 2016;", "ref_id": "BIBREF25" }, { "start": 881, "end": 904, "text": "Chiu and Nichols, 2016;", "ref_id": "BIBREF5" }, { "start": 905, "end": 927, "text": "Strubell et al., 2017)", "ref_id": "BIBREF35" }, { "start": 930, "end": 955, "text": "Dernoncourt et al. (2017)", "ref_id": "BIBREF8" }, { "start": 960, "end": 977, "text": "Liu et al. (2017)", "ref_id": "BIBREF23" }, { "start": 1092, "end": 1109, "text": "Lee et al. (2018)", "ref_id": "BIBREF20" }, { "start": 1254, "end": 1281, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF6" }, { "start": 1369, "end": 1388, "text": "(Chen et al., 2016;", "ref_id": "BIBREF4" }, { "start": 1389, "end": 1411, "text": "Peng and Dredze, 2017)", "ref_id": "BIBREF27" }, { "start": 1414, "end": 1441, "text": "S\u00f8gaard and Goldberg (2016)", "ref_id": "BIBREF34" }, { "start": 1446, "end": 1469, "text": "Hashimoto et al. (2017)", "ref_id": "BIBREF13" }, { "start": 1607, "end": 1633, "text": "(Bingel and S\u00f8gaard, 2017;", "ref_id": "BIBREF2" }, { "start": 1634, "end": 1647, "text": "Bjerva, 2017;", "ref_id": "BIBREF3" }, { "start": 1648, "end": 1671, "text": "Alonso and Plank, 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Full tag-set integration experiment", "sec_num": "6.2" }, { "text": "Few works attempt to share information across datasets at the tagging level. Greenberg et al. (2018) proposed a single CRF model for tagging with heterogeneous tag-sets but without a hierarchy. They show the utility of this method for indomain datasets with a balanced tag distribution. Our model can be viewed as an extension of theirs for tag hierarchies. Augenstein et al. (2018) use tag embeddings in MTL to further propagate information between tasks. Li et al. (2017) propose to use a tag-set made of cross-product of two different POS tag-sets and train a model for it. Given the explosion in tag-set size, they introduce automatic pruning of cross-product tags. Kim et al. (2015) and Qu et al. (2016) automatically learn correlations between tag-sets, given training data for both tag-sets. They rely on similar contexts for related source and target tags, such as 'professor' and 'student'.", "cite_spans": [ { "start": 77, "end": 100, "text": "Greenberg et al. (2018)", "ref_id": "BIBREF12" }, { "start": 358, "end": 382, "text": "Augenstein et al. (2018)", "ref_id": "BIBREF1" }, { "start": 457, "end": 473, "text": "Li et al. (2017)", "ref_id": "BIBREF21" }, { "start": 670, "end": 687, "text": "Kim et al. (2015)", "ref_id": "BIBREF17" }, { "start": 692, "end": 708, "text": "Qu et al. 
(2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Our tag-hierarchy model was inspired by recent work on hierarchical multi-label classification (Silla and Freitas, 2011; Zhang and Zhou, 2014) , and can be viewed as an extension of this direction onto sequences tagging.", "cite_spans": [ { "start": 95, "end": 120, "text": "(Silla and Freitas, 2011;", "ref_id": "BIBREF33" }, { "start": 121, "end": 142, "text": "Zhang and Zhou, 2014)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We proposed a tag-hierarchy model for the heterogeneous tag-sets NER setting, which does not require a consolidation post-processing stage. In the conducted experiments, the proposed model consistently outperformed the baselines in difficult tagging cases and showed robustness when applying a single trained model to varied test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "In the case of integrating datasets from the news and medical domains we found the blending task to be difficult. In future work, we'd like to improve this integration in order to gain from training on examples from different domains for tags like 'Name' and 'Location'. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" } ], "back_matter": [ { "text": "The authors would like to thank Yossi Matias, Katherine Chou, Greg Corrado, Avinatan Hassidim, Rony Amira, Itay Laish and Amit Markel for their help in creating this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "When is multitask learning effective? semantic sequence prediction under varying data conditions", "authors": [ { "first": "Hector", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "Alonso", "middle": [], "last": "", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2017, "venue": "EACL 2017-15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hector Martinez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic se- quence prediction under varying data conditions. In EACL 2017-15th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 1-10.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multi-task learning of pairwise sequence classification tasks over disparate label spaces", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.09913v2" ] }, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Sebastian Ruder, and Anders S\u00f8gaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. arXiv:1802.09913v2.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Identifying beneficial task relations for multi-task learning in deep neural networks. 
In ACL", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachim Bingel and Anders S\u00f8gaard. 2017. Identify- ing beneficial task relations for multi-task learning in deep neural networks. In ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning", "authors": [ { "first": "Johannes", "middle": [], "last": "Bjerva", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", "volume": "131", "issue": "", "pages": "216--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Bjerva. 2017. Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden, 131, pages 216-220. Link\u00f6ping University Elec- tronic Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural network for heterogeneous annotations", "authors": [ { "first": "Hongshen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongshen Chen, Yue Zhang, and Qun Liu. 2016. Neural network for heterogeneous annotations. In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Named entity recognition with bidirectional lstm-cnns", "authors": [ { "first": "Jason", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "TACL", "volume": "4", "issue": "1", "pages": "357--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. TACL, 4(1):357-370.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In ICML.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "JMLR", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12(Aug):2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "De-identification of patient notes with recurrent neural networks", "authors": [ { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Ji", "middle": [ "Young" ], "last": "Lee", "suffix": "" }, { "first": "Ozlem", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2017, "venue": "J. Am Med Inform Assoc", "volume": "24", "issue": "3", "pages": "596--606", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. J. Am Med Inform Assoc, 24(3):596-606.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multichannel lstm-crf for named entity recognition in chinese social media", "authors": [ { "first": "Chuanhai", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Huijia", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2017, "venue": "CCL/NLP-NABD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chuanhai Dong, Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2017. Multichannel lstm-crf for named entity recognition in chinese social media. In CCL/NLP-NABD. 
Springer.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Physiobank, physiotoolkit, and physionet", "authors": [ { "first": "L", "middle": [], "last": "Ary", "suffix": "" }, { "first": "", "middle": [], "last": "Goldberger", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Luis", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Amaral", "suffix": "" }, { "first": "", "middle": [], "last": "Glass", "suffix": "" }, { "first": "M", "middle": [], "last": "Jeffrey", "suffix": "" }, { "first": "Plamen", "middle": [ "Ch" ], "last": "Hausdorff", "suffix": "" }, { "first": "", "middle": [], "last": "Ivanov", "suffix": "" }, { "first": "G", "middle": [], "last": "Roger", "suffix": "" }, { "first": "Joseph", "middle": [ "E" ], "last": "Mark", "suffix": "" }, { "first": "George", "middle": [ "B" ], "last": "Mietus", "suffix": "" }, { "first": "Chung-Kang", "middle": [], "last": "Moody", "suffix": "" }, { "first": "H Eugene", "middle": [], "last": "Peng", "suffix": "" }, { "first": "", "middle": [], "last": "Stanley", "suffix": "" } ], "year": 2000, "venue": "", "volume": "101", "issue": "", "pages": "215--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ary L Goldberger, Luis AN Amaral, Leon Glass, Jef- frey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung- Kang Peng, and H Eugene Stanley. 2000. Phys- iobank, physiotoolkit, and physionet. Circulation, 101(23):215-220.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Speech recognition with deep recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Abdel-Rahman", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In ICASSP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets", "authors": [ { "first": "Nathan", "middle": [], "last": "Greenberg", "suffix": "" }, { "first": "Trapit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2824--2829", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathan Greenberg, Trapit Bansal, Patrick Verga, and Andrew McCallum. 2018. Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2824-2829.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A joint many-task model: Growing a neural network for multiple nlp tasks", "authors": [ { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Grow- ing a neural network for multiple nlp tasks. In EMNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A unified model for cross-domain and semi-supervised named entity recognition in chinese social media", "authors": [ { "first": "Hangfeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In AAAI.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bidirectional lstm-crf models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional lstm-crf models for sequence tagging. arXiv:1508.01991.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "New transfer learning techniques for disparate label sets", "authors": [ { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Stratos", "suffix": "" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "" }, { "first": "Minwoo", "middle": [], "last": "Jeong", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning tech- niques for disparate label sets. 
In ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In ICML.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Transfer learning for named-entity recognition with neural networks", "authors": [ { "first": "Ji", "middle": [ "Young" ], "last": "Lee", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2018. Transfer learning for named-entity recogni- tion with neural networks.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Coupled pos tagging on heterogeneous annotations", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2017, "venue": "TASLP", "volume": "25", "issue": "3", "pages": "557--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Jiayuan Chao, Min Zhang, Wenliang Chen, Meishan Zhang, Guohong Fu, Zhenghua Li, Jiayuan Chao, Min Zhang, Wenliang Chen, et al. 2017. Coupled pos tagging on heterogeneous an- notations. 
TASLP, 25(3):557-571.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Finding function in form: Compositional character models for open vocabulary word representation", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Trancoso", "suffix": "" }, { "first": "Ramon", "middle": [], "last": "Fernandez", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Marujo", "suffix": "" }, { "first": "Tiago", "middle": [], "last": "Luis", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Chris Dyer, Alan W Black, Isabel Tran- coso, Ramon Fernandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabu- lary word representation. In EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "De-identification of clinical notes via recurrent neural network and conditional random field", "authors": [ { "first": "Zengjian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Buzhou", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2017, "venue": "J. Biomed. Inf", "volume": "75", "issue": "", "pages": "34--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. J. Biomed. Inf., 75:34-42.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The consensus string problem and the complexity of comparing hidden markov models", "authors": [ { "first": "B", "middle": [], "last": "Rune", "suffix": "" }, { "first": "", "middle": [], "last": "Lyngs\u00f8", "suffix": "" }, { "first": "N", "middle": [ "S" ], "last": "Christian", "suffix": "" }, { "first": "", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "Journal of Computer and System Sciences", "volume": "65", "issue": "3", "pages": "545--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rune B Lyngs\u00f8 and Christian NS Pedersen. 2002. The consensus string problem and the complexity of comparing hidden markov models. Journal of Com- puter and System Sciences, 65(3):545-569.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. 
In ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A survey on transfer learning", "authors": [ { "first": "Qiang", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on knowledge and data engineering", "volume": "22", "issue": "10", "pages": "1345--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-task domain adaptation for sequence tagging", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanyun Peng and Mark Dredze. 2017. Multi-task do- main adaptation for sequence tagging. In Proceed- ings of the 2nd Workshop on Representation Learn- ing for NLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Named entity recognition for novel types by transfer learning", "authors": [ { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Gabriela", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Liyuan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lizhen Qu, Gabriela Ferraro, Liyuan Zhou, Weiwei Hou, and Timothy Baldwin. 2016. Named entity recognition for novel types by transfer learning. In EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An overview of multi-task learning in", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "deep neural networks", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.05098" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv:1706.05098.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [], "last": "Kuldip", "suffix": "" }, { "first": "", "middle": [], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. 
Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Deep ehr: A survey of recent advances in deep learning techniques for electronic health record (ehr) analysis", "authors": [ { "first": "Benjamin", "middle": [], "last": "Shickel", "suffix": "" }, { "first": "Patrick", "middle": [ "James" ], "last": "Tighe", "suffix": "" }, { "first": "Azra", "middle": [], "last": "Bihorac", "suffix": "" }, { "first": "Parisa", "middle": [], "last": "Rashidi", "suffix": "" } ], "year": 2017, "venue": "IEEE Journal of Biomedical and Health Informatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep ehr: A survey of re- cent advances in deep learning techniques for elec- tronic health record (ehr) analysis. IEEE Journal of Biomedical and Health Informatics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery", "authors": [ { "first": "N", "middle": [], "last": "Carlos", "suffix": "" }, { "first": "Alex", "middle": [ "A" ], "last": "Silla", "suffix": "" }, { "first": "", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2011, "venue": "", "volume": "22", "issue": "", "pages": "31--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos N Silla and Alex A Freitas. 2011. A survey of hierarchical classification across different appli- cation domains. Data Mining and Knowledge Dis- covery, 22(1-2):31-72.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Deep multi-task learning with low level tasks supervised at lower layers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Fast and accurate entity recognition with iterated dilated convolutions", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2017, "venue": "ENNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In ENNLP.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus", "authors": [ { "first": "Amber", "middle": [], "last": "Stubbs", "suffix": "" }, { "first": "", "middle": [], "last": "Uzuner", "suffix": "" } ], "year": 2015, "venue": "J. Biomed. Inf", "volume": "58", "issue": "", "pages": "20--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amber Stubbs and\u00d6zlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus. J. Biomed. 
Inf., 58:20-29.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik F Tjong Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In NAACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Evaluating the state-of-the-art in automatic deidentification", "authors": [ { "first": "Ozlem", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2007, "venue": "J. Am Med Inform Assoc", "volume": "14", "issue": "5", "pages": "550--563", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de- identification. J. Am Med Inform Assoc, 14(5):550- 563.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Individual comparisons by ranking methods", "authors": [ { "first": "Frank", "middle": [], "last": "Wilcoxon", "suffix": "" } ], "year": 1945, "venue": "Biometrics bulletin", "volume": "1", "issue": "6", "pages": "80--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics bulletin, 1(6):80-83.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A review on multi-label learning algorithms", "authors": [ { "first": "Min-Ling", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi-Hua", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2014, "venue": "IEEE transactions on knowledge and data engineering", "volume": "26", "issue": "8", "pages": "1819--1837", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min-Ling Zhang and Zhi-Hua Zhou. 2014. A re- view on multi-label learning algorithms. IEEE transactions on knowledge and data engineering, 26(8):1819-1837.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Neural architecture for NER." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "NER multitasking architecture for 3 tag-sets." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "there is a directed path from c to d in the graph. For example, Sem(N ame) = {N ame, LastN ame, F irstN ame}." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "g is a plausible interpretation for y at the fine-grained tag level. For example, following Fig. 4, sequences ['Hospital', 'City'] and ['Street', 'City'] agree with ['Location', 'Location'], unlike ['City', 'Last Name']." }, "TABREF3": { "type_str": "table", "html": null, "num": null, "content": "
I2B2'06 | I2B2'14 | Conll | Onto
Micro avg. F1 | 0.894 | 0.960 | 0.926 | 0.896
", "text": "nlp.stanford.edu/data/glove.6B.zip" }, "TABREF4": { "type_str": "table", "html": null, "num": null, "content": "
Tag Frequency in training / test (%)
I2B2'06 | I2B2'14 | Conll | Onto
Name | 1.4 / 1.3 | 1.0 / 1.0 | 4.3 / 4.9 | 3.1 / 2.9
Date | 1.7 / 1.5 | 2.4 / 2.5 | 0 / 0 | 2.7 / 3.1
Location | 0.1 / 0.1 | 0.2 / 0.3 | 3.2 / 3.4 | 2.7 / 3.2
Hospital | 0.6 / 0.7 | 0.3 / 0.3 | 0 / 0 | 0 / 0
", "text": "F1 for training and testing a single base NER model on the same dataset." }, "TABREF5": { "type_str": "table", "html": null, "num": null, "content": "", "text": "Occurrence statistics for tags used in the tagset extension experiment, reported as % out of all tokens in the training and test sets of each dataset." }, "TABREF7": { "type_str": "table", "html": null, "num": null, "content": "
", "text": "F1 in the tag-set extension experiment, averaged over extending datasets for every base dataset." }, "TABREF9": { "type_str": "table", "html": null, "num": null, "content": "
: F1 for tag-set extensions with more than 100 collisions. Blank entries indicate fewer than 100 collisions. (*) indicates all results that are statistically significantly better than others in that row.
Tag | Base | Extending | Hier F1 | Indep F1 | MTL F1
Location | I2B2'14 | I2B2'06 | 0.953 | 0.919 | 0.919
Location | I2B2'14 | Onto | 0.954 | 0.899 | 0.887
Name | Conll | I2B2'06 | 0.846 | 0.827 | 0.809
Name | Conll | Onto | 0.895 | 0.888 | 0.890
", "text": "" }, "TABREF10": { "type_str": "table", "html": null, "num": null, "content": "
: Examples for performance differences when base datasets are extended with an in-domain dataset compared to an out-of-domain dataset.
", "text": "" }, "TABREF12": { "type_str": "table", "html": null, "num": null, "content": "
: F1 for combining I2B2'06 and I2B2'14. The top two models were trained only on a single dataset. The lower table part holds the number of collisions at post-processing. (*) indicates results that are statistically significantly better than others in that column.
", "text": "" }, "TABREF13": { "type_str": "table", "html": null, "num": null, "content": "
Tag | Base | Extending | Hier F1 | Indep F1 | MTL F1
Date | I2B2'14 | I2B2'06 | 0.899 | 0.904 | 0.903
Date | I2B2'14 | Onto | 0.713 | 0.686 | 0.671
Date | I2B2'06 | I2B2'14 | 0.871 | 0.840 | 0.875
Date | I2B2'06 | Onto | 0.641 | 0.681 | 0.698
Date | Onto | I2B2'14 | 0.837 | 0.830 | 0.831
Date | Onto | I2B2'06 | 0.834 | 0.826 | 0.807
Hospital | I2B2'14 | I2B2'06 | 0.931 | 0.941 | 0.918
Hospital | I2B2'06 | I2B2'14 | 0.867 | 0.866 | 0.853
Location | Conll | I2B2'14 | 0.818 | 0.783 | 0.812
Location | Conll | I2B2'06 | 0.748 | 0.739 | 0.730
Location | Conll | Onto | 0.836 | 0.830 | 0.836
Location | I2B2'14 | Conll | 0.954 | 0.899 | 0.887
Location | I2B2'14 | I2B2'06 | 0.953 | 0.919 | 0.919
Location | I2B2'14 | Onto | 0.951 | 0.921 | 0.907
Location | I2B2'06 | Conll | 0.876 | 0.816 | 0.760
Location | I2B2'06 | I2B2'14 | 0.886 | 0.883 | 0.888
Location | I2B2'06 | Onto | 0.869 | 0.847 | 0.812
Location | Onto | Conll | 0.747 | 0.701 | 0.703
Location | Onto | I2B2'14 | 0.793 | 0.691 | 0.707
Location | Onto | I2B2'06 | 0.814 | 0.691 | 0.666
Name | Conll | I2B2'14 | 0.855 | 0.771 | 0.690
Name | Conll | I2B2'06 | 0.827 | 0.666 | 0.631
Name | Conll | Onto | 0.860 | 0.841 | 0.867
Name | I2B2'14 | Conll | 0.900 | 0.863 | 0.890
Name | I2B2'14 | I2B2'06 | 0.943 | 0.893 | 0.927
Name | I2B2'14 | Onto | 0.911 | 0.882 | 0.891
Name | I2B2'06 | Conll | 0.662 | 0.679 | 0.653
Name | I2B2'06 | I2B2'14 | 0.834 | 0.824 | 0.808
Name | I2B2'06 | Onto | 0.726 | 0.726 | 0.727
Name | Onto | Conll | 0.895 | 0.888 | 0.890
Name | Onto | I2B2'14 | 0.892 | 0.872 | 0.886
Name | Onto | I2B2'06 | 0.846 | 0.827 | 0.809
", "text": "Full experiment results for Section 6.1" } } } }