{ "paper_id": "P19-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:22:04.532741Z" }, "title": "Reliability-aware Dynamic Feature Composition for Name Tagging", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute", "location": { "settlement": "Troy", "region": "NY", "country": "USA" } }, "email": "yinglin8@illinois.edu" }, { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign Urbana", "location": { "region": "IL", "country": "USA" } }, "email": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute", "location": { "settlement": "Troy", "region": "NY", "country": "USA" } }, "email": "hengji@illinois.edu" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign Urbana", "location": { "region": "IL", "country": "USA" } }, "email": "hanj@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While word embeddings are widely used for a variety of tasks and substantially improve the performance, their quality is not consistent throughout the vocabulary due to the longtail distribution of word frequency. Without sufficient contexts, embeddings of rare words are usually less reliable than those of common words. However, current models typically trust all word embeddings equally regardless of their reliability and thus may introduce noise and hurt the performance. Since names often contain rare and unknown words, this problem is particularly critical for name tagging. In this paper, we propose a novel reliability-aware name tagging model to tackle this issue. We design a set of word frequencybased reliability signals to indicate the quality of each word embedding. Guided by the reliability signals, the model is able to dynamically select and compose features such as word embedding and character-level representation using gating mechanisms. For example, if an input word is rare, the model relies less on its word embedding and assigns higher weights to its character and contextual features. Experiments on OntoNotes 5.0 show that our model outperforms the baseline model, obtaining up to 6.2% absolute gain in F-score. In crossgenre experiments on six genres in OntoNotes, our model improves the performance for most genre pairs and achieves 2.3% absolute Fscore gain on average. 1", "pdf_parse": { "paper_id": "P19-1016", "_pdf_hash": "", "abstract": [ { "text": "While word embeddings are widely used for a variety of tasks and substantially improve the performance, their quality is not consistent throughout the vocabulary due to the longtail distribution of word frequency. Without sufficient contexts, embeddings of rare words are usually less reliable than those of common words. However, current models typically trust all word embeddings equally regardless of their reliability and thus may introduce noise and hurt the performance. Since names often contain rare and unknown words, this problem is particularly critical for name tagging. In this paper, we propose a novel reliability-aware name tagging model to tackle this issue. 
We design a set of word frequencybased reliability signals to indicate the quality of each word embedding. Guided by the reliability signals, the model is able to dynamically select and compose features such as word embedding and character-level representation using gating mechanisms. For example, if an input word is rare, the model relies less on its word embedding and assigns higher weights to its character and contextual features. Experiments on OntoNotes 5.0 show that our model outperforms the baseline model, obtaining up to 6.2% absolute gain in F-score. In crossgenre experiments on six genres in OntoNotes, our model improves the performance for most genre pairs and achieves 2.3% absolute Fscore gain on average. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Serving as the basic unit of the model input, word embeddings form the foundation of various natural language processing techniques using deep neural networks. Embeddings can effectively encode semantic information and have proven successful in a wide range of tasks, such as sequence 1 Code and resources for this paper: https://github. com/limteng-rpi/neural_name_tagging A MedChem spokesman said the products contribute about a third of MedChem's sales and 10% to 20% of its earnings ", "cite_spans": [ { "start": 285, "end": 286, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Character-level Representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": null }, { "text": "Figure 1: A simplified illustration of the proposed model. We only show the backward part in the figure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "labeling (Collobert et al., 2011; Chiu and Nichols, 2016; Ma and Hovy, 2016; Lample et al., 2016 ), text classification (Tang et al., 2014; Lai et al., 2015; Yang et al., 2016) , and parsing (Chen and Manning, 2014; Dyer et al., 2015) . Still, due to the long tail distribution, the quality of pre-trained word embeddings is usually inconsistent. Without sufficient contexts, the embeddings of rare words are less reliable and may introduce noise, as current models disregard their quality and consume them in the same way as well-trained embeddings for common words. 
This issue is particularly important for name tagging, the task of identifying and classifying names from unstructured texts, because names usually contain rare and unknown words, especially when we move to new domains, topics, and genres.", "cite_spans": [ { "start": 9, "end": 33, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF6" }, { "start": 34, "end": 57, "text": "Chiu and Nichols, 2016;", "ref_id": "BIBREF5" }, { "start": 58, "end": 76, "text": "Ma and Hovy, 2016;", "ref_id": "BIBREF23" }, { "start": 77, "end": 96, "text": "Lample et al., 2016", "ref_id": "BIBREF18" }, { "start": 120, "end": 139, "text": "(Tang et al., 2014;", "ref_id": "BIBREF35" }, { "start": 140, "end": 157, "text": "Lai et al., 2015;", "ref_id": "BIBREF17" }, { "start": 158, "end": 176, "text": "Yang et al., 2016)", "ref_id": "BIBREF38" }, { "start": 191, "end": 215, "text": "(Chen and Manning, 2014;", "ref_id": "BIBREF3" }, { "start": 216, "end": 234, "text": "Dyer et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "By contrast, when encountering an unknown word, human readers usually seek other clues in the text. Similarly, when informed that an embed-ding is noisy or uninformative, the model should rely more on other features. Therefore, we aim to make the model aware of the quality of input embeddings and guide the model to dynamically select and compose features using explicit reliability signals. For example, in Figure 1 , since the model is informed of the relatively low quality of the word embedding of \"MedChem\", which only occurs 8 times in the embedding training corpus, it assigns higher weights to other features such as its character-level representation and contextual features derived from its context words (e.g., \"spokesman\").", "cite_spans": [], "ref_spans": [ { "start": 409, "end": 417, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "The basis of this dynamic composition mechanism is the reliability signals that inform the model of the quality of each word embedding. Specifically, we assume that if a word occurs more frequently, its word embedding will be more fully trained as it has richer contexts and its embedding is updated more often during training. Thus, we design a set of reliability signals based on word frequency in the embedding training corpus and name tagging training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "As Figure 1 shows, we use reliability signals to control feature composition at two levels in our model. At the word representation level, in addition to word embedding, we generate a characterlevel representation for each word from its compositional characters using convolutional neural networks (see Section 2.1). Such character-level representation is able to capture semantic and morphological information. For example, the character features extracted from \"Med\" and \"Chem\" may encode semantic properties related to medical and chemical industries. At the feature extraction level, we introduce context-only features that are derived only from the context and thus not subject to the quality of the current word representation. For rare words without reliable representations, the contexts may provide crucial information to determine whether they are part of names or not. 
For example, \"spokesman\", \"products\", and \"sales\" in the context can help the model identify \"MedChem\" as an organization name. Additionally, context-only features are generally more robust because most non-name tokens in the context are common words and unlikely to vary widely across topics and scenarios. To incorporate the character-level representation and contextonly features, we design new gating mechanisms to mix them with the word embedding and en-coder output respectively. These reliability-aware gates learn to dynamically assign weights to various types of features to obtain an optimal mixture.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "Experiments on six genres in OntoNotes (see Section 3.1) show that our model outperforms the baseline model without the proposed dynamic feature composition mechanism. In the cross-genre experiments, our model improves the performance for most pairs and obtains 2.3% absolute gain in F-score on average.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gate", "sec_num": null }, { "text": "In this section, we will elaborate each component of our model. In Section 2.1, we will describe the baseline model for name tagging. After that, we will introduce the frequency-based reliability signals in Section 2.2. In Section 2.3, We will elaborate how we guide gates to dynamically compose features at the word representation level and feature extraction level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "We adopt a state-of-the-art name tagging model LSTM-CNN (Long-short Term Memory -Convolutional Neural Network) (Chiu and Nichols, 2016 ) as our base model. In this architecture, the input sentence is represented as a sequence of vectors X = {x 1 , ..., x L }, where x i is the vector representation of the i-th word, and L is the length of the sequence. Generally, x i is a concatenation of word embedding and character-level representation generated with a group of convolutional neural networks (CNNs) with various filter sizes from compositional character embeddings of the word.", "cite_spans": [ { "start": 111, "end": 134, "text": "(Chiu and Nichols, 2016", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "Next, the sequence X is fed into a bi-directional Recurrent Neural Network (RNN) with Longshort Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) .", "cite_spans": [ { "start": 121, "end": 155, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "The bi-directional LSTM network processes the sentence in a sequential manner and encodes both contextual and non-contextual features of each word x i into a hidden state h i , which is afterwards decoded by a linear layer into y i . Each component of y i represents the score for the corresponding name tag category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "On top of the model, a CRF (Lafferty et al., 2001) layer is employed to capture the dependencies among predicted tags. 
Therefore, given an input sequence $X$ and the output of the linear layer $Y = \{y_1, \dots, y_L\}$, we define the score of a sequence of predictions $\hat{z} = \{\hat{z}_1, \dots, \hat{z}_L\}$ to be", "cite_spans": [ { "start": 27, "end": 50, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "$$s(X, \hat{z}) = \sum_{i=1}^{L+1} A_{\hat{z}_{i-1}, \hat{z}_i} + \sum_{i=1}^{L} y_{i, \hat{z}_i},$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "where $A_{\hat{z}_{i-1}, \hat{z}_i}$ is the score of transitioning from tag $\hat{z}_{i-1}$ to tag $\hat{z}_i$, and $y_{i, \hat{z}_i}$ is the component of $y_i$ that corresponds to tag $\hat{z}_i$. Additionally, $\hat{z}_0$ and $\hat{z}_{L+1}$ are the start and end tags padded to the predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "During training, we maximize the sentence-level log-likelihood of the true tag path $z$ given the input sequence as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "$$\log p(z|X) = \log \frac{e^{s(X, z)}}{\sum_{\hat{z} \in Z} e^{s(X, \hat{z})}} = s(X, z) - \log \sum_{\hat{z} \in Z} e^{s(X, \hat{z})},$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "where $Z$ is the set of all possible tag paths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "Note that in addition to word embeddings and character-level representations, (Chiu and Nichols, 2016) uses additional features such as capitalization and lexicons, which are not included in our implementation. Other similar name tagging models will be discussed in Section 4.", "cite_spans": [ { "start": 78, "end": 102, "text": "(Chiu and Nichols, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.1" }, { "text": "As the basis of the proposed dynamic feature composition mechanism, reliability signals aim to inform the model of the quality of input word embeddings. Due to the lack of evaluation methods that directly measure the reliability of a single word embedding (Bakarov, 2018), we design a set of reliability signals based on word frequency as follows:", "cite_spans": [ { "start": 256, "end": 271, "text": "(Bakarov, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "1. Word frequency in the word embedding training corpus, $f_e$. Generally, if a word has more occurrences in the corpus, it will appear in more diverse contexts, and its word embedding will be updated more times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "2. Word frequency in the name tagging training set, $f_n$. By fine-tuning pre-trained word embeddings, the name tagging model can encode task-specific information (e.g., "department" is often part of an organization name) into embeddings of words in the name tagging training set and improve their quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "Because word frequency has a broad range of values, we normalize it with $\tanh(\lambda f)$, where $\lambda$ is set to 0.001 for $f_e$ and 0.01 for $f_n$, as the average word frequency is higher in the embedding training corpus. 
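As a minimal illustration of this normalization (plain Python; the scaling constants follow the values given above, and the helper name is ours):

```python
import math

def numeric_signals(f_e: int, f_n: int):
    """Squash raw word frequencies into [0, 1) with tanh(lambda * f)."""
    return math.tanh(0.001 * f_e), math.tanh(0.01 * f_n)

# A rare word (e.g., 8 occurrences in the embedding corpus and none in the
# name tagging training data) yields signals close to zero, while a
# frequent word saturates towards 1.
print(numeric_signals(8, 0))        # (~0.008, 0.0)
print(numeric_signals(50000, 500))  # (~1.0, ~0.9999)
```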
We do not use relative frequency because it turns low frequencies into very small numbers close to zero. Using tanh as the normalization function, the model can react more sensitively towards lower frequency values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "In addition to the above numeric signals, we introduce binary signals to give the model more explicit clues of the rarity of each word. For example, because we filter out words occurring less than 5 times during word embedding training, the following binary signal can explicitly inform the model whether a word is out-of-vocabulary or not:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "b(f e , 5) = 1, if f e < 5 0, if f e \u2265 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "We heuristically set the thresholds to 5, 10, 100, 1000, and 10000 for f e and 5, 10, 50 for f n based on the average word frequency in both corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "The reliability signals of each word are represented as a vector, of which each component is a certain numeric or binary signal. We apply a dropout layer (Srivastava et al., 2014) with probability 0.2 to the reliability signals.", "cite_spans": [ { "start": 154, "end": 179, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability Signals", "sec_num": "2.2" }, { "text": "It is a common practice in current name tagging models to utilize character-level representations to address the following limitations of word embeddings: 1. Word embeddings take words as atomic units and thus ignore useful subword information such as affixes; 2. Pre-trained word embddings are not available for unknown words, which are typically represented using a randomly initialized vector in current models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "Unlike previous methods that generally use the character-level representation as an additional feature under the assumption that word-and character-level representations learn disjoint features, we split the character-level representation into two segments: the first segment serves as an alternative representation to encode the same semantic information as word embedding and is mixed with word embedding using gating mechanisms; the second segment is used as an additional feature to encode morphological information that cannot be captured by word embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "As Figure 2 illustrates, given the i-th word in a sentence, x w i \u2208 R dw denotes its word embedding,", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "x c i \u2208 R dc denotes its character-level representation, and x r i \u2208 R dr denotes the reliability signals. 
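To make the reliability vector $x^r_i$ concrete, the following sketch assembles the numeric and binary signals of Section 2.2 into a single vector (the thresholds and scaling constants are those reported above; the 10-dimensional layout and the helper name are our illustration, and the dropout applied to the signals is omitted):

```python
import math

THRESHOLDS_E = [5, 10, 100, 1000, 10000]  # thresholds for f_e (embedding corpus)
THRESHOLDS_N = [5, 10, 50]                # thresholds for f_n (name tagging training set)

def reliability_vector(f_e: int, f_n: int):
    numeric = [math.tanh(0.001 * f_e), math.tanh(0.01 * f_n)]
    # b(f, t) = 1 if f < t else 0: explicit rarity indicators.
    binary = [1.0 if f_e < t else 0.0 for t in THRESHOLDS_E] \
           + [1.0 if f_n < t else 0.0 for t in THRESHOLDS_N]
    return numeric + binary  # one possible x^r_i, here 10-dimensional

# An out-of-vocabulary word (f_e < 5) switches on every rarity indicator.
print(reliability_vector(3, 0))
```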
The character-level representation $x^c_i$ consists of two subvectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "$$x^c_i = x^{ca}_i \oplus x^{cc}_i,$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "where $\oplus$ is the concatenation operator, $x^{ca}_i \in \mathbb{R}^{d_w}$ acts as an alternative representation to the word embedding, and $x^{cc}_i \in \mathbb{R}^{d_c - d_w}$ serves as an additional feature. In this example, because the word embedding of \"MedChem\" is not reliable and informative, the model should attend more to $x^{ca}_i$. To enable the model to switch between both representations accordingly, we define a pair of reliability-aware gates $g^w_i$ and $g^c_i$ to filter $x^w_i$ and $x^{ca}_i$ respectively. We refer to $g^w_i$ as the word-level representation gate and $g^c_i$ as the character-level representation gate. We calculate $g^w_i$ as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "$$g^w_i = \sigma(W^w x^w_i + W^c x^c_i + W^r x^r_i + b),$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "where $W^w \in \mathbb{R}^{d_w \times d_w}$, $W^c \in \mathbb{R}^{d_w \times d_w}$, $W^r \in \mathbb{R}^{d_w \times d_r}$, and $b \in \mathbb{R}^{d_w}$ are parameters of the gate. The character-level representation gate $g^c_i$ is defined in the same way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "Finally, the enhanced representation of the i-th word is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "$$x_i = (g^w_i \odot x^w_i + g^c_i \odot x^{ca}_i) \oplus x^{cc}_i,$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "where $\odot$ denotes the Hadamard product. We separately calculate $g^w_i$ and $g^c_i$ instead of setting $g^c_i = 1 - g^w_i$ because word- and character-level representations are not always exclusive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representation Level", "sec_num": null }, { "text": "Although character-level representations can encode semantic information in many cases, they cannot perfectly replace word embeddings. For example, in the following sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "\"How does a small town like Linpien come to be home to such a well-organized volunteer effort, and just how did the volunteers set about giving their town a make-over?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "The surface information of \"Linpien\" does not provide sufficient clues to infer its meaning and determine whether it is a name. 
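Before turning to the context-only features, the word-representation-level composition described above can be summarized in a short sketch (PyTorch-style; a single linear layer over the concatenated inputs replaces the separate matrices $W^w$, $W^c$, $W^r$ and bias, which is mathematically equivalent, and all names and dimensions are illustrative rather than the authors' implementation):

```python
import torch
import torch.nn as nn

class WordRepComposition(nn.Module):
    """Mix the word embedding with the first d_w dimensions of the
    character-level representation, guided by the reliability signals."""

    def __init__(self, d_w: int, d_c: int, d_r: int):
        super().__init__()
        self.d_w = d_w
        self.word_gate = nn.Linear(d_w + d_c + d_r, d_w)  # computes g^w_i
        self.char_gate = nn.Linear(d_w + d_c + d_r, d_w)  # computes g^c_i

    def forward(self, x_w, x_c, x_r):
        feats = torch.cat([x_w, x_c, x_r], dim=-1)
        g_w = torch.sigmoid(self.word_gate(feats))
        g_c = torch.sigmoid(self.char_gate(feats))
        x_ca, x_cc = x_c[..., :self.d_w], x_c[..., self.d_w:]
        # x_i = (g^w * x^w + g^c * x^ca) concatenated with x^cc
        return torch.cat([g_w * x_w + g_c * x_ca, x_cc], dim=-1)

# Example with 100-dim word embeddings, 150-dim character representations,
# and a 10-dim reliability vector, for a batch of 2 tokens.
layer = WordRepComposition(100, 150, 10)
out = layer(torch.randn(2, 100), torch.randn(2, 150), torch.randn(2, 10))
print(out.shape)  # torch.Size([2, 150])
```

For a rare word such as "MedChem", the learned gates can push $g^w_i$ towards zero so that the character-based alternative representation dominates the mixture.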
In this case, the model should seek other useful features from the context, such as \"a small town like\" in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "However, in our pilot study on OntoNotes, we observe many instances where the model fails to recognize an unseen name even with obvious context clues, along with a huge performance gap in recall between seen (92-96%) and unseen (53-73%) names. A possible reason is that the model can memorize some words without reliable representations in the training set instead of exploiting their contexts in order to reduce the training loss. As a solution to this issue, we encourage the model to leverage contextual features to reduce overfitting to seen names. Compared to names, the context usually consists of more common words. Therefore, contextual features should be more robust when we apply the model to new data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "In LSTM, each hidden state h i is computed from the previous forward hidden state \u2212 \u2192 h i\u22121 , next backward hidden state \u2190 \u2212 h i+1 , and the current input x i . To obtain features that are independent of the current input and not affected by its quality, we define context-only features as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "o i = \u2212 \u2192 o i \u2295 \u2190 \u2212 o i = F ( \u2212 \u2192 h i\u22121 ) \u2295 F ( \u2190 \u2212 h i+1 ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "where F and F are affine transformations followed by a non-linear function such that o i \u2208 R 2d h has the same dimensionality as h i . In order to find an optimal mixture of h i and o i according to the reliability of representations of the current word and its context words, we define two pairs of gates to control the composition: the forward gates \u2212 \u2192 g h i and \u2212 \u2192 g o i , and the backward gates \u2190 \u2212 g h i and \u2190 \u2212 g o i . Figure 3 illustrates how to obtain the forward context-only features \u2212 \u2192 o i and mix it with \u2212 \u2192 h i using reliability-aware gates.", "cite_spans": [], "ref_spans": [ { "start": 427, "end": 435, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "All gates are computed in the same way. Take", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "x r i-2 x r i-1 small town like Linpien ... ... ... ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction Level", "sec_num": null }, { "text": "It is an unknown word. 
Rely more on the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Reliability Signals", "sec_num": null }, { "text": "Words in the left context window are common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Reliability Signals", "sec_num": null }, { "text": "Hidden States", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "Forward LSTM Context-only Features NN o i h i h i-1 h i ' x r i x r i-3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "Reliability Signals Figure 3 : Dynamic feature composition at the feature extraction level. We only show the forward model for the purposes of simplicity.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 28, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "the forward hidden state gate \u2212 \u2192 g h i as an example: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "\u2212 \u2192 g h i = \u03c3(U h\u2212 \u2192 o i + U r (x r i \u2295 ... \u2295 x r i\u2212C ) + b ), where \u2212 \u2192 g h i is parameterized by U h \u2208 R d h \u00d7d h , U r \u2208 R d h \u00d7dr , and b \u2208 R d h .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "h i = ( \u2212 \u2192 g h i \u2022 \u2212 \u2192 h i + \u2212 \u2192 g o i \u2022 \u2212 \u2192 o i )\u2295( \u2190 \u2212 g h i \u2022 \u2190 \u2212 h i + \u2190 \u2212 g o i \u2022 \u2190 \u2212 o i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "The enhanced hidden state h i is then decoded by a following linear layer as in the baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability-aware Gates", "sec_num": null }, { "text": "We conduct our experiments on OntoNotes 5.0 2 (Weischedel et al., 2013) , the final release of the OntoNotes project because it includes six diverse text genres for us to evaluate the robustness of our approach as Table 1 shows.", "cite_spans": [ { "start": 46, "end": 71, "text": "(Weischedel et al., 2013)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Data Sets", "sec_num": "3.1" }, { "text": "We adopt the following four common entity types that are also used in other data sets such as TAC-KBP (Ji et al., 2011) : PER (person), ORG (organization), GPE (geo-political entity), and LOC (location). We pre-process the data with Pradhan We use the BIOES tag scheme to annotate tags. The S-prefix indicates a single-token name mention. Prefixes B-, I-, and E-mark the beginning, inside, and end of a multi-token name mention. A word that does not belong to any name mention is annotated as O.", "cite_spans": [ { "start": 102, "end": 119, "text": "(Ji et al., 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "3.1" }, { "text": "We use 100-dimensional word embeddings trained on English Wikipedia articles (2017-12-20 dump) with word2vec, and initialize character embeddings as 50-dimensional random vectors. 
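As a concrete sketch of the character-level CNN that produces each word's character representation under the configuration used here (50-dimensional character embeddings and, as noted below, filters of widths 2-4 with 50 filters each; module and variable names are illustrative):

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Embed characters, apply convolutions of several widths,
    max-pool over the character axis, and concatenate the results."""

    def __init__(self, n_chars: int, char_dim: int = 50,
                 widths=(2, 3, 4), n_filters: int = 50):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(char_dim, n_filters, kernel_size=w) for w in widths])

    def forward(self, char_ids):                  # (batch, max_word_len)
        e = self.embed(char_ids).transpose(1, 2)   # (batch, char_dim, max_word_len)
        pooled = [torch.relu(conv(e)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=-1)           # (batch, 150) = 3 widths x 50 filters

# Two words padded to 8 characters each, over a 100-symbol character vocabulary.
cnn = CharCNN(n_chars=100)
print(cnn(torch.randint(1, 100, (2, 8))).shape)   # torch.Size([2, 150])
```

The resulting 150-dimensional vector would then serve as the $x^c_i$ that Section 2.3 splits into its alternative and additional segments.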
The character-level convolutional networks have filters of width [2, 3, 4] of size 50.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "For the bidirectional LSTM layer, we use a hidden state size of 100. To reduce overfitting, we attach dropout layers (Srivastava et al., 2014) with probability 0.5 to the input and output of the LSTM layer. We use an Adam optimizer with batch size of 20, learning rate of 0.001 and linear learning rate decay.", "cite_spans": [ { "start": 117, "end": 142, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "We use the LSTM-CNN model as our baseline in all experiments. We train and test models on each genre and compare the within-genre results in Table 2. We also merge all genres and show the overall scores in the last column.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within-genre Results", "sec_num": "3.3" }, { "text": "Overall, with reliability-aware dynamic feature composition, our model achieves up to 6.2% absolute F-score gain on separate genres. T-test results show that the differences are considered to be statistically significant (p < 0.05) to statistically highly significant (p < 0.001).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within-genre Results", "sec_num": "3.3" }, { "text": "In Figure 4 , we visualize gates that control the mixture of hidden states and context-only features. Each block represents the average of output weights of a certain gate for the correspond- and \u2190 \u2212 g h show that for common words such as \"a\" and \"to\", the model mainly relies on their original hidden states. By contrast, the context-only feature gates \u2212 \u2192 g o and \u2190 \u2212 g o assign greater weights to the unknown word \"Linpien\". Meanwhile, the model barely uses any context-only features for words following \"Linpien\" (\"come\" in the forward model and \"like\" in the backward model) to avoid using unreliable features derived from an unknown word.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Within-genre Results", "sec_num": "3.3" }, { "text": "To our surprise, the model also emphasizes context-only features for the beginning and ending words. Their context-only features actually come from the zero vectors padded to the sequence during gate calculation. Our explanation is that these features may help the model distinguish the beginning and ending words that differ from other words in some aspects. For example, capitalization is usually an indicator of proper nouns for most words except for the first word of a sentence. Figure 4 : Visualization of reliability-aware gates. A darker color indicates a higher average weight.", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Within-genre Results", "sec_num": "3.3" }, { "text": "Different genres in OntoNotes not only differ in style but also cover different topics and hence different names. As Table 4 : Cross-genre performance on OntoNotes (Fscore, %).", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 124, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Cross-genre Results", "sec_num": "3.4" }, { "text": "In Table 4 , we compare the cross-genre performance between the baseline and our model. 
For most cross-genre pairs, our model outperforms the baseline and obtains up to 9.6% absolute gains in F-score.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Cross-genre Results", "sec_num": "3.4" }, { "text": "With dynamic feature composition, the crossgenre performance of our model even exceeds the within-genre performance of the baseline model in some cases. For example, when trained on the bn portion and tested on bc, our model achieves 84.8% F-score, which is 1.3% higher than the within-genre performance of the baseline model (83.5% F-score). Such generalization capability is important for real-word applications as it is infeasible to annotate training data for all possible scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-genre Results", "sec_num": "3.4" }, { "text": "In Table 5 , we show some typical name tagging errors corrected by our model. We highlight the difference between the outputs of the baseline model and our model in bold. We also underline words that probably have provided useful contextual clues.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "Identification Errors BASELINE: The 50-50 joint venture, which may be dubbed Eurodynamics , would have combined annual sales of at least #1.4 billion ($2.17 billion) and would be among the world's largest missile makers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "OUR MODEL: The 50-50 joint venture, which may be dubbed [ORG Eurodynamics] , would have combined annual sales of at least #1.4 billion ($2.17 billion) and would be among the world's largest missile makers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "BASELINE: The Tanshui of illustrations is a place of unblemished beauty, a myth that remains unshakeable. OUR MODEL: The [GPE Tanshui] of illustrations is a place of unblemished beauty, a myth that remains unshakeable. Classification Errors BASELINE: As [PER Syms] 's \"core business of off-price retailing grows, a small subsidiary that is operationally unrelated becomes a difficult distraction,\" said [PER Marcy Syms], president of the parent, in a statement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "OUR MODEL: As [ORG Syms] 's \"core business of off-price retailing grows, a small subsidiary that is operationally unrelated becomes a difficult distraction,\" said [PER Marcy Syms], president of the parent, in a statement. BASELINE: Workers at plants in [GPE Van Nuys] , [GPE Calif.] Character-level representations are particularly effective for words containing morphemes that are related to a certain type of names. For example, \"Eurodynamics\" in the first sentence consists of \"Euro-\" and \"dynamic\". The prefix \"Euro-\" often appears in European organization names such as \"EuroDisney\" (an entertainment resort) and \"Eu-roAtlantic\" (an airline), while \"dynamic\" is used in some company names such as Boston dynamics (a robotics company) and Beyerdynamic (an audio equipment manufacturer). 
Therefore, \"Eurodynamics\" is likely to be an organization rather than a person or location.", "cite_spans": [ { "start": 253, "end": 267, "text": "[GPE Van Nuys]", "ref_id": null }, { "start": 270, "end": 282, "text": "[GPE Calif.]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "However, for words like \"Tanshui\" (a town) in the second example, character-level representations may not provide much useful semantic information. In this case, contextual features (\"is a place\") play an important role in determining the type of this name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "Contextual features can be critical even for frequent names such as \"Jordan\" (can be a person or a country) and \"Thomson\" (can be various types of entities, including person, organization, city, and river). Take the third sentence in Table 5 as an example. The name \"Syms\" appears twice in the sentence, referring to the Syms Corp and Marcy Syms respectively. As they share the same wordand character-level representations, context clues such as \"core business\" and \"president\" are crucial to distinguish them. Similarly, \"Pontiac\" in the last example can be either a city or a car brand. Cities in its context (e.g., \"Van Nuys, Calif\", \"Oklahoma City\") help the model determine that the first \"Pontiac\" is more likely to be a GPE instead of an ORG.", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "Still, the contextual information utilized by the current model is not profound enough, and our model is not capable of conducting deep reasoning as human readers. For example, in the following sentence: \"In the middle of the 17th century the Ming dynasty loyalist Zheng Chenggong (also known as Koxinga) brought an influx of settlers to Taiwan from the Fujian and Guangdong regions of China.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "Although our model successfully identifies \"Zheng Chenggong\" as a person, it is not able to connect this name with \"Koxinga\" based on the expression \"also known as\" to further infer that \"Koxinga\" should also be a person.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.5" }, { "text": "Most existing methods treat name tagging as a sequence labeling task. Traditional methods leverage handcrafted features to capture textual signals and employ conditional random fields (CRF) to model label dependencies (Finkel et al., 2005; Settles, 2004; Leaman et al., 2008) .", "cite_spans": [ { "start": 218, "end": 239, "text": "(Finkel et al., 2005;", "ref_id": "BIBREF9" }, { "start": 240, "end": 254, "text": "Settles, 2004;", "ref_id": "BIBREF32" }, { "start": 255, "end": 275, "text": "Leaman et al., 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work Name Tagging Models", "sec_num": "4" }, { "text": "Bi-LSTM-CRF (Huang et al., 2015) combines word embedding and handcrafted features, integrates neural networks with CRF, and shows performance boost over previous methods. LSTM-CNN further utilizes CNN and illustrates the potential of capturing character-level signals (Chiu and Nichols, 2016) . 
LSTM-CRF and LSTM-CNNs-CRF are proposed to get rid of hand-crafted features and demonstrate the feasibility to fully rely on representation learning to capture textual features (Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018b) . Recently, language modeling methods are proven effective as the representation module for name tagging (Liu et al., 2018a; Peters et al., 2018; Akbik et al., 2018) . At the same time, there has been extensive research about cross-genre Dredze, 2017), crossdomain (Pan et al., 2013; He and Sun, 2017) , cross-time (Mota and Grishman, 2008) , crosstask (S\u00f8gaard and Goldberg, 2016; Liu et al., 2018b) , and cross-lingual (Yang et al., 2017; Lin et al., 2018) adaptation for name tagging training.", "cite_spans": [ { "start": 12, "end": 32, "text": "(Huang et al., 2015)", "ref_id": "BIBREF13" }, { "start": 268, "end": 292, "text": "(Chiu and Nichols, 2016)", "ref_id": "BIBREF5" }, { "start": 472, "end": 493, "text": "(Lample et al., 2016;", "ref_id": "BIBREF18" }, { "start": 494, "end": 512, "text": "Ma and Hovy, 2016;", "ref_id": "BIBREF23" }, { "start": 513, "end": 531, "text": "Liu et al., 2018b)", "ref_id": "BIBREF22" }, { "start": 637, "end": 656, "text": "(Liu et al., 2018a;", "ref_id": "BIBREF21" }, { "start": 657, "end": 677, "text": "Peters et al., 2018;", "ref_id": "BIBREF29" }, { "start": 678, "end": 697, "text": "Akbik et al., 2018)", "ref_id": "BIBREF0" }, { "start": 770, "end": 815, "text": "Dredze, 2017), crossdomain (Pan et al., 2013;", "ref_id": null }, { "start": 816, "end": 833, "text": "He and Sun, 2017)", "ref_id": "BIBREF11" }, { "start": 847, "end": 872, "text": "(Mota and Grishman, 2008)", "ref_id": "BIBREF25" }, { "start": 885, "end": 913, "text": "(S\u00f8gaard and Goldberg, 2016;", "ref_id": "BIBREF33" }, { "start": 914, "end": 932, "text": "Liu et al., 2018b)", "ref_id": "BIBREF22" }, { "start": 953, "end": 972, "text": "(Yang et al., 2017;", "ref_id": "BIBREF37" }, { "start": 973, "end": 990, "text": "Lin et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work Name Tagging Models", "sec_num": "4" }, { "text": "Unlike these models, although we also aim to enhance the performance on new data, we achieve this by improving the generalization capability of the model so that it can work better on unknown new data instead of transferring it to a known target setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work Name Tagging Models", "sec_num": "4" }, { "text": "Recent advances on representation learning allow us to capture textual signals in a data-driven manner. Based on the distributional hypothesis (i.e., \"a word is characterized by the company it keeps\" (Harris, 1954)), embedding methods represent each word as a dense vector, while preserving their syntactic and semantic information in a context-agnostic manner (Mikolov et al., 2013; Pennington et al., 2014) . Recent work shows that word embeddings can cover textual information of various levels (Artetxe et al., 2018) and improve name tagging performance significantly (Cherry and Guo, 2015) . 
Still, due to the long-tail distri-bution of word frequency, embedding vectors usually have inconsistent reliability, and such inconsistency has been long overlooked.", "cite_spans": [ { "start": 361, "end": 383, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF24" }, { "start": 384, "end": 408, "text": "Pennington et al., 2014)", "ref_id": "BIBREF28" }, { "start": 498, "end": 520, "text": "(Artetxe et al., 2018)", "ref_id": "BIBREF1" }, { "start": 572, "end": 594, "text": "(Cherry and Guo, 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word Representation Models", "sec_num": null }, { "text": "Meanwhile, language models such as ELMo, Flair, and BERT have shown their effectiveness on constructing representations in a context-aware manner (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2018) . These models are designed to better capture the context information by pre-training, while our model dynamically composes representations in a reliability-aware manner. Therefore, our model and these efforts have the potential to mutually enhance each other.", "cite_spans": [ { "start": 146, "end": 167, "text": "(Peters et al., 2018;", "ref_id": "BIBREF29" }, { "start": 168, "end": 187, "text": "Akbik et al., 2018;", "ref_id": "BIBREF0" }, { "start": 188, "end": 208, "text": "Devlin et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Word Representation Models", "sec_num": null }, { "text": "In addition, (Kim et al., 2016) and (Rei et al., 2016 ) also mix word-and character-level representations using gating mechanisms. They use a single gate to balance the representations in a reliability-agnostic way.", "cite_spans": [ { "start": 13, "end": 31, "text": "(Kim et al., 2016)", "ref_id": "BIBREF15" }, { "start": 36, "end": 53, "text": "(Rei et al., 2016", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Word Representation Models", "sec_num": null }, { "text": "We propose a name tagging model that is able to dynamically compose features depending on the quality of input word embeddings. Experiments on the benchmark data sets in both within-genre and cross-genre settings demonstrate the effectiveness of our model and verify our intuition to introduce reliability signals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Our future work includes integrating advanced word representation methods (e.g., ELMo and BERT) and extending the proposed model to other tasks, such as event extraction and co-reference resolution. 
We also plan to incorporate external knowledge and common sense as additional signals into our architecture as they are important for human readers to recognize names but still absent from the current model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "https://cemantix.org/data/ontonotes.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the International Con- ference on Computational Linguistics (COLING 2018).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, Inigo Lopez-Gazpio, and Eneko Agirre. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In Pro- ceedings of the Conference on Computational Natu- ral Language Learning (CoNLL 2018).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A survey of word embeddings evaluation methods", "authors": [ { "first": "Amir", "middle": [], "last": "Bakarov", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.09536" ] }, "num": null, "urls": [], "raw_text": "Amir Bakarov. 2018. A survey of word em- beddings evaluation methods. arXiv preprint arXiv:1801.09536.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. 
In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The unreasonable effectiveness of word representations for twitter named entity recognition", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and Hongyu Guo. 2015. The unreason- able effectiveness of word representations for twit- ter named entity recognition. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2015).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association of Computational Linguistics", "authors": [ { "first": "Jason", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans- actions of the Association of Computational Linguis- tics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of The Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of The Annual Meet- ing of the Association for Computational Linguistics (ACL 2015).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Finkel", "suffix": "" }, { "first": "T", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Finkel, T. Grenager, and C. Manning. 2005. In- corporating non-local information into information extraction systems by Gibbs sampling. In ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distributional structure. Word", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S Harris. 1954. Distributional structure. Word.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A unified model for cross-domain and semi-supervised named entity recognition in chinese social media", "authors": [ { "first": "Hangfeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In Proceedings of AAAI Conference on Artificial Intelligence (AAAI 2017).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural computation.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bidirectional lstm-crf models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An overview of the tac2011 knowledge base population track", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji, Ralph Grishman, and Hoa Trang Dang. 2011. An overview of the tac2011 knowledge base pop- ulation track. In Proceedings of the Text Analysis Conference (TAC 2011).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Character-aware neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2016. Character-aware neural language models. In AAAI Conference on Artificial Intelli- gence (AAAI 2016).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the International Conference on International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the International Conference on International Conference on Ma- chine Learning (ICML 2001).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recurrent convolutional neural networks for text classification", "authors": [ { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jian Zhao. 2015. 
Recurrent convolutional neural networks for text classification. In Proceedings of AAAI Conference on Artificial Intelligence (AAAI 2015).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Lin- guistics (NAACL HLT 2016).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Banner: an executable survey of advances in biomedical named entity recognition", "authors": [ { "first": "Robert", "middle": [], "last": "Leaman", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2008, "venue": "Pacific Symposium on Biocomputing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Leaman, Graciela Gonzalez, et al. 2008. Ban- ner: an executable survey of advances in biomedical named entity recognition. In Pacific Symposium on Biocomputing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A multi-lingual multi-task architecture for low-resource sequence labeling", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shengqi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL 2018).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient contextualized representation: Language model pruning for sequence labeling", "authors": [ { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. 2018a. Efficient contextualized repre- sentation: Language model pruning for sequence la- beling. 
In EMNLP.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Empower sequence labeling with task-aware neural language model", "authors": [ { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Frank", "middle": [ "Fangzheng" ], "last": "Xu", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018b. Empower sequence la- beling with task-aware neural language model. In AAAI Conference on Artificial Intelligence (AAAI 2018).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of The Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Is this NE tagger getting old?", "authors": [ { "first": "Cristina", "middle": [], "last": "Mota", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristina Mota and Ralph Grishman. 2008. Is this NE tagger getting old? 
In Proceedings of the Interna- tional Conference on Language Resources and Eval- uation (LREC 2008).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transfer joint embedding for cross-domain named entity recognition", "authors": [ { "first": "Zhiqiang", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Toh", "suffix": "" }, { "first": "", "middle": [], "last": "Su", "suffix": "" } ], "year": 2013, "venue": "ACM Transactions on Information Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan, Zhiqiang Toh, and Jian Su. 2013. Transfer joint embedding for cross-domain named entity recognition. ACM Transactions on Informa- tion Systems.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-task domain adaptation for sequence tagging", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanyun Peng and Mark Dredze. 2017. Multi-task do- main adaptation for sequence tagging. In Proceed- ings of the 2nd Workshop on Representation Learn- ing for NLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2018).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Towards robust linguistic analysis using ontonotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhong", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proceed- ings of the Conference on Computational Natural Language Learning (CoNLL 2013).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Attending to characters in neural sequence labeling models", "authors": [ { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Gamal", "middle": [], "last": "Crichton", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2016, "venue": "Proceedings of International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marek Rei, Gamal Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence labeling models. In Proceedings of International Conference on Computational Linguistics (COLING 2016).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Biomedical named entity recognition using conditional random fields and rich feature sets", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2004. Biomedical named entity recog- nition using conditional random fields and rich fea- ture sets. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Deep multi-task learning with low level tasks supervised at lower layers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. 
In Proceedings of The Annual Meet- ing of the Association for Computational Linguistics (ACL 2016).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning sentimentspecific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment- specific word embedding for twitter sentiment clas- sification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2014).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Ontonotes release 5.0 LDC2013T19. Linguistic Data Consortium", "authors": [ { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Franchini", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 LDC2013T19. 
Linguistic Data Consortium, Philadelphia, PA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Transfer learning for sequence tagging with hierarchical recurrent networks", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks. In Pro- ceedings of International Conference on Learning Representations (ICLR 2017).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierar- chical attention networks for document classifica- tion. In Proceedings of the Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL 2016).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Dynamic feature composition at the word representation level.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "content": "
[Figure: character-level representation of the example word "MedChem": character embeddings -> convolution layer -> max pooling layer -> fully connected layer (outputs x^c, x^ca, x^cc), combined with the word embedding x^w through reliability-aware gates driven by the reliability signals x^r.]
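To make the composition step concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the layer sizes, the CNN configuration, and the exact form of the gate (a sigmoid over the concatenated word embedding, character representation, and reliability signals) are all illustrative assumptions.

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    # Character embeddings -> convolution -> max pooling -> fully connected layer,
    # as in the figure above (illustrative sizes).
    def __init__(self, n_chars, char_dim=50, n_filters=100, out_dim=100):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, out_dim)

    def forward(self, char_ids):                   # (batch, max_word_len)
        x = self.embed(char_ids).transpose(1, 2)   # (batch, char_dim, len)
        x = torch.relu(self.conv(x))               # (batch, n_filters, len)
        x, _ = x.max(dim=2)                        # max pooling over characters
        return self.fc(x)                          # (batch, out_dim)

class ReliabilityGate(nn.Module):
    # Mix the word embedding x_w and the character representation x_c with a gate
    # conditioned on the reliability signals x_r; x_w and x_c are assumed to share
    # the same dimensionality so they can be combined element-wise.
    def __init__(self, word_dim, rel_dim):
        super().__init__()
        self.gate = nn.Linear(2 * word_dim + rel_dim, word_dim)

    def forward(self, x_w, x_c, x_r):
        g = torch.sigmoid(self.gate(torch.cat([x_w, x_c, x_r], dim=-1)))
        # For a rare word such as "MedChem", a low-reliability signal should push
        # the gate toward the character-level features and away from x_w.
        return g * x_w + (1.0 - g) * x_c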
", "type_str": "table", "text": "dw is concatenated as additional features.", "num": null, "html": null }, "TABREF4": { "content": "
et al.'s scripts 3 and therefore follow their split of
training, development, and test sets.
", "type_str": "table", "text": "OntoNotes genres.", "num": null, "html": null }, "TABREF5": { "content": "", "type_str": "table", "text": "bc bn mz nw tc wb all LSTM-CNN 83.5 89.9 86.6 92.8 65.4 79.4 90.1 Rei et al. (2016) 85.4 90.4 87.2 92.5 71.1 77.4 90.0 Our Model* 86.2 91.2 89.8 92.9 71.3 78.5 90.3 Our Model 86.4 91.4 90.0 93.0 71.6 79.1 90.6", "num": null, "html": null }, "TABREF6": { "content": "
", "type_str": "table", "text": "Performance on OntoNotes (F-score, %).", "num": null, "html": null }, "TABREF8": { "content": "
Train \ Test    bc      bn      mz      nw      tc      wb
bc              36.3    53.4    73.2    68.9    81.4    51.5
bn              43.9    28.5    72.8    63.6    67.8    49.9
mz              81.3    79.8    41.1    82.1    88.1    86.4
nw              40.2    43.8    70.8    33.1    55.4    55.1
tc              82.4    83.2    93.4    87.0    67.8    79.0
wb              54.6    60.6    75.4    70.8    85.3    53.4
", "type_str": "table", "text": "Table shows, when tested onanother genre, the model encounters a high percentage of names that are unseen in the training genre. For example, 81.3% names are unseen when we train a model on mz and test it on bc. Therefore, through cross-genre experiments, we can evaluate the generalization capability of the model.", "num": null, "html": null }, "TABREF9": { "content": "
Baseline Model
Train \ Test    bc      bn      mz      nw      tc      wb
bc              83.5    82.4    70.4    67.9    74.8    75.2
bn              83.5    89.9    78.7    75.6    76.8    77.1
mz              59.2    70.7    86.6    65.9    66.1    58.0
nw              82.4    85.4    72.6    92.8    74.4    76.7
tc              53.2    51.2    34.0    38.9    65.4    44.3
wb              71.5    78.1    67.5    66.6    70.1    79.4

Our Model
Train \ Test    bc      bn      mz      nw      tc      wb
bc              86.4    82.5    76.4    70.6    74.7    76.1
bn              84.8    91.4    78.7    79.2    76.5    76.1
mz              64.3    73.8    90.0    70.5    57.5    59.3
nw              81.5    86.1    74.0    93.0    74.9    78.3
tc              58.2    55.6    43.6    47.1    71.6    50.4
wb              76.3    78.4    70.5    69.6    72.3    79.1
", "type_str": "table", "text": "High percentage of unseen names (%).", "num": null, "html": null }, "TABREF11": { "content": "", "type_str": "table", "text": "Name tagging result comparison between the baseline model and our model.", "num": null, "html": null } } } }