{
"paper_id": "N18-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:54:42.035769Z"
},
"title": "Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {
"settlement": "Edmonton",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {
"settlement": "Edmonton",
"country": "Canada"
}
},
"email": "denilson@ualberta.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of crossentropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-theart on established benchmarks for the task.",
"pdf_parse": {
"paper_id": "N18-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of crossentropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-theart on established benchmarks for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Fine-grained Entity Type Classification (FETC) aims at labeling entity mentions in context with one or more specific types organized in a hierarchy (e.g., actor as a subtype of artist, which in turn is a subtype of person). Fine-grained types help in many applications, including relation extraction (Mintz et al., 2009) , question answering (Li and Roth, 2002) , entity linking (Lin et al., 2012) , knowledge base completion (Dong et al., 2014) and entity recommendation (Yu et al., 2014) . Because of the high cost in labeling large training corpora with fine-grained types, current FETC systems resort to distant supervision (Mintz et al., 2009) and annotate mentions in the training corpus with all types associated with the entity in a knowledge graph. This is illustrated in Figure 1 , with three training sentences about entity Steve Kerr. Note that while the entity belongs to three fine-grained types (person, athlete, and coach), some sentences provide evidence of only some of the types: person and coach from S1, person and athlete from S2, and just person for S3. Clearly, direct distant supervision leads to noisy training data which can hurt the accuracy of the FETC model.",
"cite_spans": [
{
"start": 300,
"end": 320,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 342,
"end": 361,
"text": "(Li and Roth, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 379,
"end": 397,
"text": "(Lin et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 426,
"end": 445,
"text": "(Dong et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 472,
"end": 489,
"text": "(Yu et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 628,
"end": 648,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 781,
"end": 789,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One kind of noise introduced by distant supervision is assigning labels that are out-of-context (athlete in S1 and coach in S2) for the sentence. Current FETC systems sidestep the issue by either ignoring out-of-context labels or using simple pruning heuristics like discarding training examples with entities assigned to multiple types in the knowledge graph. However, both strategies are inelegant and hurt accuracy. Another source of noise introduced by distant supervision is when the type is overly-specific for the context. For instance, example S3 does not support the inference that Mr. Kerr is either an athlete or a coach. Since existing knowledge graphs give more attention to notable entities with more specific types, overly-specific labels bias the model towards popular subtypes instead of generic ones, i.e., preferring athlete over person. Instead of correcting for this bias, most existing FETC systems ignore the problem and treat each type equally and independently, ignoring that many types are semantically related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides failing to handle noisy training data there are two other limitations of previous FETC approaches we seek to address. First, they rely on hand-crafted features derived from various NLP tools; therefore, the inevitable errors introduced by these tools propagate to the FETC systems via the training data. Second, previous systems treat FETC as a multi-label classification problem: during type inference they predict a plausibility score for each type, and, then, either classify types with scores above a threshold (Mintz et al., 2009; Gillick et al., 2014; Shimaoka et al., 2017) or perform a top-down search in the given type hierarchy (Ren et al., 2016a; Abhishek et al., 2017) .",
"cite_spans": [
{
"start": 523,
"end": 543,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 544,
"end": 565,
"text": "Gillick et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 566,
"end": 588,
"text": "Shimaoka et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 646,
"end": 665,
"text": "(Ren et al., 2016a;",
"ref_id": "BIBREF12"
},
{
"start": 666,
"end": 688,
"text": "Abhishek et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a neural network based model to overcome the drawbacks of existing FETC systems mentioned above. With publicly available word embeddings as input, we learn two different entity representations and use bidirectional long-short term memory (LSTM) with attention to learn the context representation. We propose a variant of cross entropy loss function to handle out-of-context labels automatically during the training phase. Also, we introduce hierarchical loss normalization to adjust the penalties for correlated types, allowing our model to understand the type hierarchy and alleviate the negative effect of overly-specific labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions:",
"sec_num": null
},
{
"text": "Moreover, in order to simplify the problem and take advantage of previous research on hierarchical classification, we transform the multi-label classification problem to a single-label classification problem. Based on the assumption that each mention can only have one type-path depending on the context, we leverage the fact that type hierarchies are forests, and represent each type-path uniquely by the terminal type (which might not be a leaf node). For Example, type-path rootperson-coach can be represented as just coach, while root-person can be unambiguously represented as the non-leaf person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions:",
"sec_num": null
},
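The following is a minimal Python sketch, not the authors' code, of the single-label reduction described above: assuming a toy child-to-parent map for a hypothetical hierarchy, each type-path can be stored as just its terminal type and recovered on demand.

```python
# Illustrative sketch (not the authors' code): a type-path is represented by its
# terminal type alone and recovered from a toy child -> parent map. The PARENT
# table below is a made-up fragment of a hierarchy like the one in Figure 1.
PARENT = {"person": None, "artist": "person", "actor": "artist",
          "athlete": "person", "coach": "person"}

def type_path(terminal):
    """Recover the full root-to-terminal type-path from the terminal type alone."""
    path, node = [], terminal
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return list(reversed(path))

print(type_path("coach"))   # ['person', 'coach']
print(type_path("person"))  # ['person'] -- a non-leaf terminal is still unambiguous
```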
{
"text": "Finally, we report on an experimental validation against the state-of-the-art on established bench-marks that shows that our model can adapt to noise in training data and consistently outperform previous methods. In summary, we describe a single, much simpler and more elegant neural network model that attempts FETC \"end-to-end\" without post-processing or ad-hoc features and improves on the state-of-the-art for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions:",
"sec_num": null
},
{
"text": "Fine-Grained Entity Type Classification: The first work to use distant supervision (Mintz et al., 2009) to induce a large -but noisy -training set and manually label a significantly smaller dataset to evaluate their FETC system, was Ling and Weld (2012) who introduced both a training and evaluation dataset FIGER (GOLD). They used a linear classifier perceptron for multi-label classification. While initial work largely assumed that mention assignments could be done independently of the mention context, Gillick et al. (2014) introduced the concept of context-dependent FETC where the types of a mention are constrained to what can be deduced from its context and introduced a new OntoNotes-derived (Weischedel et al., 2011) manually annotated evaluation dataset. In addition, they addressed the problem of label noise induced by distant supervision and proposed three label cleaning heuristics. Yogatama et al. (2015) proposed an embedding-based model where userdefined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels. Ma et al. (2016) presented a label embedding method that incor- porates prototypical and hierarchical information to learn pre-trained label embeddings and adpated a zero-shot framework that can predict both seen and previously unseen entity types. Shimaoka et al. (2016) proposed an attentive neural network model that used LSTMs to encode the context of an entity mention and used an attention mechanism to allow the model to focus on relevant expressions in such context. Shimaoka et al. (2017) summarizes many neural architectures for FETC task. These models ignore the outof-context noise, that is, they assume that all labels obtained via distant supervision are \"correct\" and appropriate for every context in the training corpus. In our paper, a simple yet effective variant of cross entropy loss function is proposed to handle the problem of out-of-context noise.",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 233,
"end": 253,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF8"
},
{
"start": 507,
"end": 528,
"text": "Gillick et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 702,
"end": 727,
"text": "(Weischedel et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 899,
"end": 921,
"text": "Yogatama et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 1093,
"end": 1109,
"text": "Ma et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 1342,
"end": 1364,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Ren et al. (2016a) have proposed AFET, an FETC system, that separates the loss function for clean and noisy entity mentions and uses labellabel correlation information obtained by given data in its parametric loss function. Considering the noise reduction aspects for FETC systems, Ren et al. (2016b) introduced a method called LNR to reduce label noise without data loss, leading to significant performance gains on both the evaluation dataset of FIGER(GOLD) and OntoNotes. Although these works consider both out-of-context noise and overly-specific noise, they rely on handcrafted features which become an impediment to further improvement of the model performance. For LNR, because the noise reduction step is separated from the FETC model, the inevitable errors introduced by the noise reduction will be propagated into the FETC model which is undesirable. In our FETC system, we handle the problem induced from irrelevant noise and overly-specific noise seamlessly inside the model and avoid the usage of hand-crafted features.",
"cite_spans": [
{
"start": 282,
"end": 300,
"text": "Ren et al. (2016b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most recently, following the idea from AFET, Abhishek et al. (2017) proposed a simple neural network model which incorporates noisy label information using a variant of non-parametric hinge loss function and gain great performance improvement on FIGER(GOLD). However, their work overlooks the effect of overly-specific noise, treating each type label equally and independently when learning the classifiers and ignores possible correlations among types.",
"cite_spans": [
{
"start": 45,
"end": 67,
"text": "Abhishek et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hierarchical Loss Function: Due to the intrinsic type hierarchy existing in the task of FETC, it is natural to adopt the idea of hierarchical loss function to adjust the penalties for FETC mistakes depending on how far they are in the hierarchy. The penalty for predicting person instead of athlete should less than the penalty for predicting organization. To the best of our knowledge, the first use of a hierarchical loss function was originally introduced in the context of document categorization with support vector machines (Cai and Hofmann, 2004) . However, that work assumed that weights to control the hierarchical loss would be solicited from domain experts, which is inapplicable for FETC. Instead, we propose a method called hierarchical loss normalization which can overcome the above limitations and be incorporated with cross entropy loss used in our neural architecture. Table 1 provides a summary comparison of our work against the previous state-of-the-art in fine grained entity typing.",
"cite_spans": [
{
"start": 530,
"end": 553,
"text": "(Cai and Hofmann, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 887,
"end": 894,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our task is to automatically reveal the type information for entity mentions in context. The input is a knowledge graph \u03a8 with schema Y \u03a8 , whose types are organized into a type hierarchy Y, and an automatically labeled training corpus D obtained by distant supervision with Y. The output is a type-path in Y for each named entity mentioned in a test sentence from a corpus D t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "More precisely, a labeled corpus for entity type classification consists of a set of extracted entity mentions {m i } N i=1 (i.e., token spans representing entities in text), the context (e.g., sentence, paragraph) of each mention {c i } N i=1 , and the candidate type sets {Y i } N i=1 automatically generated for each mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "We represent the training corpus using a set of mention-based triples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "D = {(m i , c i , Y i )} N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": ". If Y i is free of out-of-context noise, the type labels for each m i should form a single type-path in Y i . However, Y i may contain type-paths that are irrelevant to m i in c i if there exists out-of-context noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "We denote the type set including all terminal types for each type-path as the target type set Y t i . In the example type hierarchy shown in Figure 1 , if Y i contains types person, athlete, coach, Y t i should contain athlete, coach, but not person. In order to understand the trade-off between the effect of out-of-context noise and the size of the training set, we report on experiments with two different training sets: D f iltered only with triples whose Y i form a single type-path in D, and D raw with all triples.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
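As a concrete illustration of the target type set and the D_filtered / D_raw split, here is a small hedged Python sketch; the PARENT map, the helper names, and the example candidate set are invented for the example and are not part of the released code.

```python
# Illustrative sketch (not the authors' code) of deriving the target type set
# Y^t_i (terminal types only) from a candidate set Y_i, and of the D_filtered /
# D_raw split described above. PARENT is a toy child -> parent map.
PARENT = {"person": None, "athlete": "person", "coach": "person", "organization": None}

def ancestors(t):
    out, node = set(), PARENT[t]
    while node is not None:
        out.add(node)
        node = PARENT[node]
    return out

def target_types(candidates):
    """Keep only terminal types: drop any candidate that is an ancestor of another."""
    return {t for t in candidates if not any(t in ancestors(o) for o in candidates if o != t)}

def is_single_path(candidates):
    """A clean (D_filtered) example has exactly one terminal type, i.e. one type-path."""
    return len(target_types(candidates)) == 1

Y_i = {"person", "athlete", "coach"}
print(target_types(Y_i))    # {'athlete', 'coach'} -- person is dropped
print(is_single_path(Y_i))  # False -> this triple only appears in D_raw
```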
{
"text": "We formulate fine-grained entity classification problem as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "Definition 1 Given an entity mention m i = (w p , . . . , w t ) (p, t \u2208 [1, T ], p \u2264 t) and its context c i = (w 1 , . . . , w T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "where T is the context length, our task is to predict its most specific type\u0177 i depending on the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
{
"text": "In practice, c i is generated by truncating the original context with words beyond the context window size C both to the left and to the right of m i . Specifically, we compute a probability distribution over all the K = |Y| types in the target type hierarchy Y. The type with the highest probability is classified as the predicted type\u0177 i which is the terminal type of the predicted type-path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Problem",
"sec_num": "3"
},
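A minimal sketch of the context truncation step, assuming 0-based token indices for the mention span; the function name and example sentence are made up.

```python
# Illustrative sketch (not the authors' code): truncating the context to a
# window of C tokens on each side of the mention span [p, t] (0-based here).
def truncate_context(tokens, p, t, C):
    left = max(0, p - C)
    right = min(len(tokens), t + 1 + C)
    return tokens[left:right], p - left, t - left  # truncated context, new mention offsets

tokens = "former Bulls guard Steve Kerr was hired as head coach".split()
ctx, new_p, new_t = truncate_context(tokens, 3, 4, C=2)
print(ctx)           # ['Bulls', 'guard', 'Steve', 'Kerr', 'was', 'hired']
print(new_p, new_t)  # 2 3
```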
{
"text": "This section details our Neural Fine-Grained Entity Type Classification (NFETC) model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "As stated in Section 3, the input is an entity mention m i with its context c i . First, we transform each word in the context c i into a real-valued vector to provide lexical-semantic features. Given a word embedding matrix W wrd of size d w \u00d7 |V |, where V is the input vocabulary and d w is the size of word embedding, we map every w i to a column vector w d i \u2208 R dw . To additionally capture information about the relationship to the target entities, we incorporate word position embeddings (Zeng et al., 2014) to reflect relative distances between the i-th word to the entity mention. Every relative distance is mapped to a randomly initialized position vector in R dp , where d p is the size of position embedding. For a given word, we obtain the position vector w p i . The overall embedding for the i-th word is",
"cite_spans": [
{
"start": 496,
"end": 515,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representation",
"sec_num": "4.1"
},
{
"text": "w E i = [(w d i ) , (w p i ) ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representation",
"sec_num": "4.1"
},
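The per-token input representation can be pictured with the following NumPy sketch; it is not the authors' TensorFlow implementation, and the vocabulary, sizes, and clipping of relative distances to the window C are illustrative assumptions.

```python
# Illustrative NumPy sketch (not the authors' TensorFlow code) of the input
# representation: each token embedding w^E_i concatenates a pre-trained word
# vector (size d_w) with a randomly initialized position embedding (size d_p)
# indexed by the token's clipped relative distance to the mention. Toy sizes.
import numpy as np

d_w, d_p, C = 4, 2, 10                      # toy sizes; C bounds relative distances
vocab = {"<unk>": 0, "Kerr": 1, "coach": 2, "was": 3}
W_word = np.random.randn(len(vocab), d_w)   # stand-in for pre-trained embeddings
W_pos = np.random.randn(2 * C + 1, d_p)     # one vector per clipped relative distance

def token_embedding(word, i, mention_index):
    w_d = W_word[vocab.get(word, 0)]
    rel = np.clip(i - mention_index, -C, C) + C   # shift to a valid row index
    w_p = W_pos[rel]
    return np.concatenate([w_d, w_p])             # w^E_i, shape (d_w + d_p,)

sentence = ["Kerr", "was", "coach"]
X = np.stack([token_embedding(w, i, mention_index=0) for i, w in enumerate(sentence)])
print(X.shape)  # (3, 6)
```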
{
"text": "For the context c i , we want to apply a non-linear transformation to the vector representation of c i to derive a context feature vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "h i = f (c i ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "given a set of parameters \u03b8. In this paper, we adopt bidirectional LSTM with d s hidden units as f (c i ; \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "The network contains two sub-networks for the forward pass and the backward pass respectively. Here, we use element-wise sum to combine the forward and backward pass outputs. The output of the i-th word in shown in the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [ \u2212 \u2192 h i \u2295 \u2190 \u2212 h i ]",
"eq_num": "(1)"
}
],
"section": "Context Representation",
"sec_num": "4.2"
},
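A tiny sketch of Equation 1, using random stand-ins for the directional LSTM outputs rather than a real LSTM:

```python
# Illustrative sketch (not the authors' code) of Equation 1: the forward and
# backward LSTM outputs at each position are combined by element-wise sum.
# The directional outputs below are random stand-ins with d_s hidden units.
import numpy as np

T, d_s = 5, 8                        # toy context length and hidden size
h_forward = np.random.randn(T, d_s)  # outputs of the forward pass
h_backward = np.random.randn(T, d_s) # outputs of the backward pass
H = h_forward + h_backward           # h_i for i = 1..T, shape (T, d_s)
print(H.shape)  # (5, 8)
```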
{
"text": "Following Zhou et al. 2016, we employ word-level attention mechanism, which makes our model able to softly select the most informative words during training. Let H be a matrix consisting of output vectors [h 1 , h 2 , . . . , h T ] that the LSTM produced. The context representation r is formed by a weighted sum of these output vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G = tanh(H) (2) \u03b1 = sof tmax(w G)",
"eq_num": "(3)"
}
],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "r c = H\u03b1 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
{
"text": "H \u2208 R ds\u00d7T , w is a trained parameter vec- tor. The dimension of w, \u03b1, r c are d s , T, d s respec- tively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representation",
"sec_num": "4.2"
},
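Equations 2-4 amount to a few matrix operations, shown in the hedged NumPy sketch below; H, w, and the sizes are random stand-ins rather than trained values.

```python
# Illustrative NumPy sketch (not the authors' code) of the word-level attention
# in Equations 2-4: scores over the T outputs come from a trained vector w, and
# the context representation r_c is the attention-weighted sum of the columns of H.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

d_s, T = 8, 5
H = np.random.randn(d_s, T)     # LSTM outputs stacked column-wise
w = np.random.randn(d_s)        # trained parameter vector (random here)

G = np.tanh(H)                  # (d_s, T)
alpha = softmax(w @ G)          # (T,) attention weights over words
r_c = H @ alpha                 # (d_s,) context representation
print(alpha.sum(), r_c.shape)   # ~1.0 (8,)
```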
{
"text": "Averaging encoder: Given the entity mention m i = (w p , . . . , w t ) and its length L = t \u2212 p + 1, the averaging encoder computes the average word embedding of the words in m i . Formally, the averaging representation r a of the mention is computed as follows: This relatively simple method for composing the mention representation is motivated by it being less prone to overfitting (Shimaoka et al., 2017) .",
"cite_spans": [
{
"start": 385,
"end": 408,
"text": "(Shimaoka et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Representation",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r a = 1 L t i=p w d i",
"eq_num": "(5)"
}
],
"section": "Mention Representation",
"sec_num": "4.3"
},
{
"text": "LSTM encoder: In order to capture more semantic information from the mentions, we add one token before and another after the target entity to the mention. The extended mention can be represented as m * i = (w p\u22121 , w p , . . . , w t , w t+1 ). The standard LSTM is applied to the mention sequence from left to right and produces the outputs h p\u22121 , . . . , h t+1 . The last output h t+1 then serves as the LSTM representation r l of the mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Representation",
"sec_num": "4.3"
},
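The two mention encoders reduce to an average and a last-state lookup, as in this illustrative sketch (the LSTM outputs are random placeholders, not a trained encoder):

```python
# Illustrative sketch (not the authors' code) of the two mention encoders:
# r_a averages the word embeddings of the mention tokens (Equation 5), and r_l
# takes the last output of an LSTM run over the mention extended by one token
# on each side; the LSTM outputs here are random stand-ins.
import numpy as np

d_w, d_s = 6, 8
mention_word_vecs = np.random.randn(3, d_w)  # w^d_p .. w^d_t for a 3-token mention
r_a = mention_word_vecs.mean(axis=0)         # averaging representation, shape (d_w,)

lstm_outputs = np.random.randn(5, d_s)       # stand-ins for h_{p-1} .. h_{t+1}
r_l = lstm_outputs[-1]                       # last output h_{t+1} as the LSTM representation
print(r_a.shape, r_l.shape)                  # (6,) (8,)
```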
{
"text": "We concatenate context representation and two mention representations together to form the overall feature representation of the input R = [r c , r a , r l ]. Then we use a softmax classifier to predict\u0177 i from a discrete set of classes for a entity mention m and its context c with R as input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|m, c) = softmax(W R + b)",
"eq_num": "(6)"
}
],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = arg max yp (y|m, c)",
"eq_num": "(7)"
}
],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "where W can be treated as the learned type embeddings and b is the bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
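Equations 6-7 correspond to a single dense softmax layer over the concatenated features, as in the following NumPy sketch with toy dimensions (not the authors' implementation):

```python
# Illustrative NumPy sketch (not the authors' code) of Equations 6-7: the
# concatenated feature vector R is fed to a softmax layer over the K types,
# and the predicted type is the arg max. Dimensions are toy values.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

d_s, d_w, K = 8, 6, 10
r_c, r_a, r_l = np.random.randn(d_s), np.random.randn(d_w), np.random.randn(d_s)
R = np.concatenate([r_c, r_a, r_l])  # overall feature representation
W = np.random.randn(K, R.shape[0])   # rows act as learned type embeddings
b = np.zeros(K)

p = softmax(W @ R + b)               # p(y | m, c)
y_hat = int(np.argmax(p))            # predicted terminal type index
print(p.shape, y_hat)
```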
{
"text": "The traditional cross-entropy loss function is represented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "J(\u03b8) = \u2212 1 N N i=1 log(p(y i |m i , c i )) + \u03bb \u0398 2 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "where y i is the only element in Y t i and (m i , c i , Y i ) \u2208 D f iltered . \u03bb is an L2 regularization hyperparameter and \u0398 denotes all parameters of the considered model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "In order to handle data with out-of-context noise (in other words, with multiple labeled types) and take full advantage of them, we introduce a simple yet effective variant of the cross-entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "J(\u03b8) = \u2212 1 N N i=1 log(p(y * i |m i , c i )) + \u03bb \u0398 2 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "y * i = arg max y\u2208Y t ip (y|m i , c i ) and (m i , c i , Y i ) \u2208 D raw .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
{
"text": "With this loss function, we assume that the type with the highest probability among Y t i during training as the correct type. If there is only one element in Y t i , this loss function is equivalent to the cross-entropy loss function. Wherever there are multiple elements, it can filter the less probable types based on the local context automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "4.4"
},
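The loss variant of Equation 9 only changes which gold label enters the log term, as the following hedged sketch shows; the function name and toy probabilities are invented, and L2 regularization is omitted.

```python
# Illustrative sketch (not the authors' code) of the loss variant in Equation 9:
# for an example with several candidate terminal types Y^t_i, the loss only uses
# the candidate the current model already scores highest, so the other noisy
# labels are filtered automatically. L2 regularization is omitted for brevity.
import numpy as np

def variant_cross_entropy(p, target_type_indices):
    """p: predicted distribution over all K types; target_type_indices: indices in Y^t_i."""
    best = max(target_type_indices, key=lambda y: p[y])  # y*_i = argmax over Y^t_i
    return -np.log(p[best])

p = np.array([0.05, 0.6, 0.25, 0.1])     # toy distribution over 4 types
print(variant_cross_entropy(p, [1, 2]))  # uses type 1 (prob 0.6) -> ~0.51
print(variant_cross_entropy(p, [2]))     # single label: ordinary cross-entropy
```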
{
"text": "Since the fine-grained types tend to form a forest of type hierarchies, it is unreasonable to treat every type equally. Intuitively, it is better to predict an ancestor type of the true type than some other unrelated type. For instance, if one example is labeled as athlete, it is reasonable to predict its type as person. However, predicting other high level types like location or organization would be inappropriate. In other words, we want the loss function to penalize less the cases where types are related. Based on the above idea, we adjust the estimated probability as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Loss Normalization",
"sec_num": "4.5"
},
{
"text": "p * (\u0177|m, c) = p(\u0177|m, c) + \u03b2 * t\u2208\u0393 p(t|m, c) (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Loss Normalization",
"sec_num": "4.5"
},
{
"text": "where \u0393 is the set of ancestor types along the type-path of\u0177, \u03b2 is a hyperparameter to tune the penalty. Afterwards, we re-normalize it back to a probability distribution, a process which we denote as hierarchical loss normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Loss Normalization",
"sec_num": "4.5"
},
{
"text": "As discussed in Section 1, there exists overlyspecific noise in the automatically labeled training sets which hurt the model performance severely. With hierarchical loss normalization, the model will get less penalty when it predicts the actual type for one example with overly-specific noise. Hence, it can alleviate the negative effect of overly-specific noise effectively. Generally, hierarchical loss normalization can make the model somewhat understand the given type hierarchy and learn to detect those overly-specific cases. During classification, it will make the models prefer generic types unless there is a strong indicator for a more specific type in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Loss Normalization",
"sec_num": "4.5"
},
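A small sketch of Equation 10 and the re-normalization step, using an invented toy hierarchy and an arbitrary \u03b2; it is meant only to make the adjustment concrete, not to reproduce the released code.

```python
# Illustrative sketch (not the authors' code) of Equation 10: the probability of
# each type is boosted by beta times the probabilities of its ancestor types and
# the result is re-normalized, so mistakes within the same type-path are
# penalized less. The toy hierarchy and the beta value are made up.
import numpy as np

TYPES = ["person", "athlete", "coach", "organization"]
PARENT = {"person": None, "athlete": "person", "coach": "person", "organization": None}

def ancestors(t):
    out, node = [], PARENT[t]
    while node is not None:
        out.append(node)
        node = PARENT[node]
    return out

def hierarchical_normalize(p, beta=0.3):
    adjusted = np.array([
        p[i] + beta * sum(p[TYPES.index(a)] for a in ancestors(t))
        for i, t in enumerate(TYPES)
    ])
    return adjusted / adjusted.sum()  # back to a probability distribution

p = np.array([0.2, 0.5, 0.2, 0.1])
print(hierarchical_normalize(p))      # athlete and coach gain mass from person
```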
{
"text": "Dropout, proposed by Hinton et al. 2012, prevents co-adaptation of hidden units by randomly omitting feature detectors from the network during forward propagation. We employ both input and output dropout on LSTM layers. In addition, we constrain L2-norms for the weight vectors as shown in Equations 8, 9 and use early stopping to decide when to stop training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "4.6"
},
{
"text": "This section reports an experimental evaluation of our NFETC approach using the previous state-ofthe-art as baselines. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluate the proposed model on two standard and publicly available datasets, provided in a preprocessed tokenized format by Shimaoka et al. (2017) . Table 2 shows statistics about the benchmarks. The details are as follows:",
"cite_spans": [
{
"start": 127,
"end": 149,
"text": "Shimaoka et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "\u2022 FIGER(GOLD): The training data consists of Wikipedia sentences and was automatically generated with distant supervision, by mapping Wikipedia identifiers to Freebase ones. The test data, mainly consisting of sentences from news reports, was manually annotated as described by Ling and Weld (2012) .",
"cite_spans": [
{
"start": 278,
"end": 298,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "\u2022 OntoNotes: The OntoNotes dataset consists of sentences from newswire documents present in the OntoNotes text corpus (Weischedel et al., 2013) . DBpedia spotlight (Daiber et al., 2013) was used to automatically link entity mention in sentences to Freebase. Manually annotated test data was shared by Gillick et al. (2014) .",
"cite_spans": [
{
"start": 118,
"end": 143,
"text": "(Weischedel et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 164,
"end": 185,
"text": "(Daiber et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 301,
"end": 322,
"text": "Gillick et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "Because the type hierarchy can be somewhat understood by our proposed model, the quality of the type hierarchy can also be a key factor to the performance of our model. We find that the type hierarchy for FIGER(GOLD) dataset following Freebase has some flaws. For example, software is not a subtype of product and government is not a subtype of organization. Following the proposed type hierarchy of Ling and Weld (2012) , we refine the Freebase-based type hierarchy. The process is a one-to-one mapping for types in the original dataset and we didn't add or drop any type or sentence in the original dataset. As a result, we can directly compare the results of our proposed model with or without this refinement.",
"cite_spans": [
{
"start": 400,
"end": 420,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "Aside from the advantages brought by adopting the single label classification setting, we can see one disadvantage of this setting based on Table 2. That is, the performance upper bounds of our proposed model are no longer 100%: for example, the best strict accuracy we can get in this setting is 88.28% for FIGER(GOLD). However, as the strict accuracy of state-of-the-art methods are still nowhere near 80% (Table 3) , the evaluation we perform is still informative.",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 417,
"text": "(Table 3)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "We compared the proposed model with state-ofthe-art FETC systems 1 : (1) Attentive (Shimaoka et al., 2017) ; (2) AFET (Ren et al., 2016a) ; 3LNR+FIGER (Ren et al., 2016b) ; (4) AAA (Abhishek et al., 2017) .",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Shimaoka et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 118,
"end": 137,
"text": "(Ren et al., 2016a)",
"ref_id": "BIBREF12"
},
{
"start": 151,
"end": 170,
"text": "(Ren et al., 2016b)",
"ref_id": "BIBREF13"
},
{
"start": 181,
"end": 204,
"text": "(Abhishek et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "We compare these baselines with variants of our proposed model: (1) NFETC(f): basic neural model trained on D f iltered (recall Section 4.4);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "(2) NFETC-hier(f): neural model with hierarichcal loss normalization trained on D f iltered . (3) NFETC(r): neural model with proposed variant of cross-entropy loss trained on D raw ; (4) NFETC-hier(r): neural model with proposed variant of cross-entropy loss and hierarchical loss normalization trained on D raw .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "For evaluation metrics, we adopt the same criteria as Ling and Weld (2012) , that is, we evaluate the model performance by strict accuracy, loose macro, and loose micro F-scores. These measures are widely used in existing FETC systems (Shimaoka et al., 2017; Ren et al., 2016b,a; Abhishek et al., 2017) .",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF8"
},
{
"start": 235,
"end": 258,
"text": "(Shimaoka et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 259,
"end": 279,
"text": "Ren et al., 2016b,a;",
"ref_id": null
},
{
"start": 280,
"end": 302,
"text": "Abhishek et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
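For reference, the three metrics can be computed as in the sketch below, following the usual definitions of Ling and Weld (2012); the function names and the toy gold/predicted type sets are illustrative, not the authors' evaluation script.

```python
# Illustrative sketch (not the authors' evaluation script) of strict accuracy,
# loose macro F1, and loose micro F1: each example has a gold type set and a
# predicted type set (e.g., the types on the predicted type-path).
def f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def evaluate(gold, pred):
    n = len(gold)
    strict = sum(g == p for g, p in zip(gold, pred)) / n
    # loose macro: average per-example precision and recall
    ma_p = sum(len(g & p) / len(p) for g, p in zip(gold, pred)) / n
    ma_r = sum(len(g & p) / len(g) for g, p in zip(gold, pred)) / n
    # loose micro: pooled over all examples
    inter = sum(len(g & p) for g, p in zip(gold, pred))
    mi_p = inter / sum(len(p) for p in pred)
    mi_r = inter / sum(len(g) for g in gold)
    return strict, f1(ma_p, ma_r), f1(mi_p, mi_r)

gold = [{"person", "athlete"}, {"person"}]
pred = [{"person", "coach"}, {"person"}]
print(evaluate(gold, pred))  # (0.5, 0.75, 0.666...)
```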
{
"text": "We use pre-trained word embeddings that were not updated during training to help the model generalize to words not appearing in the training set. For this purpose, we used the freely available 300-dimensional cased word embedding trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014) . For both datasets, we randomly sampled 10% of the test set as a development set, on which we do the hyperparameters tuning. The remaining 90% is used for final evaluation. We run each model with the welltuned hyperparameter setting five times and report their average strict accuracy, macro F1 and micro F1 on the test set. The proposed model was implemented using the TensorFlow framework. 2 Parameter FIGER(GOLD) OntoNotes lr 0.0002 0.0002 ",
"cite_spans": [
{
"start": 294,
"end": 318,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF11"
},
{
"start": 712,
"end": 713,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
{
"text": "d p 85 20 d s 180 440 p i 0.7 0.5 p o 0.9 0.5 \u03bb 0.0 0.0001 \u03b2 0.4 0.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
{
"text": "In this paper, we search different hyperparameter settings for FIGER(GOLD) and OntoNotes separately, considering the differences between the two datasets. The hyperparameters include the learning rate lr for Adam Optimizer, size of word position embeddings (WPE) d p , state size for LSTM layers d s , input dropout keep probability p i and output dropout keep probability p o for LSTM layers 3 , L2 regularization parameter \u03bb and parameter to tune hierarchical loss normalization \u03b2. The values of these hyperparameters, obtained by evaluating the model performance on the development set, for each dataset can be found in Table 4 . Table 3 compares our models with other stateof-the-art FETC systems on FIGER(GOLD) and",
"cite_spans": [],
"ref_spans": [
{
"start": 623,
"end": 630,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 633,
"end": 640,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Hyperparameter Setting",
"sec_num": "5.4"
},
{
"text": "OntoNotes. The proposed model performs better than the existing FETC systems, consistently on both datasets. This indicates benefits of the proposed representation scheme, loss function and hierarchical loss normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance comparison and analysis",
"sec_num": "5.5"
},
{
"text": "Discussion about Out-of-context Noise: For dataset FIGER(GOLD), the performance of our model with the proposed variant of cross-entropy loss trained on D raw is significantly better than the basic neural model trained on D f iltered , suggesting that the proposed variant of the cross-entropy loss function can make use of the data with outof-context noise effectively. On the other hand, the improvement introduced by our proposed variant of cross-entropy loss is not as significant for the OntoNotes benchmark. This may be caused by the fact that OntoNotes is much smaller than FIGER(GOLD) and proportion of examples without out-of-context noise are also higher, as shown in Table 2 . 57.9 \u00b1 1.3 78.4 \u00b1 0.8 75.0 \u00b1 0.7 54.4 \u00b1 0.3 71.5 \u00b1 0.4 64.9 \u00b1 0.3 NFETC-hier(f) 68.0 \u00b1 0.8 81.4 \u00b1 0.8 77.9 \u00b1 0.7 59.6 \u00b1 0.2 76.1 \u00b1 0.2 69.7 \u00b1 0.2 NFETC(r)",
"cite_spans": [],
"ref_spans": [
{
"start": 677,
"end": 684,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Performance comparison and analysis",
"sec_num": "5.5"
},
{
"text": "56.2 \u00b1 1.0 77.2 \u00b1 0.9 74.3 \u00b1 1.1 54.8 \u00b1 0.4 71.8 \u00b1 0.4 65.0 \u00b1 0.4 NFETC-hier(r) 68.9 \u00b1 0.6 81.9 \u00b1 0.7 79.0 \u00b1 0.7 60.2 \u00b1 0.2 76.4 \u00b1 0.1 70.2 \u00b1 0.2 Investigations on Overly-Specific Noise: With hierarchical loss normalization, the performance of our models are consistently better no matter whether trained on D raw or D f iltered on both datasets, demonstrating the effectiveness of this hierarchical loss normalization and showing that overly-specific noise has a potentially significant influence on the performance of FETC systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance comparison and analysis",
"sec_num": "5.5"
},
{
"text": "By visualizing the learned type embeddings (Figure 3) , we can observe that the parent types are mixed with their subtypes and forms clear distinct clusters without hierarchical loss normalization, making it hard for the model to distinguish subtypes like actor or athlete from their parent types person. This also biases the model towards the most popular subtype. While the parent types tend to cluster together and the general pattern is more complicated with hierarchical loss normalization. Although it's not as easy to interpret, it hints that our model can learn rather subtle intricacies and correlations among types latent in the data with the help of hierarchical loss normalization, instead of sticking to a pre-defined hierarchy.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 53,
"text": "(Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "T-SNE Visualization of Type Embeddings",
"sec_num": "5.6"
},
{
"text": "Since there are only 563 sentences for testing in FIGER(GOLD), we look into the predictions for all the test examples of all variants of our model. Table 5 shows 5 examples of test sentence. Without hierarchical loss normalization, our model will make too aggressive predictions for S1 with Politician and for S2 with Software. This kind of mistakes are very common and can be effectively reduced by introducing hierarchical loss normalization leading to significant improvements on the model performance. Using the changed loss function to handle multi-label (noisy) training data can help the model distinguish ambiguous cases. For example, our model trained on D f iltered will misclassify S5 as Title, while the model trained on D raw can make the correct prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis on FIGER(GOLD)",
"sec_num": "5.7"
},
{
"text": "However, there are still some errors that can't be fixed with our model. For example, our model cannot make correct predictions for S3 and S4 due to the fact that our model doesn't know that UW is an abbreviation of University of Washington and Washington state is the name of a province. In addition, the influence of overly-specific noise can only be alleviated but not eliminated. Sometimes, our model will still make too aggressive or conservative predictions. Also, mixing up very ambiguous entity names is inevitable in this task. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis on FIGER(GOLD)",
"sec_num": "5.7"
},
{
"text": "In this paper, we studied two kinds of noise, namely out-of-context noise and overly-specific noise, for noisy type labels and investigate their effects on FETC systems. We proposed a neural network based model which jointly learns representations for entity mentions and their context. A variant of cross-entropy loss function was used to handle out-of-context noise. Hierarchical loss normalization was introduced into our model to alleviate the effect of overly-specific noise. Experimental results on two publicly available datasets demonstrate that the proposed model is robust to these two kind of noise and outperforms previous state-of-the-art methods significantly. More work can be done to further develop hierarchical loss normalization since currently it's very simple. Considering type information is valuable in various NLP tasks, we can incorporate results produced by our FETC system to other tasks, such as relation extraction, to check our model's effectiveness and help improve other tasks' per-formance. In addition, tasks like relation extraction are complementary to the task of FETC and therefore may have potentials to be digged to help improve the performance of our system in return.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Work",
"sec_num": "6"
},
{
"text": "The results of the baselines are all as reported in their corresponding papers.2 The code to replicate the work is available at: https: //github.com/billy-inn/NFETC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Following TensorFlow terminology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained entity type classification by jointly learning representations and label embeddings",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Abhishek",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL",
"volume": "",
"issue": "",
"pages": "797--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embed- dings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics (EACL) pages 797-807.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical document categorization with support vector machines",
"authors": [
{
"first": "Lijuan",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the thirteenth ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijuan Cai and Thomas Hofmann. 2004. Hierarchi- cal document categorization with support vector ma- chines. In Proceedings of the thirteenth ACM inter- national conference on Information and knowledge management. ACM, pages 78-87.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improving efficiency and accuracy in multilingual entity extraction",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Daiber",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"N"
],
"last": "Mendes",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 9th International Conference on Semantic Systems",
"volume": "",
"issue": "",
"pages": "121--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. Proceed- ings of the 9th International Conference on Semantic Systems pages 121-124.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Knowledge vault: A web-scale approach to probabilistic knowledge fusion",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Wilko",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Strohmann",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining pages",
"volume": "",
"issue": "",
"pages": "601--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowl- edge fusion. Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining pages 601-610.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Contextdependent fine-grained entity type tagging",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Nevena",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kirchner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Huynh",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.1820"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context- dependent fine-grained entity type tagging. arXiv preprint arXiv:1412.1820 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving neural networks by preventing coadaptation of feature detectors",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Geoffrey E Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ruslan R",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.0580"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. arXiv preprint arXiv:1207.0580 .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning question classifiers",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2002. Learning question classi- fiers. In Proceedings of the 19th international con- ference on Computational linguistics-Volume 1. As- sociation for Computational Linguistics, pages 1-7.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "No noun phrase left behind: detecting and typing unlinkable entities",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "893--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Lin, Oren Etzioni, et al. 2012. No noun phrase left behind: detecting and typing unlinkable enti- ties. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning pages 893-903.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fine-grained entity recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daniel S Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S Weld. 2012. Fine-grained en- tity recognition. AAAI .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Label embedding for zero-shot fine-grained named entity typing",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Sa",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Ma, Erik Cambria, and Sa Gao. 2016. La- bel embedding for zero-shot fine-grained named en- tity typing. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 171-180.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP pages",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natu- ral Language Processing of the AFNLP pages 1003- 1011.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. EMNLP 14(1532-1543).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Afet: Automatic finegrained entity typing by hierarchical partial-label embedding",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "16",
"issue": "17",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016a. Afet: Automatic fine- grained entity typing by hierarchical partial-label embedding. EMNLP 16(17).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Label noise reduction in entity typing by heterogeneous partial-label embedding",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, and Jiawei Han. 2016b. Label noise reduction in entity typing by heterogeneous partial-label embed- ding. KDD .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An attentive neural architecture for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.05525"
]
},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural ar- chitecture for fine-grained entity type classification. arXiv preprint arXiv:1604.05525 .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural architectures for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL) .",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ontonotes: A large training corpus for enhanced processing. Handbook of Natural Language Processing and Machine Translation",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Belvin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Mar- cus, Martha Palmer, Robert Belvin, Sameer Prad- han, Lance Ramshaw, and Nianwen Xue. 2011. Ontonotes: A large training corpus for enhanced processing. Handbook of Natural Language Pro- cessing and Machine Translation. Springer .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Franchini",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadel- phia, PA .",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Embedding methods for fine grained entity type classification",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Nevena",
"middle": [],
"last": "Lazic",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "291--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. ACL (2) pages 291-296.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Personalized entity recommendation: A heterogeneous information network approach",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Quanquan",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Bradley",
"middle": [],
"last": "Sturt",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Norick",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 7th ACM international conference on Web search and data mining pages",
"volume": "",
"issue": "",
"pages": "283--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Yu, Xiang Ren, Yizhou Sun, Quanquan Gu, Bradley Sturt, Urvashi Khandelwal, Brandon Norick, and Jiawei Han. 2014. Personalized entity recommendation: A heterogeneous information net- work approach. Proceedings of the 7th ACM inter- national conference on Web search and data mining pages 283-292.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Relation classification via convolutional deep neural network. COLING pages",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. COLING pages 2335-2344.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "The 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. The 54th Annual Meeting of the Association for Computational Lin- guistics .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "With distant supervision, all the three mentions of Steve Kerr shown are labeled with the same types in oval boxes in the target type hierarchy. While only part of the types are correct: person and coach for S1, person and athlete for S2, and just person for S3.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "The architecture of the NFETC model.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "T-SNE visualization of the type embeddings learned from FIGER(GOLD) dataset where subtypes share the same color as their parent type. The seven parent types are shown in the black boxes. The below sub-figure uses the hierarchical loss normalization, while the above not.",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Attentive (Shimaoka</td></tr></table>",
"num": null,
"text": "Summary comparison to related FETC work. FETC systems listed in the table: (1)",
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Statistics of the datasets",
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Hyperparameter Settings",
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Test Sentence</td></tr></table>",
"num": null,
"text": "Strict Accuracy, Macro F1 and Micro F1 for the models tested on the FIGER(GOLD) and OntoNotes datasets. Hopkins said four fellow elections is curious , considering the . . . Person S2: . . . for WiFi communications across all the SD cards. Product S3: A handful of professors in the UW Department of Chemistry . . . Educational Institution S4: Work needs to be done and, in Washington state, . . . Province S5: ASC Director Melvin Taing said that because the commission is . . . Organization",
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Examples of test sentences in FIGER(GOLD) where the entity mentions are marked as bold italics.",
"html": null
}
}
}
}