{ "paper_id": "C18-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:08:31.057146Z" }, "title": "Cooperative Denoising for Distantly Supervised Relation Extraction", "authors": [ { "first": "Kai", "middle": [], "last": "Lei", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University Shenzhen Graduate School", "location": {} }, "email": "leik@pkusz.edu" }, { "first": "Daoyuan", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University Shenzhen Graduate School", "location": {} }, "email": "chendaoyuan@pku.edu.cn" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "yaliangli@tencent.com" }, { "first": "Nan", "middle": [], "last": "Du", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "min.yang@siat.ac.cn" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ying", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University Shenzhen Graduate School", "location": {} }, "email": "shenying@pkusz.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.", "pdf_parse": { "paper_id": "C18-1036", "_pdf_hash": "", "abstract": [ { "text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction aims to discover the semantic relationships between entities. 
Recently, it has attracted increasing attention due to its broad applications in many machine learning and natural language processing tasks such as Knowledge Graph (KG) construction (Shin et al., 2015) , information retrieval (Kadry and Dietz, 2017) , and question answering (Abujabal et al., 2017) .", "cite_spans": [ { "start": 265, "end": 284, "text": "(Shin et al., 2015)", "ref_id": "BIBREF23" }, { "start": 309, "end": 332, "text": "(Kadry and Dietz, 2017)", "ref_id": "BIBREF11" }, { "start": 358, "end": 381, "text": "(Abujabal et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Distant supervision is one of the most important techniques in practice for relation extraction due to its ability to generate large-scale labeled training data automatically by aligning KGs with text corpora. Despite its effectiveness, it suffers from the noisy labeling problem, such as false negative examples, which can severely degrade its performance. To alleviate this limitation, multi-instance learning and probabilistic graphical models have been widely explored by existing work (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012 ). Due to the success of deep learning, there has been increasing interest in applying deep neural networks to solve this problem. Zeng et al. (2015) proposed a piecewise CNN model combining the multi-instance paradigm, Lin et al. (2016) and Lin et al. (2017) introduced sentence-level and multi-lingual attention respectively to alleviate the side-effects caused by noisy instances, and Luo et al. (2017) further modeled the noise explicitly.", "cite_spans": [ { "start": 482, "end": 503, "text": "(Riedel et al., 2010;", "ref_id": "BIBREF22" }, { "start": 504, "end": 526, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF8" }, { "start": 527, "end": 548, "text": "Surdeanu et al., 2012", "ref_id": "BIBREF24" }, { "start": 680, "end": 698, "text": "Zeng et al. (2015)", "ref_id": "BIBREF27" }, { "start": 765, "end": 782, "text": "Lin et al. (2016)", "ref_id": "BIBREF15" }, { "start": 900, "end": 917, "text": "Luo et al. (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, most existing studies reduce noise by leveraging information within the text corpus, ignoring the relational facts expressed in other information sources, such as KG triples or semi-structured tables. Leveraging various information sources simultaneously, which takes full advantage of the diverse and supplementary information in different sources, is beneficial for reducing noisy labels in distant supervision. To be more specific, we transform each sentence into an entity sequence based on KG information, which helps to locate critical entities and to adjust the word-based network when their predictions are not consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, by incorporating information from other sources, distantly supervised relation extraction methods can better handle the \"Not A relation\" (NA) class, which is the main source of the noisy label problem. The large proportion of NA instances is typical in the distantly supervised relation extraction task, and it is non-trivial to characterize the NA patterns based only on text-corpus information. 
By considering the information from KG, entity sequence can easily discriminate NA from other relations, since entities appeared in NA sentence usually are not connected in KG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by the above observations, we propose a novel cooperative denoising framework (see Figure 1) to leverage the corpus-based and KG-based information. Specifically, we design two base networks, Corpus-Net and KG-Net, which are modeled with two separate Gated Recurrent Unit (GRU) networks, to predict relations using word-sequence and entity-sequence respectively. For KG-Net, we employ network embedding and KG embedding methods to pre-train the entity embeddings, and then project the prediction to a logic rule regularization subspace. Afterward, we design a cooperative module which involves the interactive learning between the two base networks with an adaptive bi-directional knowledge distillation mechanism, and the predictions of the base networks are dynamically integrated by an ensemble method. The key insight is that the base networks trained on different sources can learn complementary information, and thus the cooperative learning can benefit from the complementarity of different expressions of the same relational fact.", "cite_spans": [ { "start": 93, "end": 102, "text": "Figure 1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We explore the feasibility of distantly supervised relation extraction by leveraging the information from different sources cooperatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We devise a bi-directional knowledge distillation mechanism to enhance each base network via supplementary supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design an adaptive imitation rate setting and a dynamic ensemble strategy to guide the training procedure and help the prediction of noisy-varying instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The experimental results on a benchmark dataset show that the proposed method has robust superiority over compared methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on the task of distantly supervised relation extraction. Our goal is to predict relation r for a given entity pair < e 1 , e 2 >. The proposed framework CORD conducts with multiinstance learning, i.e., we take a bag of sentences mentioning both entity e 1 and e 2 as input, and we compute the probabilities for each relation expressed by this bag as output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "As Figure 1 shows, given a collection of sentences containing the target entity pair, we first transform each sentence into its distributed word-sequence and entity-sequence representations, and predict relation respectively using the attention weighted representation via a multi-instance learning mechanism. We also project the prediction of KG-Net to a logic rule regularization subspace. 
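For concreteness, this two-branch forward pass can be sketched as follows. This is a minimal PyTorch-style sketch with illustrative module and variable names; it simplifies the attention aggregation of Section 2.1 to a plain dot product and leaves out the rule projection of Section 2.2, so it is not a verbatim excerpt of the implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseBagEncoder(nn.Module):
    # Skeleton shared by Corpus-Net and KG-Net: a Bi-GRU over token embeddings
    # (words + positions, or linked entities), attention pooling to a sentence
    # vector, then bag-level attention pooling and a softmax relation classifier.
    def __init__(self, input_dim, hidden_dim, num_relations):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.query = nn.Parameter(torch.randn(2 * hidden_dim))  # global relation query vector
        self.classifier = nn.Linear(2 * hidden_dim, num_relations)

    def attend(self, vectors):
        # simple dot-product attention over rows (word level or sentence level)
        weights = torch.softmax(vectors @ self.query, dim=0)
        return weights @ vectors

    def forward(self, bag):
        # bag: list of [seq_len, input_dim] tensors, one per sentence mentioning the entity pair
        sentence_vecs = [self.attend(self.gru(seq.unsqueeze(0))[0].squeeze(0)) for seq in bag]
        bag_vec = self.attend(torch.stack(sentence_vecs))
        return F.softmax(self.classifier(bag_vec), dim=-1)

# corpus_net = BaseBagEncoder(word_dim + 2 * pos_dim, 230, num_relations)
# kg_net     = BaseBagEncoder(entity_dim + 2 * pos_dim, 230, num_relations)
# p_c, p_k = corpus_net(word_bag), kg_net(entity_bag)   # the KG-Net output is then rule-projected
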
Then, we train the two base networks simultaneously with a bi-directional knowledge distillation method, in which the predictions of KG-Net and Corpus-Net are used as soft labels for each other. The final prediction is the ensemble ", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Figure 3: Entity-Sequence Encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "+ + + +", "sec_num": null }, { "text": "of the two base networks, and their weights in the ensemble are dynamically adjusted. In the rest of this section, we elaborate each component of CORD in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "+ + + +", "sec_num": null }, { "text": "We first introduce the word-sequence encoder shown in Figure 2 , which transforms each sentence s i to corresponding word-based sentence representation s w i . Input Representation Each word in the sentence is mapped to a low-dimensional embedding w i \u2208 R dw through a word embedding layer, where d w denotes the size of the embedding. Similar to Zeng et al. (2014) and Lin et al. (2016) , we encode the relative distances to entity e 1 and e 2 as p i \u2208 R 2dp , where 2d p is the size of the position embedding. The distances are helpful in emphasizing how informative the word is thereby enabling better discrimination for relation extraction. We concatenate word vector w i and position vector p i as w i = w i p i , w i \u2208 R dw+2dp , and feed words input", "cite_spans": [ { "start": 347, "end": 365, "text": "Zeng et al. (2014)", "ref_id": "BIBREF26" }, { "start": 370, "end": 387, "text": "Lin et al. (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 54, "end": 62, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "s i = {w 1 , . . . , w n } to Bi-GRU layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "Bi-directional GRU Layer We employ bi-directional GRU to encode patterns in s i into a hidden representation s w (Cho et al., 2014) .", "cite_spans": [ { "start": 113, "end": 131, "text": "(Cho et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "i = {h f 1 h b T , . . . , h f T h b 1 },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "Hierarchical Attention As Figure 2 and Figure 1 show, we apply hierarchical attention for Corpus-Net involving word and sentence levels respectively. The motivation is that the semantic meaning of a sentence bag representation is gathered by vectors in different levels from the bottom to up, and the vectors in different context do not contribute equally. Some words express targeted relation more relevantly than others in a sentence. Furthermore, since we concatenate position vector to each word, a word with different distance is also importance-varying. As for sentence-level attention, it can reduce the weights of the sentences that suffer from wrong-labeling and improper-bagging (i.e., express inconsistent relations) problems. Inspired by Lin et al. (2016) , we calculate the aggregation of different level attentions with unified form as:", "cite_spans": [ { "start": 750, "end": 767, "text": "Lin et al. 
(2016)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 26, "end": 34, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 39, "end": 47, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X = n i=1 a i x i ; a i = exp(x i Ar) n j=1 exp(x j Ar) ,", "eq_num": "(1)" } ], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "where x i is individual input vector in different levels, X is the corresponding sum of them, r is randomly initialized global relation vector. Note that we set different r and weighted diagonal matrix A for different attention levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "Finally, the Bi-GRU output of each word w i is gathered to sentence representation s w , and then to the bag-wise representation S w . We feed the resulting vector S w to a Softmax classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "Prediction With S w as input, the condition probability of each relation j is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p j (S w ) = exp(o j ) nr i=1 exp(o i ) ; o = WS w + b,", "eq_num": "(2)" } ], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "o = [o 1 , . . . , o nr ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "is calculated with coefficient matrix W \u2208 R nr\u00d7(2d h ) and bias b \u2208 R nr . n r is the number of relations and o j measures how well S w matches relation j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus-Network", "sec_num": "2.1" }, { "text": "Besides Corpus-Net, we propose another base network to incorporate information of KG. The entitysequence encoder transforms each sentence s i to entity-based sentence representation s e i as illustrated in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 214, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "Input Representation One of the key challenges of leveraging KG information is identifying what information to use. Here we employ an extensible manner by using different entity embedding methods. For instance, network embedding methods such as DeepWalk (Perozzi et al., 2014) primarily encode graph structure information, while knowledge embedding methods such as TransE (Bordes et al., 2013) usually focus on triples information, we can flexibly use one of them or merge them together. Specifically, we link the detected entity names in sentences to the Freebase5M (Bordes et al., 2015) by n-gram text matching, and use the DeepWalk and TransE embeddings of linked entity candidates denoted as {e 1,1 , e 1,2 , . . . 
, e m,k\u22121 , e m,k }, where k is the amount of candidates for each entity, m is the amount of entities appearing in sentence.", "cite_spans": [ { "start": 254, "end": 276, "text": "(Perozzi et al., 2014)", "ref_id": "BIBREF20" }, { "start": 372, "end": 393, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF2" }, { "start": 567, "end": 588, "text": "(Bordes et al., 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "In addition, we use the position vectors of Corpus-Net for KG-Net because the word-based distances are more discriminative than the entity-based distances. To be specific, the transformation between them is not one-to-one mapping because different word-sequences may result in the same entity-sequence and hence loss of information, e.g., \"Obama flew to the US\" and \"Obama left the US\" are both mapped to \"Obama, US\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "Bi-GRU and Attention Then we employ a similar architecture of the Bi-GRU component and the attention aggregation of Corpus-Net. As shown in Figure 3 , the linked candidates {e i,1 , . . . , e i,k } are aggregated to e i for each entity, then entity vectors {e 1 , . . . , e m } and position vectors {p 1 , . . . , p m } are concatenated in element-wise and as input to Bi-GRU layer. The Bi-GRU outputs are gathered to s e with entity-level attention, then to S e with bag-level attention, and fed to a Softmax classifier similar to Corpus-Net.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "External Rule Knowledge We regard the patterns learned automatically from word and entity sequences as internal knowledge and manage to transfer external human knowledge such as logic rules into the base network. Furthermore, the KG-specific rules can be incorporated into Corpus-Net gradually by bi-distillation and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "Here we concentrate on relation-specific type rules because: (1) We observe some typical false predictions which could be corrected with type rules (e.g., it is unreasonable to predict two person entities as relation place of birth whose tail type cannot be a person). (2) We can automatically obtain large-scale type resources because most KG reserved this information. For example, in Freebase, we can collect the types of entities located in type/instance field and the relation-specific type constraints located in rdf-schema#domain and rdf-schema#range fields.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "To be specific, given all types (an entity can have many types) for an entity pair j as t j,1 and t j,2 , we design the logic rule for each relation i as T i,1 and T i,2 are not missing =\u21d2 (T i,1 \u2208 t j,1 ) \u2227 (T i,2 \u2208 t j,2 ) where T i,1 and T i,2 are the relation-specific type constrains for relation i. 
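As a concrete illustration of this rule (anticipating the soft-logic relaxation and the hierarchical type matching explained below), the truth value r_i(S_e) could be computed roughly as in the following sketch; the helper names and type strings are illustrative only:

def type_match(constraint, entity_types):
    # Fine-grained truth value of (T_i in t_j): the number of matched fields of the
    # type hierarchy divided by the field count of the constraint,
    # e.g. constraint '/people/person' against type '/people/person/spouse' gives 1.0.
    fields = constraint.strip("/").split("/")
    best = 0.0
    for t in entity_types:                       # an entity may carry many types
        matched = 0
        for a, b in zip(fields, t.strip("/").split("/")):
            if a != b:
                break
            matched += 1
        best = max(best, matched / len(fields))
    return best

def rule_truth(head_constraint, tail_constraint, head_types, tail_types):
    # Soft truth value r_i(S_e) of the relation-specific type rule: the rule only
    # fires when both constraints exist, and the logical AND of the two membership
    # checks is relaxed to the average of their truth values.
    if not head_constraint or not tail_constraint:
        return 1.0                               # missing constraint: the rule imposes nothing
    return 0.5 * (type_match(head_constraint, head_types) +
                  type_match(tail_constraint, tail_types))

These truth values then enter the rule score factor that reweights the classifier output, as formalized next. 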
Here we apply probabilistic soft logic (Bach et al., 2017) to encode rules flexibly, i.e., the rule scores are continuous truth values in the internal [0, 1] rather than {0, 1} and logic \u2227 denotes averaging of truth values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "Moreover, considering type granularity, i.e., type information that is usually specified with hierarchy form (e.g., /people/person/spouse), we divide the number of matched levels (fields) of the type hierarchy by field amount as the value of (T i \u2208 t j ) to gain fine-grained features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "Finally, with prediction p(S e ) from Softmax classifier, we use a posterior regularization fashion (Hu et al., 2016) to project p(S e ) into a constrained subspace p (S e ) as follows:", "cite_spans": [ { "start": 100, "end": 117, "text": "(Hu et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(p (S e )) * \u221d p(S e ) \u00d7 e sr , s r = \u2212 l i=1 C\u03bb i (1 \u2212 r i (S e )),", "eq_num": "(3)" } ], "section": "KG-Network", "sec_num": "2.2" }, { "text": "where \u03bb i is confidence of each rule, C is regularization parameter and s r is rule score factor, which indicates how S e satisfies the rules. This is the closed-form solution obtained by solving an optimization problem which finds the optimal p (S e ) fitting rules meanwhile staying to p(S e ). We set \u03bb i as p i (S e ), i.e., the higher probability classifier predicts (believes), the stronger effect rule-constraint takes for relation i. We can design other rules and enable to scale with similar manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KG-Network", "sec_num": "2.2" }, { "text": "In this section, we introduce how to ensemble two base networks cooperatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "Bi-directional Knowledge Distillation We observe that KG-Net and Corpus-Net have different hard examples and different wrong predictions, i.e., for the same sentence bag, Corpus-Net may predict higher probability than KG-Net and sometimes, the contrary (we demonstrate the differences in Experiments Section). This observation encouraged us to train them cooperatively with mutual knowledge supplementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "We devise a bi-directional knowledge distillation method to enhance their supervision information in label space. Specifically, the two base networks learn with the hard label y from distant supervision. 
Meanwhile, we set the predicted probability of the two base networks p c and p k as soft label to each other simultaneously:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L c = N i=1 ( (y i , p c i ) + \u03c0 c (p k i , p c i )), L k = N i=1 ( (y i , p k i ) + \u03c0 k (p c i , p k i )),", "eq_num": "(4)" } ], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "where is cross entropy loss, \u03c0 is imitation rate, and N is batch size. We update the model parameters by minimizing L c and L k with Adam (Kingma and Ba, 2014) optimizer. The learning process can be regarded as the fact that the two base networks not only learn from the coarse-grained hard label which is one-hot and low entropy, but also learn from the teacher network which expresses specific supplementary knowledge and dependencies between relations with soft label. For example, label [0.3, 0.2, 0.9] is more informative than [0, 0, 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "Also note that the early base network is not reliable and gives low quality knowledge through soft label, so we pre-train the two base networks separately with certain steps before mutual learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "Adaptive Imitation Rate The classification difficulty of the two base networks is varying with different entity pair bag instance, sometimes KG-Net is a better qualified teacher for Corpus-Net and sometimes vice versa. To transfer more reliable knowledge of each base network to another and train them more effectively, we set the imitation weights as following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c0 c = (y i , p k i ) (y i , p c i ) + (y i , p k i ) , \u03c0 k = (y i , p c i ) (y i , p c i ) + (y i , p k i ) ,", "eq_num": "(5)" } ], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "where \u03c0 c and \u03c0 k are inversely proportional to the hard-label loss of each other, i.e., the smaller the loss is, the more qualified is the base network as teacher toward each other. In addition, from the perspective of optimization, the adaptive imitation can prevent the gradient from being dominated by ill-classified examples and hence be able to train the model effectively. Later, in Section 3.4, we will demonstrate the effectiveness of the adaptive imitation rate by comparing with the fixed setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "Dynamic Ensemble Prediction The final prediction p co of the CORD framework is an ensemble of the two base networks predictions p c and p k because each of them has its strong points. We propose a dynamic ensemble strategy considering that (1) A high type-rule score indicates KG-Net may classify current sentence bag well because the predictions of the classifier satisfy the rules; (2) Ideally, entity name in a sentence should be linked to only one entity in KG, so the KG-Net would be more confused with more linked candidates. 
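Before specifying the combined prediction, the adaptive bi-directional distillation of Eq. 4 and Eq. 5 can be summarized in a short sketch. This is a minimal per-instance NumPy illustration of the math rather than the actual training code, and the function names are illustrative:

import numpy as np

def cross_entropy(target, prediction, eps=1e-12):
    # H(target, prediction) for probability vectors; with a one-hot target this is the hard-label loss
    return -float(np.sum(target * np.log(prediction + eps)))

def cooperative_losses(y, p_c, p_k):
    # Per-instance terms of Eq. 4 with the adaptive imitation rates of Eq. 5.
    # y: one-hot distant-supervision label; p_c / p_k: Corpus-Net / KG-Net
    # probability vectors over relations for the same sentence bag.
    hard_c = cross_entropy(y, p_c)                     # l(y, p_c)
    hard_k = cross_entropy(y, p_k)                     # l(y, p_k)
    pi_c = hard_k / (hard_c + hard_k)                  # Eq. 5
    pi_k = hard_c / (hard_c + hard_k)
    loss_c = hard_c + pi_c * cross_entropy(p_k, p_c)   # KG-Net output as soft label for Corpus-Net
    loss_k = hard_k + pi_k * cross_entropy(p_c, p_k)   # Corpus-Net output as soft label for KG-Net
    return loss_c, loss_k

The batch losses L_c and L_k sum these terms over the N bags in a batch and are minimized with Adam after the separate pre-training of the two base networks. 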
Thus, with the above two factors, we can specify the dynamic ensemble prediction p co as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p co = (1 \u2212 w k )p c + w k p k , w k = \u03b1 + \u03b2( s r n r \u2212 n c n e \u00d7 N e ),", "eq_num": "(6)" } ], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "where \u03b1 is the empirical KG-Net base weight, \u03b2 is the wave range. They can be set as the ratio of some evaluation indicators (such as F-score) of separate-trained base network. Then the prediction weight of KG-Net w k \u2208 [\u03b1 \u2212 \u03b2, \u03b1 + \u03b2] depends on the normalized (\u2208 [0, 1]) rule score factor and candidates score, i.e., average rule score per-relation and number of candidates per-entity dividing N e , which is the upper limit on the number of candidates for linking. And s r is the rule score factor in Eq. 3, n r , n c n e are the amounts of relations, all entities candidates and gathered entities in sentence respectively. As a comparison, we also deploy a naive baseline using static ensemble weight and report the results in Section 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooperative Module", "sec_num": "2.3" }, { "text": "In this section, we aim to evaluate the effectiveness of the proposed CORD framework. We conduct an overall performance comparison with baseline methods and perform a comprehensive examination of the KG-Net and the Cooperative Module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We conduct experiments on the widely used benchmark dataset NYT10 (Riedel et al., 2010) , which is built by aligning triples in Freebase to the New York Times corpus and contains 53 relations. There are 522,611/172,448 sentences, 281,270/96,678 entity pairs, and 18,252/1,950 relation mentions in train/test dataset respectively. Following previous works (Mintz et al., 2009; Lin et al., 2016) , we evaluate our method in the held-out evaluation with P-R curve and P@N metric without expensive human evaluations.", "cite_spans": [ { "start": 66, "end": 87, "text": "(Riedel et al., 2010)", "ref_id": "BIBREF22" }, { "start": 355, "end": 375, "text": "(Mintz et al., 2009;", "ref_id": "BIBREF19" }, { "start": 376, "end": 393, "text": "Lin et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset and Evaluation Metrics", "sec_num": null }, { "text": "Parameter Settings We set the embedding dimensions as 5, 50, 64, 64 for position, word2vec, Deep-Walk and TransE respectively. For both base networks, we set the cell size of GRU as 230, learning rate as 0.001, dropout probability as 0.5 and batch size as 20. 
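To make the dynamic ensemble of Eq. 6 concrete, the weight w_k can be computed as in the sketch below; the function names and the example numbers are purely illustrative, and the alpha and beta values used for the cooperative module are given next:

def kg_weight(alpha, beta, s_r, n_r, n_c, n_e, N_e):
    # Dynamic KG-Net weight w_k of Eq. 6: a higher normalized rule score raises w_k,
    # while more linked candidates per entity (ambiguous linking) lowers it.
    # s_r: rule score factor (Eq. 3), n_r: number of relations, n_c: number of linked
    # candidates in the sentence, n_e: number of gathered entities, N_e: candidate upper limit.
    return alpha + beta * (s_r / n_r - n_c / (n_e * N_e))

def ensemble(p_c, p_k, w_k):
    # p_co = (1 - w_k) * p_c + w_k * p_k
    return [(1.0 - w_k) * c + w_k * k for c, k in zip(p_c, p_k)]

# Hypothetical example: kg_weight(alpha=0.4, beta=0.2, s_r=26.5, n_r=53, n_c=8, n_e=4, N_e=10)
# gives 0.4 + 0.2 * (0.5 - 0.2) = 0.46.
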
For the cooperative module, we set the base weight \u03b1 and wave range \u03b2 as 0.4, 0.2 respectively, and fixed w k as 0.4 for ensemble comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Evaluation Metrics", "sec_num": null }, { "text": "We compare our approach with three traditional feature-based methods and two state-of-art neural-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.2" }, { "text": "Feature-based Methods Mintz (Mintz et al., 2009 ) is a multiclass logistic regression model; Mul-tiR (Hoffmann et al., 2011) is a probabilistic graphical model which can handle overlapping relations; MIML (Surdeanu et al., 2012) is also a probabilistic graphical model but using a multi-instance multilabel paradigm.", "cite_spans": [ { "start": 28, "end": 47, "text": "(Mintz et al., 2009", "ref_id": "BIBREF19" }, { "start": 101, "end": 124, "text": "(Hoffmann et al., 2011)", "ref_id": "BIBREF8" }, { "start": 205, "end": 228, "text": "(Surdeanu et al., 2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.2" }, { "text": "Neural-based Methods CNN+ATT (Lin et al., 2016) is a sentence-level attention model based on CNN, which can dynamically reduce the weights of noisy instances; PCNN+ATT (Lin et al., 2016) achieves state-of-art results by applying sentence-level attention to the piecewise max pooling model, PCNN (Zeng et al., 2015) .", "cite_spans": [ { "start": 29, "end": 47, "text": "(Lin et al., 2016)", "ref_id": "BIBREF15" }, { "start": 168, "end": 186, "text": "(Lin et al., 2016)", "ref_id": "BIBREF15" }, { "start": 295, "end": 314, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.2" }, { "text": "The precision-recall curve results are shown in Figure 4 , where Base-Net-Corpus and Base-Net-KG are our best results for the two base networks with independent training, CORD is the cooperative training and dynamic ensemble results. For the aforementioned five methods, we directly use the results reported in (Lin et al., 2016) . Figure 4 demonstrates that: (1) The KG-Net achieves higher coverage than the feature-based methods, comparable precision with the MultiR and MIML and an obvious gap with other neural-based methods. This indicates that the KG-Net and feature-based methods can capture certain patterns effectively but with relatively low coverage. On one hand, the decent precision shows potentials of the KG-Net to capture patterns with entity-sequence and KG information. On the other hand, we suggest that the weakness of the KG-Net might be caused by the sparsity of entity-sequence space (the dataset scales down after word-to-entity mapping), and we can enhance it by exploring other information such as relational paths (Lin et al., 2015; Zeng et al., 2017) ; (2) Corpus-Net achieves comparable results with CNN+ATT and PCNN+ATT, which reveals the effectiveness of Corpus-Net that could be the backbone of the CORD framework; (3) The CORD outperforms other methods on most recall area, demonstrating the effectiveness of our methods. Note that the CORD framework is significantly superior to the two separate trained base networks, especially in the rightmost area. This shows the cooperative module takes advantages of two base networks effectively and achieves better generalization. 
Also note that in the right-side area, CORD is still robust although the separate trained KG-Net is weak, which verifies the effectiveness of the CORD with different strengths of the base networks.", "cite_spans": [ { "start": 311, "end": 329, "text": "(Lin et al., 2016)", "ref_id": "BIBREF15" }, { "start": 1041, "end": 1059, "text": "(Lin et al., 2015;", "ref_id": "BIBREF14" }, { "start": 1060, "end": 1078, "text": "Zeng et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 332, "end": 340, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.2" }, { "text": "To evaluate the effect of incorporating KG information, we first compare P-R curve for different KG-Net setups, then we explore the benefit of using external logic rules and make a case study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance of the KG Network", "sec_num": "3.3" }, { "text": "Comparison for Different Setups We experiment three KG-Nets without rule knowledge, using DeepWalk, TransE and their concatenation as entity embedding respectively. Based on whichever yields the best results, we experiment with rule knowledge and report results in Figure 5 . From Figure 5 , we can observe that: (1) The results of DeepWalk and TransE are slightly different and their concatenation improves, verifying the extendibility with different kinds of KG information embeddings; (2) After incorporating logic rules, the result is improved significantly as shown in left recall area, indicating that type-constraints helps to capture certain patterns more precisely.", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 273, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 281, "end": 289, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Performance of the KG Network", "sec_num": "3.3" }, { "text": "Robustness on Long Tail Situation Efforts based on bag-level denoising such as sentence attention are liable to failure because of the long-tail situation in real-life datasets. For example, we observe that 77.63% entity pairs have only one relation instance in NYT10. We expect that supplementary external knowledge (logic rules) can enhance the robustness of this situation. We evaluate P@N of the KG-Net without rules p(S e ) and compare the rule-projected p (S e ) on two kind of sentence amounts setup in From Table 1 , we can observe that the KG-Net gets lower precisions (average reduces 0.9%) on the whole test data comparing with the filtered data. In contrast, the KG-Net with rules gets higher precisions on the whole test data because it can deal with noisy instances effectively in sentence-level and hence be more robust on long tail situation.", "cite_spans": [], "ref_spans": [ { "start": 515, "end": 522, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Performance of the KG Network", "sec_num": "3.3" }, { "text": "Case Study Here we pick an example instance in Table 2 to illustrate the effect of type rules. The KG-Net predicts wrong relation place of death probably because the appearance of entity Joe Williams. 
In contrast, it can predict correctly with the help of relation-specific type constraints.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Performance of the KG Network", "sec_num": "3.3" }, { "text": "Relation-Specific Type KG-Net /people/deceased person/place of death /people/deceased person, /location/location KG-Net+ Rule /location/location/contains /location/location, /location/location text In Suffolk County, Fire Island suffered the most damage, according to Joe Williams, commissioner of the county 's office of emergency management. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicted Relation", "sec_num": null }, { "text": "To investigate the effectiveness of the cooperative module, we compare the adaptive imitation rate with the fixed, the dynamic ensemble strategy with the static, and then perform a thorough case study. Adaptive vs Fixed Imitation Rate We find that the adaptive imitation is crucial for the effective training of the CORD. To demonstrate this, we deploy some fixed imitation rate setups comparing the adaptive imitation rate. We set imitation rate (\u03c0 c , \u03c0 k ) as {(0.5, 0.5), (0.6, 0.4), (0.4, 0.6)}, and report the loss curves of the dynamic, and only (0.5, 0.5) for the fixed in Figure 6 because the other two have similar results. Figure 6 shows the remarkable difference between the fixed and the adaptive imitation, where the loss of the adaptive setting reduce gradually while the fixed fluctuates wildly. The waves of fixed KG-Net and fixed Corpus-Net are similar, meanwhile they mislead each other and the gradients are dominated by hard examples, resulting in the non-convergence. Conversely, the adaptive networks are trained effectively because the data speak for itself by providing loss values as clues to reveal the difficulty of predicting current instance. Note that the base networks are pre-trained independently and both descend gradually within the first 10k steps.", "cite_spans": [], "ref_spans": [ { "start": 581, "end": 589, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 634, "end": 642, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Performance of the Cooperative Module", "sec_num": "3.4" }, { "text": "Dynamic vs Static Ensemble We also compare the dynamic ensemble strategy with the static and report their P@N results within identical base networks in Table 3 . It shows that the dynamic outperforms the static and leverages the two base networks better. Note that the improvements decrease as recall increases, and the degree of decrease for the dynamic is less than the static, demonstrating the dynamic is more robust than the static.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Performance of the Cooperative Module", "sec_num": "3.4" }, { "text": "Case Study To gain further insight about how the CORD works, we plot the top 300 predictions of the CORD in descending order, and compare it with its two base networks in Figure 7 . Figure 7 presents the predictions of KG-Net and Corpus-Net which go up and down around the CORD curve. The prediction differences between KG-Net and Corpus-Net are nonuniformly time-varying, indicating that some hard instances for KG-Net are easy to classify by Corpus-Net and vice versa. 
This supports again our view of employing adaptive imitation and dynamic ensemble.", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 179, "text": "Figure 7", "ref_id": "FIGREF7" }, { "start": 182, "end": 190, "text": "Figure 7", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Performance of the Cooperative Module", "sec_num": "3.4" }, { "text": "To further demonstrate the different advantages of KG-Net and Corpus-Net, we choose two points which are predicted correctly and have significantly different values from Figure 7 . The result is showed in Table 4 , where prediction ID 137 and ID 257 have the different dominating network, KG-Net and Corpus-Net respectively. From Table 4 , we can see that different networks contribute each other from the view of semantic: (1) KG-Net predicts relation location contains more accurately and Corpus-Net may fail if the wording of the sentence doesn't clearly state the relation between two entities. With the help of position and three entity embeddings, KG-Net can capture the relation-dependent features better from the view of graph structure. In contrast, Corpus-Net might be confused by the expression \"the mayor of ...\" and the uncorrelated latter part, \"was killed by ...\"; (2) Corpus-Net predicts relation place of death more accurately and KG-Net may fail if two entities are already known to be related by more than one relation. Here Corpus-Net provides reliable prediction because of the appearance of featured expression \"died on ...\" in the last sentence. Contrarily, KG-net may lack discriminative information and be confused by other possible relations such as place of birth, because the targeted person entity is followed by too many location entities.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 7", "ref_id": "FIGREF7" }, { "start": 205, "end": 212, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 330, "end": 337, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Performance of the Cooperative Module", "sec_num": "3.4" }, { "text": "Relation extraction is one of the most important topics in NLP. Many approaches to relation extraction have been proposed, such as supervised classification (Zelenko et al., 2003; Bunescu and Mooney, 2005) , bootstrapping (Carlson et al., 2010) , distant supervision (Mintz et al., 2009; Krause et al., 2012; Min et al., 2013; Pershina et al., 2014; Ji et al., 2017) , and generative model (Zhang et al., 2018) . Among them, distant supervision is popular as it is efficient to obtain large-scale training data automatically. 
However, it suffers from the noisy labeling problem, which severely degrades its performance.", "cite_spans": [ { "start": 157, "end": 179, "text": "(Zelenko et al., 2003;", "ref_id": "BIBREF25" }, { "start": 180, "end": 205, "text": "Bunescu and Mooney, 2005)", "ref_id": "BIBREF4" }, { "start": 222, "end": 244, "text": "(Carlson et al., 2010)", "ref_id": "BIBREF5" }, { "start": 267, "end": 287, "text": "(Mintz et al., 2009;", "ref_id": "BIBREF19" }, { "start": 288, "end": 308, "text": "Krause et al., 2012;", "ref_id": "BIBREF13" }, { "start": 309, "end": 326, "text": "Min et al., 2013;", "ref_id": "BIBREF18" }, { "start": 327, "end": 349, "text": "Pershina et al., 2014;", "ref_id": "BIBREF21" }, { "start": 350, "end": 366, "text": "Ji et al., 2017)", "ref_id": "BIBREF10" }, { "start": 390, "end": 410, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "To tackle this problem, Riedel et al. (2010), Hoffmann et al. (2011) and Surdeanu et al. (2012) model distant supervision as a multi-instance learning problem under the at-least-one assumption and make it more practical. With the advances in deep learning, Zeng et al. (2015) and Lin et al. (2016) apply CNN and attention mechanisms, and Feng et al. (2017) further introduce memory networks to reduce noise. Compared with these methods, the proposed framework leverages information from other sources such as the KG and combines it with information from the text corpus via knowledge distillation.", "cite_spans": [ { "start": 24, "end": 44, "text": "Riedel et al. (2010;", "ref_id": "BIBREF22" }, { "start": 45, "end": 67, "text": "Hoffmann et al. (2011;", "ref_id": "BIBREF8" }, { "start": 68, "end": 90, "text": "Surdeanu et al. (2012)", "ref_id": "BIBREF24" }, { "start": 252, "end": 270, "text": "Zeng et al. (2015)", "ref_id": "BIBREF27" }, { "start": 273, "end": 290, "text": "Lin et al. (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we propose a novel neural relation extraction framework with bi-directional knowledge distillation to cooperatively use different information sources and alleviate the noisy label problem in distantly supervised relation extraction. Extensive experiments show that our framework can effectively model relation patterns using both text corpus and KG information, and achieves state-of-the-art results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their helpful comments. This work was financially supported by the National Natural Science Foundation of China (No.61602013), and the Shenzhen Science and Technology Innovation Committee (Grant No. JCYJ20170412151008290 and JCYJ20170818091546869).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automated template generation for question answering over knowledge graphs", "authors": [ { "first": "", "middle": [], "last": "Abujabal", "suffix": "" } ], "year": 2017, "venue": "WWW", "volume": "", "issue": "", "pages": "1191--1200", "other_ids": {}, "num": null, "urls": [], "raw_text": "References [Abujabal et al.2017] Abdalghani Abujabal, Mohamed Yahya, Mirek Riedewald, and Gerhard Weikum. 2017. Automated template generation for question answering over knowledge graphs. 
In WWW, pages 1191-1200.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hinge-loss Markov random fields and probabilistic soft logic", "authors": [ { "first": "H", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Bert", "middle": [], "last": "Broecheler", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2017, "venue": "JMLR", "volume": "18", "issue": "109", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Bach et al.2017] Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2017. Hinge-loss Markov random fields and probabilistic soft logic. JMLR, 18(109):1-67.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Translating embeddings for modeling multi-relational data", "authors": [ { "first": "", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Bordes et al.2013] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS, pages 2787-2795.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Large-scale simple question answering with memory networks", "authors": [ { "first": "", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2015, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Bordes et al.2015] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. In NIPS.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "C", "middle": [], "last": "Razvan", "suffix": "" }, { "first": "Raymond J", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "HLT/EMNLP", "volume": "", "issue": "", "pages": "724--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Bunescu and Mooney2005] Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In HLT/EMNLP, pages 724-731.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Toward an architecture for never-ending language learning", "authors": [ { "first": "[", "middle": [], "last": "Carlson", "suffix": "" } ], "year": 2010, "venue": "AAAI", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Carlson et al.2010] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never-ending language learning. In AAAI, volume 5, page 3.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the properties of neural machine translation", "authors": [ { "first": "[", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1259" ] }, "num": null, "urls": [], "raw_text": "[Cho et al.2014] Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. 
arXiv preprint arXiv:1409.1259.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Effective deep memory networks for distant supervised relation extraction", "authors": [ { "first": "", "middle": [], "last": "Feng", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "", "issue": "", "pages": "19--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Feng et al.2017] Xiaocheng Feng, Jiang Guo, Bing Qin, Ting Liu, and Yongjie Liu. 2017. Effective deep memory networks for distant supervised relation extraction. In IJCAI, pages 19-25.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Knowledge-based weak supervision for information extraction of overlapping relations", "authors": [ { "first": "", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hoffmann et al.2011] Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In ACL, pages 541- 550.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Harnessing deep neural networks with logic rules. ACL", "authors": [ { "first": "[", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hu et al.2016] Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distant supervision for relation extraction with sentence-level attention and entity descriptions", "authors": [ { "first": "[", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "3060--3066", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Ji et al.2017] Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao, et al. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In AAAI, pages 3060-3066.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Open relation extraction for support passage retrieval: Merit and open issues", "authors": [ { "first": "Amina", "middle": [], "last": "Kadry", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Dietz", "suffix": "" } ], "year": 2017, "venue": "SIGIR", "volume": "", "issue": "", "pages": "1149--1152", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Kadry and Dietz2017] Amina Kadry and Laura Dietz. 2017. Open relation extraction for support passage re- trieval: Merit and open issues. In SIGIR, pages 1149-1152.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "[Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Large-scale learning of relation-extraction rules with distant supervision from the web", "authors": [ { "first": "[", "middle": [], "last": "Krause", "suffix": "" } ], "year": 2012, "venue": "ISWC", "volume": "", "issue": "", "pages": "263--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Krause et al.2012] Sebastian Krause, Hong Li, Hans Uszkoreit, and Feiyu Xu. 2012. Large-scale learning of relation-extraction rules with distant supervision from the web. In ISWC, pages 263-278.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Modeling relation paths for representation learning of knowledge bases", "authors": [ { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "705--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Lin et al.2015] Yankai Lin, Zhiyuan Liu, Huan-Bo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015. Model- ing relation paths for representation learning of knowledge bases. In EMNLP, pages 705-714.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural relation extraction with selective attention over instances", "authors": [ { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "1", "issue": "", "pages": "2124--2133", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Lin et al.2016] Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL, volume 1, pages 2124-2133.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural relation extraction with multi-lingual attention", "authors": [ { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "1", "issue": "", "pages": "34--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Lin et al.2017] Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. In ACL, volume 1, pages 34-43.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix", "authors": [ { "first": "[", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "430--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Luo et al.2017] Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix. In ACL, pages 430-439.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distant supervision for relation extraction with an incomplete knowledge base", "authors": [ { "first": "Min", "middle": [], "last": "", "suffix": "" } ], "year": 2013, "venue": "NAACL HLT", "volume": "", "issue": "", "pages": "777--782", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Min et al.2013] Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervi- sion for relation extraction with an incomplete knowledge base. 
In NAACL HLT, pages 777-782.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "[", "middle": [], "last": "Mintz", "suffix": "" } ], "year": 2009, "venue": "ACL", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Mintz et al.2009] Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL, pages 1003-1011.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deepwalk: Online learning of social representations", "authors": [ { "first": "[", "middle": [], "last": "Perozzi", "suffix": "" } ], "year": 2014, "venue": "SIGKDD", "volume": "", "issue": "", "pages": "701--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Perozzi et al.2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In SIGKDD, pages 701-710.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Infusion of labeled data into distant supervision for relation extraction", "authors": [ { "first": "[", "middle": [], "last": "Pershina", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "2", "issue": "", "pages": "732--738", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Pershina et al.2014] Maria Pershina, Bonan Min, Wei Xu, and Ralph Grishman. 2014. Infusion of labeled data into distant supervision for relation extraction. In ACL, volume 2, pages 732-738.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Modeling relations and their mentions without labeled text", "authors": [ { "first": "", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2010, "venue": "ECML PKDD", "volume": "", "issue": "", "pages": "148--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Riedel et al.2010] Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In ECML PKDD, pages 148-163.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Incremental knowledge base construction using deepdive. VLDB", "authors": [ { "first": "Jaeho", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Feiran", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [ "De" ], "last": "Sa", "suffix": "" }, { "first": "Ce", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2015, "venue": "", "volume": "8", "issue": "", "pages": "1310--1321", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Shin et al.2015] Jaeho Shin, Sen Wu, Feiran Wang, Christopher De Sa, Ce Zhang, and Christopher R\u00e9. 2015. Incremental knowledge base construction using deepdive. 
VLDB, 8(11):1310-1321.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Multi-instance multi-label learning for relation extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "EMNLP", "volume": "", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Surdeanu et al.2012] Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In EMNLP, pages 455-465.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Kernel methods for relation extraction", "authors": [ { "first": "[", "middle": [], "last": "Zelenko", "suffix": "" } ], "year": 2003, "venue": "JMLR", "volume": "3", "issue": "", "pages": "1083--1106", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zelenko et al.2003] Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. JMLR, 3(Feb):1083-1106.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "[", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2014, "venue": "COLING", "volume": "", "issue": "", "pages": "2335--2344", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zeng et al.2014] Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING, pages 2335-2344.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Distant supervision for relation extraction via piecewise convolutional neural networks", "authors": [ { "first": "[", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1753--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zeng et al.2015] Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP, pages 1753-1762.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Incorporating relation paths in neural relation extraction", "authors": [ { "first": "[", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1768--1777", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zeng et al.2017] Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Incorporating relation paths in neural relation extraction. In EMNLP, pages 1768-1777.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "On the generative discovery of structured medical knowledge", "authors": [ { "first": "[", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "SIGKDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zhang et al.2018] Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip S. Yu. 2018. On the generative discovery of structured medical knowledge.
In SIGKDD.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Overview of the proposed Cooperative Denoising Framework.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "Word-Sequence Encoder.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "P-R Curve Comparison with Mintz, MIML, MultiR, CNN+ATT and PCNN+ATT.", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "P-R Curve Comparison with Different Setup for the Base KG-Net.", "num": null, "type_str": "figure", "uris": null }, "FIGREF6": { "text": "Loss for Adaptive and Fixed Imitation.", "num": null, "type_str": "figure", "uris": null }, "FIGREF7": { "text": "Top300 Predictions of the CORD.", "num": null, "type_str": "figure", "uris": null }, "FIGREF8": { "text": ", the mayor of San Carlos City in the northern Philippines, was killed by gunmen at a campaign rally on April 28, his brother quickly stepped into Swiss tenor who was most renowned as an interpreter of German art song and oratorio roles, died on Saturday in Davos, Switzerland.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "text": "", "content": "
[Entity-Sequence (KG-Net) Encoder diagram, bottom to top: Entity Sequence (e.g., Barack Obama, Illinois, ..., Kenya, New Hampshire) -> Entity Linking Candidates -> Linking-level Attention -> Entity Embedding / Position Embedding -> Bi-GRU Layer -> Entity Representation -> Entity-level Attention -> Sentence Representation]
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "where (\u2265 1) means whole test dataset and (> 1) means filtering entity pairs which have only one instance.", "content": "
#Entity Pair Sentence | KG-Net (>1) | KG-Net (\u2265 1) | KG-Net + Rule (>1) | KG-Net + Rule (\u2265 1)
P@100 (%) | 69.2 | 60.2 | 73.5 (+4.3) | 60.3 (+0.1)
P@200 (%) | 57.0 | 54.0 | 60.4 (+3.4) | 51.5 (-2.5)
P@300 (%) | 48.6 | 46.4 | 51.7 (+3.1) | 46.0 (-0.4)
Average (%) | 58.3 | 53.5 | 61.9 (+3.6) | 52.6 (-0.9)
", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "P@N for Long Tail Situation.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "Effect of Type Constraint Rules. Bold indicates entity, italic indicates targeted entity pair.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "P@N for Dynamic and Static Ensemble.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF7": { "text": "Hard Example of KG and Corpus Net. Bold indicates entity, italic indicates targeted entity pair.", "content": "
", "type_str": "table", "num": null, "html": null } } } }