{ "paper_id": "P17-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:18:58.943308Z" }, "title": "Learning with Noise: Enhance Distantly Supervised Relation Extraction with Dynamic Transition Matrix", "authors": [ { "first": "Bingfeng", "middle": [], "last": "Luo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "country": "China" } }, "email": "bfluo@pku.edu.cn" }, { "first": "Yansong", "middle": [], "last": "Feng", "suffix": "", "affiliation": {}, "email": "fengyansong@pku.edu.cn" }, { "first": "Zheng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "country": "China" } }, "email": "z.wang@lancaster.ac.uk" }, { "first": "Zhanxing", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "country": "China" } }, "email": "zhanxing.zhu@pku.edu.cn" }, { "first": "Songfang", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM China Research Lab", "location": { "country": "China" } }, "email": "huangsf@cn.ibm.com" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "country": "China" } }, "email": "ruiyan@pku.edu.cn" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "country": "China" } }, "email": "zhaody@pku.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Distant supervision significantly reduces human efforts in building training data for many classification tasks. While promising, this technique often introduces noise to the generated training data, which can severely affect the model performance. In this paper, we take a deep look at the application of distant supervision in relation extraction. We show that the dynamic transition matrix can effectively characterize the noise in the training data built by distant supervision. The transition matrix can be effectively trained using a novel curriculum learning based method without any direct supervision about the noise. We thoroughly evaluate our approach under a wide range of extraction scenarios. Experimental results show that our approach consistently improves the extraction results and outperforms the state-of-the-art in various evaluation scenarios.", "pdf_parse": { "paper_id": "P17-1040", "_pdf_hash": "", "abstract": [ { "text": "Distant supervision significantly reduces human efforts in building training data for many classification tasks. While promising, this technique often introduces noise to the generated training data, which can severely affect the model performance. In this paper, we take a deep look at the application of distant supervision in relation extraction. We show that the dynamic transition matrix can effectively characterize the noise in the training data built by distant supervision. The transition matrix can be effectively trained using a novel curriculum learning based method without any direct supervision about the noise. We thoroughly evaluate our approach under a wide range of extraction scenarios. 
Experimental results show that our approach consistently improves the extraction results and outperforms the state-of-the-art in various evaluation scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Distant supervision (DS) is rapidly emerging as a viable means for supporting various classification tasks -from relation extraction (Mintz et al., 2009) and sentiment classification (Go et al., 2009) to cross-lingual semantic analysis (Fang and Cohn, 2016) . By using knowledge learned from seed examples to label data, DS automatically prepares large scale training data for these tasks.", "cite_spans": [ { "start": 133, "end": 153, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF8" }, { "start": 183, "end": 200, "text": "(Go et al., 2009)", "ref_id": "BIBREF3" }, { "start": 236, "end": 257, "text": "(Fang and Cohn, 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While promising, DS does not guarantee perfect results and often introduces noise to the generated data. In the context of relation extraction, DS works by considering sentences containing both the subject and object of a triple as its supports. However, the generated data are not always perfect. For instance, DS could match the knowledge base (KB) triple, in false positive contexts like Donald Trump worked in New York City. Prior works (Takamatsu et al., 2012; Ritter et al., 2013) show that DS often mistakenly labels real positive instances as negative (false negative) or versa vice (false positive), and there could be confusions among positive labels as well. These noises can severely affect training and lead to poorlyperforming models.", "cite_spans": [ { "start": 492, "end": 516, "text": "(Takamatsu et al., 2012;", "ref_id": "BIBREF18" }, { "start": 517, "end": 537, "text": "Ritter et al., 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Tackling the noisy data problem of DS is nontrivial, since there usually lacks of explicit supervision to capture the noise. Previous works have tried to remove sentences containing unreliable syntactic patterns (Takamatsu et al., 2012) , design new models to capture certain types of noise or aggregate multiple predictions under the at-leastone assumption that at least one of the aligned sentences supports the triple in KB (Riedel et al., 2010; Surdeanu et al., 2012; Ritter et al., 2013; Min et al., 2013) . These approaches represent a substantial leap forward towards making DS more practical. however, are either tightly couple to certain types of noise, or have to rely on manual rules to filter noise, thus unable to scale. Recent breakthrough in neural networks provides a new way to reduce the influence of incorrectly labeled data by aggregating multiple training instances attentively for relation classification, without explicitly characterizing the inherent noise (Lin et al., 2016; Zeng et al., 2015) . 
Although promising, modeling noise within neural network architectures is still in its early stage and much remains to be done.", "cite_spans": [ { "start": 212, "end": 236, "text": "(Takamatsu et al., 2012)", "ref_id": "BIBREF18" }, { "start": 427, "end": 448, "text": "(Riedel et al., 2010;", "ref_id": "BIBREF13" }, { "start": 449, "end": 471, "text": "Surdeanu et al., 2012;", "ref_id": "BIBREF17" }, { "start": 472, "end": 492, "text": "Ritter et al., 2013;", "ref_id": "BIBREF15" }, { "start": 493, "end": 510, "text": "Min et al., 2013)", "ref_id": "BIBREF7" }, { "start": 981, "end": 999, "text": "(Lin et al., 2016;", "ref_id": "BIBREF5" }, { "start": 1000, "end": 1018, "text": "Zeng et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we aim to enhance DS noise modeling by providing the capability to explicitly characterize the noise in the DS-style training data within neural networks architectures. We show that while noise is inevitable, it is possible to characterize the noise pattern in a unified framework along with its original classification objective. Our key insight is that the DS-style training data typically contain useful clues about the noise pattern. For example, we can infer that since some people work in their birthplaces, DS could wrongly label a training sentence describing a working place as a born-in relation. Our novel approach to noisy modeling is to use a dynamically-generated transition matrix for each training instance to (1) characterize the possibility that the DS labeled relation is confused and (2) indicate its noise pattern. To tackle the challenge of no direct guidance over the noise pattern, we employ a curriculum learning based training method to gradually model the noise pattern over time, and utilize trace regularization to control the behavior of the transition matrix during training. Our approach is flexiblewhile it does not make any assumptions about the data quality, the algorithm can make effective use of the data-quality prior knowledge to guide the learning procedure when such clues are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply our method to the relation extraction task and evaluate under various scenarios on two benchmark datasets. Experimental results show that our approach consistently improves both extraction settings, outperforming the state-of-theart models in different settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work offers an effective way for tackling the noisy data problem of DS, making DS more practical at scale. Our main contributions are to (1) design a dynamic transition matrix structure to characterize the noise introduced by DS, and (2) design a curriculum learning based framework to adaptively guide the training procedure to learn with noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of distantly supervised relation extraction is to extract knowledge triples, , from free text with the training data constructed by aligning existing KB triples with a large corpus. Specifically, given a triple in KB, DS works by first retrieving all the sentences containing both subj and obj of the triple, and then constructing the training data by considering these sentences as support to the existence of the triple. 
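To make this construction procedure concrete, the following is a minimal sketch of distant-supervision labeling, assuming hypothetical inputs kb_triples (an iterable of (subj, rel, obj) triples) and corpus (an iterable of sentences); the names are illustrative and not part of any released implementation.

```python
# Minimal sketch of distant-supervision labeling (illustrative, not the authors' code).
def distant_supervision(kb_triples, corpus):
    """Label every sentence mentioning both entities of a KB triple with its relation."""
    training_data = []
    for subj, rel, obj in kb_triples:
        # Every co-occurrence sentence is taken as support for the triple. This is
        # exactly how false positives arise: a sentence such as "Donald Trump worked
        # in New York City" would still be labeled with whatever relation the KB
        # records between the two entities (e.g., a born-in triple).
        supports = [s for s in corpus if subj in s and obj in s]
        training_data.extend((s, subj, obj, rel) for s in supports)
    return training_data
```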
This task can be conducted at both the sentence level and the bag level. The former takes a sentence s containing both subj and obj as input, and outputs the relation expressed by the sentence between subj and obj. The latter setting alleviates the noisy data problem by using the at-least-one assumption that at least one of the retrieved sentences containing both subj and obj supports the triple. It takes a bag of sentences S as input where each sentence s \u2208 S contains both subj and obj, and outputs the relation between subj and obj expressed by this bag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "2" }, { "text": "In order to deal with the noisy training data obtained through DS, our approach follows four steps as depicted in Figure 1 . First, each input sentence is fed to a sentence encoder to generate an embedding vector. Our model then takes the sentence embeddings as input and produces a predicted relation distribution, p, for the input sentence (or the input sentence bag). At the same time, our model dynamically produces a transition matrix, T, which is used to characterize the noise pattern of the sentence (or the bag). Finally, the predicted distribution is multiplied by the transition matrix to produce the observed relation distribution, o, which is used to match the noisy relation labels assigned by DS, while the predicted relation distribution p serves as the output of our model during testing. One of the key challenges of our approach is determining the element values of the transition matrix, which will be described in Section 4.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 122, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Our approach", "sec_num": "3" }, { "text": "Sentence Embedding and Prediction In this work, we use a piecewise convolutional neural network (Zeng et al., 2015) for sentence encoding, but other sentence embedding models can also be used. We feed the sentence embedding to a fully connected layer, and use softmax to generate the predicted relation distribution, p.", "cite_spans": [ { "start": 96, "end": 115, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "Noise Modeling First, each sentence embedding x, generated by the sentence encoder, is passed through a fully connected layer with a non-linearity to obtain the sentence embedding x n used specifically for noise modeling.
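As a rough sketch of the two branches just described (not the released implementation), the snippet below abstracts the PCNN encoder away as a given vector x; the embedding size, the tanh non-linearity, and the random weights are assumptions made only for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, num_rel = 600, 12                      # hypothetical embedding size and |C|
rng = np.random.default_rng(0)
W_p, b_p = rng.normal(size=(num_rel, d)), np.zeros(num_rel)   # prediction head
W_n, b_n = rng.normal(size=(d, d)), np.zeros(d)               # noise-modeling head

x = rng.normal(size=d)        # sentence embedding produced by the (PCNN) encoder
p = softmax(W_p @ x + b_p)    # predicted relation distribution p
x_n = np.tanh(W_n @ x + b_n)  # embedding x_n used only for noise modeling; it is
                              # the input to the dynamic transition matrix T below
```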
We then use softmax to calculate the transition matrix T, for each sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T ij = exp(w T ij x n + b) |C| j=1 exp(w T ij x n + b)", "eq_num": "(1)" } ], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "where T ij is the conditional probability for the input sentence to be labeled as relation j by DS, given i as the true relation, b is a scalar bias, |C| is the number of relations, w ij is the weight vector characterizing the confusion between i and j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "Here, we dynamically produce a transition matrix, T, specifically for each sentence, but with the parameters (w ij ) shared across the dataset. By doing so, we are able to adaptively characterize the noise pattern for each sentence, with a few parameters only. In contrast, one could also produce a global transition matrix for all sentences, with much less computation, where one need not to compute T on the fly (see Section 6.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "Observed Distribution When we characterize the noise in a sentence with a transition matrix T, if its true relation is i, we can assume that i might be erroneously labeled as relation j by DS with probability T ij . We can therefore capture the observed relation distribution, o, by multiplying T and the predicted relation distribution, p:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "o = T T \u2022 p (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "where o is then normalized to ensure i o i = 1. Rather than using the predicted distribution p to directly match the relation labeled by DS (Zeng et al., 2015; Lin et al., 2016) , here we utilize o to match the noisy labels during training and still use p as output during testing, which actually captures the procedure of how the noisy label is produced and thus protects p from the noise.", "cite_spans": [ { "start": 140, "end": 159, "text": "(Zeng et al., 2015;", "ref_id": "BIBREF22" }, { "start": 160, "end": 177, "text": "Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Modeling", "sec_num": "3.1" }, { "text": "Bag Embedding and Prediction One of the key challenges for bag level model is how to aggregate the embeddings of individual sentences into the bag level. In this work, we experiment two methods, namely average and attention aggregation (Lin et al., 2016) . 
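Before detailing the two aggregation methods, here is a sketch of the sentence-level noise model just introduced; only the row-wise softmax of Eq. 1 and the renormalized product of Eq. 2 come from the paper, while the tensor shapes and random initialization are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

num_rel, d = 12, 600                        # hypothetical |C| and embedding size
rng = np.random.default_rng(0)
W = rng.normal(size=(num_rel, num_rel, d))  # weight vectors w_ij, shared globally
b = 0.0                                     # scalar bias

def transition_matrix(x_n):
    """Eq. 1: T_ij = softmax over j of (w_ij . x_n + b); each row sums to 1."""
    scores = W @ x_n + b                    # (|C|, |C|) score matrix
    return softmax(scores, axis=1)

def observed_distribution(p, T):
    """Eq. 2: o = T^T p, renormalized so that its entries sum to 1."""
    o = T.T @ p
    return o / o.sum()

# During training, o is matched against the noisy DS label,
# while p is used as the model output at test time.
```

We now return to the two bag-level aggregation methods, average and attention.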
The former calculates the bag embedding, s, by averaging the embeddings of each sentence, and then feed it to a softmax classifier for relation classification.", "cite_spans": [ { "start": 236, "end": 254, "text": "(Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "The attention aggregation calculates an attention value, a ij , for each sentence i in the bag with respect to each relation j, and aggregates to the bag level as s j , by the following equations 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "s j = n i a ij x i ; a ij = exp(x T i r j ) n i exp(x T i r j ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "where x i is the embedding of sentence i, n the number of sentences in the bag, and r j is the randomly initialized embedding for relation j. In similar spirit to (Lin et al., 2016) , the resulting bag embedding s j is fed to a softmax classifier to predict the probability of relation j for the given bag.", "cite_spans": [ { "start": 163, "end": 181, "text": "(Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "Noise Modeling Since the transition matrix addresses the transition probability with respect to each true relation, the attention mechanism appears to be a natural fit for calculating the transition matrix in bag level. Similar to attention aggregation above, we calculate the bag embedding with respect to each relation using Equation 3, but with a separate set of relation embeddings r j . We then calculate the transition matrix, T, by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T ij = exp(s T i r j + b i ) |C| j=1 exp(s T i r j + b i )", "eq_num": "(4)" } ], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "where s i is the bag embedding regarding relation i, and r j is the embedding for relation j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bag Level Modeling", "sec_num": "3.2" }, { "text": "One of the key challenges of this work is on how to train and produce the transition matrix to model the noise in the training data without any direct guidance and human involvement. A straightforward solution is to directly align the observed distribution, o, with respect to the noisy labels by minimizing the sum of the two terms: CrossEntropy(o) + Regularization. However, doing so does not guarantee that the prediction distribution, p, will match the true relation distribution. The problem is at the beginning of the training, we have no prior knowledge about the noise pattern, thus, both T and p are less reliable, making the training procedure be likely to trap into some poor local optimum. Therefore, we require a technique to guide our model to gradually adapt to the noisy training data, e.g., learning something simple first, and then trying to deal with noises.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning based Training", "sec_num": "4" }, { "text": "Fortunately, this is exactly what curriculum learning can do. 
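Before turning to the curriculum-learning details, the bag-level attention aggregation and transition matrix above can be sketched as follows; the dimensions, the random initialization, and the assumption that the separate relation embeddings are used both for the aggregation and for the scores in Eq. 4 are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

num_rel, d = 12, 600
rng = np.random.default_rng(0)
R_att = rng.normal(size=(num_rel, d))    # relation embeddings r_j for Eq. 3
R_noise = rng.normal(size=(num_rel, d))  # separate relation embeddings for Eq. 4
b_row = np.zeros(num_rel)                # per-row bias b_i

def bag_embeddings(X, R):
    """Eq. 3: s_j = sum_i a_ij x_i, with a_ij a softmax over sentences i of x_i . r_j."""
    A = softmax(X @ R.T, axis=0)         # (n_sentences, |C|) attention weights
    return A.T @ X                       # (|C|, d): one bag embedding per relation

def bag_transition_matrix(X):
    """Eq. 4: T_ij = softmax over j of (s_i . r_j + b_i)."""
    S = bag_embeddings(X, R_noise)       # bag embeddings with respect to each relation
    return softmax(S @ R_noise.T + b_row[:, None], axis=1)

X = rng.normal(size=(5, d))              # a bag of five sentence embeddings
s = bag_embeddings(X, R_att)             # fed to the softmax relation classifier
T = bag_transition_matrix(X)             # each row of T sums to 1
```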
The idea of curriculum learning (Bengio et al., 2009) is simple: starting with the easiest aspect of a task, and leveling up the difficulty gradually, which fits well to our problem. We thus employ a curriculum learning framework to guide our model to gradually learn how to characterize the noise. Another advantage is to avoid falling into poor local optimum.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Bengio et al., 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning based Training", "sec_num": "4" }, { "text": "With curriculum learning, our approach provides the flexibility to combine prior knowledge of noise, e.g., splitting a dataset into reliable and less reliable subsets, to improve the effectiveness of the transition matrix and better model the noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning based Training", "sec_num": "4" }, { "text": "Before proceeding to training details, we first discuss how we characterize the noise level of the data by controlling the trace of its transition matrix. Intuitively, if the noise is small, the transition matrix T will tend to become an identity matrix, i.e., given a set of annotated training sentences, the observed relations and their true relations are almost identical. Since each row of T sums to 1, the similarity between the transition matrix and the identity matrix can be represented by its trace, trace(T). The larger the trace(T) is, the larger the diagonal elements are, and the more similar the transition matrix T is to the identity matrix, indicating a lower level of noise. Therefore, we can characterize the noise pattern by controlling the expected value of trace(T) in the form of regularization. For example, we will expect a larger trace(T) for reliable data, but a smaller trace(T) for less reliable data. Another advantage of employing trace regularization is that it could help reduce the model complexity and avoid overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trace Regularization", "sec_num": "4.1" }, { "text": "To tackle the challenge of no direct guidance over the noise patterns, we implement a curriculum learning based training method to first train the model without considerations for noise. In other words, we first focus on the loss from the prediction distribution p , and then take the noise modeling into account gradually along the training process, i.e., gradually increasing the importance of the loss from the observed distribution o while decreasing the importance of p. In this way, the prediction branch is roughly trained before the model managing to characterize the noise, thus avoids being stuck into poor local optimum. 
We thus design to minimize the following loss function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "L = N i=1 \u2212((1 \u2212 \u03b1)log(o iy i ) + \u03b1log(p iy i )) \u2212 \u03b2trace(T i ) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "where 0<\u03b1\u22641 and \u03b2>0 are two weighting parameters, y i is the relation assigned by DS for the i-th instance, N the total number of training instances, o iy i is the probability that the observed relation for the i-th instance is y i , and p iy i is the probability to predict relation y i for the i-th instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "Initially, we set \u03b1=1, and train our model completely by minimizing the loss from the prediction distribution p. That is, we do not expect to model the noise, but focus on the prediction branch at this time. As the training progresses, the prediction branch gradually learns the basic prediction ability. We then decrease \u03b1 and \u03b2 by 0<\u03c1<1 (\u03b1 * =\u03c1\u03b1 and \u03b2 * =\u03c1\u03b2) every \u03c4 epochs, i.e., learning more about the noise from the observed distribution o and allowing a relatively smaller trace(T) to accommodate more noise. The motivation behind is to put more and more effort on learning the noise pattern as the training proceeds, with the essence of curriculum learning. This gradually learning paradigm significantly distinguishes from prior work on noise modeling for DS seen to date. Moreover, as such a method does not rely on any extra assumptions, it can serve as our default training method for T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "With Prior Knowledge of Data Quality On the other hand, if we happen to have prior knowledge about which part of the training data is more reliable and which is less reliable, we can utilize this knowledge as guidance to design the curriculum. Specifically, we can build a curriculum by first training the prediction branch on the reliable data for several epochs, and then adding the less reliable data to train the full model. In this way, the prediction branch is roughly trained before exposed to more noisy data, thus is less likely to fall into poor local optimum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "Furthermore, we can take better control of the training procedure with trace regularization, e.g., encouraging larger trace(T) for reliable subset and smaller trace(T) for less relaibale ones. Specifically, we propose to minimize:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "L = M m=1 Nm i=1 \u2212log(o mi,y mi ) \u2212 \u03b2 m trace(T mi ) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "where \u03b2 m is the regularization weight for the m-th data subset, M is the total number of subsets, N m the number of instances in m-th subset, and T mi , y mi and o mi,y mi are the transition matrix, the relation labeled by DS and the observed probability of this relation for the i-th training instance in the m-th subset, respectively. 
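To make the default curriculum of Eq. 5 concrete, the sketch below decays the two weighting parameters by the decay rate every few epochs; the forward pass is left as a placeholder, and the initial values and decay schedule (1 and 0.1, decayed by 0.9 every 5 epochs) are the ones reported later in the experimental setup.

```python
import numpy as np

def instance_loss(p, o, T, y, alpha, beta):
    """Eq. 5 for one instance: mixed log-likelihood plus trace regularization."""
    eps = 1e-12
    nll = -((1.0 - alpha) * np.log(o[y] + eps) + alpha * np.log(p[y] + eps))
    return nll - beta * np.trace(T)

alpha, beta = 1.0, 0.1   # start by training the prediction branch only
rho, tau = 0.9, 5        # decay rate and step (values used in the experiments)

for epoch in range(45):
    if epoch > 0 and epoch % tau == 0:
        alpha *= rho     # shift weight from p towards the observed distribution o
        beta *= rho      # allow a smaller trace(T), i.e., tolerate more modeled noise
    # For each training instance (or bag) with DS label y, a placeholder model
    # would produce p and T, set o = normalize(T.T @ p), accumulate
    # instance_loss(p, o, T, y, alpha, beta) over the batch, and update with SGD.
```

The prior-knowledge variant in Eq. 6 instead drops the mixed term, keeps only the log-likelihood of the observed distribution, and uses a separate trace-regularization weight for each data subset.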
Note that different from Equation 5, this loss function does not need to initiate training by minimizing the loss regarding the prediction distribution p, since one can easily start by learning from the most reliable split first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "We also use trace regularization for the most reliable subset, since there are still some noise annotations inevitably appearing in this split. Specifically, we expect its trace(T) to be large (using a positive \u03b2) so that the elements of T will be centralized to the diagonal and T will be more similar to the identity matrix. As for the less reliable subset, we expect the trace(T) to be small (using a negative \u03b2) so that the elements of the transition matrix will be diffusive and T will be less similar to the identity matrix. In other words, the transition matrix is encouraged to characterize the noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "Note that this loss function only works for sentence level models. For bag level models, since reliable and less reliable sentences are all aggregated into a sentence bag, we can not determine which bag is reliable and which is not. However, bag level models can still build a curriculum by changing the content of a bag, e.g., keeping reliable sentences in the bag first, then gradually adding less reliable ones, and training with Equation 5, which could benefit from the prior knowledge of data quality as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "Our experiments aim to answer two main questions: (1) is it possible to model the noise in the training data generated through DS, even when there is no prior knowledge to guide us? and (2) whether the prior knowledge of data quality can help our approach better handle the noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "5" }, { "text": "We apply our approach to both sentence level and bag level extraction models, and evaluate in the situations where we do not have prior knowledge of the data quality as well as where such prior knowledge is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "5" }, { "text": "We evaluate our approach on two datasets. TIMERE We build TIMERE by using DS to align time-related Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) KB triples to Wikipedia text. It contains 278,141 sentences with 12 types of relations between an entity mention and a time expression. We choose to use time-related relations because time expressions speak for themselves in terms of reliability. That is, given a KB triple and its aligned sentences, the finergrained the time expression t appears in the sentence, the more likely the sentence supports the existence of this triple. For example, a sentence containing both Alphabet and October-2-2015 is very likely to express the inception-time of Alphabet, while a sentence containing both Alphabet and 2015 could instead talk about many events, e.g., releasing financial report of 2015, hiring a new CEO, etc. Using this heuristics, we can split the dataset into 3 subsets according to different granularities of the time expressions involved, indicating different levels of reliability. Our criteria for determining the reliability are as follows. 
Instances with full date expressions, i.e., Year-Month-Day, can be seen as the most reliable data, while those with partial date expressions, e.g., Month-Year and Year-Only, are considered less reliable. Negative data are constructed heuristically: any entity-time pair in a sentence without a corresponding triple in Wikidata is treated as a negative instance. During training, we can access 184,579 negative and 77,777 positive sentences, including 22,214 reliable sentences, and 2,094 and 53,469 less reliable ones. The validation set and test set are randomly sampled from the reliable (full-date) data for relatively fair evaluations and contain 2,776 and 2,771 positive sentences and 5,143 and 5,095 negative sentences, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "ENTITYRE is a widely-used entity relation extraction dataset, built by aligning triples in Freebase to the New York Times (NYT) corpus (Riedel et al., 2010) . It contains 52 relations, 136,947 positive and 385,664 negative sentences for training, and 6,444 positive and 166,004 negative sentences for testing. Unlike TIMERE, this dataset does not contain any prior knowledge about the data quality. Since the sentence level annotations in ENTITYRE are too noisy to serve as a gold standard, we only evaluate bag-level models on ENTITYRE, a standard practice in previous works (Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016) .", "cite_spans": [ { "start": 135, "end": 156, "text": "(Riedel et al., 2010)", "ref_id": "BIBREF13" }, { "start": 575, "end": 598, "text": "(Surdeanu et al., 2012;", "ref_id": "BIBREF17" }, { "start": 599, "end": 617, "text": "Zeng et al., 2015;", "ref_id": "BIBREF22" }, { "start": 618, "end": 635, "text": "Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Hyper-parameters We use 200 convolution kernels with window size 3. During training, we use stochastic gradient descent (SGD) with batch size 20. The learning rates for sentence-level and bag-level models are 0.1 and 0.01, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "Sentence level experiments are performed on TIMERE, using 100-d word embeddings pretrained using GloVe (Pennington et al., 2014) on Wikipedia and Gigaword (Parker et al., 2011) , and 20-d vectors for distance embeddings. Each of the three subsets of TIMERE is added after the previous phase has run for 15 epochs. The trace regularization weights are \u03b2 1 = 0.01, \u03b2 2 = \u22120.01 and \u03b2 3 = \u22120.1, respectively, from the reliable to the most unreliable, with the ratio of \u03b2 3 and \u03b2 2 fixed to 10 or 5 when tuning.", "cite_spans": [ { "start": 103, "end": 128, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF11" }, { "start": 155, "end": 176, "text": "(Parker et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "Bag level experiments are performed on both TIMERE and ENTITYRE. For TIMERE, we use the same parameters as above. For ENTITYRE, we use 50-d word embeddings pre-trained on the NYT corpus using word2vec (Mikolov et al., 2013) , and 5-d vectors for distance embeddings. For both datasets, \u03b1 and \u03b2 in Eq. 5 are initialized to 1 and 0.1, respectively. We tried various decay rates, {0.95, 0.9, 0.8}, and steps, {3, 5, 8}.
We found that using a decay rate of 0.9 with step of 5 gives best performance in most cases.", "cite_spans": [ { "start": 201, "end": 223, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "Evaluation Metric The performance is reported using the precision-recall (PR) curve, which is a standard evaluation metric in relation extraction. Specifically, the extraction results are first ranked decreasingly by their confidence scores, then the precision and recall are calculated by setting the threshold to be the score of each extraction result one by one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We evaluate our approach under a wide range of settings for sentence level (sent ) and bag level (bag ) models: (1) mix: trained on all three subsets of TIMERE mixed together; (2) reliable: trained using the reliable subset of TIMERE only; (3) PR: trained with prior knowledge of annotation quality, i.e., starting from the reliable data and then adding the unreliable data; (4) TM: trained with dynamic transition matrix; (5) GTM: trained with a global transition matrix. In bag level, we also investigate the performance of average aggregation ( avg) and attention aggregation ( att). Figure 2 . We can see that mixing all subsets together (sent mix) gives the worst performance, significantly worse than using the reliable subset only (sent reliable). This suggests the noisy nature of the training data obtained through DS and properly dealing with the noise is the key for DS for a wider range of applications. When getting help from our dynamic transition matrix, the model (sent mix TM) significantly improves sent mix, delivering the same level of performance as sent reliable in most cases. This suggests that our transition matrix can help to mitigate the bad influence of noisy training instances. Now let us consider the PR scenario where one can build a curriculum by first training on the reliable subset, then gradually moving to both reliable and less reliable data. We can see that, this simple curriculum learning based model (sent PR) further outperforms sent reliable significantly, indicating that the curriculum learning framework not only reduces the effect of noise, but also helps the model learn from noisy data. When applying the transition matrix approach into this curriculum learning framework using one reliable subset and one unreliable subset generated by mixing our two less reliable subsets, our model (sent PR seg2 TM) further improves sent PR by utilizing the dynamic transition matrix to model the noise. It is not surprising that when we use all three subsets separately, our model (sent PR TM) significantly outperforms all other models by a large margin. Bag Level Models In this setting, we first look at the performance of the bag level models with attention aggregation. The results are shown in Figure 3(a) . Consider the comparison between the model trained on the reliable subset only (bag att reliable) and the one trained on the mixed dataset (bag att mix). In contrast to the sentence level, bag att mix outperforms bag att reliable by a large margin, because bag att mix has taken the at-least-one assumption into consideration through the attention aggregation mechanism (Eq. 3), which can be seen as a denoising step within the bag. 
This may also be the reason that when we introduce either our dynamic transition matrix (bag att mix TM) or the curriculum of using prior knowledge of data quality (bag att PR) into the bag level models, the improvement over bag att mix is not as significant as at the sentence level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naming Conventions", "sec_num": null }, { "text": "However, when we apply our dynamic transition matrix to the curriculum built upon prior knowledge of data quality (bag att PR TM), the performance gets further improved. This happens especially in the high-precision region, compared to bag att PR. We also note that the bag level's at-least-one assumption does not always hold, and there are still false negative and false positive problems. Therefore, using our transition matrix approach with or without prior knowledge of data quality, i.e., bag att mix TM and bag att PR TM, both improve the performance, and bag att PR TM performs slightly better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naming Conventions", "sec_num": null }, { "text": "The results of bag level models with average aggregation are shown in Figure 3(b) , where the relative ranking of various settings is similar to those with attention aggregation. [Figure 4: Global TM v.s. Dynamic TM.] A notable difference is that both bag avg PR and bag avg mix TM improve bag avg mix by a larger margin compared to that in the attention aggregation setting. The reason may be that the average aggregation mechanism is not as good as the attention aggregation at denoising within the bag, which leaves more room for our transition matrix approach or curriculum learning with prior knowledge to improve. Also note that bag avg reliable performs best in the very-low-recall region but worst in general. This is because it ranks sentences expressing either birth-date or death-date (the simplest but most common relations in the dataset) higher, but fails to learn other relations with limited or noisy training instances, given its relatively simple aggregation strategy.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 81, "text": "Figure 3(b)", "ref_id": null } ], "eq_spans": [], "section": "Naming Conventions", "sec_num": null }, { "text": "We also compare our dynamic transition matrix method with the global transition matrix method, which maintains only one transition matrix for all training instances. Specifically, instead of dynamically generating a transition matrix for each datum, we first initialize an identity matrix T \u2208 R |C|\u00d7|C| , where |C| is the number of relations (including no-relation). Then the global transition matrix T is built by applying softmax to each row of this matrix so that the entries in each row sum to 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global v.s. 
Dynamic Transition Matrix", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T ij = e T ij |C| j=1 e T ij", "eq_num": "(7)" } ], "section": "Global v.s. Dynamic Transition Matrix", "sec_num": null }, { "text": "where T ij and T ij are the elements in the i th row and j th column of T and T . The element values of matrix T are also updated via backpropagation during training. As shown in Figure 4 , using one global transition matrix ( GTM) is also beneficial and improves both the sentence level (sent PR) and bag level (bag att PR) models. However, since the global transition matrix only captures the global noise pattern, it fails to characterize individuals with subtle differences, resulting in a performance drop compared to the dynamic one ( TM).", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 187, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Global v.s. Dynamic Transition Matrix", "sec_num": null }, { "text": "Case Study We find our transition matrix method tends to obtain more significant improvement on noisier relations. For example, time of spacecraft landing is noisier than time of spacecraft launch since compared to the launching of a spacecraft, there are fewer sentences containing the landing time of a spacecraft that talks directly about the landing. Instead, many of these sentences tend to talk about the activities of the crew. Our sent PR TM model improves the F1 of time of spacecraft landing and time of spacecraft launch over sent PR by 9.09% and 2.78%, respectively. The transition matrix makes more significant improvement on time of spacecraft landing since there are more noisy sentences for our method to handle, which results in more significant improvement on the quality of the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global v.s. Dynamic Transition Matrix", "sec_num": null }, { "text": "We evaluate our bag level models on ENTI-TYRE. As shown in Figure 5 , it is not surprising that the basic model with attention aggregation (att) significantly outperforms the average one (avg), where att in our bag embedding is similar in spirit to (Lin et al., 2016) , which has reported the-state-of-the-art performance on ENTI-TYRE. When injected with our transition matrix approach, both att TM and avg TM clearly outperform their basic versions. Table 1 : Comparison with feature-based methods. 
P@R 10/20/30 refers to the precision when recall equals 10%, 20% and 30%.", "cite_spans": [ { "start": 249, "end": 267, "text": "(Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 451, "end": 458, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Performance on ENTITYRE", "sec_num": "6.2" }, { "text": "Similar to the situation on TIMERE, since att has already taken the at-least-one assumption into account through its attention-based bag embedding mechanism, the improvement made by att TM is not as large as that made by avg TM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on ENTITYRE", "sec_num": "6.2" }, { "text": "We also include the comparison with three feature-based methods: Mintz (Mintz et al., 2009) is a multiclass logistic regression model; MultiR (Hoffmann et al., 2011) is a probabilistic graphical model that can handle overlapping relations; MIML (Surdeanu et al., 2012) is also a probabilistic graphical model but operates in the multi-instance multi-label paradigm. As shown in Table 1, although traditional feature-based methods have reasonable results in the low recall region, their performance drops quickly as the recall goes up, and MultiR and MIML do not even reach 30% recall. This indicates that, while human-designed features can effectively capture certain relation patterns, their coverage is relatively low. On the other hand, neural network models have more stable performance across different recalls, and att TM performs generally better than other models, indicating again the effectiveness of our transition matrix method.", "cite_spans": [ { "start": 71, "end": 91, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF8" }, { "start": 142, "end": 165, "text": "(Hoffmann et al., 2011)", "ref_id": "BIBREF4" }, { "start": 245, "end": 268, "text": "(Surdeanu et al., 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Performance on ENTITYRE", "sec_num": "6.2" }, { "text": "In addition to relation extraction, distant supervision (DS) is shown to be effective in generating training data for various NLP tasks, e.g., tweet sentiment classification (Go et al., 2009) , tweet named entity classification (Ritter et al., 2011) , etc. However, these early applications of DS do not adequately address the issue of data noise.", "cite_spans": [ { "start": 174, "end": 191, "text": "(Go et al., 2009)", "ref_id": "BIBREF3" }, { "start": 228, "end": 249, "text": "(Ritter et al., 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In relation extraction (RE), recent works have been proposed to reduce the influence of wrongly labeled data. The work presented by (Takamatsu et al., 2012) removes potential noisy sentences by identifying bad syntactic patterns at the preprocessing stage. (Xu et al., 2013) use pseudorelevance feedback to find possible false negative data. (Riedel et al., 2010) make the at-leastone assumption and propose to alleviate the noise problem by considering RE as a multi-instance classification problem. Following this assumption, people further improves the original paradigm using probabilistic graphic models (Hoffmann et al., 2011; Surdeanu et al., 2012) , and neural network methods (Zeng et al., 2015) . Recently, (Lin et al., 2016) propose to use attention mechanism to reduce the noise within a sentence bag. 
Instead of characterizing the noise, these approaches only aim to alleviate the effect of noise.", "cite_spans": [ { "start": 132, "end": 156, "text": "(Takamatsu et al., 2012)", "ref_id": "BIBREF18" }, { "start": 257, "end": 274, "text": "(Xu et al., 2013)", "ref_id": "BIBREF21" }, { "start": 342, "end": 363, "text": "(Riedel et al., 2010)", "ref_id": "BIBREF13" }, { "start": 609, "end": 632, "text": "(Hoffmann et al., 2011;", "ref_id": "BIBREF4" }, { "start": 633, "end": 655, "text": "Surdeanu et al., 2012)", "ref_id": "BIBREF17" }, { "start": 685, "end": 704, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF22" }, { "start": 717, "end": 735, "text": "(Lin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The at-least-one assumption is often too strong in practice, and there are still chances that the sentence bag may be false positive or false negative. Thus it is important to model the noise pattern to guide the learning procedure. (Ritter et al., 2013) and (Min et al., 2013) try to employ a set of latent variables to represent the true relation. Our approach differs from them in two aspects. We target noise modeling in neutral networks while they target probabilistic graphic models. We further advance their models by providing the capability to model the fine-grained transition from the true relation to the observed, and the flexibility to combine indirect guidance.", "cite_spans": [ { "start": 233, "end": 254, "text": "(Ritter et al., 2013)", "ref_id": "BIBREF15" }, { "start": 259, "end": 277, "text": "(Min et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Outside of NLP, various methods have been proposed in computer vision to model the data noise using neural networks. (Sukhbaatar et al., 2015 ) utilize a global transition matrix with weight decay to transform the true label distribution to the observed. (Reed et al., 2014 ) use a hidden layer to represent the true label distribution but try to force it to predict both the noisy label and the input. (Chen and Gupta, 2015; Xiao et al., 2015) first estimate the transition matrix on a clean dataset and apply to the noisy data. Our model shares similar spirit with (Misra et al., 2016) in that we all dynamically generate a transition matrix for each training instance, but, instead of using vanilla SGD, we train our model with a novel curriculum learning training framework with trace regularization to control the behavior of transition matrix. In NLP, the only work in neural-network-based noise modeling is to use one single global transition matrix to model the noise introduced by crosslingual projection of training data (Fang and Cohn, 2016) . 
Our work advances them through generating a transition matrix dynamically for each instance, to avoid using one single component to characterize both reliable and unreliable data.", "cite_spans": [ { "start": 117, "end": 141, "text": "(Sukhbaatar et al., 2015", "ref_id": "BIBREF16" }, { "start": 255, "end": 273, "text": "(Reed et al., 2014", "ref_id": "BIBREF12" }, { "start": 403, "end": 425, "text": "(Chen and Gupta, 2015;", "ref_id": "BIBREF1" }, { "start": 426, "end": 444, "text": "Xiao et al., 2015)", "ref_id": "BIBREF20" }, { "start": 567, "end": 587, "text": "(Misra et al., 2016)", "ref_id": "BIBREF9" }, { "start": 1031, "end": 1052, "text": "(Fang and Cohn, 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we investigate the noise problem inherent in the DS-style training data. We argue that the data speak for themselves by providing useful clues to reveal their noise patterns. We thus propose a novel transition matrix based method to dynamically characterize the noise underlying such training data in a unified framework along the original prediction objective. One of our key innovations is to exploit a curriculum learning based training method to gradually learn to model the underlying noise pattern without direct guidance, and to provide the flexibility to exploit any prior knowledge of the data quality to further improve the effectiveness of the transition matrix. We evaluate our approach in two learning settings of the distantly supervised relation extraction. The experimental results show that the proposed method can better characterize the underlying noise and consistently outperform start-of-the-art extraction models under various scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "While(Lin et al., 2016) use bilinear function to calculate aij, we simply use dot product since we find these two functions perform similarly in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the National High Technology R&D Program of China (2015AA015403); the National Natural Science Foundation of China (61672057, 61672058); KLSTSPI Key Lab.of Intelligent Press Media Technology; the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Curriculum learning", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "J\u00e9r\u00f4me", "middle": [], "last": "Louradour", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2009, "venue": "ICML. ACM", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML. 
ACM, pages 41-48.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Webly supervised learning of convolutional networks", "authors": [ { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2015, "venue": "ICCV", "volume": "", "issue": "", "pages": "1431--1439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinlei Chen and Abhinav Gupta. 2015. Webly super- vised learning of convolutional networks. In ICCV. pages 1431-1439.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning when to trust distant supervision: An application to lowresource pos tagging using cross-lingual projection", "authors": [ { "first": "Meng", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2016, "venue": "CONLL", "volume": "", "issue": "", "pages": "178--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to low- resource pos tagging using cross-lingual projection. In CONLL. pages 178-186.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Twitter sentiment classification using distant supervision", "authors": [ { "first": "Alec", "middle": [], "last": "Go", "suffix": "" }, { "first": "Richa", "middle": [], "last": "Bhayani", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2009, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. CS224N Project Report, Stanford 1(12).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Knowledgebased weak supervision for information extraction of overlapping relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of ACL. pages 541-550.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural relation extraction with selective attention over instances", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "In ACL. vol", "volume": "1", "issue": "", "pages": "2124--2133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL. 
vol- ume 1, pages 2124-2133.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS. pages 3111-3119.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distant supervision for relation extraction with an incomplete knowledge base", "authors": [ { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Gondek", "suffix": "" } ], "year": 2013, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "777--782", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In HLT-NAACL. pages 777-782.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "ACL", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In ACL. pages 1003- 1011.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Seeing through the human reporting bias: Visual classifiers from noisy humancentric labels", "authors": [ { "first": "Ishan", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "2930--2939", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy human- centric labels. In CVPR. 
pages 2930-2939.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "English gigaword fifth edition, linguistic data consortium", "authors": [ { "first": "Robert", "middle": [], "last": "Parker", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2011, "venue": "Linguistic Data Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition, linguistic data consortium. Technical report, Linguistic Data Consortium, Philadelphia.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532-1543.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Training deep neural networks on noisy labels with bootstrapping", "authors": [ { "first": "Scott", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Anguelov", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Rabinovich", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6596" ] }, "num": null, "urls": [], "raw_text": "Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2014. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Modeling relations and their mentions without labeled text", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2010, "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "148--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases.
Springer, pages 148-163.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Named entity recognition in tweets: an experimental study", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "EMNLP. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1524--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In EMNLP. Association for Computational Linguistics, pages 1524-1534.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modeling missing data in distant supervision for information extraction", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2013, "venue": "TACL", "volume": "1", "issue": "", "pages": "367--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. TACL 1:367-378.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Training convolutional networks with noisy labels", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Joan", "middle": [], "last": "Bruna", "suffix": "" }, { "first": "Manohar", "middle": [], "last": "Paluri", "suffix": "" }, { "first": "Lubomir", "middle": [], "last": "Bourdev", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2015. Training convolutional networks with noisy labels. In ICLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multi-instance multi-label learning for relation extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In EMNLP-CoNLL. pages 455-465.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Reducing wrong labels in distant supervision for relation extraction", "authors": [ { "first": "Shingo", "middle": [], "last": "Takamatsu", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2012, "venue": "ACL", "volume": "", "issue": "", "pages": "721--729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa.
2012. Reducing wrong labels in distant supervision for relation extraction. In ACL. pages 721-729.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Wikidata: a free collaborative knowledgebase", "authors": [ { "first": "Denny", "middle": [], "last": "Vrande\u010di\u0107", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kr\u00f6tzsch", "suffix": "" } ], "year": 2014, "venue": "Communications of the ACM", "volume": "57", "issue": "10", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10):78-85.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning from massive noisy labeled data for image classification", "authors": [ { "first": "Tong", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "CVPR", "volume": "", "issue": "", "pages": "2691--2699", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. 2015. Learning from massive noisy labeled data for image classification. In CVPR. pages 2691-2699.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Filling knowledge base gaps for distant supervision of relation extraction", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Le", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "665--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Raphael Hoffmann, Le Zhao, and Ralph Grishman. 2013. Filling knowledge base gaps for distant supervision of relation extraction. In ACL. pages 665-670.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distant supervision for relation extraction via piecewise convolutional neural networks", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1753--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP. pages 1753-1762.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Sentence Level Results on TIMERE", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Figure 3: Bag Level Results on TIMERE. (b) Average Aggregation (series: bag_avg_mix, bag_avg_reliable, bag_avg_PR, bag_avg_mix_TM, bag_avg_PR_TM)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Results on ENTITYRE", "num": null, "uris": null, "type_str": "figure" } } } }