{ "paper_id": "D12-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:22:48.058263Z" }, "title": "Multi-instance Multi-label Learning for Relation Extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": { "postCode": "94305", "settlement": "Stanford", "region": "CA" } }, "email": "mihais@stanford.edu" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": { "postCode": "94305", "settlement": "Stanford", "region": "CA" } }, "email": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "", "affiliation": {}, "email": "nallapat@ai.sri.com" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": { "postCode": "94305", "settlement": "Stanford", "region": "CA" } }, "email": "manning@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Distant supervision for relation extraction (RE)-gathering training data by aligning a database of facts with text-is an efficient approach to scale RE to thousands of different relations. However, this introduces a challenging learning scenario where the relation expressed by a pair of entities found in a sentence is unknown. For example, a sentence containing Balzac and France may express BornIn or Died, an unknown relation, or no relation at all. Because of this, traditional supervised learning, which assumes that each example is explicitly mapped to a label, is not appropriate. We propose a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables. Our model performs competitively on two difficult domains.", "pdf_parse": { "paper_id": "D12-1042", "_pdf_hash": "", "abstract": [ { "text": "Distant supervision for relation extraction (RE)-gathering training data by aligning a database of facts with text-is an efficient approach to scale RE to thousands of different relations. However, this introduces a challenging learning scenario where the relation expressed by a pair of entities found in a sentence is unknown. For example, a sentence containing Balzac and France may express BornIn or Died, an unknown relation, or no relation at all. Because of this, traditional supervised learning, which assumes that each example is explicitly mapped to a label, is not appropriate. We propose a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables. Our model performs competitively on two difficult domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Information extraction (IE), defined as the task of extracting structured information (e.g., events, binary relations, etc.) from free text, has received renewed interest in the \"big data\" era, when petabytes of natural-language text containing thousands of different structure types are readily available. However, traditional supervised methods are unlikely to scale in this context, as training data is either limited or nonexistent for most of these structures. 
One of the most promising approaches to IE that addresses this limitation is distant supervision, which generates training data automatically by aligning a database of facts with text (Craven and Kumlien, 1999; Bunescu and Mooney, 2007).", "cite_spans": [ { "start": 185, "end": 211, "text": "(Craven and Kumlien, 1999;", "ref_id": "BIBREF3" }, { "start": 212, "end": 237, "text": "Bunescu and Mooney, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Training sentences generated through distant supervision for a database containing two facts: DB = { BornIn(Barack Obama, United States), EmployedBy(Barack Obama, United States) }. The aligned sentences and their latent labels are: \"Barack Obama is the 44th and current President of the United States.\" (EmployedBy); \"Obama was born in the United States just as he has always said.\" (BornIn); \"United States President Barack Obama meets with Chinese Vice President Xi Jinping today.\" (EmployedBy); \"Obama ran for the United States Senate in 2004.\" (no valid label).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In this paper we focus on distant supervision for relation extraction (RE), a subproblem of IE that addresses the extraction of labeled relations between two named entities. Figure 1 shows a simple example for an RE domain with two labels. Distant supervision introduces two modeling challenges, which we highlight in Figure 1. The first challenge is that some training examples obtained through this heuristic are not valid, e.g., the last sentence in Figure 1 is not a correct example for any of the known labels for the tuple. The percentage of such false positives can be quite high. For example, Riedel et al. (2010) report up to 31% of false positives in a corpus that matches Freebase relations with New York Times articles. The second challenge is that the same pair of entities may have multiple labels and it is unclear which label is instantiated by any textual mention of the given tuple. For example, in Figure 1, the tuple (Barack Obama, United States) has two valid labels: BornIn and EmployedBy, each (latently) instantiated in different sentences. In the Riedel corpus, 7.5% of the entity tuples in the training partition have more than one label.", "cite_spans": [ { "start": 601, "end": 621, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 1", "ref_id": null }, { "start": 453, "end": 461, "text": "Figure 1", "ref_id": null }, { "start": 917, "end": 925, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Figure 2: Overview of multi-instance multi-label learning. To contrast, in traditional supervised learning there is one instance and one label per object. For relation extraction the object is a tuple of two named entities. Each mention of this tuple in text generates a different instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We summarize this multi-instance multi-label (MIML) learning problem in Figure 2. In this paper we propose a novel graphical model, which we call MIML-RE, that targets MIML learning for relation extraction. Our work makes the following contributions:", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "(a) To our knowledge, MIML-RE is the first RE approach that jointly models both multiple instances (by modeling the latent labels assigned to instances) and multiple labels (by providing a simple method to capture dependencies between labels). For example, our model learns that certain labels tend to be generated jointly while others cannot be jointly assigned to the same tuple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "(b) We show that MIML-RE performs competitively on two difficult domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Distant supervision for IE was introduced by Craven and Kumlien (1999), who focused on the extraction of binary relations between proteins and cells/tissues/diseases/drugs using the Yeast Protein Database as a source of distant supervision. Since then, the approach grew in popularity (Bunescu and Mooney, 2007; Bellare and McCallum, 2007; Wu and Weld, 2007; Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Nguyen and Moschitti, 2011; Sun et al., 2011; Surdeanu et al., 2011a). However, most of these approaches make one or more approximations in learning. For example, most proposals heuristically transform distant supervision to traditional supervised learning (i.e., single-instance single-label) (Bellare and McCallum, 2007; Wu and Weld, 2007; Mintz et al., 2009; Nguyen and Moschitti, 2011; Sun et al., 2011; Surdeanu et al., 2011a). Bunescu and Mooney (2007) and Riedel et al. (2010) model distant supervision for relation extraction as a multi-instance single-label problem, which allows multiple mentions for the same tuple but disallows more than one label per object. Our work is closest to Hoffmann et al. (2011). They address the same problem we do (binary relation extraction) with a MIML model, but they make two approximations. First, they use a deterministic model that aggregates latent instance labels into a set of labels for the corresponding tuple by OR-ing the classification results. We use instead an object-level classifier that is trained jointly with the classifier that assigns latent labels to instances and can capture dependencies between labels. Second, they use a Perceptron-style additive parameter update approach, whereas we train in a Bayesian framework. 
We show in Section 5 that these approximations generally have a negative impact on performance.", "cite_spans": [ { "start": 45, "end": 70, "text": "Craven and Kumlien (1999)", "ref_id": "BIBREF3" }, { "start": 286, "end": 312, "text": "(Bunescu and Mooney, 2007;", "ref_id": "BIBREF2" }, { "start": 313, "end": 340, "text": "Bellare and McCallum, 2007;", "ref_id": "BIBREF0" }, { "start": 341, "end": 359, "text": "Wu and Weld, 2007;", "ref_id": null }, { "start": 360, "end": 379, "text": "Mintz et al., 2009;", "ref_id": "BIBREF8" }, { "start": 380, "end": 400, "text": "Riedel et al., 2010;", "ref_id": "BIBREF10" }, { "start": 401, "end": 423, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF5" }, { "start": 424, "end": 451, "text": "Nguyen and Moschitti, 2011;", "ref_id": "BIBREF9" }, { "start": 452, "end": 469, "text": "Sun et al., 2011;", "ref_id": "BIBREF11" }, { "start": 470, "end": 493, "text": "Surdeanu et al., 2011a)", "ref_id": "BIBREF12" }, { "start": 718, "end": 746, "text": "(Bellare and McCallum, 2007;", "ref_id": "BIBREF0" }, { "start": 747, "end": 765, "text": "Wu and Weld, 2007;", "ref_id": null }, { "start": 766, "end": 785, "text": "Mintz et al., 2009;", "ref_id": "BIBREF8" }, { "start": 786, "end": 813, "text": "Nguyen and Moschitti, 2011;", "ref_id": "BIBREF9" }, { "start": 814, "end": 831, "text": "Sun et al., 2011;", "ref_id": "BIBREF11" }, { "start": 832, "end": 855, "text": "Surdeanu et al., 2011a)", "ref_id": "BIBREF12" }, { "start": 858, "end": 883, "text": "Bunescu and Mooney (2007)", "ref_id": "BIBREF2" }, { "start": 888, "end": 908, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" }, { "start": 1120, "end": 1142, "text": "Hoffmann et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "MIML learning has been used in fields other than natural language processing. For example, Zhou and Zhang (2007) use MIML for scene classification. In this problem, each image may be assigned multiple labels corresponding to the different scenes captured. Furthermore, each image contains a set of patches, which forms the bag of instances assigned to the given object (image). Zhou and Zhang propose two algorithms that reduce the MIML problem to a more traditional supervised learning task. In one algorithm, for example, they convert the task to a multi-instance single-label problem by creating a separate bag for each label. Due to this, the proposed approach cannot model inter-label dependencies. Moreover, the authors make a series of approximations, e.g., they assume that each instance in a bag shares the bag's overall label. We instead model all these issues explicitly in our approach.", "cite_spans": [ { "start": 91, "end": 112, "text": "Zhou and Zhang (2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In general, our approach belongs to the category of models that learn in the presence of incomplete or incorrect labels. There has been interest among machine learning researchers in the general problem of noisy data, especially in the area of instance-based learning. Brodley and Friedl (1999) summarize past approaches and present a simple, all-purpose method to filter out incorrect data before training. 
While potentially applicable to our problem, this approach is completely general and cannot incorporate our domain-specific knowledge about how the noisy data is generated.", "cite_spans": [ { "start": 269, "end": 294, "text": "Brodley and Friedl (1999)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Here we focus on distant supervision for the extraction of relations between two entities. We define a relation as the construct $r(e_1, e_2)$, where $r$ is the relation name, e.g., BornIn in Figure 1, and $e_1$ and $e_2$ are two entity names, e.g., Barack Obama and United States. Note that there are entity tuples $(e_1, e_2)$ that participate in multiple relations, $r_1, \ldots, r_i$. In other words, the tuple $(e_1, e_2)$ is the object illustrated in Figure 2 and the different relation names are the labels. We define an entity mention as a sequence of text tokens that matches the corresponding entity name in some text, and a relation mention (for a given relation $r(e_1, e_2)$) as a pair of entity mentions of $e_1$ and $e_2$ in the same sentence. Relation mentions thus correspond to the instances in Figure 2. (For this reason, we use relation mention and relation instance interchangeably in this paper.) As the latter definition indicates, we focus on the extraction of relations expressed in a single sentence. Furthermore, we assume that entity mentions are extracted by a different process, such as a named entity recognizer.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 198, "text": "Figure 1", "ref_id": null }, { "start": 452, "end": 460, "text": "Figure 2", "ref_id": null }, { "start": 802, "end": 810, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Distant Supervision for Relation Extraction", "sec_num": "3" }, { "text": "We define the task of relation extraction as a function that takes as input a document collection (C), a set of entity mentions extracted from C (E), a set of known relation labels (L) and an extraction model, and outputs a set of relations (R) such that any of the relations extracted is supported by at least one sentence in C. To train the extraction model, we use a database of relations (D) that are instantiated at least once in C. Using distant supervision, D is aligned with sentences in C, producing relation mentions for all relations in D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distant Supervision for Relation Extraction", "sec_num": "3" },
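To make this setup concrete, the following sketch shows how distantly supervised MIML bags could be assembled from a fact database and NER output. It is our illustration rather than the paper's released code; the function name, and the shapes assumed for D and sentences, are hypothetical.

```python
from collections import defaultdict

# Illustrative sketch of the distant-supervision alignment described above;
# not the authors' implementation. D maps an entity tuple to its known labels,
# e.g., {("Barack Obama", "United States"): {"BornIn", "EmployedBy"}}, and
# `sentences` pairs each sentence with the entity mentions a NER system found in it.
def build_miml_bags(D, sentences):
    bags = defaultdict(lambda: {"instances": [], "labels": set()})
    for sent, entity_mentions in sentences:
        for e1 in entity_mentions:
            for e2 in entity_mentions:
                if e1 == e2 or (e1, e2) not in D:
                    continue
                bag = bags[(e1, e2)]
                bag["instances"].append(sent)  # one instance per relation mention
                bag["labels"] = D[(e1, e2)]    # the positive labels P_i; N_i = L \ P_i
    return bags
```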
{ "text": "Our model assumes that each relation mention involving an entity pair has exactly one label, but allows the pair to exhibit multiple labels across different mentions. Since we do not know the actual relation label of a mention in the distantly supervised setting, we model it using a latent variable $z$ that can take one of the $k$ pre-specified relation labels as well as an additional NIL label, if no relation is expressed by the corresponding mention. We model the multiple relation labels an entity pair can assume using a multi-label classifier that takes as input the latent relation types of all the mentions involving that pair. The two-layer hierarchical model is shown graphically in Figure 3, and is described more formally below. The model includes one multi-class classifier (for $z$) and a set of binary classifiers (for each $y_j$). The $z$ classifier assigns latent labels from $L$ to individual relation mentions, or NIL if no relation is expressed by the mention. Each $y_j$ classifier decides if relation $j$ holds for the given entity tuple, using the mention-level classifications as input. Specifically, in the figure:", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 187, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Figure 3: We unrolled the $y$ plate to emphasize that it is a collection of binary classifiers (one per relation label), whereas the $z$ classifier is multi-class. Each $z$ and $y_j$ classifier has an additional prior parameter, which is omitted here for clarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "\u2022 $n$ is the number of distinct entity tuples in $D$;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $M_i$ is the set of mentions for the $i$th entity pair;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $x$ is a sentence and $z$ is the latent relation classification for that sentence;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $w_z$ is the weight vector for the multi-class mention-level classifier;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $k$ is the number of known relation labels in $L$;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $y_j$ is the top-level classification decision for the entity pair as to whether the $j$th relation holds;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 $w_j$ is the weight vector for the binary top-level classifier for the $j$th relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Additionally, we define $P_i$ ($N_i$) as the set of all known positive (negative) relation labels for the $i$th entity tuple. In this paper, we construct $N_i$ as $L \setminus P_i$, but, in general, other scenarios are possible. For example, both Sun et al. (2011) and Surdeanu et al. (2011a) proposed models where $N_i$ for the $i$th tuple $(e_1, e_2)$ is defined as $\{ r_j \mid r_j(e_1, e_k) \in D, e_k \neq e_2, r_j \notin P_i \}$, which is a subset of $L \setminus P_i$. That is, entity $e_2$ is considered a negative example for relation $r_j$ (in the context of entity $e_1$) only if $r_j$ exists in the training data with a different value.", "cite_spans": [ { "start": 231, "end": 248, "text": "Sun et al. (2011)", "ref_id": "BIBREF11" }, { "start": 253, "end": 276, "text": "Surdeanu et al. (2011a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The addition of the object-level layer (for $y$) is an important contribution of this work. This layer can capture information that cannot be modeled by the mention-level classifier. For example, it can learn that two relation labels (e.g., BornIn and SpouseOf) cannot be generated jointly for the same entity tuple. So, if the $z$ classifier outputs both these labels for different mentions of the same tuple, the $y$ layer can cancel one of them. Furthermore, the $y$ classifiers can learn when two labels tend to appear jointly, e.g., CapitalOf and Contained between two locations, and use this occurrence as positive reinforcement for these labels. We discuss the features that implement these ideas in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" },
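As a schematic illustration of this two-layer design, the sketch below shows how a prediction would flow through it. This is our own rendering, not code from the paper; mention_clf and relation_clfs are hypothetical stand-ins for the learned $w_z$ and $w_j$ models.

```python
# Schematic of the two-layer decision; our illustration, not the released system.
# mention_clf: multi-class model over the k relation labels plus NIL (the z layer).
# relation_clfs: {label: binary model}, deciding each y_j from all mention labels.
def classify_tuple(instances, mention_clf, relation_clfs):
    # z layer: one latent label per relation mention
    z = [mention_clf.predict(x) for x in instances]
    # y layer: per-relation decisions over the full set of mention labels, which is
    # what lets the model capture dependencies such as "BornIn excludes SpouseOf"
    return {r for r, clf in relation_clfs.items() if clf.predict(z) == 1}
```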
{ "text": "We train the proposed model using hard discriminative Expectation Maximization (EM). In the Expectation (E) step we assign latent mention labels using the current model (i.e., the mention-level and relation-level classifiers). In the Maximization (M) step we retrain the model to maximize the log-likelihood of the data using the current latent assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "In the equations that follow, we refer to $w_1, \ldots, w_k$ collectively as $w_y$ for compactness. The vector $z_i$ contains the latent mention-level classifications for the $i$th entity pair, while $y_i$ represents the corresponding set of gold-standard labels (that is, $y_i^{(r)} = 1$ if $r \in P_i$, and $y_i^{(r)} = 0$ for $r \in N_i$).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "Using these notations, the log-likelihood of the data is given by: $LL(w_y, w_z) = \sum_{i=1}^{n} \log p(y_i \mid x_i, w_y, w_z) = \sum_{i=1}^{n} \log \sum_{z_i} p(y_i, z_i \mid x_i, w_y, w_z)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "The joint probability in the inner summation can be broken up into simpler parts: $p(y_i, z_i \mid x_i, w_y, w_z) = p(z_i \mid x_i, w_z) \, p(y_i \mid z_i, w_y) = \prod_{m \in M_i} p(z_i^{(m)} \mid x_i^{(m)}, w_z) \prod_{r \in P_i \cup N_i} p(y_i^{(r)} \mid z_i, w_y^{(r)})$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "where the last step follows from conditional independence. Thus the log-likelihood for this problem is not convex (it includes a sum of products). However, we can still use EM, but the optimization focuses on maximizing the lower bound of the log-likelihood, i.e., we maximize the above joint probability for each entity pair in the database. Rewriting this probability in log space, we obtain:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\log p(y_i, z_i \mid x_i, w_y, w_z) = \sum_{m \in M_i} \log p(z_i^{(m)} \mid x_i^{(m)}, w_z) + \sum_{r \in P_i \cup N_i} \log p(y_i^{(r)} \mid z_i, w_y^{(r)})", "eq_num": "(1)" } ], "section": "Training", "sec_num": "4.1" }, { "text": "The algorithm proceeds as follows. E-step: In this step we infer the mention-level classifications $z_i$ for each entity tuple, given all its mentions, the gold labels $y_i$, and the current model, i.e., the $w_z$ and $w_y$ weights. Formally, we seek to find: $z_i^* = \arg\max_{z} \, p(z \mid y_i, x_i, w_y, w_z)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" },
{ "text": "However, it is computationally intractable to consider all vectors $z$, as there is an exponential number of possible assignments, so we approximate and consider each mention separately. Concretely: $p(z_i^{(m)} \mid y_i, x_i, w_y, w_z) \propto p(y_i, z_i^{(m)} \mid x_i, w_y, w_z) \approx p(z_i^{(m)} \mid x_i^{(m)}, w_z) \, p(y_i \mid z'_i, w_y) = p(z_i^{(m)} \mid x_i^{(m)}, w_z) \prod_{r \in P_i \cup N_i} p(y_i^{(r)} \mid z'_i, w_y^{(r)})$, where $z'_i$ contains the previously inferred mention labels for group $i$, with the exception of component $m$, whose label is replaced by $z$. So for $i = 1, \ldots, n$, and for each $m \in M_i$, we calculate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "$z_i^{(m)*} = \arg\max_{z} \, p(z \mid x_i^{(m)}, w_z) \prod_{r \in P_i \cup N_i} p(y_i^{(r)} \mid z'_i, w_y^{(r)})$ (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "Intuitively, the above equation indicates that mention labels are chosen to maximize: (a) the probabilities assigned by the mention-level model; (b) the probability that the correct relation labels are assigned to the corresponding tuple; and (c) the probability that the labels known to be incorrect are not assigned to the tuple. For example, if a particular mention label receives a high mention-level probability but it is known to be a negative label for that tuple, it will receive a low overall score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "M-step: In this step we find the $w_y$, $w_z$ that maximize the lower bound of the log-likelihood, i.e., the probability in equation 1, given the current assignments for $z_i$. From equation 1 it is clear that this can be maximized separately with respect to $w_y$ and $w_z$. Intuitively, this step amounts to learning the weights for the mention-level classifier ($w_z$) and the weights for each of the $k$ top-level classifiers ($w_y$). The updates are given by: $w_z^* = \arg\max_{w} \sum_{i=1}^{n} \sum_{m \in M_i} \log p(z_i^{(m)*} \mid x_i^{(m)}, w)$ (3) and $w_y^{(r)*} = \arg\max_{w} \sum_{1 \le i \le n \,:\, r \in P_i \cup N_i} \log p(y_i^{(r)} \mid z_i^*, w)$ (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" }, { "text": "Note that these are standard updates for logistic regression. We obtained these weights using $k + 1$ logistic classifiers: one multi-class classifier for $w_z$ and $k$ binary classifiers, one for each relation label $r \in L$. We implemented all of them using the L2-regularized logistic regression from the publicly available Stanford CoreNLP package (nlp.stanford.edu/software/corenlp.shtml). The main difference between the classifiers is how features are generated: the mention-level classifier computes its features based on $x_i$, whereas the relation-level classifiers generate features based on the current assignments for $z_i$ and the corresponding relation label $r$. We discuss the actual features used in our experiments in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.1" },
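To make the E-step concrete, here is a small sketch of the greedy update in equation 2 for one bag of mentions. It is our paraphrase of the procedure, with hypothetical helpers p_z and p_y standing in for the mention-level and relation-level probability models (both assumed to return nonzero probabilities).

```python
import math
import random

# Sketch of the hard-EM E-step (equation 2) for one entity tuple; illustrative only.
# p_z(label, x): p(z = label | x, w_z); p_y(r, value, z): p(y^(r) = value | z, w_y^(r)).
def e_step_bag(x, z, labels_with_nil, p_z, p_y, pos, neg):
    order = list(range(len(z)))
    random.shuffle(order)  # randomize mention order to avoid bias (Section 4.3)
    for m in order:
        best_label, best_score = z[m], float("-inf")
        for cand in labels_with_nil:
            z_prime = z[:m] + [cand] + z[m + 1:]  # z'_i: flip only component m
            score = math.log(p_z(cand, x[m]))     # mention-level probability
            # known-positive labels should be predicted, known negatives rejected
            score += sum(math.log(p_y(r, 1, z_prime)) for r in pos)
            score += sum(math.log(p_y(r, 0, z_prime)) for r in neg)
            if score > best_score:
                best_label, best_score = cand, score
        z[m] = best_label  # greedy incremental "flip", visible to later mentions
    return z
```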
{ "text": "Given an entity tuple, we obtain its relation labels as follows. We first classify its mentions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z_i^{(m)*} = \arg\max_{z} \, p(z \mid x_i^{(m)}, w_z)", "eq_num": "(5)" } ], "section": "Inference", "sec_num": "4.2" }, { "text": "and then decide on the final relation labels using the top-level classifiers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y_i^{(r)*} = \arg\max_{y \in \{0,1\}} \, p(y \mid z_i^*, w_y^{(r)})", "eq_num": "(6)" } ], "section": "Inference", "sec_num": "4.2" }, { "text": "We discuss next several details that are crucial for the correct implementation of the above model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "Initialization: Since EM is not guaranteed to converge at the global maximum of the observed data likelihood, it is important to provide it with good starting values. In our context, the initial values are the labels assigned to $z_i$, which are required to compute equation 2 in the first iteration (as $z'_i$). We generate these values using a local logistic regression classifier that uses the same features as the mention-level classifier in the joint model but treats each relation mention independently. We train this classifier using \"traditional\" distant supervision: for each relation in the database $D$ we assume that all the corresponding mentions are positive examples for the corresponding label (Mintz et al., 2009). Note that this heuristic repeats relation mentions with different labels for the tuples that participate in multiple relations. For example, all the relation mentions in Figure 1 will yield datums with both the EmployedBy and BornIn labels. Despite this limitation, we found that this is a better initialization heuristic than random assignment.", "cite_spans": [ { "start": 697, "end": 717, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 890, "end": 898, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "For the second part of equation 2, we initialize the relation-level classifier with a model that replicates the at least one heuristic of Hoffmann et al. (2011). Each $w_y^{(r)}$ model has a single feature with a high positive weight that is triggered when label $r$ is assigned to any of the mentions in $z_i^*$.", "cite_spans": [ { "start": 138, "end": 160, "text": "Hoffmann et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "Avoiding overfitting: A na\u00efve implementation of our approach leads to an unrealistic training scenario where the $z$ classifier generates predictions (in equation 2) for the same datums it has seen in training in the previous iteration. To avoid this overfitting problem we used cross validation: we divided the training tuples into $K$ distinct folds and trained $K$ different mention-level classifiers. Each classifier outputs $p(z \mid x_i^{(m)}, w_z)$ for the tuples in a given fold during the E-step (equation 2) and is trained on the tuples in the other folds (equation 3). At prediction time, we compute $p(z \mid x_i^{(m)}, w_z)$ in equation 5 as the average of the probabilities of the above set of mention classifiers: $p(z \mid x_i^{(m)}, w_z) = \frac{1}{K} \sum_{j=1}^{K} p(z \mid x_i^{(m)}, w_z^j)$, where $w_z^j$ are the weights of the mention classifier responsible for fold $j$. We found that this simple bagging model performs slightly better in practice (a couple of tenths of a percent) than training a single mention classifier on the latent mention labels generated in the last training iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" },
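A minimal sketch of this cross-fold arrangement follows, under the assumption that fold_models[j] was trained on every fold except j; all names here are illustrative, not the actual implementation's API.

```python
# Illustrative sketch of the K-fold setup above; not the authors' code.
# fold_models[j] is the mention-level classifier trained on all folds except j.
def p_z_training(label, x, fold_of_tuple, fold_models):
    # during the E-step, a mention is scored only by the model that never saw it
    return fold_models[fold_of_tuple].prob(label, x)

def p_z_inference(label, x, fold_models):
    # at prediction time, average the K fold models (the simple bagging above)
    return sum(m.prob(label, x) for m in fold_models) / len(fold_models)
```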
{ "text": "Inference during training: During the inference process in the E-step, the algorithm incrementally \"flips\" mention labels based on equation 2, for each group of mentions $M_i$. Thus, $z_i$ changes as the algorithm progresses, which may impact the label assigned to the remaining mentions in that group. To avoid any potential bias introduced by the arbitrary order of mentions as seen in the data, we randomize each group $M_i$ before we inspect its mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "We evaluate our algorithm on two corpora. The first was developed by Riedel et al. (2010) by aligning Freebase 3 relations with the New York Times (NYT) corpus. They used the Stanford named entity recognizer (Finkel et al., 2005) to find entity mentions in text and constructed relation mentions only between entity mentions in the same sentence. Riedel et al. (2010) observe that evaluating on this corpus underestimates true extraction accuracy because Freebase is incomplete. Thus, some relations extracted during testing will be incorrectly marked as wrong, simply because Freebase has no information on them. To mitigate this issue, Riedel et al. (2010) and Hoffmann et al. (2011) perform a second evaluation where they compute the accuracy of labels assigned to a set of relation mentions that they manually annotated. To avoid any potential annotation biases, we instead evaluate on a second corpus that has comprehensive annotations generated by experts for all test relations.", "cite_spans": [ { "start": 69, "end": 89, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" }, { "start": 208, "end": 229, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF4" }, { "start": 347, "end": 367, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" }, { "start": 639, "end": 659, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "We constructed this second dataset using mainly resources distributed for the 2010 and 2011 KBP shared tasks (Ji et al., 2010; Ji et al., 2011). We generated training relations from the knowledge base provided by the task organizers, which is a subset of the English Wikipedia infoboxes from a 2008 snapshot. 
Similarly to the corpus of Riedel et al., these infoboxes contain open-domain relations between named entities, but with a different focus. For example, more than half of the relations in the evaluation data are alternate names of organizations or persons (e.g., org:alternate_names) or relations associated with employment and membership (e.g., per:employee_of) (Ji et al., 2011). We aligned these relations against a document collection that merges two distinct sources: (a) the collection provided by the shared task, which contains approximately 1.5 million documents from a variety of sources, including newswire, blogs and telephone conversation transcripts; and (b) a complete snapshot of the English Wikipedia from June 2010. During training, for each entity tuple $(e_1, e_2)$, we retrieved up to 50 sentences that contain both entity mentions. 4 We used Stanford's CoreNLP package to find entity mentions in text and, similarly to Riedel et al. (2010), we constructed relation mention candidates only between entity mentions in the same sentence. We analyzed a set of over 2,000 relation mentions and found that 39% of the mentions where $e_1$ is an organization name and 36% of the mentions where $e_1$ is a person name do not express the corresponding relation.", "cite_spans": [ { "start": 109, "end": 126, "text": "(Ji et al., 2010;", "ref_id": "BIBREF6" }, { "start": 127, "end": 143, "text": "Ji et al., 2011)", "ref_id": "BIBREF7" }, { "start": 673, "end": 690, "text": "(Ji et al., 2011)", "ref_id": "BIBREF7" }, { "start": 1252, "end": 1272, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "At evaluation time, the KBP shared task requires the extraction of all relations $r(e_1, e_2)$ given a query that contains only the first entity $e_1$. To accommodate this setup, we adjusted our sentence extraction component to use just $e_1$ as the retrieval query and we kept up to 50 sentences that contain a mention of the input entity for each evaluation query. For tuning and testing we used the 200 queries from the 2010 and 2011 evaluations. We randomly selected 40 queries for development and used the remaining 160 for the formal evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "To address the large number of negative examples in training, Riedel et al. subsampled them randomly with a retention probability of 10%. For the KBP corpus, we followed the same strategy, but we used a subsampling probability of 5% because this led to the best results in development for all models. Table 1 provides additional statistics about the two corpora.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 308, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "Table 1: Statistics about the two corpora used in this paper. Some of the numbers for the Riedel dataset are from (Riedel et al., 2010; Hoffmann et al., 2011).", "cite_spans": [ { "start": 315, "end": 336, "text": "(Riedel et al., 2010;", "ref_id": "BIBREF10" }, { "start": 337, "end": 359, "text": "Hoffmann et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null },
{ "text": "The table indicates that having multiple mentions for an entity tuple is a very common phenomenon in both corpora, and that having multiple labels per tuple is more common in the Riedel dataset than in KBP (7.5% vs. 2.8%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "Our model requires two sets of features: one for the mention classifier ($z$) and one for the relation classifier ($y$). In the Riedel dataset, we used the same features as Riedel et al. (2010) and Hoffmann et al. (2011) for the mention classifier. In the KBP dataset, we used a feature set that was developed in our previous work (Surdeanu et al., 2011b). These features can be grouped into three classes: (a) features that model the two entities, such as their head words; (b) features that model the syntactic context of the relation mention, such as the dependency path between the two entity mentions; and (c) features that model the surface context, such as the sequence of part-of-speech tags between the two entity mentions. We used these features for all the models evaluated on the KBP dataset. 5", "cite_spans": [ { "start": 169, "end": 189, "text": "Riedel et al. (2010)", "ref_id": "BIBREF10" }, { "start": 194, "end": 216, "text": "Hoffmann et al. (2011)", "ref_id": "BIBREF5" }, { "start": 327, "end": 351, "text": "(Surdeanu et al., 2011b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "For the relation-level classifier, we developed two feature groups. The first models Hoffmann et al.'s at least one heuristic using a single feature, which is set to true if at least one mention in $z_i$ has the label $r$ modeled by the current relation classifier. The second group models the dependencies between relation labels. This is implemented by a set of $|L| - 1$ features, where the feature for label $r_j$ ($r_j \in L$, $r_j \neq r$) is instantiated whenever $r_j$ is predicted in $z_i$ jointly with the modeled label $r$. These features learn both positive and negative reinforcements between labels. For example, if labels $r_1$ and $r_2$ tend to be generated jointly, the feature for the corresponding dependency will receive a positive weight in the models for both $r_1$ and $r_2$. Similarly, if $r_1$ and $r_2$ cannot be generated jointly, the model will assign a negative weight to the feature for $r_2$ in $r_1$'s classifier and to the feature for $r_1$ in $r_2$'s classifier. Note that these features are asymmetric, i.e., the feature for $r_1$ in $r_2$'s classifier may receive a different weight than the feature for $r_2$ in $r_1$'s classifier, depending on the accuracy of the individual predictions for $r_1$ and $r_2$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" },
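The following sketch illustrates how these two feature groups could be generated; it is our illustration, and the feature names are hypothetical rather than those of the actual implementation.

```python
# Illustrative sketch of the relation-level (y) feature groups described above.
# z: latent labels currently assigned to the mentions of one entity tuple;
# r: the relation label modeled by this binary classifier.
def y_features(r, z, all_labels):
    feats = {}
    present = set(z)
    if r in present:
        feats["at_least_one"] = 1.0  # the "at least one" heuristic as a feature
    # one dependency feature per other label, fired on joint prediction with r
    for r_j in all_labels:
        if r_j != r and r in present and r_j in present:
            feats["jointly_with=" + r_j] = 1.0
    return feats
```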
{ "text": "We compare our approach against three models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "Mintz++ - This is the model used to initialize the mention-level classifier in our model. As discussed in Section 4.3, this model follows the \"traditional\" distant supervision heuristic, similarly to (Mintz et al., 2009). However, our implementation has several advantages over the original model: (a) we model each relation mention independently, whereas Mintz et al. collapsed all the mentions of the same entity tuple into a single datum; (b) we allow multi-label outputs for a given entity tuple at prediction time by OR-ing the predictions for the individual relation mentions corresponding to the tuple (similarly to (Hoffmann et al., 2011)) 6 ; and (c) we use the simple bagging strategy described in Section 4.3 to combine multiple models. Empirically, we observed that these changes yield a significant improvement over the original proposal. For this reason, we consider this model a strong baseline on its own.", "cite_spans": [ { "start": 199, "end": 219, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "Riedel - This is the \"at-least-once\" model reported in (Riedel et al., 2010), which had the best performance in that work. This approach models the task as a multi-instance single-label problem. Note that this is the only model shown here that does not allow multi-label outputs for an entity tuple.", "cite_spans": [ { "start": 54, "end": 75, "text": "(Riedel et al., 2010)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "Hoffmann - This is the \"MultiR\" model, which performed the best in (Hoffmann et al., 2011). This models RE as a MIML problem, but learns using a Perceptron algorithm and uses a deterministic \"at least one\" decision instead of a relation classifier. We used Hoffmann's publicly released code 7 for the experiments on the Riedel dataset and our own implementation for the KBP experiments. 8", "cite_spans": [ { "start": 66, "end": 89, "text": "(Hoffmann et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "We tuned all models using three-fold cross validation for the Riedel dataset and using the development queries for the KBP dataset. MIML-RE has two parameters that require tuning: the number of EM epochs ($T$) and the number of folds for the mention classifiers ($K$). 9 The values obtained after tuning are $T = 15$, $K = 5$ for the Riedel dataset and $T = 8$, $K = 3$ for KBP. Similarly, we tuned the number of epochs for the Hoffmann model on the KBP dataset, obtaining an optimal value of 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "On the Riedel dataset we evaluate all models using standard precision and recall measures. For the KBP evaluation we used the official KBP scorer, 10 with two changes: (a) we score with the parameter anydoc set to true, which configures the scorer to accept relation mentions as correct regardless of their supporting document; and (b) we score only on the subset of gold relations that have at least one mention in our sentences. The first decision is necessary because the gold KBP answers contain supporting documents only from the corpus provided by the organizers, but we retrieve candidate answers from multiple collections. The second is required because the focus of this work is not on sentence retrieval but on RE, which should be evaluated in isolation. 11 Similarly to previous work, we report precision/recall curves in Figure 4. We evaluate two variants of MIML-RE: one that includes all the features for the $y$ model, and another (MIML-RE At-Least-One) which has only the at least one feature.", "cite_spans": [], "ref_spans": [ { "start": 832, "end": 840, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" },
{ "text": "For all the Bayesian models implemented here, we sorted the predicted relations by the noisy-or score of the top predictions for their mentions. Formally, we rank a relation $r$ predicted for group $i$, i.e., $r \in y_i^*$, using: $noisyOr_i(r) = 1 - \prod_{m \in M_i} (1 - s_i^{(m)}(r))$, where $s_i^{(m)}(r) = p(r \mid x_i^{(m)}, w_z)$ if $r = z_i^{(m)*}$, and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "The noisy-or formula performs well for ranking because it integrates model confidence (the higher the probabilities, the higher the score) and redundancy (the more mentions are predicted with a label, the higher that label's score). Note that the above ranking score does not include the probability of the relation classifier (equation 6) for MIML-RE. While we use equation 6 to generate $y_i^*$, we found that the corresponding probabilities are too coarse to provide a good ranking score. This is caused by the fact that our relation-level classifier works with a small number of (noisy) features. Lastly, for our implementation of the Hoffmann et al. model, we used their ranking heuristic (sorting predictions by the maximum extraction score for that relation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" },
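Rendered as code, this ranking score looks as follows; this is our transcription of the formula above, with illustrative variable names.

```python
# Sketch of the noisy-or ranking score for one predicted relation r of group i.
# probs[m][r] = p(r | x_i^(m), w_z); z_star[m] = the latent label chosen for mention m.
def noisy_or(r, probs, z_star):
    keep = 1.0
    for m, label in enumerate(z_star):
        s = probs[m][r] if label == r else 0.0  # s_i^(m)(r) from the formula
        keep *= 1.0 - s
    return 1.0 - keep
```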
{ "text": "Figure 4 indicates that MIML-RE generally outperforms the current state of the art. In the Riedel dataset, MIML-RE has higher overall recall than the Riedel et al. model, and, for the same recall point, MIML-RE's precision is between 2 and 15 points higher. For most of the curve, our model obtains better precision for the same recall point than the Hoffmann model, which currently has the best reported results on this dataset. The difference is as high as 5 precision points around the middle of the curve. The Hoffmann model performs better close to the extremities of the curve (low/high recall). Nevertheless, we argue that our model is more stable than Hoffmann's: MIML-RE yields a smoother precision/recall curve, without most of the depressions seen in the Hoffmann results. In the KBP dataset, MIML-RE performs consistently better than our implementation of Hoffmann's model, with higher precision values for the same recall point, and much higher overall recall. We believe that these differences are caused by our Bayesian framework, which provides a more formal implementation of the MIML problem. Figure 4 also indicates that MIML-RE yields a consistent improvement over Mintz++ (with the exception of a few points in the low-recall portion of the KBP curves). The difference in precision for the same recall point is as high as 25 precision points in the Riedel dataset and up to 5 points in KBP. Overall, the best F1 score of MIML-RE is slightly over 1 point higher than the best F1 score of Mintz++ in the Riedel dataset and 3 points higher in KBP. Considering that Mintz++ is a strong baseline and we evaluate on two challenging domains, we consider these results proof that the correct modeling of the MIML scenario is beneficial.", "cite_spans": [], "ref_spans": [ { "start": 781, "end": 789, "text": "Figure 4", "ref_id": "FIGREF6" }, { "start": 1892, "end": 1900, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "Lastly, Figure 4 shows that MIML-RE outperforms its variant without label-dependency features (MIML-RE At-Least-One) in the higher-recall part of the curve in the Riedel dataset. The improvement is approximately 1 F1 point throughout the last segment of the curve. The overall increase in F1 was found to be significant (p = 0.0296) in a one-sided, paired t-test over randomly sampled test data. We see a smaller improvement in KBP (concentrated around the middle of the curve), likely because the number of entity tuples with multiple labels in training is small (see Table 1). Nevertheless, this exercise shows that, when dependencies between labels exist in a dataset, modeling them, which can be trivially done in MIML-RE, is useful.", "cite_spans": [], "ref_spans": [ { "start": 8, "end": 16, "text": "Figure 4", "ref_id": "FIGREF6" }, { "start": 568, "end": 575, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "Table 2: Results at the highest F1 point in the precision/recall curve on the dataset that contains groups with at least 10 mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": null }, { "text": "In a similar vein, we tested the models previously described on a subset of the Riedel evaluation dataset that only includes groups with at least 10 mentions. This corpus contains approximately 2% of the groups from the original testing partition, out of which 90 tuples have at least one known label and 1410 groups serve as negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For conciseness, we do not include the entire precision/recall curves for this experiment, but summarize them in Table 2, which lists the performance peak (highest F1 score) for each of the models investigated. The table shows that MIML-RE obtains the highest F1 score overall, 1.5 points higher than MIML-RE At-Least-One and 2.6 points higher than Mintz++. More importantly, for approximately the same recall point, MIML-RE obtains a precision that is over 8 percentage points higher than that of MIML-RE At-Least-One. A post-hoc inspection of the results indicates that, indeed, MIML-RE successfully eliminates undesired labels when two (or more) incompatible labels are jointly assigned to the same tuple. Take for example the tuple (Mexico City, Mexico), for which the correct relation is /location/administrative_division/country. MIML-RE At-Least-One incorrectly predicts the additional /location/location/contains relation, while MIML-RE does not make this prediction because it recognizes that these two labels are incompatible in general: one location cannot both be within another location and contain it. Indeed, examining the weights assigned to label-dependency features in MIML-RE, we see that the model has assigned a large negative weight to the dependency feature between /location/location/contains and /location/administrative_division/country for the /location/location/contains class. We also observe positive dependencies between labels. 
For example, MIML-RE learns that the relations /people/person/place_lived and /people/person/place_of_birth tend to co-occur and assigns a positive weight to this dependency feature for the corresponding classes.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "These results strongly suggest that when all aspects of the MIML scenario are present, our model can successfully capture them and make use of the additional structure to improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In this paper we showed that distant supervision for RE, which generates training data by aligning a database of facts with text, poses a distinct multi-instance multi-label learning scenario. In this setting, each entity pair to be modeled typically has multiple instances in the text and may have multiple labels in the database. This is considerably different from traditional supervised learning, where each instance has a single, explicit label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We argued that this MIML scenario should be formally addressed. We proposed, to our knowledge, the first approach that models all aspects of the MIML setting, i.e., the latent assignment of labels to instances and dependencies between labels assigned to the same entity pair. We evaluated our model on two challenging domains and obtained state-of-the-art results on both. Our model performs well even when not all aspects of the MIML scenario are common, and as seen in the discussion, shows significant improvement when evaluated on entity pairs with many labels or mentions. When all aspects of the MIML scenario are present, our model is well-equipped to handle them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The code and data used in the experiments reported in this paper are available at: http://nlp.stanford.edu/software/mimlre.shtml.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "freebase.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Sentences were ranked using the similarity between their parent document and the query that concatenates the two entity names. We used the default Lucene similarity measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To avoid an excessive number of features in the KBP experiments, we removed features seen less than five times in training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also allow multiple labels per tuple at training time, in which case we replicate the corresponding datum for each label. 
However, this did not improve performance significantly compared to selecting a single label per datum during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "cs.washington.edu/homes/raphaelh/mr/ 8 The decision to reimplement the Hoffmann model was a practical one, driven by incompatibilities between their implementation and our KBP framework.9 We could also tune the prior parameters for both our model and Mintz++, but we found in early experiments that the default value of 1 yields the best scores for all priors.10 nlp.cs.qc.cuny.edu/kbp/2011/scoring.html 11 Due to these changes, the scores reported in this paper are not directly comparable with the shared task scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully acknowledge the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, or the US government. We gratefully thank Raphael Hoffmann and Sebastian Riedel for sharing their code and data and for the many useful discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning extractors from unlabeled text using relevant databases", "authors": [ { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixth International Workshop on Information Extraction on the Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kedar Bellare and Andrew McCallum. 2007. Learn- ing extractors from unlabeled text using relevant databases. In Proceedings of the Sixth International Workshop on Information Extraction on the Web.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Identifying mislabeled training data", "authors": [ { "first": "Carla", "middle": [], "last": "Brodley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Friedl", "suffix": "" } ], "year": 1999, "venue": "Journal of Artificial Intelligence Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carla Brodley and Mark Friedl. 1999. Identifying mis- labeled training data. Journal of Artificial Intelligence Research (JAIR).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to extract relations from the web using minimal supervision", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal super- vision. 
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Constructing biological knowledge bases by extracting information from text sources", "authors": [ { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Kumlien", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Knowledge-based weak supervision for information extraction of overlapping relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Overview of the TAC 2010 knowledge base population track", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Hoa", "middle": [ "T" ], "last": "Dang", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Ellis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Text Analytics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji, Ralph Grishman, Hoa T. Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track.
In Proceedings of the Text Analytics Conference.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Overview of the TAC 2011 knowledge base population track", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Hoa", "middle": [ "T" ], "last": "Dang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Text Analytics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji, Ralph Grishman, and Hoa T. Dang. 2011. Overview of the TAC 2011 knowledge base population track. In Proceedings of the Text Analytics Conference.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-to-end relation extraction using distant supervision from external semantic repositories", "authors": [ { "first": "Truc-Vien", "middle": [ "T" ], "last": "Nguyen", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Truc-Vien T. Nguyen and Alessandro Moschitti. 2011. End-to-end relation extraction using distant supervision from external semantic repositories. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Modeling relations and their mentions without labeled text", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text.
In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "New York University 2011 system for KBP slot filling", "authors": [ { "first": "Ang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Text Analytics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ang Sun, Ralph Grishman, Wei Xu, and Bonan Min. 2011. New York University 2011 system for KBP slot filling. In Proceedings of the Text Analytics Conference.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Stanford's distantly-supervised slot-filling system", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" }, { "first": "Angel", "middle": [ "X" ], "last": "Chang", "suffix": "" }, { "first": "Valentin", "middle": [ "I" ], "last": "Spitkovsky", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Text Analytics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Sonal Gupta, John Bauer, David McClosky, Angel X. Chang, Valentin I. Spitkovsky, and Christopher D. Manning. 2011a. Stanford's distantly-supervised slot-filling system. In Proceedings of the Text Analytics Conference.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Customizing an information extraction system to a new domain", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" }, { "first": "Mason", "middle": [ "R" ], "last": "Smith", "suffix": "" }, { "first": "Andrey", "middle": [], "last": "Gusev", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Relational Models of Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, David McClosky, Mason R. Smith, Andrey Gusev, and Christopher D. Manning. 2011b. Customizing an information extraction system to a new domain. In Proceedings of the Workshop on Relational Models of Semantics, Portland, Oregon, June. Fei Wu and Dan Weld. 2007. Autonomously semantifying Wikipedia. In Proceedings of the International Conference on Information and Knowledge Management (CIKM).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Multi-instance multi-label learning with application to scene classification", "authors": [ { "first": "Z", "middle": [ "H" ], "last": "Zhou", "suffix": "" }, { "first": "M", "middle": [ "L" ], "last": "Zhang", "suffix": "" } ], "year": 2007, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z.H. Zhou and M.L. Zhang. 2007.
Multi-instance multi-label learning with application to scene classification. In Advances in Neural Information Processing Systems (NIPS).", "links": null } }, "ref_entries": { "FIGREF2": { "uris": null, "num": null, "text": "MIML model plate diagram.", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": ") using tuples from all other folds. At testing time, we compute p(z|x^{(m)})", "type_str": "figure" }, "FIGREF6": { "uris": null, "num": null, "text": "Results in the Riedel dataset (top) and the KBP dataset (bottom). The Hoffmann scores in the KBP dataset were generated using our implementation. The other Hoffmann and Riedel results were taken from their papers.", "type_str": "figure" } } } }