{ "paper_id": "D13-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:43:27.116342Z" }, "title": "Unsupervised Relation Extraction with General Domain Knowledge", "authors": [ { "first": "Oier", "middle": [], "last": "Lopez De Lacalle", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh" } }, "email": "oier.lopezdelacalle@ehu.es" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present an unsupervised approach to relational information extraction. Our model partitions tuples representing an observed syntactic relationship between two named entities (e.g., \"X was born in Y\" and \"X is from Y\") into clusters corresponding to underlying semantic relation types (e.g., BornIn, Located). Our approach incorporates general domain knowledge which we encode as First Order Logic rules and automatically combine with a topic model developed specifically for the relation extraction task. Evaluation results on the ACE 2007 English Relation Detection and Categorization (RDC) task show that our model outperforms competitive unsupervised approaches by a wide margin and is able to produce clusters shaped by both the data and the rules.", "pdf_parse": { "paper_id": "D13-1040", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present an unsupervised approach to relational information extraction. Our model partitions tuples representing an observed syntactic relationship between two named entities (e.g., \"X was born in Y\" and \"X is from Y\") into clusters corresponding to underlying semantic relation types (e.g., BornIn, Located). Our approach incorporates general domain knowledge which we encode as First Order Logic rules and automatically combine with a topic model developed specifically for the relation extraction task. Evaluation results on the ACE 2007 English Relation Detection and Categorization (RDC) task show that our model outperforms competitive unsupervised approaches by a wide margin and is able to produce clusters shaped by both the data and the rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Information extraction (IE) is becoming increasingly useful as a form of shallow semantic analysis. Learning relational facts from text is one of the core tasks of IE and has applications in a variety of fields including summarization, question answering, and information retrieval. Previous work (Surdeanu and Ciaramita, 2007; Culotta and Sorensen, 2004; Zhou et al., 2007) has traditionally relied on extensive human involvement (e.g., hand-annotated training instances, manual pattern extraction rules, hand-picked seeds). Standard supervised techniques can yield high performance when large amounts of hand-labeled data are available for a fixed inventory of relation types (e.g., Employment, Located), however, extraction systems do not easily generalize beyond their training domains and often must be re-engineered for each application. 
Unsupervised approaches offer a promising alternative which could lead to significant resource savings and more portable extraction systems.", "cite_spans": [ { "start": 297, "end": 327, "text": "(Surdeanu and Ciaramita, 2007;", "ref_id": "BIBREF23" }, { "start": 328, "end": 355, "text": "Culotta and Sorensen, 2004;", "ref_id": "BIBREF7" }, { "start": 356, "end": 374, "text": "Zhou et al., 2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It therefore comes as no surprise that latent topic analysis methods have been used for a variety of IE tasks. Yao et al. (2011) , for example, propose a series of topic models which perform relation discovery by clustering tuples representing an observed syntactic relationship between two named entities (e.g., \"X was born in Y\" and \"X is from Y\"). The clusters correspond to semantic relations whose number or type is not known in advance. Their models depart from standard Latent Dirichlet Allocation (Blei et al., 2003) in that a document consists of relation tuples rather than individual words; moreover, tuples have features each of which is generated independently from a hidden relation (e.g., the words corresponding to the first and second entities, the type and order of the named entities). Since these features are local, they cannot capture more global constraints pertaining to the relation extraction task. Such constraints may take the form of restrictions on which tuples should be clustered together or not. For instance, different types of named entities may be indicative of different relations (ORG-LOC entities often express a Location relation whereas PER-PER entities express Business or Family relations) and thus tuples bearing these entities should not be grouped together. Another example is tuples with identical or similar features which intuitively should be clustered together.", "cite_spans": [ { "start": 111, "end": 128, "text": "Yao et al. (2011)", "ref_id": "BIBREF25" }, { "start": 505, "end": 524, "text": "(Blei et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose an unsupervised approach to relation extraction which does not require any relation-specific training data and allows us to incorporate global constraints expressing general domain knowledge. We encode domain knowledge as First Order Logic (FOL) rules and automatically integrate them with a topic model to produce clusters shaped by the data and the constraints at hand. Specifically, we extend the Fold-all (First-Order Logic latent Dirichlet Allocation) framework (Andrzejewski et al., 2011) to the relation extraction task, explain how to incorporate meaningful constraints, and develop a scalable inference technique. In the presence of multiple candidate relation decompositions for a given corpus, domain knowledge can steer the model towards relations which are best aligned with user and task modeling goals. We also argue that a general mechanism for encoding additional modeling assumptions and side information can lessen the need for \"custom\" relation extraction model variants.
Experimental results on the ACE-2007 Relation Detection and Categorization (RDC) dataset show that our model outperforms competitive unsupervised approaches by a wide margin and is able to uncover meaningful relations with only two general rule types.", "cite_spans": [ { "start": 491, "end": 518, "text": "(Andrzejewski et al., 2011)", "ref_id": "BIBREF2" }, { "start": 1044, "end": 1096, "text": "ACE-2007 Relation Detection and Categorization (RDC)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions in this work are three-fold: a new model that modifies the Fold-all framework and extends it to the relation extraction task; a new formalization of the logic rules applicable to topic models defined over a rich set of features; and a proposal for mining the logic rules automatically from a corpus contrary to Andrzejewski et al. (2011) who employ manually crafted seeds.", "cite_spans": [ { "start": 329, "end": 355, "text": "Andrzejewski et al. (2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A variety of learning paradigms have been applied to relation extraction. As mentioned earlier, supervised methods have been shown to perform well in this task. The reliance on manual annotation, which is expensive to produce and thus limited in quantity, has provided the impetus for semi-supervised and purely unsupervised approaches. Semi-supervised methods use a small number of seed instances or patterns (per relation) to launch an iterative training process (Riloff and Jones, 1999; Agichtein and Gravano, 2000; Bunescu and Mooney, 2007; Pantel and Pennacchiotti, 2006) . The seeds are used to extract a new set of patterns from a large corpus, which are then used to extract more instances, and so on. Unsupervised relation extraction methods are not limited to a predefined set of target relations, but discover all types of relations found in the text. The relations represent clusters over strings of words (Banko et al., 2007; Hasegawa et al., 2004) , syntactic patterns between entities (Yao et al., 2011; Shinyama and Sekine, 2006) , or logical expressions (Poon and Domingos, 2009) . Another learning paradigm is distant supervision which does not require labeled data but instead access to a relational database such as Freebase (Mintz et al., 2009) . 
The idea is to take entities that appear in some relation in the database, find the sentences that express the relation in an unlabeled corpus, and use them to train a relation classifier.", "cite_spans": [ { "start": 465, "end": 489, "text": "(Riloff and Jones, 1999;", "ref_id": "BIBREF20" }, { "start": 490, "end": 518, "text": "Agichtein and Gravano, 2000;", "ref_id": "BIBREF0" }, { "start": 519, "end": 544, "text": "Bunescu and Mooney, 2007;", "ref_id": "BIBREF6" }, { "start": 545, "end": 576, "text": "Pantel and Pennacchiotti, 2006)", "ref_id": "BIBREF18" }, { "start": 918, "end": 938, "text": "(Banko et al., 2007;", "ref_id": "BIBREF3" }, { "start": 939, "end": 961, "text": "Hasegawa et al., 2004)", "ref_id": "BIBREF12" }, { "start": 1000, "end": 1018, "text": "(Yao et al., 2011;", "ref_id": "BIBREF25" }, { "start": 1019, "end": 1045, "text": "Shinyama and Sekine, 2006)", "ref_id": "BIBREF22" }, { "start": 1071, "end": 1096, "text": "(Poon and Domingos, 2009)", "ref_id": "BIBREF19" }, { "start": 1245, "end": 1265, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our own work adds an additional approach into the mix. We use a topic model to infer an arbitrary number of relations between named entities. Although we do not have access to relation-specific information (either as a relational database or manually annotated data), we impose task-specific constraints which inject domain knowledge into the learning algorithm. We thus alleviate known problems with the interpretability of the clusters obtained from topic models and are able to guide our model towards reasonable relations. Andrzejewski et al. (2011) show how to integrate First-Order Logic with vanilla LDA. We extend their formulation to relation tuples rather than individual words. Our model generates a corpus of entity tuples which are in turn represented by features and uses automatically acquired FOL rules. The idea of integrating topic modeling with FOL builds on research in probabilistic logic modeling such as Markov Logic Networks (Richardson and Domingos, 2006) . Schoenmackers et al. (2010) learn Horn clauses from web-scale text with the aim of finding answers to a user's query. Our work is complementary to theirs. We could make use of their rules to discover more accurate relations.", "cite_spans": [ { "start": 527, "end": 553, "text": "Andrzejewski et al. (2011)", "ref_id": "BIBREF2" }, { "start": 949, "end": 980, "text": "(Richardson and Domingos, 2006)", "ref_id": null }, { "start": 983, "end": 1010, "text": "Schoenmackers et al. (2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The general goal of assisting the learner in recovering the \"correct\" clustering by supplying additional domain knowledge is not new. Gondek and Hofmann (2004) supply a known clustering they do not want the learner to return, whereas Wagstaff et al. (2001) use pairwise labels for items indicating whether they belong in the same cluster. These methods combine domain knowledge with statistical learning in order to improve performance with respect to the true target clustering.
Although the target labels are not available in our case, we are able to show that the inclusion of domain knowledge yields clustering improvements.", "cite_spans": [ { "start": 134, "end": 159, "text": "Gondek and Hofmann (2004)", "ref_id": "BIBREF10" }, { "start": 234, "end": 256, "text": "Wagstaff et al. (2001)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our relation extraction task broadly adheres to the ACE specification guidelines. Our aim is to detect and characterize the semantic relations between two named entities. The input to our model is a corpus of documents, where each document is a bag of relation tuples which can be obtained from the output of any dependency parser. Each tuple represents a syntactic relationship between two named entity (NE) mentions, and consists of three components: the dependency path between the two mentions, the source NE, and the target NE. A dependency path is the concatenation of dependency edges and nodes along a path in the dependency tree. For example, the sentence \"George Bush traveled to France on Thursday for a summit.\" would yield the tuple [SOURCE:George Bush(PER), PATH:→nsubj→traveled→prep→to→pobj→, DEST:France(LOC)]. The tuple here expresses the relation Located; however, our model does not observe any relation labels during training. The model assigns tuples to clusters, each corresponding to an underlying relation type. Each tuple instance can then be labeled with an identifier corresponding to the cluster (aka relation) it has been assigned to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Setting", "sec_num": "3" }, { "text": "Our model builds on the work of Yao et al. (2011) who develop a series of generative probabilistic models for relation extraction. Specifically, we extend their relational LDA model by interfacing it with FOL rules. In the following, we first describe their approach in more detail and then present our extensions and modifications.", "cite_spans": [ { "start": 32, "end": 49, "text": "Yao et al. (2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Framework", "sec_num": "4" }, { "text": "Relational LDA is an extension to LDA with a similar generative story. LDA models each document as a mixture of topics, which are in turn characterized as distributions over words. In relational LDA, each document is a mixture of relations over tuples representing syntactic relations between two named entities. The relation tuples are in turn generated by a set of features drawn independently from the underlying relation distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "More technically, a multinomial distribution over relations θ_{d_i} is drawn from a Dirichlet prior (θ ∼ Dir(α)) at the document level. Relation tuples are generated from this multinomial distribution (z_i | θ_{d_i} ∼ Mult(θ_{d_i})), and the features of a tuple are drawn independently from the multinomial distribution of its relation (f_{ik} | z_i, φ_{z_i} ∼ Mult(φ_{z_i})). Relations are drawn from a Dirichlet prior (φ ∼ Dir(β)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "In other words, each tuple in a document is assigned a hidden relation (z = z_1 ... z_N); each relation is represented by a multinomial distribution over features φ_r (Dirichlet prior β). φ_r is a vector with F dimensions, each corresponding to a feature. Finally, documents (j = 1 ... D) are associated with a multinomial distribution θ_j over relations (Dirichlet prior α). θ_j is a vector with R dimensions, one for each relation. Figure 1 represents the relational LDA model as an undirected graphical model or factor graph (Bishop, 2006), ignoring for the moment the factor which connects the d, z, f_{1...k} and o variables. Directed graphical models can be converted into undirected ones by adding edges between co-parents (Koller and Friedman, 2009). Each clique in the graph defines a potential function which replaces the conditional probabilities in the directed graph. Each maximal clique is associated with a special factor node (the black squares) and clique members are connected to that factor. The probability of any specific configuration is calculated by multiplying the potential functions and normalizing them. We adopt the factor graph representation as it is convenient for introducing logic rules into the model. The joint probability of the model given the priors and the documents, P(p, z, φ, θ | α, β, d), is equivalent to: ∏_{r}^{R} p(φ_r|β) ∏_{j}^{D} p(θ_j|α) ∏_{i}^{N} θ_{d_i}(z_i) ∏_{k∈p_i} φ_{z_i}(f_k)   (1) where θ_{d_i}(z_i) is the z_i-th element in the vector θ_{d_i} and φ_{z_i}(f_k) is the f_k-th feature in the φ_{z_i} vector. Variable p_i is the i-th tuple containing k features. The parameters of the latent variables (e.g., φ, θ) are typically estimated using an approximate inference algorithm such as Gibbs Sampling (Griffiths and Steyvers, 2004). As shown in Figure 1 , the observed variables are represented by filled circles. In our case, our model sees the corpus (p, d), where d is the variable representing the document and the tuples (p) are represented by a set of features f_1, f_2 ... f_k in the graph. Empty circles are associated with latent variables to be estimated: z represents the relation type assignment to the tuple, θ is the relation type proportion for the given document, and φ is the relation type distribution over the features.", "cite_spans": [ { "start": 523, "end": 537, "text": "(Bishop, 2006)", "ref_id": "BIBREF4" }, { "start": 724, "end": 751, "text": "(Koller and Friedman, 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 431, "end": 439, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "The features representing the tuples tap onto semantic information expressed by different surface forms and are an important part of the model. We use a subset of the features proposed in Yao et al. (2011) which we briefly describe below: SOURCE This feature corresponds to the first entity mention of the tuple.
In the sentence \"George Bush traveled to France on Thursday for a summit.\", the value of this feature would be George Bush.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "Table 1: Mapping of model variables onto logical predicates. Value | Predicate | Description: z_i = r | Z(i, r) | latent relation type; f_k = v | F(k, v) | feature of relation tuple; p_i = i | P(i, f_k) | tuple i contains feature f_k; d_i = j | D(i, j) | observed document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": null }, { "text": "DEST This feature corresponds to the second entity mention and its value would be France in the previous example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "NEPAIR The feature indicates the type and order of the two entity mentions in the tuple. This would be PER-LOC in our example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "PATH This feature refers to the dependency path between two entity mentions. In our sentence, the value of the feature would be PATH:→nsubj→traveled→prep→to→pobj→. TRIGGER Finally, trigger features are content words occurring in the dependency path. The path PATH:→nsubj→traveled→prep→to→pobj→ contains only one trigger word, namely traveled. The intuition behind this feature is that paths sharing the same set of trigger words should be grouped in the same cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational LDA", "sec_num": "4.1" }, { "text": "We next couple relational LDA with global constraints, which we express using FOL rules. We begin by representing relational LDA as a Markov Logic Network (Richardson and Domingos, 2006) . We define a logical predicate for each model variable. For example, the assigned relation variable Z(i, r) is true if z_i = r and false otherwise. Table 1 shows the mapping of model variables onto logical predicates. Logical rules are encoded in the form of a weighted FOL knowledge base (KB) which is then converted into Conjunctive Normal Form:", "cite_spans": [ { "start": 155, "end": 186, "text": "(Richardson and Domingos, 2006)", "ref_id": null } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "KB = {(λ_1, ψ_1), ..., (λ_L, ψ_L)}   (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "The KB consists of L pairs, where each ψ_l represents a FOL rule and λ_l ≥ 0 its weight. Rules are soft preferences rather than hard constraints; the weights represent the importance of ψ_l and are set manually by the domain expert. The KB is tied to the probabilistic model via its groundings in the corpus. For each FOL rule ψ_l , let G(ψ_l) be the set of groundings, each mapping the free variables in ψ_l to a specific value.
For example, in the rule ∀i, j, p :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "F(i, Obama) ∧ F(j, WhiteHouse) ∧ P(p, i) ∧ P(p, j) ⇒ Z(p, r),1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "G consists of all the rules where the free variables i, j and p are instantiated. At grounding time, we parse the corpus searching for the tuples that satisfy the logic rules and store the indices of the tuples that ground the rule. The stored indices are used to set ψ_l to a specific value. For the (Obama, White House) example above, G consists of F propositional rules, one for each observed feature, where i ∈ [1 ... F]. For each grounding (g ∈ G(ψ_l)) we define an indicator function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "1_g(z, p, d, o) = 1 if g is true under z, p, d, o, and 0 otherwise   (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "where z are relation assignments to tuples, p is the set of features in tuples, d are documents, and o the side information encoded in FOL. Contrary to Andrzejewski et al. (2011), we need to ground the rules while taking into account whether the feature specified in the rule is expressed by any tuple or by the specific given tuple, since we assign relations to tuples and not directly to words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "Next, we define a Markov Random Field (MRF) which combines relational LDA with the FOL knowledge base. The MRF is defined over latent relation tuple assignments z, relation feature multinomials φ, and relation document multinomials θ (the feature set, document, and external information o are observed). Under this model the conditional probability P(z, φ, θ | α, β, p, d, o, KB) is proportional to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "exp(∑_{l}^{L} ∑_{g∈G(ψ_l)} λ_l 1_g(z, p, d, o)) × ∏_{r}^{R} p(φ_r|β) ∏_{j}^{D} p(θ_j|α) ∏_{i}^{N} θ_{d_i}(z_i) ∏_{k∈p_i} φ_{z_i}(f_k)   (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "The first term in Equation (4) corresponds to the logic factor in Figure 1 that groups variables d, z, f_1, f_2, ... f_k and o. The remaining terms in Equation (4) refer to relational LDA. The goal of the model is to estimate the most likely θ and φ for the given observed state. As z cannot be marginalized out, we proceed with MAP estimation of (z, φ, θ), maximizing the log of the probability as in Andrzejewski et al. (2011):", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 74, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "arg max_{z,φ,θ} ∑_{l}^{L} ∑_{g∈G(ψ_l)} λ_l 1_g(z, p, d, o) + ∑_{r}^{R} log p(φ_r|β) + ∑_{i}^{N} log θ_{d_i}(z_i) ∏_{k∈p_i} φ_{z_i}(f_k)   (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "Once the parameters of the model are estimated (see Section 4.3 for details), we use the φ probability distribution to assign a relation to a new test tuple. We select the relation that maximizes the probability, arg max_r ∏_{i=1}^{k} P(f_i|φ_r), where f_1 ... f_k are the features representing the tuple and r the relation index.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First Order Logic and Relational LDA", "sec_num": "4.2" }, { "text": "Exact inference is intractable for both relational LDA and MLN models. In order to infer the most likely multinomial parameters φ and θ, we applied the Alternating Optimization with Mirror Descent algorithm introduced in Andrzejewski et al. (2011) . The algorithm alternates between optimizing the multinomial parameters (φ, θ), whilst holding the relation assignments (z) fixed, and vice-versa. At each iteration, the algorithm first finds the optimal (φ, θ) for a fixed z as the MAP estimate of the Dirichlet posterior:", "cite_spans": [ { "start": 221, "end": 247, "text": "Andrzejewski et al. (2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "φ_r(f) ∝ n_{rf} + β − 1   (6)   θ_j(r) ∝ n_{jr} + α − 1", "eq_num": "(7)" } ], "section": "Inference", "sec_num": "4.3" }, { "text": "where n_{rf} is the number of times feature f is assigned to relation r in relation assignments z, and n_{jr} is the number of times relation r is assigned to document j. Next, z is optimized while keeping φ and θ fixed. This step is divided into two parts. The algorithm first deals with all z_i which appear only in trivial groundings, i.e., groundings whose indicator functions 1_g are not affected by the latent relation assignment z. As z_i only appears in the last term of Equation (5), the algorithm need only optimize the following term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z_i = arg max_{r=1...R} θ_{d_i}(r) ∏_{k∈p_i} φ_r(f_k)", "eq_num": "(8)" } ], "section": "Inference", "sec_num": "4.3" }, { "text": "The second part deals with the remaining z_i that appear in non-trivial groundings in the first term of Equation (5). We follow Andrzejewski et al. (2011) in relaxing (5) into a continuous optimization problem and refer the reader to their paper for a more in-depth treatment.
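To make the alternation concrete, the (φ, θ) update of Equations (6)-(7) and the trivial-grounding relation update of Equation (8) can be sketched as follows; this is our own illustrative sketch, not the authors' implementation, and the data layout (a list of feature indices per tuple, a document id per tuple) is an assumption:

    import numpy as np

    def alternating_step(z, tuples, docs, R, F, alpha, beta):
        # Eqs. (6)-(7): MAP estimate of phi and theta from the counts induced by z
        D = max(docs) + 1
        n_rf = np.zeros((R, F))  # times feature f is assigned to relation r
        n_jr = np.zeros((D, R))  # times relation r is assigned to document j
        for i, feats in enumerate(tuples):
            for f in feats:
                n_rf[z[i], f] += 1
            n_jr[docs[i], z[i]] += 1
        phi = np.maximum(n_rf + beta - 1, 1e-10)
        phi /= phi.sum(axis=1, keepdims=True)
        theta = np.maximum(n_jr + alpha - 1, 1e-10)
        theta /= theta.sum(axis=1, keepdims=True)
        # Eq. (8): for tuples appearing only in trivial groundings, pick the
        # relation maximizing theta_d(r) * prod_k phi_r(f_k), in log space
        for i, feats in enumerate(tuples):
            scores = np.log(theta[docs[i]]) + np.log(phi[:, feats]).sum(axis=1)
            z[i] = int(np.argmax(scores))
        return z, phi, theta

Tuples that participate in non-trivial groundings are instead handled by the relaxation discussed next.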
Suffice it to say that once the binary variables z_{ir} ∈ {0, 1} are relaxed to continuous values z_{ir} ∈ [0, 1], it is possible to introduce the relational LDA term in the equation and compute the gradient using the Entropic Mirror Descent Algorithm (Beck and Teboulle, 2003):", "cite_spans": [ { "start": 128, "end": 154, "text": "Andrzejewski et al. (2011)", "ref_id": "BIBREF2" }, { "start": 525, "end": 550, "text": "(Beck and Teboulle, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg max_{z ∈ [0,1]^{|KB|}} ∑_{l}^{L} ∑_{g∈G(ψ_l)} λ_l 1_g(z) + ∑_{i,r} z_{ir} log θ_{d_i}(r) ∏_{k∈p_i} φ_r(f_k)   s.t. z_{ir} ≥ 0, ∑_r z_{ir} = 1", "eq_num": "(9)" } ], "section": "Inference", "sec_num": "4.3" }, { "text": "In every iteration the approximation algorithm randomly samples a term from the objective function (Equation (9)). The sampled term can be a particular ground rule g or the relational LDA term (∑_r z_{ir} log θ_{d_i}(r) ∏_{k∈p_i} φ_r(f_k)) for some uniformly sampled index i. The sampling of the terms is weighted according to the rule weight (λ_l) and the number of groundings (|G(ψ_l)|) in the case of logic rules, and according to the size of the corpus in tuples for relational LDA. Once we choose a term f and take its gradient, we can apply the Entropic Mirror Descent update:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z_{ir} ← z_{ir} exp(η ∇_{z_{ir}} f) / ∑_{r'} z_{ir'} exp(η ∇_{z_{ir'}} f)", "eq_num": "(10)" } ], "section": "Inference", "sec_num": "4.3" }, { "text": "Finally, z_i is recovered by rounding to arg max_r z_{ir}. The main advantage of this approach is that it requires only a means to sample groundings g for each rule ψ_l , and can avoid fully grounding the FOL rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.3" }, { "text": "Our model assigns relations to tuples rather than topics to words. Since our tuples are described in terms of features, our logic rules must reflect this too. For our experiments we defined two very general types of rules described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "Must-link Tuple The motivation behind this rule is that tuples which share features probably express the same underlying relation. The rule must specify which feature has to be shared for the tuples to be clustered together.
For example, the rule below states that tuples containing the dependency path PATH:→appos→president→prep→of→pobj→ should go in the same cluster:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "∀i, j, k : F(i, PATH:→appos→president→prep→of→pobj→) ∧ P(j, f_i) ∧ P(k, f_i) ⇒ ¬Z(j, r) ∨ Z(k, r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "Cannot-link Tuple We also define rules prohibiting tuples from being clustered together because they do not share any features. For example, tuples with ORG-LOC entities probably express a Location relation and should not be clustered together with PER-PER tuples, which in all likelihood express a different relationship (e.g., Family). The rule below expresses this constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "∀i, j, k, l : F(i, NEPAIR:PER-PER) ∧ F(j, NEPAIR:ORG-LOC) ∧ P(k, f_i) ∧ P(l, f_j) ⇒ ¬Z(k, r) ∨ ¬Z(l, r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "The specification of the first-order logic rules is an integral part of the model. The rules express knowledge about the task at hand, the domain involved, and the way the relation extraction problem is modeled (i.e., tuples expressed as features). So far, we have abstractly formulated the rules without explaining how they are specifically instantiated in our model. We could write them down by hand after inspecting some data or through consultation with a domain expert. Instead, we obtain logic rules automatically from a corpus following the procedure described in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rules", "sec_num": "4.4" }, { "text": "Data We trained our model on the New York Times (years 2000-2007) corpus created by Yao et al. (2011) . The corpus contains approximately 2M entity tuples. The latter were extracted from 428K documents. After post-processing (tokenization, sentence-splitting, and part-of-speech tagging), named entities were automatically recognized and labeled with PER, ORG, LOC, and MISC (Finkel et al., 2005) . Dependency paths for each pair of named entity mentions were extracted from the output of the MaltParser (Nivre et al., 2004) . In our experiments, we discarded tuples with paths longer than 10 edges (Lin and Pantel, 2001) . We evaluated our model on the test partition of the ACE 2007 (English) RDC dataset which is labeled with gold standard entity mentions and their relations. There are six general relation types and 18 subtypes. We used 25% of the ACE training partition as a development set for parameter tuning.", "cite_spans": [ { "start": 84, "end": 101, "text": "Yao et al. (2011)", "ref_id": "BIBREF25" }, { "start": 433, "end": 454, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF9" }, { "start": 562, "end": 582, "text": "(Nivre et al., 2004)", "ref_id": "BIBREF16" }, { "start": 657, "end": 679, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Table 2: Examples of automatically extracted logic rules. Must-link Tuple: F(i, NEPAIR:PER-PER, TRIGGER:wife) ∧ P(j, f_i) ∧ P(k, f_i) ⇒ ¬Z(j, r) ∨ Z(k, r); F(i, NEPAIR:PER-LOC, TRIGGER:die) ∧ P(j, f_i) ∧ P(k, f_i) ⇒ ¬Z(j, r) ∨ Z(k, r); F(i, PATH:←nsubj←die→prep→in→pobj→) ∧ P(j, f_i) ∧ P(k, f_i) ⇒ ¬Z(j, r) ∨ Z(k, r); F(i, SOURCE:Kobe, DEST:Lakers) ∧ P(j, f_i) ∧ P(k, f_i) ⇒ ¬Z(j, r) ∨ Z(k, r). Cannot-link Tuple: F(i, NEPAIR:ORG-LOC) ∧ F(j, NEPAIR:PER-PER) ∧ P(k, f_i) ∧ P(l, f_j) ⇒ ¬Z(k, r) ∨ ¬Z(l, r); F(i, NEPAIR:LOC-LOC) ∧ F(j, TRIGGER:president) ∧ P(k, f_i) ∧ P(l, f_j) ⇒ ¬Z(k, r) ∨ ¬Z(l, r); F(i, NEPAIR:PER-LOC) ∧ F(j, TRIGGER:member) ∧ P(k, f_i) ∧ P(l, f_j) ⇒ ¬Z(k, r) ∨ ¬Z(l, r); F(i, NEPAIR:PER-PER) ∧ F(j, TRIGGER:sell) ∧ P(k, f_i) ∧ P(l, f_j) ⇒ ¬Z(k, r) ∨ ¬Z(l, r).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We automatically extracted logic rules from the New York Times (NYT) corpus as follows. The intuition behind Must-link rules is that tuples with common features should cluster together. Although we do not know which features would yield the best rules, we naively assume that good features are frequently co-occurring features. Using the log-likelihood ratio (Dunning, 1993) , we first discarded low-confidence feature co-occurrences (p < 0.05). Two features co-occur if they are both found within the same sentence. We then sorted the remaining co-occurrences by their frequency and retained the N-best ones. We only considered unigram and bigram features since higher-order ones tend to be sparse. An example of a bigram feature would be (PATH:←nsubj←grow→prep→in→pobj→, DEST:Chicago).", "cite_spans": [ { "start": 359, "end": 374, "text": "(Dunning, 1993)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "The main intuition behind Cannot-link rules is that tuples without any common features should not cluster together. So, if two features never co-occur, they probably express different relations. For every unigram and bigram feature in the respective N-best list, we find the features it does not co-occur with in the NYT corpus. For example, NEPAIR:PER-LOC does not co-occur with DEST:Yankees, and the bigram (DEST:United Nations, NEPAIR:PER-ORG) does not co-occur with (SOURCE:Mr. Bush, NEPAIR:PER-LOC). Cannot-link rules are then based on such non-co-occurring feature pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "We optimized N empirically on the development set. We experimented with values ranging from 20 to 500. We obtained 20 Must-link rules for coarse-grained relations and 400 rules for their subtypes. We extracted 1,814 Cannot-link rules for general relations (N = 50) and 34,522 rules for subtypes (N = 400).
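As an illustration of the mining step just described, a simplified sketch (function names and the data layout are our own; a sentence is represented as the set of features it contains, and 3.84 is the usual log-likelihood-ratio cutoff for p < 0.05 at one degree of freedom):

    import math
    from collections import Counter
    from itertools import combinations

    def llr(k11, k12, k21, k22):
        # Dunning's (1993) log-likelihood ratio over a 2x2 co-occurrence table
        def h(*ks):
            n = sum(ks)
            return sum(k * math.log(k / n) for k in ks if k > 0)
        return 2 * (h(k11, k12, k21, k22)
                    - h(k11 + k12, k21 + k22) - h(k11 + k21, k12 + k22))

    def mine_must_link_pairs(sentences, n_best, threshold=3.84):
        pair_freq, feat_freq, total = Counter(), Counter(), 0
        for feats in sentences:
            total += 1
            feat_freq.update(feats)
            pair_freq.update(combinations(sorted(feats), 2))
        kept = []
        for (a, b), k11 in pair_freq.items():
            k12 = feat_freq[a] - k11       # a without b
            k21 = feat_freq[b] - k11       # b without a
            k22 = total - k11 - k12 - k21  # neither a nor b
            if llr(k11, k12, k21, k22) > threshold:  # drop low-confidence pairs
                kept.append(((a, b), k11))
        kept.sort(key=lambda pc: -pc[1])             # sort survivors by frequency
        return [pair for pair, _ in kept[:n_best]]   # retain the N-best

Cannot-link candidates are obtained analogously, from the N-best features that never co-occur.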
The number of features involved in the Must-link rules was 25 for coarse-grained relations and 422 for fine-grained relations. For Cannot-link rules, 62 features were involved in coarse-grained relations and 422 in fine-grained relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "Examples of the rules we extracted are shown in Table 2 . The first rule in the upper half of the table states that tuples must cluster together if their source and target entities are PER and contain the trigger word wife in their dependency path. The second rule is similar: the source entity here is PER, the target LOC, and the trigger word is die. According to the third rule, tuples featuring the path PATH:←nsubj←die→prep→in→pobj→ should be in the same cluster. The fourth rule forces tuples whose source entity is Kobe and target entity is Lakers to cluster together. The second half of the table illustrates Cannot-link tuple rules. The first rule prevents tuples with ORG-LOC entities from clustering together with PER-PER tuples. The second rule states that we cannot link LOC-LOC tuples with those whose trigger word is president, and so on.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "Parameter Tuning Our framework has several parameters that must be adjusted for an optimal clustering solution. These include the hyperparameters α and β as well as the number of clusters. In addition, we have to assign a weight to each FOL rule grounding. An exhaustive search on the hyperparameters and rule weights is not possible. We therefore followed a step-wise approximation procedure. First, we find the best α and β values, whilst varying the number of clusters. Once we have the best hyperparameters for each clustering, we set the weights for the FOL rules. We varied the number of relations from 5 to 50. We experimented with α values in the range of [0.05 − 0.5] and β values in the range of [0.05 − 0.5]. These values were optimized separately for coarse- and fine-grained relations. Table 3 shows the optimal number of clusters for different model variants and relation types.", "cite_spans": [], "ref_spans": [ { "start": 798, "end": 805, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "The FOL weights can also make a difference in the final output; the bigger the weight, the more times the rule will be sampled in the Mirror Descent algorithm. We experimented with two weighting schemes: (a) we gave a weight of 1 or 0.5 to each rule grounding and (b) we scaled the weights so as to make their contribution comparable to relational LDA. We obtained best results on the development set with the former scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "Baselines We compared our FOL relational LDA model against standard LDA (Blei et al., 2003) and relational LDA without the FOL component. In the case of standard LDA, we estimated topics (relations) over words, and used the context of the entity mention pairs as a bag-of-words feature to select the most likely cluster at test time.
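Before turning to the remaining comparison systems, the step-wise tuning procedure from the previous paragraphs can be summarized in pseudocode; this is a schematic illustration in which the value grids follow the ranges reported above and fit_and_score is an assumed callback that trains a model with the given settings and returns its development-set Fscore:

    from itertools import product

    def stepwise_tune(fit_and_score, cluster_range=(5, 10, 20, 30, 40, 50),
                      dirichlet_grid=(0.05, 0.1, 0.25, 0.5),
                      rule_weights=(0.5, 1.0)):
        best = {}
        for R in cluster_range:
            # step 1: choose alpha and beta for this number of clusters
            alpha, beta = max(product(dirichlet_grid, repeat=2),
                              key=lambda ab: fit_and_score(R, ab[0], ab[1],
                                                           rule_weight=0.0))
            # step 2: choose the FOL rule weight given the tuned hyperparameters
            w = max(rule_weights,
                    key=lambda w: fit_and_score(R, alpha, beta, rule_weight=w))
            best[R] = (alpha, beta, w)
        return best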
Parameters for LDA and relational LDA were optimized following the same parameter tuning procedure described above.", "cite_spans": [ { "start": 72, "end": 91, "text": "(Blei et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "We also compared our model against the unsupervised method introduced in Hasegawa et al. (2004) . Their key idea is to cluster pairs of co-occurring named entities according to the similarity of their surrounding contexts. Following their approach, we measured context similarity using the vector space model and the cosine metric and grouped NE pairs into clusters using a complete linkage hierarchical clustering algorithm. We adopted the same parameter values as detailed in their paper (e.g., cosine similarity threshold, length of context vectors). At test time, instances were assigned to the relation cluster most similar to them (according to the cosine measure).", "cite_spans": [ { "start": 73, "end": 95, "text": "Hasegawa et al. (2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "Evaluation We evaluated the clusters obtained by our model and the comparison systems using the Fscore measure introduced in the SemEval 2007 task (Agirre and Soroa, 2007); it is the harmonic mean of precision and recall, defined as the number of correct members of a cluster divided by the number of items in the cluster and by the number of items in the gold-standard class, respectively.", "cite_spans": [ { "start": 147, "end": 171, "text": "(Agirre and Soroa, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Logic Rule Extraction", "sec_num": null }, { "text": "Our results are summarized in Table 3 , which reports Fscore for (Hasegawa et al., 2004) , LDA, relational LDA (RelLDA), and our model with the FOL component. To assess the impact of the rules on the clustering, we conducted several rule ablation studies. We thus present results with a model that includes both Must-link and Cannot-link tuple rules (CLT+MLT), and models that include either Must-link (MLT) or Cannot-link (CLT) rules but not both. We show the performance of these models with the entire feature set (see (ALL) in the table) and with a subset consisting solely of NE-pair-related features (see (NEPAIR) in the table). We report results against coarse- and fine-grained relations (6 and 18 relation types in ACE, respectively). The table shows the optimal number of relation clusters (in parentheses) per model and relation type.", "cite_spans": [ { "start": 63, "end": 86, "text": "(Hasegawa et al., 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "We also wanted to examine the quality of the logic rules. Recall that we learn these heuristically from the NYT corpus. We thus trained an additional variant of our model with rules extracted from the ACE training set (75%) which contains relation annotations. The extraction procedure was similar to the unsupervised case, save that the relation types were known and thus informative features could be mined more reliably. For Must-link rules, we extracted unigram and bigram feature frequencies for each relation type and applied TF-IDF weighting in order to discover the most discriminative ones.
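Concretely, treating each annotated relation type as a "document" made of its observed features, the ranking can be sketched as follows (our illustration under an assumed input format, not the exact implementation):

    import math
    from collections import Counter

    def rank_features(relation_feats, top_k=10):
        # relation_feats: dict mapping a relation type to the list of (unigram
        # or bigram) features observed with it in the annotated training set
        n_rel = len(relation_feats)
        tf = {rel: Counter(feats) for rel, feats in relation_feats.items()}
        df = Counter()  # number of relation types a feature occurs with
        for counts in tf.values():
            df.update(counts.keys())
        ranked = {}
        for rel, counts in tf.items():
            total = sum(counts.values())
            # features occurring with every relation get idf 0 and drop out
            score = {f: (c / total) * math.log(n_rel / df[f])
                     for f, c in counts.items()}
            ranked[rel] = sorted(score, key=score.get, reverse=True)[:top_k]
        return ranked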
We created logic rules for the 10 best feature combinations in each relation type. Regarding Cannot-link rules, we enumerated the features (unigrams and bigrams) that did not co-occur in any relation type and applied TF-IDF weighting. Again, we created rules for the 10 most discriminative features. We defined rules over the entire feature set (466 Must-link and 26,074 Cannot-link rules) and a subset containing only NE pairs. In Table 3 , prefixes S- and U- indicate model variants with supervised and unsupervised rules, respectively.", "cite_spans": [], "ref_spans": [ { "start": 762, "end": 769, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "Table 3: Model performance on the ACE 2007 test set using Fscore. Results are shown for six main relation types and their subtypes (18 in total). (ALL) models contain rules extracted from the entire feature set. For (NEPAIR) models, rules were extracted from NEPAIR-related features only. Prefix U- denotes models that use unsupervised rules; prefix S- highlights models using supervised rules. The optimal number of relations per model is shown in parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Our results show that standard LDA is not suitable for relation extraction. The obtained clusters are not informative enough to induce semantic relations, whereas RelLDA yields substantially better Fscores. This is not entirely surprising, given that RelLDA is a relation-extraction-specific model. Hasegawa et al.'s (2004) model lies somewhere in the middle between LDA and RelLDA. The combination of RelLDA with automatically extracted FOL rules improves over RelLDA across the board (see the U-models in Table 3 ). MLT rules deliver the largest improvement for both coarse- and fine-grained relation types. In general, CLT models perform worse, as do models using both types of rules (MLT+CLT). The inferior performance of the rule combination may be due to the fact that MLT and CLT rules contain conflicting information and to a certain extent cancel each other out. The use of many rules might also negatively impact inference, i.e., discriminative rules are sampled less and cannot influence the model towards a better solution. Restricting the number of features and rules to named entity pairs only incurs a negligible drop in performance. This is good news for scaling purposes, since a small number of rules can greatly speed up inference. Interestingly, model variants which use supervised FOL rules (see the prefix S- in Table 3 ) perform on par with unsupervised models. Again, MLT rules perform best in the supervised case, whereas CLT rules marginally improve over RelLDA.", "cite_spans": [ { "start": 299, "end": 323, "text": "Hasegawa et al.'s (2004)", "ref_id": null } ], "ref_spans": [ { "start": 507, "end": 514, "text": "Table 3", "ref_id": null }, { "start": 1334, "end": 1341, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "We assessed whether differences in performance are statistically significant (p < 0.05) using bootstrap resampling (Noreen, 1989) . All models across all relation types are significantly better than LDA and Hasegawa et al. (2004) . FOL-based models perform significantly better than RelLDA, with the exception of all CLT models and U-CLT+MLT (ALL).
MLT models are significantly better than any other rule-based model, except those that only use NEPAIR features. We also measured whether different models agree on their topic assignments using Cohen's Kappa.2 RelLDA agrees least with MLT models and most with CLT models (i.e., κ = 0.50 for U-MLT (ALL) and κ = 0.65 for U-CLT (ALL)). This suggests that the CLT rules do not affect the output of RelLDA as much as MLT ones. Examples of relation clusters discovered by the U-MLT (ALL) model are shown in Table 4 .", "cite_spans": [ { "start": 115, "end": 129, "text": "(Noreen, 1989)", "ref_id": "BIBREF17" }, { "start": 207, "end": 229, "text": "Hasegawa et al. (2004)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 853, "end": 860, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "A last note on parameter selection. Our experiments explored the parameter space extensively in order to examine any interactions between the induced relations and the logic rules. For most model variants inferring subtype relations, the preferred number of clusters is 10. For coarse-grained relations, the optimal number of clusters is five. Overall, we found that the quality of the output is highly correlated with the quality of the logic rules and that a few good rules are more important than the optimal number of clusters. We consider these findings robust enough to apply across domains and datasets. Table 4: Clusters discovered by the U-MLT (ALL) model indicating employment- and sports-type relations. For the sake of readability, we do not display the syntactic dependencies between words in a path.", "cite_spans": [], "ref_spans": [ { "start": 611, "end": 618, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "In this paper we presented a new model for unsupervised relation extraction which operates over tuples representing a syntactic relationship between two named entities. Our model clusters such tuples into underlying semantic relations (e.g., Located, Family) by incorporating general domain knowledge which we encode as First Order Logic rules. Specifically, we combine a topic model developed for the relation extraction task with domain-relevant rules, and present an algorithm for estimating the parameters of this model. Evaluation results on the ACE 2007 (English) RDC task show that our model outperforms competitive unsupervised approaches by a wide margin and is able to produce clusters shaped by both the data and the rules. In the future, we would like to explore additional types of rules such as seed rules, which would assign tuples complying with the \"seed\" information to distinct relations. Aside from devising new rule types, an obvious next step would be to explore different ways of extracting the rule set based on different criteria (e.g., the most general versus most specific rules). Also note that in the current framework rule weights are set manually by the domain expert.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "An appealing direction would be to learn these automatically, e.g., via a procedure that optimizes some clustering objective.
Finally, it would be interesting to use some form of distant supervision (Mintz et al., 2009), either as a means of obtaining useful rules or to discard potentially noisy or uninformative rules.", "cite_spans": [ { "start": 199, "end": 219, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "This rule translates as \"every tuple containing Obama and White House as features should be in relation cluster r\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For all comparison models the number of relation clusters was set to 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully acknowledge financial support from the Department of Education, Universities and Research of the Basque Government (BFI-2011-442). We also thank Limin Yao and Sebastian Riedel for sharing their corpus with us and the members of the Probabilistic Models reading group at the University of Edinburgh for helpful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Table 3 (data). Model | Subtype | Type: HASEGAWA | 26.1 (12) | 34.7 (12); LDA | 23.4 (10) | 29.0 (5); RelLDA | 30.4 (40) | 38.6 (5); U-MLT (ALL) | 36.6 (10) | 48.0 (5); U-CLT (ALL) | 30.5 (5) | 39.3 (5); U-CLT+MLT (ALL) | 29.8 (5) | 42.0 (5); U-MLT (NEPAIR) | 36.5 (10) | 47.2 (5); U-CLT (NEPAIR) | 28.8 (50) | 40.5 (5); U-CLT+MLT (NEPAIR) | 30.9 (10) | 41.5 (5); S-MLT (ALL) | 37.0 (10) | 47.0 (5); S-CLT (ALL) | 31.4 (50) | 40.9 (5); S-CLT+MLT (ALL) | 32.3 (10) | 42.5 (5); S-MLT (NEPAIR) | 37.0 (10) | 47.6 (10); S-CLT (NEPAIR) | 31.4 (10) | 40.1 (5); S-CLT+MLT (NEPAIR) | 37.1 (10) | 46.0 (5). Fscore per model for subtype (18 relations) and main type (6 relations); the optimal number of clusters is shown in parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3 data", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting relations from large plain-text collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 5th ACM International Conference on Digital Libraries", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the 5th ACM International Conference on Digital Libraries, pages 85-94, San Antonio, Texas.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semeval-2007 task 02: Evaluating word sense induction and discrimination systems", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "7--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. 
In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 7-12, Prague, Czech Republic.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A framework for incorporating general domain knowledge into latent Dirichlet allocation using first-order logic", "authors": [ { "first": "David", "middle": [], "last": "Andrzejewski", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Recht", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1171--1177", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Andrzejewski, Xiaojin Zhu, Mark Craven, and Ben Recht. 2011. A framework for incorporating general domain knowledge into latent Dirichlet allocation using first-order logic. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 1171-1177, Barcelona, Spain.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Open information extraction from the web", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Broadhead", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2670--2676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2670-2676, Hyderabad, India. Amir Beck and Marc Teboulle. 2003. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pattern Recognition and Machine Learning", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. 
Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning to extract relations from the web using minimal supervision", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "576--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 576-583, Prague, Czech Republic.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dependency tree kernels for relation extraction", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics, Main Volume", "volume": "", "issue": "", "pages": "423--429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, Main Volume, pages 423-429, Barcelona, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363-370, Ann Arbor, Michigan.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Non-redundant data clustering", "authors": [ { "first": "David", "middle": [], "last": "Gondek", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2004, "venue": "IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "75--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Gondek and Thomas Hofmann. 2004. Non-redundant data clustering. In IEEE International Conference on Data Mining, pages 75-82.
IEEE Computer Society.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Finding scientific topics", "authors": [ { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "PNAS", "volume": "101", "issue": "", "pages": "5228--5235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS, 101(1):5228-5235.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discovering relations among named entities from large corpora", "authors": [ { "first": "Takaaki", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "415--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 415-422, Barcelona, Spain.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Probabilistic Graphical Models: Principles and Techniques", "authors": [ { "first": "D", "middle": [], "last": "Koller", "suffix": "" }, { "first": "N", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Koller and N. Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "DIRT - discovery of inference rules from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT - discovery of inference rules from text. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 323-328, San Francisco, California.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data.
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Memory-based dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 8th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of the 8th Conference on Computational Natural Language Learning, pages 49-56, Boston, Massachusetts.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Computer-intensive Methods for Testing Hypotheses: An Introduction", "authors": [ { "first": "Eric", "middle": [ "W" ], "last": "Noreen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric W. Noreen. 1989. Computer-intensive Methods for Testing Hypotheses: An Introduction. John Wiley and Sons Inc.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 113-120, Sydney, Australia.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised semantic parsing", "authors": [ { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1-10, Suntec, Singapore. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks.
Machine Learning, 62(1-2):107-136.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning dictionaries for information extraction", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 16th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "474--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, pages 474-479, Stockholm, Sweden.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning first-order Horn clauses from web text", "authors": [ { "first": "Stefan", "middle": [], "last": "Schoenmackers", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1088--1098", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Schoenmackers, Jesse Davis, Oren Etzioni, and Daniel Weld. 2010. Learning first-order Horn clauses from web text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1088-1098, Cambridge, MA, October. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Preemptive information extraction using unrestricted relation discovery", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 304-311, New York City, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Robust information extraction with perceptrons", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the NIST 2007 Automatic Content Extraction Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu and Massimiliano Ciaramita. 2007. Robust information extraction with perceptrons.
In Proceedings of the NIST 2007 Automatic Content Extraction Workshop.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Constrained k-means clustering with background knowledge", "authors": [ { "first": "Kiri", "middle": [], "last": "Wagstaff", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "S", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "S", "middle": [], "last": "Schr\u00f6dl", "suffix": "" } ], "year": 2001, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiri Wagstaff, Claire Cardie, S. Rogers, and S. Schr\u00f6dl. 2001. Constrained k-means clustering with background knowledge. In International Conference on Machine Learning, pages 577-584. Morgan Kaufmann.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Tree kernel-based relation extraction with context-sensitive structured parse tree information", "authors": [ { "first": "GuoDong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "DongHong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "QiaoMing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "728--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1456-1466, Edinburgh, Scotland, UK. GuoDong Zhou, Min Zhang, DongHong Ji, and QiaoMing Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 728-736, Prague, Czech Republic.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "and are represented with k features. Each feature is drawn (independently) from a multinomial distribution selected by the relation assigned to tuple", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Relational LDA as a factor graph. Filled circles represent observed variables, empty circles are associated with latent variables or model hyperparameters, and plates indicate repeating structures. The black squares are the factor nodes and are associated with potential functions encoding the conditional independence structure among the variables. The model observes D documents (d) consisting of N tuples (p), each represented by a set of features f1, f2, ..., fk. z represents the relation type assignment to a tuple, \u03b8 is the relation type proportion for a given document, and \u03c6 the relation type distribution over the features.
The logic factor (indicated with the arrow) connects the KB with the relational LDA model. The variable o is observed and contains the side information expressed in FOL.", "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "type_str": "table", "text": "Logical variables for Relational LDA. The variable i ranges over tuples in the corpus (i = 1, . . . , N), and k over features in the corpus (k = 1, . . . , F).", "content": "", "html": null }, "TABREF1": { "num": null, "type_str": "table", "text": "Examples of automatically extracted Must-link and Cannot-link tuple rules.", "content": "
", "html": null } } } }