{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:58:14.921482Z" }, "title": "Entity Prediction in Knowledge Graphs with Joint Embeddings", "authors": [ { "first": "Matthias", "middle": [], "last": "Baumgartner", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich", "location": {} }, "email": "baumgartner@ifi.uzh.ch" }, { "first": "Daniele", "middle": [], "last": "Dell'aglio", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Aalborg University of Zurich", "location": {} }, "email": "" }, { "first": "Abraham", "middle": [], "last": "Bernstein", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich", "location": {} }, "email": "bernstein@ifi.uzh.ch" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Knowledge Graphs (KGs) have become increasingly popular in the recent years. However, as knowledge constantly grows and changes, it is inevitable to extend existing KGs with entities that emerged or became relevant to the scope of the KG after its creation. Research on updating KGs typically relies on extracting named entities and relations from text. However, these approaches cannot infer entities or relations that were not explicitly stated. Alternatively, embedding models exploit implicit structural regularities to predict missing relations, but cannot predict missing entities. In this article, we introduce a novel method to enrich a KG with new entities given their textual description. Our method leverages joint embedding models, hence does not require entities or relations to be named explicitly. We show that our approach can identify new concepts in a document corpus and transfer them into the KG, and we find that the performance of our method improves substantially when extended with techniques from association rule mining, text mining, and active learning.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Knowledge Graphs (KGs) have become increasingly popular in the recent years. However, as knowledge constantly grows and changes, it is inevitable to extend existing KGs with entities that emerged or became relevant to the scope of the KG after its creation. Research on updating KGs typically relies on extracting named entities and relations from text. However, these approaches cannot infer entities or relations that were not explicitly stated. Alternatively, embedding models exploit implicit structural regularities to predict missing relations, but cannot predict missing entities. In this article, we introduce a novel method to enrich a KG with new entities given their textual description. Our method leverages joint embedding models, hence does not require entities or relations to be named explicitly. We show that our approach can identify new concepts in a document corpus and transfer them into the KG, and we find that the performance of our method improves substantially when extended with techniques from association rule mining, text mining, and active learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Knowledge graphs (KGs) have gained popularity as a versatile, general-purpose, and domainindependent model to represent information and are the major backbone for many applications on the web (Noy et al., 2019) . KGs express knowledge as collections of head-relation-tail statements, named triples, e.g. 
(:Cheney, :vice-of, :Bush) expresses that Cheney is the vice president of Bush. Since KGs are mostly built through automatic processes (Carlson et al., 2010; Dong et al., 2014) they are often incomplete, e.g. a KG may contain the fact that Cheney was the vice-president of Bush, but not that Cheney is a US citizen. In addition, KGs evolve and require maintenance: they grow and change as the knowledge they describe expands and adapts to the real world.", "cite_spans": [ { "start": 192, "end": 210, "text": "(Noy et al., 2019)", "ref_id": "BIBREF20" }, { "start": 439, "end": 461, "text": "(Carlson et al., 2010;", "ref_id": null }, { "start": 462, "end": 480, "text": "Dong et al., 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem of deriving missing portions of knowledge is known as KG completion. So far, the problem has been tackled by link prediction, i.e. finding relationships between previously known entities in the graph. In this paper, we focus on the problem of adding and integrating new entities into the KG-a task we call entity prediction. This is different from link prediction, where entities are ex-ante partially described in the KG. In entity prediction, we discover the existence of an entity from an external source and the KG neither contains the entity nor any information about how it relates to the other entities in the KG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As external source, we target document corpora, which describe the entities and the relations between them. For example, a document corpus like Wikipedia contains a description of Joe Biden and his relations with Obama (vice president) and Cheney (successor) . This lead to our core research question: given a KG G and a document corpus D, how can we complete G with entities (textually) described in D but not yet contained in G?", "cite_spans": [ { "start": 240, "end": 258, "text": "Cheney (successor)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a solution, we represent the KG and document corpus in a common metric space and exploit this space in conjunction with user feedback and graph features to derive statements describing the new entities. Specifically, we leverage joint embedding models for creating a numerical space to represent the KG and the background source. Whereas KG embedding models give good performance on the link prediction task (Cai et al., 2018a) , joint embedding models combine a KG with a document corpus to draw conclusions in terms of similarity between documents and KG entities. Our experiments determine that joint embedding models are well-founded methods to propose new entities to a KG. We also discuss how the prediction performance can be improved by integrating user feedback and explicit graph features.", "cite_spans": [ { "start": 411, "end": 430, "text": "(Cai et al., 2018a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The next section discusses related literature and introduces relevant notations. Section 4 outlines our solution based on joint embedding models, user feedback, and graph features. Section 5 describes the experimental setup and evaluates our hypotheses. 
Finally, Section 6 presents overall conclusions and outlines future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Like in our scenario, ontology population and ontology enrichment 1 extract information from documents (Petasis et al., 2011) . Ontology population adds instances to an existing ontology, whereas the structure of the ontology remains unchanged. In contrast to our problem it does not need to learn relations between instances and assumes an ontology to guide the information extraction process (Buitelaar et al., 2006; Etzioni et al., 2004; Petasis et al., 2013) . Ontology enrichment inserts new concepts or relations into an ontology. It differs from our setting in that it extends the schema of an ontology, using its concepts, instances, and schema constraints, while we solely rely on relationships between entities (Faure et al., 1998; Cimiano and V\u00f6lker, 2005; Hahn and Marko, 2002) .", "cite_spans": [ { "start": 103, "end": 125, "text": "(Petasis et al., 2011)", "ref_id": "BIBREF23" }, { "start": 394, "end": 418, "text": "(Buitelaar et al., 2006;", "ref_id": "BIBREF3" }, { "start": 419, "end": 440, "text": "Etzioni et al., 2004;", "ref_id": "BIBREF10" }, { "start": 441, "end": 462, "text": "Petasis et al., 2013)", "ref_id": "BIBREF24" }, { "start": 721, "end": 741, "text": "(Faure et al., 1998;", "ref_id": "BIBREF11" }, { "start": 742, "end": 767, "text": "Cimiano and V\u00f6lker, 2005;", "ref_id": "BIBREF8" }, { "start": 768, "end": 789, "text": "Hahn and Marko, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "KG enrichment aims at completing a given KG with new statements, or identifying erroneous ones (Paulheim, 2017) , by predicting entity types (Nickel and Tresp, 2013; Socher et al., 2013) or links (Oren et al., 2007; Socher et al., 2013) . In contrast, our goal is to complete a KG by adding new entities and statements related to them. (Paulheim, 2017) states that no such approach was known until 2015 and to the best of our knowledge, this has not changed meanwhile.", "cite_spans": [ { "start": 95, "end": 111, "text": "(Paulheim, 2017)", "ref_id": "BIBREF22" }, { "start": 141, "end": 165, "text": "(Nickel and Tresp, 2013;", "ref_id": "BIBREF19" }, { "start": 166, "end": 186, "text": "Socher et al., 2013)", "ref_id": "BIBREF27" }, { "start": 196, "end": 215, "text": "(Oren et al., 2007;", "ref_id": "BIBREF21" }, { "start": 216, "end": 236, "text": "Socher et al., 2013)", "ref_id": "BIBREF27" }, { "start": 336, "end": 352, "text": "(Paulheim, 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Automatic Knowledge Base Construction (AKBC) methods such as NELL or OpenIE approach a similar problem by means of text processing. They extract named entities and relations from a document, then arrange them as a KG (Verga and McCallum, 2016; Mitchell et al., 2018; Mart\u00ednez-Rodr\u00edguez et al., 2018) . Similarly, Entity Linking extracts named entities from text, then disambiguates and links them with a background database (Hoffart et al., 2014) . These approaches assume that all entities and all relations are explicitly mentioned in the text under their canonical name. 
In contrast, we consider the scenario where entities are not stated in the text but described implicitly.", "cite_spans": [ { "start": 217, "end": 243, "text": "(Verga and McCallum, 2016;", "ref_id": "BIBREF29" }, { "start": 244, "end": 266, "text": "Mitchell et al., 2018;", "ref_id": null }, { "start": 267, "end": 299, "text": "Mart\u00ednez-Rodr\u00edguez et al., 2018)", "ref_id": "BIBREF15" }, { "start": 424, "end": 446, "text": "(Hoffart et al., 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "This section presents the key concepts and notation used throughout the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "Knowledge graphs. We define a knowledge graph as G := (V, R, E), with V a set of vertices (entities), R a set of relations, and E a set of directed edges, also known as statements. A statement is a triple (h, r, t) \u2208 E, with h, t \u2208 V the head/tail entities, and r \u2208 R the relation. For example, the sentence \"Joe Biden is the vice president of Barack Obama\" is represented by the triple (:Biden, :vice-of, :Obama). Let U be the the universe of the entities which can be described. V identifies the entities described in G, and V \u2282 U, i.e., G does not include all possible entities that may be described, which is the case in real KGs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "KG embedding. Embedding models create a numeric, low-dimensional representation of a KG by learning a latent vector (embedding) for each KG entity and relation. Ideally, the distance between entity embeddings resembles the relatedness of the KG entities, e.g. the embeddings of :Biden and :Obama are close. Embedding models exploit structural regularities in the KG by defining an optimization problem in terms of a loss function that incorporates the graph's structure, as well as embeddings of a given size. (Bordes et al., 2013) introduced the TransE embedding model, based on the idea that a relation r is a translation from the source entity h to the target entity t in the embedding space, i.e.:", "cite_spans": [ { "start": 510, "end": 531, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u223c (h,r,t)\u2208E h + r \u2212 t 2", "eq_num": "(1)" } ], "section": "Background", "sec_num": "3" }, { "text": "We denote embeddings in bold font and elements of V \u222a R in italic, e.g. h and r are the embeddings of h \u2208 V and r \u2208 R, respectively. While there exists a range of embedding models (Wang et al., 2017; Cai et al., 2018b) , we focus on TransE due to its popularity and conceptual clarity.", "cite_spans": [ { "start": 180, "end": 199, "text": "(Wang et al., 2017;", "ref_id": "BIBREF30" }, { "start": 200, "end": 218, "text": "Cai et al., 2018b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "Joint embedding models. Not only do embedding models exist for KGs but also for text (Mikolov et al., 2013; Le and Mikolov, 2014) . 
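Before turning to joint models, the TransE score from Equation 1 can be made concrete with a minimal sketch (illustrative only: the vectors below are random stand-ins for learned embeddings, not trained parameters):

```python
import numpy as np

# Toy vocabulary mirroring the running example; random, untrained vectors.
entities = {":Biden": 0, ":Obama": 1}
relations = {":vice-of": 0}

rng = np.random.default_rng(0)
dim = 100
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def transe_score(h: str, r: str, t: str) -> float:
    """Plausibility of (h, r, t) as ||h + r - t||_2; lower is more plausible (Equation 1)."""
    return float(np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]]))

print(transe_score(":Biden", ":vice-of", ":Obama"))
```

During training, this score is summed over all triples of the KG and minimized, in practice with negative sampling and a margin, which the sketch omits.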
Joint embedding models combine two embedding models for different modalities, allowing to compare embeddings between them while maintaining the characteristics of the individual models. These models make two principal assumptions. First, each document d from a cor-pus D is a textual description of a single entity e \u2208 U, e.g. the Wikipedia document d B (https://en.wikipedia.org/wiki/ Joe_Biden) describes the entity e B (:Biden). The description may be implicit, i.e. not actually mention the entity name, and can mention other entities. Second, the two modalities are linked to each other via known correspondences. We define the correspondences as a bijective function m : D \u2192 U, and its inverse m : U \u2192 D, e.g. (Baumgartner et al., 2018) joins the TransE KG embedding model and the par2vec document embedding model (Le and Mikolov, 2014) by adding a regularizer term (weighted by \u03bb) to both models, then training them in an alternating fashion. The regularization forces embeddings of corresponding documents and entities to be close to each other:", "cite_spans": [ { "start": 85, "end": 107, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF16" }, { "start": 108, "end": 129, "text": "Le and Mikolov, 2014)", "ref_id": "BIBREF14" }, { "start": 848, "end": 874, "text": "(Baumgartner et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "L KADE Docs \u223c L Docs + \u03bb d d\u2208D d \u2212 m[d] L KADE KG \u223c L KG + \u03bb g e\u2208V e \u2212 m [e]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "StarSpace (Wu et al., 2018) models entities, relations, and words as atomic features, and defines objects as aggregates over these features. A document embedding thus becomes the sum of its words' embeddings. It then learns feature embeddings by minimizing the distance (inner product or cosine) between related objects:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "L SS \u223c (h,r,t)\u2208E dst(h+r, t)+ e\u2208V dst(e, w\u2208m [e] w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "As inputs, our approach receives a KG G, a document corpus D, and correspondences m between the two modalities. While every entity has at least one corresponding document, we assume a number of surplus documents in D that are not associated with any entity in G. A surplus document can either describe a novel entity we want to add to G or its association to an existing entity in G is unknown. The problem of entity prediction can then be divided into two subproblems: 1. Identify whether a surplus document describes a novel entity. E.g. a document that describes the entity :Biden which is not part of the KG in Figure 1a .", "cite_spans": [], "ref_spans": [ { "start": 615, "end": 624, "text": "Figure 1a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "2. Add an entity e * to G and propose edges between e * and the graph's current entities V. E.g. in Figure 1a , we would ideally add :Biden and all red colored edges.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 109, "text": "Figure 1a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "We address both problems in the next sections by describing how our method adds one entity to the KG. 
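Schematically, the procedure for a single surplus document can be summarized as follows (a sketch with hypothetical placeholder functions, not the authors' implementation; the two steps are detailed in Sections 4.1 and 4.2):

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]

def is_novel(doc_embedding) -> bool:
    """Placeholder for the binary novelty classifier of Section 4.1."""
    raise NotImplementedError

def reconstruct_triples(entity_embedding) -> List[Triple]:
    """Placeholder for the (user-assisted) triple reconstruction of Section 4.2."""
    raise NotImplementedError

def predict_entity(doc_embedding) -> List[Triple]:
    # Step 1: decide whether the surplus document describes a novel entity.
    if not is_novel(doc_embedding):
        return []  # the document re-describes an entity that is already in G
    # Step 2: the new entity e* adopts the document's position in the joint
    # embedding space (e* := d); candidate triples are then ranked by the
    # joint model's triple loss.
    e_star = doc_embedding
    return reconstruct_triples(e_star)
```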
For multiple entities, we repeat the procedure for each one independently, and leave an approach that updates the KG and its embeddings incrementally as future work to study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "We first discuss how to distinguish surplus documents that describe novel entities from those that have an unknown association to an entity in G. We approach this task as a binary classification prob-lem: a surplus document is either an alternative description of an entity in G or a novel entity. Our intuition is that documents that describe the same entity are more similar to each other than to the remaining documents in the corpus. Since joint embeddings preserve the characteristics of the the document embedding, we measure the document similarity via the embedding distance. Hence, we train the joint embedding model, then compute the distances between surplus document's embedding to the other document embeddings. We compute the mean, variance, minimum, maximum, percentiles, span, entropy, and skew of these distances, concatenate them with the surplus document's embedding, and use the resulting feature vector as input to a binary classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Novelty detection", "sec_num": "4.1" }, { "text": "Next, we discuss how to derive new triples that have e * as head or tail entity. This task is challenging because the number of possible triples is 2|V||R|, which is orders of magnitude larger than the average number of triples an entity typically takes part in 2 . In the remainder of this section we describe the three components we use to tackle this challenge. First, we measure a potential triple's plausibility in a joint embedding space. Second, we propose a method to find likely relations with the help of user feedback. Third, we incorporate explicit features of the graph's structure into the previous methods. For the sake of brevity, we only discuss the case where e * is in the head position. Triple loss. We first look into the joint embedding space to retrieve the most plausible triples. Joint embedding models strive to produce the same embedding for a corresponding document and entity. Therefore, they suggest that the entity e * is located at the same position in the embedding space as its corresponding document d, i.e. e * := d. This is exemplified in Figure 1b : it shows the embeddings of all entities and documents, with corresponding items close to each other. Since :Biden is missing from the graph, its embedding is proposed to be at the position of the document describing it. KG embedding models define a triple loss L(h, r, t) that expresses the plausibility of a triple: KADE uses TransE's triple loss h + r \u2212 t 2 , StarSpace defines it as dst(h + r, t). We compute the triple loss for every possible triple and collect them in a loss matrix S e * : R |R|\u00d7|V| , where each cell is defined as:", "cite_spans": [], "ref_spans": [ { "start": 1076, "end": 1085, "text": "Figure 1b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "S e *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "r,e = L(e * , r, e) (2) Figure 1c presents an example triple loss matrix. For the sake of readability, we omit indices of S if possible, e.g. we use S e * r to indicate the row S e * r,\u2022 . 
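A minimal sketch of building this loss matrix with a TransE-style score, assuming the trained joint embeddings are available as NumPy arrays (an illustration of Equation 2, not the authors' code):

```python
import numpy as np

def triple_loss_matrix(e_star, entity_emb, relation_emb):
    """Compute S^{e*} with S[r, e] = ||e* + r - e||_2, e* in the head position.

    e_star:       (dim,)      embedding of the new entity, taken from its document (e* := d)
    entity_emb:   (|V|, dim)  embeddings of the existing KG entities
    relation_emb: (|R|, dim)  embeddings of the relations
    returns:      (|R|, |V|)  matrix of triple losses (lower = more plausible)
    """
    # Broadcast: (|R|, 1, dim) + (1, |V|, dim) -> (|R|, |V|, dim)
    translated = e_star[None, None, :] + relation_emb[:, None, :]
    diffs = translated - entity_emb[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# Toy example with random embeddings.
rng = np.random.default_rng(0)
dim, n_entities, n_relations = 100, 50, 8
S = triple_loss_matrix(rng.normal(size=dim),
                       rng.normal(size=(n_entities, dim)),
                       rng.normal(size=(n_relations, dim)))
print(S.shape)  # (8, 50)
```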
For the triple reconstruction, we are mostly interested in the ranking of losses -the triple with the lowest value in S is the most plausible one, irrespective of the actual value. We therefore rank triples in S in ascending order, i.e. assign the lowest rank to the triple that the embedding model determines to be the most plausible. Without further information, it is optimal to select the N lowest ranked triples in S e * , which we denote as the TopN method. User feedback. We refine the triple reconstruction from joint embedding models by incorporating additional information from a user's feedback. The main challenge of the triple reconstruction is that the number of true triples to restore is much lower than the number of possible triples. To circumvent this issue, we split the triple reconstruction into two subtasks: First, we identify relations r \u2208 R present at e * , then we identify the tail entities given the previously found relations. We propose to involve the user in the first subtask, then to solve the second one autonomously. This is because there are typically fewer relations than vertices in a KG, thus the user has to take fewer decisions while their feedback's impact is maximized.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 33, "text": "Figure 1c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "We formalize this idea in the UF procedure in Algorithm 1. We employ a logistic classifier to distinguish relations that should be present at e * from those that should not. The inputs to the classifier are the triple loss statistics of one relation r, i.e. the mean, median, variance, minimum, maximum, quantiles, entropy, and skew of S e * r . Its output is the likelihood of e * having any triple with relation r. Out of the M most likely relations we then ask a user to select a correct one. For the chosen relation r we add the triples with the lowest ranks in S e * r . Since e * can have multiple triples with the same relation to different entities, we pick the N r lowest ranked ones, whereas N r is the average number of r-triples (i.e. triples with relation r) at entities in G. In addition, we discard triples that have a rank in S e * larger than a threshold \u03b8. The process repeats until the user judges that no relation is valid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "One issue of UF is that the algorithm terminates without proposing any triple if the initial set of relations suggested to the user lacks a valid one. To prevent this problem, we introduce UF-s, which keeps generating initial candidate relations (line 5 in Algorithm 1) until the user selects one of them, then continues in the same way as UF. Graph features We further improve the triple reconstruction performance by exploiting the graph structure. In the following, we define two features and integrate them into the UF method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "The first feature focuses on improving the selection of a relation in UF. Once a user has selected a relation, we use this new evidence to improve the estimate of other relations' likelihoods. 
For this, we use the confidence measure (CO) from Association Rule Learning (Agrawal et al., 1993) , which expresses how certain we are about an entity having a relation r i if we know that it has r j :", "cite_spans": [ { "start": 269, "end": 291, "text": "(Agrawal et al., 1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "conf(r j \u21d2 r i ) = p(r i |r j ) = |{h|(h, r j , \u2022) \u2208 E} \u2229 {h|(h, r i , \u2022) \u2208 E}| |{h|(h, r j , \u2022) \u2208 E}|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "We integrate the confidence into UF by multiplying it with the respective likelihood predicted by the clf classifier. Note that this notion of confidence assumes that both r i and r j have the same direction, e.g. e * in the head position. To incorporate the case where their direction differs (i.e. one is inbound, the other outbound to e * ), we modify Algorithm 1 to alternate between reconstructing triples with e * in the head and tail position. The second feature helps with finding the tail entities under a given relation. With no other information than the graph, it is reasonable to add an edge from e * to the entity that is most frequently used with the given relation. This measure is especially informative if the relation occurs at few entities. To express these ideas, we use the BM25 weighting scheme (BM ), popular in information retrieval (Robertson and Zaragoza, 2009) . It assigns a large weight to an edge if the entity is likely to have the relation (term frequency) and if having that relation is also informative (document frequency). We integrate this feature into the UF method by dividing each value in S e * by its BM25 score.", "cite_spans": [ { "start": 858, "end": 888, "text": "(Robertson and Zaragoza, 2009)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "Both of these features can be calculated in a single pass over the graph, i.e. they have complexity O (|E|). Training an embedding model requires multiple iterations over the graph, making their complexity O (k|E|) with k typically in the thousands. Therefore, calculating the graph statistics does not impact the scalability of the method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction", "sec_num": "4.2" }, { "text": "In this section, we first describe the experimental setup, then discuss the novelty detection, and finally show the triple reconstruction results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We evaluate our methods on FB15k-237 and DBP50, two popular KGs for KG embedding model benchmarking (Toutanova and Chen, 2015; Shi and Weninger, 2018) . For entities from either KG, we select a random section of their respective Wikipedia article as corresponding document. To ensure that our methods do not learn from explicitly mentioned entity names, we replace all mentions of any of the entity label's words with 'entity'. For example, if the label is 'Joe Biden', we replace any occurrence of 'Joe' and 'Biden' with 'entity'. We then apply tokenization, normalization, and stopword removal on the documents. Finally, we remove entities from the graphs that cannot be associated with a unique document. 
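The name-masking step above can be sketched as follows (a simplified illustration; tokenization, normalization, and stopword removal are omitted, and case-insensitive matching is an assumption of the sketch):

```python
import re

def mask_entity_label(text: str, label: str) -> str:
    """Replace every occurrence of any word of the entity's label with 'entity',
    so that the model cannot exploit explicit entity names."""
    for word in label.split():
        text = re.sub(rf"\b{re.escape(word)}\b", "entity", text, flags=re.IGNORECASE)
    return text

print(mask_entity_label("Joe Biden is the vice president of Barack Obama.", "Joe Biden"))
# -> "entity entity is the vice president of Barack Obama."
```

We then sample k =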
100 entities from each KG and remove them from the respective graph, leaving k surplus documents in both datasets. We then sample another k of the remaining entities in both KGs and add a second document to each of them, again randomly selected from their Wikipedia article and preprocessed in the same way as before. We omit these documents from the known correspondences m. This produces a total of 2 \u2022 k surplus documents: For half of them no entity exists in the KG, the other half of them have existing yet unknown entities in the graph.", "cite_spans": [ { "start": 100, "end": 126, "text": "(Toutanova and Chen, 2015;", "ref_id": "BIBREF28" }, { "start": 127, "end": 150, "text": "Shi and Weninger, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "We then train StarSpace and KADE on the KGs and corpora. As our goal is not to get the best performance out of the embedding models, we use common parameters for these models: for KADE we set \u03bb d = \u03bb g = 0.01, for StarSpace we use the inner product. In both cases, we use embedding vectors of size 100 and we train for 1000 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "We first discuss the results of novelty detection with joint embedding models. We hypothesize that joint embedding models have a higher accuracy in distinguishing novel from unassociated documents than other unsupervised document models. Experiment 1: Novelty classification. To test the novelty detection, we train a boosting decision stumps classifier on the 2 \u2022 k surplus documents, whereas half of them describe a novel entity, half of them describe an existing one. We compare KADE document embeddings to a bag-of-word document representation, and evaluate the classifier in a 10fold cross-validation setting. Table 2 shows the two classifiers' performances in terms of accuracy (overall correct classification), type-I (mistaken as novel) and type-II errors (mistaken as unassociated), precision, and recall. The bag-of-words model achieves near-random accuracy, while KADE embeddings achieve a substantially higher performance. The advantage of KADE is mostly in its lower type-I error rate, which prevents redundancy, i.e. that existing entities are being added a second time to the KG. Table 2 : Performance of classifying documents as describing a novel or existing entity. Higher accuracy, precision, and recall are better, while lower type-I and type-II errors are better.", "cite_spans": [], "ref_spans": [ { "start": 615, "end": 622, "text": "Table 2", "ref_id": null }, { "start": 1095, "end": 1102, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Novelty detection results", "sec_num": "5.2" }, { "text": "In the following, we compare the different triple reconstruction methods and their variations. First, we discuss the triple reconstruction considering only the embedding model's triple loss (TopN). Second, we investigate the impact of the separation into relation and triple prediction with user feedback (UF, UF-s). Third, we compare different combinations of graph features (BM , CO, or both) on their effect on Algorithm 1. Last, we discuss how much effort the different procedures inflict on the user. We evaluate our methods on the binary classification metrics precision and recall. The precision indicates the portion of correct triples out of all proposed triples. 
The recall measures the portion of correctly proposed triples out of all correct triples. We apply our methods on the k novel entities individually and report the averaged metrics. Experiment 2: Joint embedding model. In this experiment, we study how joint embedding models perform in the triple reconstruction task. Specifically, we compare the two joint embedding methods StarSpace and KADE in the TopN setting, and investigate how well the document embedding serves as embedding of the novel entity, as proposed by these models. To test the latter, we train a TransE model on the KG (without the k omitted entities) and derive the embedding of a novel entity e * according to TransE's loss function, i.e. h + r \u2212 e * by using triples from the original KG. We denote this as Oracle, as it computes the optimal entity embedding from ground-truth data. As lower baseline, we use random embeddings (Random). For each entity embedding (from a baseline or joint embedding space), we select triples as specified by TopN. We set N = 10 which gave the best performance in our experiments. We hypothesize that the triple reconstruction performs substantially better with joint embeddings than the Random baseline. Figure 2a shows the precision and recall of TopN with embeddings from the different models. It shows that KADE performs substantially better than StarSpace in both metrics and datasets, meaning that the more constrained document model used by KADE is advantageous in the entity prediction task. As expected, the performance of StarSpace and KADE lies between the two baselines, however compared to the Oracle baseline their performance is unsatisfactory, motivating further improvements. Experiment 3: User feedback. Next, we hypothesise that user feedback increases precision and recall in the triple reconstruction task. For these experiments, we use KADE embeddings, \u03b8 = 500, and M = 10. Since a user study would exceed the scope of this experiment, we provide user feedback by automatically selecting one random correct relation out of the suggested ones. Out of the k entities omitted from the KG, one is selected as e * . The other omitted entities are used to train the classifier clf by constructing features from their triple loss matrices as stated in Section 4.2 and using their triples from the ground-truth KG as training targets. We repeat this procedure for all k entities, and report the averaged evaluation metrics. The Upper baseline knows all true relations of e * , then picks the lowest ranked N triples for each of them, according to S e * . Varying N shifts the trade-off between precision and recall, whereas we use the N that maximizes the F1 score (i.e. the harmonic mean of precision and recall). Note that like Oracle, this baseline is not practically viable as it uses ground-truth information, but rather indicates the maximum performance that could be achieved given the triple loss of the joint embeddings. Figure 2b shows that that the user feedback has a positive impact on at least one of the two evaluation metrics, and that UF-s improves over UF with an average increase of 13.43% in precision and 9.83% in recall. The latter is expected, since UF-s has a guaranteed initial relation. In FB15k-237, the user feedback improves recall by 76.95%, implying that the classifier learns relations present at e * . 
While more relations are considered, the ratio of correctly selected triples does not improve with UF, meaning that the ranking of triples within one relation is about as accurate as the rankings in the full triple loss matrix. In DBP50, the precision increases much more than the recall. In contrast to FB15k-237, this dataset is sparser (about 2.5 triples per vertex), which makes it harder to find relations. However, once a relation is known, the triple scores from the joint embedding model are reliable, i.e. the true triples have low ranks and are thus selected. Experiment 4: Graph statistics. As the last part of our triple reconstruction method we evaluate the combination of user feedback and graph features. We first hypothesize that combining UF and UF-s with either BM or CO increases their precision and recall. Our second hypothesis is that the combination of BM and CO yields an improvement over having only one of them. For this experiment, we use the same setup as in the previous one and apply the BM25 parameters b = 0.75 and k 1 = 2.0. Figure 2c shows the use of the different graph features in UF, UF-s, and the Upper baseline, and includes TopN as lower baseline. Vanilla indicates no graph features were in effect. Note that the Upper baseline is not affected by the CO feature as it does not select relations iteratively.", "cite_spans": [], "ref_spans": [ { "start": 1880, "end": 1889, "text": "Figure 2a", "ref_id": "FIGREF4" }, { "start": 3619, "end": 3628, "text": "Figure 2b", "ref_id": "FIGREF4" }, { "start": 5082, "end": 5091, "text": "Figure 2c", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Triple reconstruction results", "sec_num": "5.3" }, { "text": "On FB15k-237, the BM feature improves the precision by 23.05%, while the CO feature increases the recall by 6.10% (at the cost of slightly lowering the precision). The combination of both features (BM + CO) amplifies these effects, with an improvement of 23.11% in recall and 22.02% in precision. These observations relate to how the features affect the different parts of Algorithm 1: CO helps in finding more relations, hence increases the recall; BM increases the number of retrieved true triples of a given relation, hence increases the precision. These effects are greater on UF-s than on UF because the user provides at least one relation, which in turn allows the graph features to become more effective. On DBP50, the graph features have no significant effect in our methods nor the Upper baseline; instead, the variance is larger than the difference between the methods. We attribute this to the sparsity of the dataset, since it provides too few samples to estimate frequencies accurately and small differences in the triple selection have a huge impact on the precision and recall. Experiment 5: User involvement. To evaluate the user workload, we measure how many relations a user has to judge during the triple reconstruction task, assuming that the user reports the first valid relation they find in each iteration. Figure 2d shows the number of judgements of UF, UF-s, their variations, and an upper baseline. The upper baseline expresses the case where at most one relation is valid in each iteration. 
The lower baseline is 2M = 20 since the user has to review all presented relations at least once.", "cite_spans": [], "ref_spans": [ { "start": 1330, "end": 1339, "text": "Figure 2d", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Triple reconstruction results", "sec_num": "5.3" }, { "text": "It is apparent that UF-s involves more judgements than UF, which comes from two factors: A higher effort to find an initial relation, and more subsequent iterations. We further observe that the number of judgements is substantially lower than the upper baseline in FB15k-237, meaning that it finds multiple valid relations per iteration. On the other hand, it is more difficult to find a valid relation in DBP50, hence the user workload is higher, especially in UF-s. Graph features generally show a positive impact in FB15k-237 and a marginal negative effect in DBP50, in particular the CO feature as it affects which relations are shown to the user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triple reconstruction results", "sec_num": "5.3" }, { "text": "In this paper, we studied the problem of integrating new entities into a KG given their textual description. We exploited joint embeddings to identify entity candidates, and combined information from the joint embedding model with user feedback and graph features to improve the triple reconstruction. Our method solely relies on structural patterns in the data and does not need explicit mentions of entities or relations in the text. Our experiments suggest that joint embeddings are viable methods for entity prediction, and confirm that user feedback and graph features have a substantial impact on the triple reconstruction. In particular, experiments indicate that user feedback, features on relations (CO), and features on entities (BM ) treat different aspects of the problem, making their combination more successful than using only one of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Comparing the results with the upper baselines shows that there is room for improvement. A possible way to fill this gap is to integrate explicit information into the process, e.g. considering the schema or the semantics of the relations or entities. Another approach is to study the incremental addition of new entities and triples: We restore entities and triples independently of each other, however, the restoration provides new information that can be exploited subsequently. Finally, our method could be extended in a straight-forward manner to other external data sources such as images, or to predict novel relations instead of entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In this study we do not distinguish between KG and ontology. We opt for ontology when it is used to refer to a known problem in literature, i.e. ontology population.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In FB15k-237 the average entity only occurs in about 21 out of 302'455 possible triples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgements. 
We thank the Swiss National Science Foundation for their partial support under contract number #407550_167177 and Ralph Bobrik from Swiss Re for his helpful insights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mining association rules between sets of items in large databases", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Imielinski", "suffix": "" }, { "first": "Arun", "middle": [ "N" ], "last": "Swami", "suffix": "" } ], "year": 1993, "venue": "SIGMOD Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakesh Agrawal, Tomasz Imielinski, and Arun N. Swami. 1993. Mining association rules between sets of items in large databases. In SIGMOD Confer- ence.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Aligning knowledge base and document embedding models using regularized multitask learning", "authors": [ { "first": "Matthias", "middle": [], "last": "Baumgartner", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bibek", "middle": [], "last": "Paudel", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Dell'aglio", "suffix": "" }, { "first": "Huajun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Abraham", "middle": [], "last": "Bernstein", "suffix": "" } ], "year": 2018, "venue": "International Semantic Web Conference", "volume": "11136", "issue": "", "pages": "21--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Baumgartner, Wen Zhang, Bibek Paudel, Daniele Dell'Aglio, Huajun Chen, and Abraham Bernstein. 2018. Aligning knowledge base and doc- ument embedding models using regularized multi- task learning. In International Semantic Web Con- ference (1), volume 11136 of Lecture Notes in Com- puter Science, pages 21-37. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garc\u00eda-Dur\u00e1n", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garc\u00eda- Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In NIPS, pages 2787-2795.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Ontology-based information extraction with SOBA", "authors": [ { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" }, { "first": "Stefania", "middle": [], "last": "Racioppa", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Siegel", "suffix": "" } ], "year": 2006, "venue": "LREC", "volume": "", "issue": "", "pages": "2321--2324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Buitelaar, Philipp Cimiano, Stefania Racioppa, and Melanie Siegel. 2006. Ontology-based informa- tion extraction with SOBA. In LREC, pages 2321- 2324. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications", "authors": [ { "first": "Hongyun", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Vincent", "middle": [ "W" ], "last": "Zheng", "suffix": "" }, { "first": "Kevin Chen-Chuan", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "IEEE Trans. Knowl. Data Eng", "volume": "30", "issue": "9", "pages": "1616--1637", "other_ids": {}, "num": null, "urls": [], "raw_text": "HongYun Cai, Vincent W. Zheng, and Kevin Chen- Chuan Chang. 2018a. A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications. IEEE Trans. Knowl. Data Eng., 30(9):1616-1637.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A comprehensive survey of graph embedding: Problems, techniques, and applications", "authors": [ { "first": "Hongyun", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Vincent", "middle": [ "W" ], "last": "Zheng", "suffix": "" }, { "first": "Kevin Chen-Chuan", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "IEEE Trans. Knowl. Data Eng", "volume": "30", "issue": "9", "pages": "1616--1637", "other_ids": {}, "num": null, "urls": [], "raw_text": "HongYun Cai, Vincent W. Zheng, and Kevin Chen- Chuan Chang. 2018b. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Trans. Knowl. Data Eng., 30(9):1616-1637.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Toward an Architecture for Never-Ending Language Learning", "authors": [ { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2010, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell. 2010. Toward an Architecture for Never- Ending Language Learning. In AAAI. AAAI Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "NLDB", "authors": [ { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "V\u00f6lker", "suffix": "" } ], "year": 2005, "venue": "", "volume": "3513", "issue": "", "pages": "227--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Cimiano and Johanna V\u00f6lker. 2005. Text2onto. In NLDB, volume 3513 of Lecture Notes in Com- puter Science, pages 227-238. Springer.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Knowledge vault: a web-scale approach to probabilistic knowledge fusion", "authors": [ { "first": "Xin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Geremy", "middle": [], "last": "Heitz", "suffix": "" }, { "first": "Wilko", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Ni", "middle": [], "last": "Lao", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Strohmann", "suffix": "" }, { "first": "Shaohua", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "KDD", "volume": "", "issue": "", "pages": "601--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowl- edge fusion. 
In KDD, pages 601-610. ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Web-scale information extraction in knowitall: (preliminary results)", "authors": [ { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Kok", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Shaked", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2004, "venue": "WWW", "volume": "", "issue": "", "pages": "100--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Etzioni, Michael J. Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in knowitall: (preliminary results). In WWW, pages 100-110. ACM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Acquisition of semantic knowledge using machine learning methods: The system\" asium", "authors": [ { "first": "David", "middle": [], "last": "Faure", "suffix": "" }, { "first": "Claire", "middle": [], "last": "N\u00e9dellec", "suffix": "" }, { "first": "C\u00e9line", "middle": [], "last": "Rouveirol", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Faure, Claire N\u00e9dellec, and C\u00e9line Rouveirol. 1998. Acquisition of semantic knowledge using ma- chine learning methods: The system\" asium\". In Universite Paris Sud. Citeseer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Ontology and lexicon evolution by text understanding", "authors": [ { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "", "middle": [], "last": "Marko", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ECAI 2002 Workshop on Machine Learning and Natural Language Processing for Ontology Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Udo Hahn and Kornl G Marko. 2002. Ontology and lexicon evolution by text understanding. In Proceed- ings of the ECAI 2002 Workshop on Machine Learn- ing and Natural Language Processing for Ontology Engineering (OLT 2002), Lyon, France.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discovering emerging entities with ambiguous names", "authors": [ { "first": "Johannes", "middle": [], "last": "Hoffart", "suffix": "" }, { "first": "Yasemin", "middle": [], "last": "Altun", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2014, "venue": "WWW", "volume": "", "issue": "", "pages": "385--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Hoffart, Yasemin Altun, and Gerhard Weikum. 2014. Discovering emerging entities with ambiguous names. In WWW, pages 385-396. 
ACM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "ICML", "volume": "32", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In ICML, volume 32 of JMLR Workshop and Conference Pro- ceedings, pages 1188-1196. JMLR.org.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Openie-based approach for knowledge graph construction from text", "authors": [ { "first": "Ivan", "middle": [], "last": "Jos\u00e9-L\u00e1zaro Mart\u00ednez-Rodr\u00edguez", "suffix": "" }, { "first": "Ana", "middle": [ "B" ], "last": "L\u00f3pez-Ar\u00e9valo", "suffix": "" }, { "first": "", "middle": [], "last": "Rios-Alvarado", "suffix": "" } ], "year": 2018, "venue": "Expert Syst. Appl", "volume": "113", "issue": "", "pages": "339--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9-L\u00e1zaro Mart\u00ednez-Rodr\u00edguez, Ivan L\u00f3pez-Ar\u00e9valo, and Ana B. Rios-Alvarado. 2018. Openie-based ap- proach for knowledge graph construction from text. Expert Syst. Appl., 113:339-355.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "ICLR (Workshop)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In ICLR (Workshop).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Never-ending learning", "authors": [ { "first": "Derry", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Wijaya", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Abulhair", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Malcolm", "middle": [], "last": "Saparov", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Greaves", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2018, "venue": "Commun. ACM", "volume": "61", "issue": "5", "pages": "103--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Derry Wijaya, Abhinav Gupta, Xinlei Chen, Abulhair Saparov, Malcolm Greaves, and Joel Welling. 2018. Never-ending learning. Commun. ACM, 61(5):103-115.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tensor factorization for multi-relational learning", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "", "middle": [], "last": "Volker Tresp", "suffix": "" } ], "year": 2013, "venue": "ECML/PKDD (3)", "volume": "8190", "issue": "", "pages": "617--621", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel and Volker Tresp. 2013. Ten- sor factorization for multi-relational learning. 
In ECML/PKDD (3), volume 8190 of Lecture Notes in Computer Science, pages 617-621. Springer.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Industry-scale knowledge graphs: Lessons and challenges", "authors": [ { "first": "Natasha", "middle": [], "last": "Noy", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Anshu", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Anant", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Patterson", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2019, "venue": "Commun. ACM", "volume": "62", "issue": "8", "pages": "36--43", "other_ids": { "DOI": [ "10.1145/3331166" ] }, "num": null, "urls": [], "raw_text": "Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. 2019. Industry-scale knowledge graphs: Lessons and chal- lenges. Commun. ACM, 62(8):36-43.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Simple algorithms for predicate suggestions using similarity and co-occurrence", "authors": [ { "first": "Eyal", "middle": [], "last": "Oren", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gerke", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Decker", "suffix": "" } ], "year": 2007, "venue": "ESWC", "volume": "4519", "issue": "", "pages": "160--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyal Oren, Sebastian Gerke, and Stefan Decker. 2007. Simple algorithms for predicate suggestions using similarity and co-occurrence. In ESWC, volume 4519 of Lecture Notes in Computer Science, pages 160-174. Springer.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic web", "authors": [ { "first": "Heiko", "middle": [], "last": "Paulheim", "suffix": "" } ], "year": 2017, "venue": "", "volume": "8", "issue": "", "pages": "489--508", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Se- mantic web, 8(3):489-508.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Ontology population and enrichment: State of the art. In Knowledge-Driven Multimedia Information Extraction and Ontology Evolution", "authors": [ { "first": "Georgios", "middle": [], "last": "Petasis", "suffix": "" }, { "first": "Vangelis", "middle": [], "last": "Karkaletsis", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Paliouras", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Zavitsanos", "suffix": "" } ], "year": 2011, "venue": "Lecture Notes in Computer Science", "volume": "6050", "issue": "", "pages": "134--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgios Petasis, Vangelis Karkaletsis, Georgios Paliouras, Anastasia Krithara, and Elias Zavitsanos. 2011. Ontology population and enrichment: State of the art. In Knowledge-Driven Multimedia Infor- mation Extraction and Ontology Evolution, volume 6050 of Lecture Notes in Computer Science, pages 134-166. 
Springer.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "BOEMIE: reasoning-based information extraction", "authors": [ { "first": "Georgios", "middle": [], "last": "Petasis", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "M\u00f6ller", "suffix": "" }, { "first": "Vangelis", "middle": [], "last": "Karkaletsis", "suffix": "" } ], "year": 2013, "venue": "NLPAR@LPNMR", "volume": "1044", "issue": "", "pages": "60--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgios Petasis, Ralf M\u00f6ller, and Vangelis Karkalet- sis. 2013. BOEMIE: reasoning-based information extraction. In NLPAR@LPNMR, volume 1044 of CEUR Workshop Proceedings, pages 60-75. CEUR- WS.org.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The probabilistic relevance framework: BM25 and beyond", "authors": [ { "first": "E", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2009, "venue": "Foundations and Trends in Information Retrieval", "volume": "3", "issue": "4", "pages": "333--389", "other_ids": { "DOI": [ "10.1561/1500000019" ] }, "num": null, "urls": [], "raw_text": "Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Foundations and Trends in Information Re- trieval, 3(4):333-389.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Open-world knowledge graph completion", "authors": [ { "first": "Baoxu", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Weninger", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "1957--1964", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In AAAI, pages 1957- 1964. AAAI Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Reasoning with neural tensor networks for knowledge base completion", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "926--934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Advances in neural information processing systems, pages 926-934.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Observed versus latent features for knowledge base and text inference", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality", "volume": "", "issue": "", "pages": "57--66", "other_ids": { "DOI": [ "10.18653/v1/W15-4007" ] }, "num": null, "urls": [], "raw_text": "Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. 
In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Composi- tionality, pages 57-66, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Row-less universal schema", "authors": [ { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2016, "venue": "The Association for Computer Linguistics", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Verga and Andrew McCallum. 2016. Row-less universal schema. In AKBC@NAACL-HLT, pages 63-68. The Association for Computer Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Knowledge graph embedding: A survey of approaches and applications", "authors": [ { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhendong", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2017, "venue": "IEEE Trans. Knowl. Data Eng", "volume": "29", "issue": "12", "pages": "2724--2743", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Trans. Knowl. Data Eng., 29(12):2724-2743.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Starspace: Embed all the things! In AAAI", "authors": [ { "first": "Yu", "middle": [], "last": "Ledell", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "5569--5577", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2018. Starspace: Embed all the things! In AAAI, pages 5569-5577. AAAI Press.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "The goal of entity prediction is to add the entity :Biden and all red edges to the KG.(b) The entity embedding is inferred from the document embedding in a joint embedding space. Triple plausibility is estimated via the KG embedding loss on joint embeddings (lower is better).", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "An example of entity prediction via a joint embedding model.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Impact of user feedback and graph features combined. Marker shapes indicate the user feedback method, their color the graph features.", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Experimental results for FB15k-237 (left plots) and DBP50 (right plots). The red ellipses show the variance. Note that axes have different scales.", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "2b contrasts the UF and UF-s methods with TopN and an upper bound Upper.", "type_str": "figure", "uris": null }, "TABREF1": { "text": "Algorithm 1: UF: Iterative triple reconstruction with user feedback. 
The || symbol denotes list concatenation. Input: Knowledge graph G, proposed entity e*, and its loss matrix S^{e*}. Result: List of proposed triples [(e*, r, t)].", "type_str": "table", "num": null, "content": "
/* Build relation features */
1  features \u2190 [] ;
2  for r \u2208 R do
3      features(r) \u2190 [avg(S^{e*}_r), var(S^{e*}_r), . . . ] ;
4  end
/* Predict initial candidate relations */
5  candidates \u2190 M highest-scoring relations according to clf(features(r)) ;
6  feedback \u2190 let the user select one relation from candidates ;
/* Select triples and iterate */
7  triples \u2190 [] ;
8  while feedback is valid do
9      r \u2190 feedback ;
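The loop body after line 9 was lost in extraction. As an illustration only, the Python sketch below re-implements the listing above and fills the missing part with a plausible completion (taking the k lowest-loss tails for the chosen relation and re-querying the user). The classifier clf, the interactive helper ask_user, and the cutoff k are placeholders, not the authors' implementation.

```python
# Illustrative sketch of the UF procedure; not the authors' code.
import numpy as np

def uf_reconstruct(e_star, S_e, relations, tails, clf, ask_user, M=5, k=10):
    """S_e: loss matrix of the proposed entity e*, shape (|R|, |T|);
    lower values mean more plausible triples (e*, r, t)."""
    # Lines 1-4: per-relation features from the loss matrix (avg, var, ...).
    features = {r: [S_e[i].mean(), S_e[i].var()] for i, r in enumerate(relations)}

    # Line 5: score relations with a pre-trained binary classifier, keep the M best.
    scores = {r: clf.predict_proba([features[r]])[0][1] for r in relations}
    candidates = sorted(relations, key=scores.get, reverse=True)[:M]

    # Line 6: the user picks one relation from the candidates (or None to stop).
    feedback = ask_user(candidates)

    triples = []
    # Lines 7-9: iterate while the feedback is valid; the steps after line 9
    # are assumptions filling the truncated listing.
    while feedback is not None:
        r = feedback
        i = relations.index(r)
        best_tails = np.argsort(S_e[i])[:k]          # k lowest-loss tails
        triples.extend((e_star, r, tails[t]) for t in best_tails)
        candidates = [c for c in candidates if c != r]
        feedback = ask_user(candidates) if candidates else None
    return triples
```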
", "html": null }, "TABREF2": { "text": "", "type_str": "table", "num": null, "content": "
", "html": null }, "TABREF3": { "text": "Dataset sizes", "type_str": "table", "num": null, "content": "", "html": null } } } }