{ "paper_id": "P18-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:38:41.438744Z" }, "title": "Towards Understanding the Geometry of Knowledge Graph Embeddings", "authors": [ { "first": "Aditya", "middle": [], "last": "Sharma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Science", "location": {} }, "email": "adityasharma@iisc.ac.in" }, { "first": "Partha", "middle": [], "last": "Talukdar", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored-we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem.", "pdf_parse": { "paper_id": "P18-1012", "_pdf_hash": "", "abstract": [ { "text": "Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored-we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities. Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others. 
These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City).", "cite_spans": [ { "start": 233, "end": 256, "text": "(Mitchell et al., 2015)", "ref_id": "BIBREF8" }, { "start": 259, "end": 287, "text": "YAGO (Suchanek et al., 2007)", "ref_id": null }, { "start": 303, "end": 327, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016). These methods represent entities and relations in a KG as vectors in a high-dimensional space. These vectors can then be used for various tasks, such as link prediction and entity classification. Starting with TransE (Bordes et al., 2013), there have been many KG embedding methods, such as TransH (Wang et al., 2014), TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016), which represent relations as translation vectors from head entities to tail entities. These are additive models, as the vectors interact via addition and subtraction. Other KG embedding models, such as DistMult (Yang et al., 2014), HolE (Nickel et al., 2016), and ComplEx (Trouillon et al., 2016), are multiplicative, where entity-relation-entity triple likelihood is quantified by a multiplicative score function. All these methods employ a score function for distinguishing correct triples from incorrect ones.", "cite_spans": [ { "start": 144, "end": 165, "text": "(Bordes et al., 2013;", "ref_id": "BIBREF1" }, { "start": 166, "end": 183, "text": "Lin et al., 2015;", "ref_id": "BIBREF4" }, { "start": 184, "end": 204, "text": "Nguyen et al., 2016;", "ref_id": "BIBREF9" }, { "start": 205, "end": 225, "text": "Nickel et al., 2016;", "ref_id": "BIBREF10" }, { "start": 226, "end": 249, "text": "Trouillon et al., 2016)", "ref_id": "BIBREF16" }, { "start": 469, "end": 490, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" }, { "start": 550, "end": 569, "text": "(Wang et al., 2014)", "ref_id": "BIBREF17" }, { "start": 579, "end": 597, "text": "(Lin et al., 2015)", "ref_id": "BIBREF4" }, { "start": 602, "end": 631, "text": "STransE (Nguyen et al., 2016)", "ref_id": null }, { "start": 844, "end": 863, "text": "(Yang et al., 2014)", "ref_id": "BIBREF18" }, { "start": 871, "end": 892, "text": "(Nickel et al., 2016)", "ref_id": "BIBREF10" }, { "start": 907, "end": 931, "text": "(Trouillon et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow. A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings. However, the problem of analyzing the geometry of KG embeddings is still unexplored; we fill this important gap. In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describe their positions and orientations in the vector space. 
We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.", "cite_spans": [ { "start": 154, "end": 180, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings. To the best of our knowledge, this is the first study of its kind. We also formalize various metrics which can be used to study the geometry of a set of vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings. For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights. For example, in the case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length leads to higher performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/kg-geometry. We hope that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings. A recent work (Mimno and Thompson, 2017) is an exception; it addresses this problem in the context of word vectors. This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training. Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings. In spite of this difference, the insights we discover in this paper generalize some of the observations in the work of (Mimno and Thompson, 2017). Please see Section 6.2 for more details.", "cite_spans": [ { "start": 202, "end": 228, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" }, { "start": 660, "end": 686, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since KGs contain only positive triples, negative sampling has been used for training KG embeddings. The effect of the number of negative samples on KG embedding performance was studied by (Toutanova et al., 2015). 
In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.", "cite_spans": [ { "start": 185, "end": 209, "text": "(Toutanova et al., 2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network. Examples of methods in this category include NTN (Socher et al., 2013), CONV (Toutanova et al., 2015), ConvE (Dettmers et al., 2017), R-GCN (Schlichtkrull et al., 2017), ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017). Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.", "cite_spans": [ { "start": 250, "end": 271, "text": "(Socher et al., 2013)", "ref_id": "BIBREF13" }, { "start": 279, "end": 303, "text": "(Toutanova et al., 2015)", "ref_id": "BIBREF15" }, { "start": 312, "end": 335, "text": "(Dettmers et al., 2017)", "ref_id": "BIBREF2" }, { "start": 344, "end": 372, "text": "(Schlichtkrull et al., 2017)", "ref_id": "BIBREF12" }, { "start": 382, "end": 401, "text": "(Dong et al., 2014)", "ref_id": "BIBREF3" }, { "start": 416, "end": 442, "text": "(Ravishankar et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013), TransR (Lin et al., 2015), STransE (Nguyen et al., 2016), DistMult (Yang et al., 2014), HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016). We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training. On the other hand, we refer to DistMult, HolE and ComplEx as multiplicative methods, as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function. The score functions optimized by these methods are summarized in Table 1 . Notation: Let G = (E, R, T) be a Knowledge Graph (KG), where E is the set of entities, R is the set of relations and T \u2282 E \u00d7 R \u00d7 E is the set of triples stored in the graph. Most of the KG embedding methods learn vectors e \u2208 R^{d_e} for e \u2208 E, and r \u2208 R^{d_r} for r \u2208 R. Some methods also learn projection matrices M_r \u2208 R^{d_r \u00d7 d_e} for relations. The correctness of a triple is evaluated using a model-specific score function \u03c3 : E \u00d7 R \u00d7 E \u2192 R. 
For learning the embeddings, a loss function L(T, T'; \u03b8), defined over the set of positive triples T, a set of (sampled) negative triples T', and the parameters \u03b8, is optimized.", "cite_spans": [ { "start": 78, "end": 99, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" }, { "start": 109, "end": 127, "text": "(Lin et al., 2015)", "ref_id": "BIBREF4" }, { "start": 130, "end": 159, "text": "STransE (Nguyen et al., 2016)", "ref_id": null }, { "start": 171, "end": 190, "text": "(Yang et al., 2014)", "ref_id": "BIBREF18" }, { "start": 198, "end": 219, "text": "(Nickel et al., 2016)", "ref_id": "BIBREF10" }, { "start": 232, "end": 256, "text": "(Trouillon et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 757, "end": 764, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Overview of KG Embedding Methods", "sec_num": "3" }, { "text": "We use small italic characters (e.g., h, r) to represent entities and relations, and corresponding bold characters to represent their vector embeddings (e.g., h, r). We use bold capitalization (e.g., V) to represent a set of vectors. Matrices are represented by capital italic characters (e.g., M).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of KG Embedding Methods", "sec_num": "3" }, { "text": "Table 1 (score functions \u03c3(h, r, t) by model type). Additive: TransE (Bordes et al., 2013): \u2212\u2016h + r \u2212 t\u2016_1; TransR (Lin et al., 2015): \u2212\u2016M_r h + r \u2212 M_r t\u2016_1; STransE (Nguyen et al., 2016): \u2212\u2016M_r^1 h + r \u2212 M_r^2 t\u2016_1.", "cite_spans": [ { "start": 7, "end": 28, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" }, { "start": 50, "end": 68, "text": "(Lin et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Additive", "sec_num": null }, { "text": "Multiplicative: DistMult (Yang et al., 2014): r^T (h \u2299 t); HolE (Nickel et al., 2016): r^T (h \u22c6 t); ComplEx (Trouillon et al., 2016): Re(r^T (h \u2299 conj(t))).", "cite_spans": [ { "start": 9, "end": 28, "text": "(Yang et al., 2014)", "ref_id": "BIBREF18" }, { "start": 42, "end": 63, "text": "(Nickel et al., 2016)", "ref_id": "BIBREF10" }, { "start": 80, "end": 104, "text": "(Trouillon et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Multiplicative", "sec_num": null }, { "text": "This is the set of methods where entity and relation vectors interact via additive operations. The score function for these models can be expressed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additive KG Embedding Methods", "sec_num": "3.1" }, { "text": "\u03c3(h, r, t) = \u2212\u2016M_r^1 h + r \u2212 M_r^2 t\u2016_1 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additive KG Embedding Methods", "sec_num": "3.1" }, { "text": "where h, t \u2208 R^{d_e} and r \u2208 R^{d_r} are vectors for the head entity, tail entity and relation respectively. M_r^1, M_r^2 \u2208 R^{d_r \u00d7 d_e} are projection matrices from the entity space R^{d_e} to the relation space R^{d_r}. TransE (Bordes et al., 2013) is the simplest additive model, where the entity and relation vectors lie in the same d-dimensional space, i.e., d_e = d_r = d. The projection matrices M_r^1 = M_r^2 = I_d are identity matrices. The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors. Pairwise ranking loss is then used to learn these vectors. 
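As a concrete illustration of this additive scoring and the ranking objective, consider the following minimal NumPy sketch (our illustration, not the authors' released code); the toy sizes, the margin value, and the uniform tail-corruption scheme for generating a negative sample are assumptions:

import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 20, 50  # toy sizes (assumed)
E = rng.normal(size=(num_entities, dim))  # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    # sigma(h, r, t) = -||h + r - t||_1; higher means more plausible
    return -np.sum(np.abs(E[h] + R[r] - E[t]))

def pairwise_ranking_loss(h, r, t, margin=1.0):
    # corrupt the tail uniformly to obtain one (sampled) negative triple
    t_neg = rng.integers(num_entities)
    return max(0.0, margin + transe_score(h, r, t_neg) - transe_score(h, r, t))

In an actual training loop, this loss would be minimized over all positive triples with gradient updates on E and R; the sketch only shows how the score and the pairwise ranking loss fit together. 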
Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.", "cite_spans": [ { "start": 202, "end": 223, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Additive KG Embedding Methods", "sec_num": "3.1" }, { "text": "TransR (Lin et al., 2015) is another translation-based model which uses separate spaces for entity and relation vectors, allowing it to address the shortcomings of TransE. Entity vectors are projected into a relation-specific space using the corresponding projection matrix M_r^1 = M_r^2 = M_r. The training is similar to TransE. STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors. The training is similar to TransE. STransE achieves better performance than the previous methods, but at the cost of a larger number of parameters.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Lin et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Additive KG Embedding Methods", "sec_num": "3.1" }, { "text": "Equation 1 is the score function used in STransE. TransE and TransR are special cases of STransE with M_r^1 = M_r^2 = I_d and M_r^1 = M_r^2 = M_r, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additive KG Embedding Methods", "sec_num": "3.1" }, { "text": "This is the set of methods where the vectors interact via multiplicative operations (usually dot product). The score function for these models can be expressed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3(h, r, t) = r^T f(h, t)", "eq_num": "(2)" } ], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "where h, t, r \u2208 F^d are vectors for the head entity, tail entity and relation respectively. f(h, t) \u2208 F^d measures the compatibility of head and tail entities and is specific to the model. F is either the real space R or the complex space C. Detailed descriptions of the models we consider are as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "DistMult (Yang et al., 2014) models entities and relations as vectors in R^d. 
It uses an entry-wise product (\u2299) to measure compatibility between head and tail entities, while using logistic loss for training the model.", "cite_spans": [ { "start": 9, "end": 27, "text": "(Yang et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3_DistMult(h, r, t) = r^T (h \u2299 t)", "eq_num": "(3)" } ], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and anti-symmetric relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "HolE (Nickel et al., 2016) also models entities and relations as vectors in R^d. It uses the circular correlation operator (\u22c6) as the compatibility function, defined as", "cite_spans": [ { "start": 5, "end": 26, "text": "(Nickel et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "[h \u22c6 t]_k = \u2211_{i=0}^{d\u22121} h_i t_{(k+i) mod d}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "The score function is given as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3_HolE(h, r, t) = r^T (h \u22c6 t)", "eq_num": "(4)" } ], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "The circular correlation operator, being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity (O(d log d)). [Figure 1 caption, continued: We skipped very low values of Conicity as it was difficult to visualize. The points are sampled from a 3d spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right). Please refer to Section 4 for more details.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "For training, we use pairwise ranking loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C^d. The compatibility of entity pairs is measured using the entry-wise product between the head entity vector and the complex conjugate of the tail entity vector.", "cite_spans": [ { "start": 8, "end": 32, "text": "(Trouillon et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "\u03c3_ComplEx(h, r, t) = Re(r^T (h \u2299 conj(t))) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function. 
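To make the three multiplicative score functions concrete, here is a small NumPy sketch (ours, for illustration only, not the authors' or the original papers' code); computing circular correlation via the FFT is a standard identity consistent with the O(d log d) complexity noted above, and the complex-valued inputs for ComplEx are an assumption about how the vectors are stored:

import numpy as np

def distmult_score(h, r, t):
    # sigma(h, r, t) = r^T (h entrywise-times t); symmetric in h and t
    return np.dot(r, h * t)

def hole_score(h, r, t):
    # circular correlation via FFT: [h corr t]_k = sum_i h_i * t_((k+i) mod d)
    corr = np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real
    return np.dot(r, corr)

def complex_score(h, r, t):
    # h, r, t are complex-valued; sigma = Re(sum_j r_j * h_j * conj(t_j))
    return np.real(np.sum(r * h * np.conj(t)))
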
Similar to DistMult, logistic loss is used for training the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiplicative KG Embedding Methods", "sec_num": "3.2" }, { "text": "For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "ATM(v, V) = cosine(v, (1/|V|) \u2211_{x\u2208V} x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "We also define the 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at the origin. In other words, the vectors in the set V are highly aligned with each other. In addition to that, we define the variance of ATM across all vectors in V as the 'vector spread' (VS) of the set V:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "Conicity(V) = (1/|V|) \u2211_{v\u2208V} ATM(v, V). Footnote 1: cosine(u, v) = u^T v / (\u2016u\u2016 \u2016v\u2016).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "VS(V) = (1/|V|) \u2211_{v\u2208V} (ATM(v, V) \u2212 Conicity(V))^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "Figure 1 visually demonstrates these metrics for randomly generated 3-dimensional points. The left figure shows high Conicity and low vector spread, while the right figure shows low Conicity and high vector spread. We define the length of a vector v as the L_2-norm \u2016v\u2016_2 of the vector, and the 'average vector length' (AVL) for the set of vectors V as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "AVL(V) = (1/|V|) \u2211_{v\u2208V} \u2016v\u2016_2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4" }, { "text": "Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995), called FB15k and WN18 (Bordes et al., 2013), respectively. We detail the characteristics of these datasets in Table 2. Please note that while the results presented in Section 6 are on the FB15k dataset, we reach the same conclusions on WN18. The plots for our experiments on WN18 can be found in the Supplementary Section. Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings. Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}. For more details on algorithm-specific hyperparameters, we refer the reader to the Supplementary Section. 
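As a concrete reference for the analysis that follows, here is a minimal NumPy sketch of the Section 4 metrics (our illustration; the released analysis tools may differ), assuming the sampled vectors are stored as rows of a 2-d array:

import numpy as np

def atm(v, V):
    # alignment to mean: cosine similarity between v and the mean vector of V
    m = V.mean(axis=0)
    return np.dot(v, m) / (np.linalg.norm(v) * np.linalg.norm(m))

def conicity(V):
    # mean ATM over all vectors in V
    return float(np.mean([atm(v, V) for v in V]))

def vector_spread(V):
    # variance of ATM around the conicity value
    atms = np.array([atm(v, V) for v in V])
    return float(np.mean((atms - atms.mean()) ** 2))

def avg_vector_length(V):
    # AVL: mean L2 norm of the vectors in V
    return float(np.linalg.norm(V, axis=1).mean())
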
2 For training, we used code from https://github.com/Mrlyk423/Relation_Extraction (TransE, TransR), https://github.com/datquocnguyen/STransE (STransE), https://github.com/mnick/holographic-embeddings (HolE) and https://github.com/ttrouill/complex (ComplEx and DistMult).", "cite_spans": [ { "start": 88, "end": 112, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF0" }, { "start": 117, "end": 139, "text": "WordNet (Miller, 1995)", "ref_id": null }, { "start": 164, "end": 185, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 253, "end": 260, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We follow (Mimno and Thompson, 2017) for the entity and relation samples used in the analysis. Multiple bins of entities and relations are created based on their frequencies, and 100 randomly sampled vectors are taken from each bin. These sets of sampled vectors are then used for our analysis. For more information about sampling vectors, please refer to (Mimno and Thompson, 2017).", "cite_spans": [ { "start": 350, "end": 376, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Frequency Bins:", "sec_num": null }, { "text": "In this section, we evaluate the following questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "\u2022 Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings? (Section 6.1) [Figure 3 caption, continued: Figure 2. Main findings from these plots are summarized in Section 6.1.]", "cite_spans": [], "ref_spans": [ { "start": 339, "end": 347, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "\u2022 Does negative sampling have any effect on the embedding geometry? (Section 6.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "\u2022 Does the dimension of embedding have any effect on its geometry? (Section 6.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "\u2022 How is task performance related to embedding geometry? (Section 6.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "Summary of Findings: Additive: Low conicity and high vector spread. Multiplicative: High conicity and low vector spread.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Model Type on Geometry", "sec_num": "6.1" }, { "text": "In this section, we explore whether the type of the score function optimized during training has any effect on the geometry of the resulting embedding. For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors). Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively. 3 Entity Embeddings: As seen in Figure 2, there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models. 
The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread. Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector. This is in contrast to the entity vectors obtained from additive models, whose ATMs are both positive and negative, with higher vector spread. From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space. This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models. We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins. However, no such pattern was observed for multiplicative models, for which conicity was consistently similar across frequency bins. Relation Embeddings: As with entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3. The conicity of relation vectors generated using additive models is almost zero across frequency bands. This, coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space. Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at the origin, like their entity counterparts.", "cite_spans": [], "ref_spans": [ { "start": 305, "end": 313, "text": "Figure 2", "ref_id": null }, { "start": 318, "end": 326, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 449, "end": 457, "text": "Figure 2", "ref_id": null }, { "start": 1587, "end": 1593, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Effect of Model Type on Geometry", "sec_num": "6.1" }, { "text": "Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations. Multiplicative: Conicity increases while average vector length decreases with increasing #NegativeSamples for entities. Conicity decreases, while average vector length remains constant (except HolE), for relations. For experiments in this section, we keep the vector dimension constant at 100. Entity Embeddings: As seen in Figure 4 (left), the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models. In contrast, the conicity of the entity vectors generated by additive models is unaffected by changes in the number of negative samples, and they continue to be dispersed throughout the vector space. From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant to any changes in the number of negative samples. On the other hand, an increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE. The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016). This constraint is also enforced by the additive models: TransE, TransR, and STransE. Relation Embeddings: Similar to entity embeddings, in the case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples. However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling. 
The average relation vector length is invariant for all multiplicative methods, except for HolE. We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that. Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of the number of negative samples on the geometry of relation vectors.", "cite_spans": [ { "start": 1270, "end": 1291, "text": "(Nickel et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 459, "end": 474, "text": "Figure 4 (left)", "ref_id": "FIGREF4" }, { "start": 787, "end": 795, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Effect of Number of Negative Samples on Geometry", "sec_num": "6.2" }, { "text": "We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths. It is interesting to note that we see exactly these effects in the geometry of the multiplicative methods analyzed above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Number of Negative Samples on Geometry", "sec_num": "6.2" }, { "text": "Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017), where an increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method. On the other hand, additive models remain unaffected by these changes. SGNS tries to maximize a score function of the form w^T c for positive word-context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013). This is very similar to the score function of multiplicative models as seen in Table 1. Hence, SGNS can be considered as a multiplicative model in the word domain.", "cite_spans": [ { "start": 128, "end": 154, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" }, { "start": 522, "end": 544, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 626, "end": 633, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Correlation with Geometry of Word Embeddings", "sec_num": "6.2.1" }, { "text": "Hence, we argue that our result, namely that increasing the number of negative samples increases the conicity of vectors trained using a multiplicative score function, can be considered a generalization of the one in (Mimno and Thompson, 2017).", "cite_spans": [ { "start": 201, "end": 227, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Correlation with Geometry of Word Embeddings", "sec_num": "6.2.1" }, { "text": "Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations. Multiplicative: Conicity decreases for both entities and relations with increasing dimension. Average vector length increases for both entities and relations, except for HolE entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Vector Dimension on Geometry", "sec_num": "6.3" }, { "text": "Entity Embeddings: To study the effect of vector dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension. 
From Figure 5 (left), we observe that the conicity of entity vectors generated by any additive model is almost invariant to increases in dimension, though STransE exhibits a slight decrease. In contrast, entity vectors from multiplicative models show a clear decreasing pattern with increasing dimension.", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Effect of Vector Dimension on Geometry", "sec_num": "6.3" }, { "text": "As seen in Figure 5 (right), the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE. In the case of HolE, the average vector length remains constant at one. This deviation for HolE is expected, as it enforces entity vectors to fall within a unit ball. Similar constraints are enforced on entity vectors for additive models as well. Thus, the average entity vector lengths are not affected by increasing vector dimension for any of the additive models.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 27, "text": "Figure 5 (right)", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Effect of Vector Dimension on Geometry", "sec_num": "6.3" }, { "text": "We reach a similar conclusion when analyzing how the geometry of relation vectors produced by these KG embedding methods changes with increasing dimension. In this setting, the average length of relation vectors learned by HolE also increases as the dimension is increased. This is consistent with the other methods in the multiplicative family. This is because, unlike entity vectors, the lengths of HolE's relation vectors are not constrained to be less than unit length. Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Embeddings:", "sec_num": null }, { "text": "Summary of Findings: Additive: Neither entities nor relations exhibit correlation between geometry and performance. Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance. No relationship for relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relating Geometry to Performance", "sec_num": "6.4" }, { "text": "In this section, we analyze the relationship between geometry and performance on the link prediction task, using the same setting as in (Bordes et al., 2013). Figure 6 (left) presents the effects of the conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length. 4 As we see from Figure 6 (left), for a fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance. This performance gain is larger for higher numbers of negative samples (N). Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in line with our observations in previous sections. In Figure 6 (right), for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed. 
Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.", "cite_spans": [ { "start": 136, "end": 157, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF1" }, { "start": 317, "end": 318, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 160, "end": 168, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 249, "end": 257, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 334, "end": 342, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 750, "end": 766, "text": "Figure 6 (right)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Relating Geometry to Performance", "sec_num": "6.4" }, { "text": "The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relating Geometry to Performance", "sec_num": "6.4" }, { "text": "We make another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training. Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in the case of HolE. Figure 6 (right) shows that the average entity vector length of HolE is always one. These two observations point towards HolE's entity vectors lying in a tiny part of the space. This translates to HolE performing worse than all other models when a high number of negative samples is used.", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 251, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Relating Geometry to Performance", "sec_num": "6.4" }, { "text": "We also did a similar study for relation vectors, but did not see any discernible patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relating Geometry to Performance", "sec_num": "6.4" }, { "text": "In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing the geometry of various Knowledge Graph (KG) embedding methods. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on multiple real-world datasets, we are able to identify several insights into the geometry of KG embeddings. We have also explored the relationship between KG embedding geometry and its task performance. We have shared all our source code to foster further research in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We also tried using the global mean instead of the mean of the sampled set for calculating cosine similarity in ATM, and got very similar results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A more focused analysis for multiplicative models is presented in Section 3 of the Supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their constructive comments. 
This work is supported in part by the Ministry of Human Resources Development (Government of India), Intel, Intuit, and by gifts from Google and Accenture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "Kurt", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247-1250.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garcia-Duran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems. pages 2787-2795.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convolutional 2D Knowledge Graph Embeddings", "authors": [ { "first": "T", "middle": [], "last": "Dettmers", "suffix": "" }, { "first": "P", "middle": [], "last": "Minervini", "suffix": "" }, { "first": "P", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "S", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel. 2017. Convolutional 2D Knowledge Graph Embed- dings. 
ArXiv e-prints .", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Knowledge vault: A web-scale approach to probabilistic knowledge fusion", "authors": [ { "first": "Xin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Geremy", "middle": [], "last": "Heitz", "suffix": "" }, { "first": "Wilko", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Ni", "middle": [], "last": "Lao", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Strohmann", "suffix": "" }, { "first": "Shaohua", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "601--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowl- edge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 601-610.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning entity and relation embeddings for knowledge graph completion", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "AAAI", "volume": "", "issue": "", "pages": "2181--2187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In AAAI. pages 2181-2187.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. 
Communications of the ACM 38(11):39- 41.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The strange geometry of skip-gram with negative sampling", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Laure", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2863--2868", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2863-2868.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Never-ending learning", "authors": [ { "first": "T", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "W", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "E", "middle": [], "last": "Hruschka", "suffix": "" }, { "first": "P", "middle": [], "last": "Talukdar", "suffix": "" }, { "first": "J", "middle": [], "last": "Betteridge", "suffix": "" }, { "first": "A", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "B", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "M", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "B", "middle": [], "last": "Kisiel", "suffix": "" }, { "first": "J", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "N", "middle": [], "last": "Lao", "suffix": "" }, { "first": "K", "middle": [], "last": "Mazaitis", "suffix": "" }, { "first": "T", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "N", "middle": [], "last": "Nakashole", "suffix": "" }, { "first": "E", "middle": [], "last": "Platanios", "suffix": "" }, { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "M", "middle": [], "last": "Samadi", "suffix": "" }, { "first": "B", "middle": [], "last": "Settles", "suffix": "" }, { "first": "R", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Wijaya", "suffix": "" }, { "first": "A", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "X", "middle": [], "last": "Chen", "suffix": "" }, { "first": "A", "middle": [], "last": "Saparov", "suffix": "" }, { "first": "M", "middle": [], "last": "Greaves", "suffix": "" }, { "first": "J", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2015, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Bet- teridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. 
In Proceedings of AAAI.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Stranse: a novel embedding model of entities and relationships in knowledge bases", "authors": [ { "first": "Kairit", "middle": [], "last": "Dat Quoc Nguyen", "suffix": "" }, { "first": "Lizhen", "middle": [], "last": "Sirts", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Qu", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "460--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. Stranse: a novel embedding model of entities and relationships in knowledge bases. In Proceedings of NAACL-HLT. pages 460-466.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Holographic embeddings of knowledge graphs", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "Lorenzo", "middle": [], "last": "Rosasco", "suffix": "" }, { "first": "Tomaso", "middle": [ "A" ], "last": "Poggio", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016. Holographic embeddings of knowl- edge graphs. In AAAI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Revisiting simple neural networks for learning representations of knowledge graphs", "authors": [ { "first": "Srinivas", "middle": [], "last": "Ravishankar", "suffix": "" }, { "first": "Chandrahas", "middle": [], "last": "", "suffix": "" }, { "first": "Partha", "middle": [ "Pratim" ], "last": "Talukdar", "suffix": "" } ], "year": 2017, "venue": "6th Workshop on Automated Knowledge Base Construction (AKBC) at NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Ravishankar, Chandrahas, and Partha Pratim Talukdar. 2017. Revisiting simple neural networks for learning representations of knowledge graphs. 6th Workshop on Automated Knowledge Base Con- struction (AKBC) at NIPS 2017 .", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Modeling Relational Data with Graph Convolutional Networks", "authors": [ { "first": "M", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "T", "middle": [ "N" ], "last": "Kipf", "suffix": "" }, { "first": "P", "middle": [], "last": "Bloem", "suffix": "" }, { "first": "R", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "I", "middle": [], "last": "Berg", "suffix": "" }, { "first": "M", "middle": [], "last": "Titov", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling. 2017. Modeling Relational Data with Graph Convolutional Networks. 
ArXiv e- prints .", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Reasoning with neural tensor networks for knowledge base completion", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "926--934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Ad- vances in Neural Information Processing Systems. pages 926-934.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Yago: a core of semantic knowledge", "authors": [ { "first": "M", "middle": [], "last": "Fabian", "suffix": "" }, { "first": "Gjergji", "middle": [], "last": "Suchanek", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Kasneci", "suffix": "" }, { "first": "", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In WWW.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Representing Text for Joint Embedding of Text and Knowledge Bases", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pallavi", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2015, "venue": "Empirical Methods in Natural Language Processing (EMNLP). ACL Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoi- fung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing Text for Joint Embedding of Text and Knowledge Bases. In Empirical Methods in Natural Language Processing (EMNLP). ACL Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Complex embeddings for simple link prediction", "authors": [ { "first": "Th\u00e9o", "middle": [], "last": "Trouillon", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2016, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. 
In ICML.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Knowledge graph embedding by translating on hyperplanes", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "AAAI. Citeseer", "volume": "", "issue": "", "pages": "1112--1119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In AAAI. Citeseer, pages 1112-1119.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Embedding entities and relations for learning and inference in knowledge bases", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6575" ] }, "num": null, "urls": [], "raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 .", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Comparison of high vs low Conicity. Randomly generated vectors are shown in blue with their sample mean vector M in black. Figure on the left shows the case when vectors lie in narrow cone resulting in high Conicity value. Figure on the right shows the case when vectors are spread out having relatively lower Conicity value.", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Alignment to Mean (ATM) vs Density plots for relation embeddings learned by various additive (top row) and multiplicative (bottom row) KG embedding methods. For each method, a plot averaged across entity frequency bins is shown. Trends in these plots are similar to those in", "uris": null }, "FIGREF4": { "num": null, "type_str": "figure", "text": "Conicity (left) and Average Vector Length (right) vs Number of negative samples for entity vectors learned using various KG embedding methods. In each bar group, first three models are additive, while the last three are multiplicative. Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins. For clarity, we have not shown different plots for individual frequency bins.", "uris": null }, "FIGREF5": { "num": null, "type_str": "figure", "text": "Conicity (left) and Average Vector Length (right) vs Number of Dimensions for entity vectors learned using various KG embedding methods. In each bar group, first three models are additive, while the last three are multiplicative. Main findings from these plots are summarized in Section 6.3. analyzed above.", "uris": null }, "FIGREF6": { "num": null, "type_str": "figure", "text": "Relationship between Performance (HITS@10) on a link prediction task vs Conicity (left) and Avg. Vector Length (right). For each point, N represents the number of negative samples used. 
Main findings are summarized in Section 6.4.", "uris": null }, "TABREF0": { "type_str": "table", "text": "Summary of various Knowledge Graph (KG) embedding methods used in the paper. Please see Section 3 for more details.", "html": null, "num": null, "content": "" }, "TABREF2": { "type_str": "table", "text": "Summary of datasets used in the paper.", "html": null, "num": null, "content": "
" } } } }