{ "paper_id": "P18-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:37:25.506325Z" }, "title": "Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": {} }, "email": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": {} }, "email": "xiangl@cs.umass.edu" }, { "first": "\u21e4", "middle": [ "Shikhar" ], "last": "Murty", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": {} }, "email": "smurty@cs.umass.edu" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": {} }, "email": "mccallum@cs.umass.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.", "pdf_parse": { "paper_id": "P18-1025", "_pdf_hash": "", "abstract": [ { "text": "Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. 
In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Structured embeddings based on regions, densities, and orderings have gained popularity in recent years for their inductive bias towards the essential asymmetries inherent in problems such as image captioning (Vendrov et al., 2016) , lexical and textual entailment (Erk, 2009; Vilnis and McCallum, 2015; Lai and Hockenmaier, 2017; Athiwaratkun and Wilson, 2018) , and knowledge graph completion and reasoning (He et al., 2015; Nickel and Kiela, 2017; Li et al., 2017) .", "cite_spans": [ { "start": 209, "end": 231, "text": "(Vendrov et al., 2016)", "ref_id": "BIBREF18" }, { "start": 265, "end": 276, "text": "(Erk, 2009;", "ref_id": "BIBREF5" }, { "start": 277, "end": 303, "text": "Vilnis and McCallum, 2015;", "ref_id": "BIBREF19" }, { "start": 304, "end": 330, "text": "Lai and Hockenmaier, 2017;", "ref_id": "BIBREF12" }, { "start": 331, "end": 361, "text": "Athiwaratkun and Wilson, 2018)", "ref_id": "BIBREF0" }, { "start": 409, "end": 426, "text": "(He et al., 2015;", "ref_id": "BIBREF9" }, { "start": 427, "end": 450, "text": "Nickel and Kiela, 2017;", "ref_id": "BIBREF15" }, { "start": 451, "end": 467, "text": "Li et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Models that easily encode asymmetry, and related properties such as transitivity (the two components of commonplace relations such as partially ordered sets and lattices), have great utility in these applications, leaving less to be learned from the data than arbitrary relational models. At their best, they resemble a hybrid between embedding models and structured prediction. As noted by Vendrov et al. (2016) and Li et al. (2017) , while the models learn sets of embeddings, these parameters obey rich structural constraints. The entire set can be thought of as one, sometimes provably consistent, structured prediction, such as an ontology in the form of a single directed acyclic graph.", "cite_spans": [ { "start": 391, "end": 412, "text": "Vendrov et al. (2016)", "ref_id": "BIBREF18" }, { "start": 417, "end": 433, "text": "Li et al. (2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the structured prediction analogy applies best to Order Embeddings (OE), which embeds consistent partial orders, other region-and density-based representations have been proposed for the express purpose of inducing a bias towards asymmetric relationships. For example, the Gaussian Embedding (GE) model (Vilnis and Mc-Callum, 2015) aims to represent the asymmetry and uncertainty in an object's relations and attributes by means of uncertainty in the representation. 
However, while the space of representations is a manifold of probability distributions, the model is not truly probabilistic in that it does not model asymmetries and relations in terms of prob-abilities, but in terms of asymmetric comparison functions such as the originally proposed KL divergence and the recently proposed thresholded divergences (Athiwaratkun and Wilson, 2018) .", "cite_spans": [ { "start": 309, "end": 337, "text": "(Vilnis and Mc-Callum, 2015)", "ref_id": null }, { "start": 822, "end": 853, "text": "(Athiwaratkun and Wilson, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Probabilistic models are especially compelling for modeling ontologies, entailment graphs, and knowledge graphs. Their desirable properties include an ability to remain consistent in the presence of noisy data, suitability towards semisupervised training using the expectations and uncertain labels present in these large-scale applications, the naturality of representing the inherent uncertainty of knowledge they store, and the ability to answer complex queries involving more than 2 variables. Note that the final one requires a true joint probabilistic model with a tractable inference procedure, not something provided by e.g. matrix factorization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We take the dual approach to density-based embeddings and model uncertainty about relationships and attributes as explicitly probabilistic, while basing the probability on a latent space of geometric objects that obey natural structural biases for modeling transitive, asymmetric relations. The most similar work are the probabilistic order embeddings (POE) of Lai (Lai and Hockenmaier, 2017) , which apply a probability measure to each order embedding's forward cone (the set of points greater than the embedding in each dimension), assigning a finite and normalized volume to the unbounded space. However, POE suffers severe limitations as a probabilistic model, including an inability to model negative correlations between concepts, which motivates the construction of our box lattice model. Our model represents objects, concepts, and events as high-dimensional products-of-intervals (hyperrectangles or boxes), with an event's unary probability coming from the box volume and joint probabilities coming from overlaps. This contrasts with POE's approach of defining events as the forward cones of vectors, extending to infinity, integrated under a probability measure that assigns them finite volume.", "cite_spans": [ { "start": 365, "end": 392, "text": "(Lai and Hockenmaier, 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One desirable property of a structured representation for ordered data, originally noted in (Vendrov et al., 2016) is a \"slackness\" shared by OE, POE, and our model: when the model predicts an \"edge\" or lack thereof (i.e. P (a|b) = 0 or 1, or a zero constraint violation in the case of OE), being exposed to that fact again will not update the model. Moreover, there are large degrees of freedom in parameter space that exhibit this slackness, giving the model the ability to embed complex structure with 0 loss when compared to models based on symmetric inner products or distances between embeddings, e.g. 
bilinear GLMs (Collins et al., 2002) , Trans-E (Bordes et al., 2013) , and other embedding models which must always be pushing and pulling parameters towards and away from each other.", "cite_spans": [ { "start": 622, "end": 644, "text": "(Collins et al., 2002)", "ref_id": "BIBREF4" }, { "start": 655, "end": 676, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments demonstrate the power of our approach to probabilistic ordering-biased relational modeling. First, we investigate an instructive 2-dimensional toy dataset that both demonstrates the way the model self organizes its box event space, and enables sensible answers to queries involving arbitrary numbers of variables, despite being trained on only pairwise data. We achieve a new state of the art in denotational probability modeling on the Flickr entailment dataset (Lai and Hockenmaier, 2017) , and a matching state-of-the-art on WordNet hypernymy (Vendrov et al., 2016; Miller, 1995) with the concurrent work on thresholded Gaussian embedding of Athiwaratkun and Wilson (2018), achieving our best results by training on additional co-occurrence expectations aggregated from leaf types.", "cite_spans": [ { "start": 479, "end": 506, "text": "(Lai and Hockenmaier, 2017)", "ref_id": "BIBREF12" }, { "start": 562, "end": 584, "text": "(Vendrov et al., 2016;", "ref_id": "BIBREF18" }, { "start": 585, "end": 598, "text": "Miller, 1995)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We find that the strong empirical performance of probabilistic ordering models, and our box lattice model in particular, and their endowment of new forms of training and querying, make them a promising avenue for future research in representing structured knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to the related work in structured embeddings mentioned in the introduction, our focus on directed, transitive relational modeling and ontology induction shares much with the rich field of directed graphical models and causal modeling (Pearl, 1988) , as well as learning the structure of those models (Heckerman et al., 1995) . 
Work in undirected structure learning such the Graphical Lasso (Friedman et al., 2008) is also relevant due to our desire to learn from pairwise joint/conditional probabilities and moment matrices, which are closely related in the setting of discrete variables.", "cite_spans": [ { "start": 246, "end": 259, "text": "(Pearl, 1988)", "ref_id": "BIBREF16" }, { "start": 312, "end": 336, "text": "(Heckerman et al., 1995)", "ref_id": "BIBREF10" }, { "start": 402, "end": 425, "text": "(Friedman et al., 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Especially relevant research in Bayesian networks are applications towards learning taxonomic structure of relational data (Bansal et al., 2014) , although this work is often restricted towards tree-shaped ontologies, which allow efficient inference by Chu-Liu-Edmonds' algorithm (Chu and Liu, 1995) , while we focus on arbitrary DAGs.", "cite_spans": [ { "start": 123, "end": 144, "text": "(Bansal et al., 2014)", "ref_id": "BIBREF1" }, { "start": 280, "end": 299, "text": "(Chu and Liu, 1995)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As our model is based on populating a latent \"event space\" into boxes (products of intervals), it is especially reminiscent of the Mondrian process (Roy and Teh, 2009) . However, the Mondrian process partitions the space as a high dimensional tree (a non-parametric kd-tree), while our model allows the arbitrary box placement required for DAG structure, and is much more tractable in high dimensions compared to the Mondrian's Bayesian non-parametric inference.", "cite_spans": [ { "start": 148, "end": 167, "text": "(Roy and Teh, 2009)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Embedding applications to relational learning constitute a huge field to which it is impossible to do justice, but one general difference between our approaches is that e.g. a matrix factorization model treats the embeddings as objects to score relation links with, as opposed to POE or our model in which embeddings represent subsets of probabilistic event space which are directly integrated. They are full probabilistic models of the joint set of variables, rather than embedding-based approximations of only low-order joint and conditional probabilities. 
That is, any set of our parameters can answer any arbitrary probabilistic question (possibly requiring intractable computation), rather than being fixed to modeling only certain subsets of the joint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Embedding-based learning's large advantage over the combinatorial structure learning presented by classical PGM approaches is its applicability to large-scale probability distributions containing hundreds of thousands of events or more, as in both our WordNet and Flickr experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A non-strict partial ordered set (poset) is a set P equipped with a binary relation such that for all a, b, c 2 P ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "\u2022 a a (reflexivity)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "\u2022 a b a implies a = b (antisymmetry)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "\u2022 a b c implies a c (transitivity) This is simply a generalization of a totally ordered set that allows some elements to be incomparable, and is a good model for the kind of acyclic directed graph data found in knowledge bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "A lattice is a poset where any subset has a a unique least upper and greatest lower bound, which will be true of all posets (lattices) considered in this paper. The least upper bound of two elements a, b 2 P is called the join, denoted a _ b, and the greatest lower bound is called the meet, denoted a^b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "Additionally, in a bounded lattice we have two extra elements, called top, denoted > and bottom, denoted ?, which are respectively the least upper bound and greatest lower bound of the entire space. Using the extended real number line (adding points at infinity), all lattices considered in this paper are bounded lattices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Orders and Lattices", "sec_num": "3.1" }, { "text": "Vendrov et al. 2016introduced a method for embedding partially ordered sets and a task, partial order completion, an abstract term for things like hypernym or entailment prediction (learning transitive relations). The goal is to learn a mapping from the partially-ordered data domain to some other partially-ordered space that will enable generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "Definition 1. Vendrov et al. (2016) A function f : (X, X ) ! 
(Y, Y ) is an order- embedding if for all u, v 2 X u X v () f (u) Y f (v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "They choose Y to be a vector space, and the order Y to be based on the reverse product order on R n + , which specifies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "x y () 8i 2 {1..n}, x i y i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "so an embedding is below another in the hierarchy if all of the coordinates are larger, and 0 provides a top element. Although Vendrov et al. (2016) do not explicitly discuss it, their model does not just capture partial orderings, but is a standard construction of a vector (Hilbert) lattice, in which the operations of meet and join can be defined as taking the pointwise maximum and minimum of two vectors, respectively (Zaanen, 1997) . This observation is also used in (Li et al., 2017) to generate extra constraints for training order embeddings.", "cite_spans": [ { "start": 127, "end": 148, "text": "Vendrov et al. (2016)", "ref_id": "BIBREF18" }, { "start": 423, "end": 437, "text": "(Zaanen, 1997)", "ref_id": "BIBREF20" }, { "start": 473, "end": 490, "text": "(Li et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "As noted in the original work, these single point embeddings can be thought of as regions, i.e. the cone extending out from the vector towards infinity. All concepts \"entailed\" by a given concept must lie in this cone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "This ordering is optimized from examples of ordered elements and negative samples via a maxmargin loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Embeddings (OE)", "sec_num": "3.2" }, { "text": "Lai and Hockenmaier (2017) built on the \"region\" idea to derive a probabilistic formulation (which we will refer to as POE) to model entailment probabilities in a consistent, hierarchical way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "Noting that all of OE's regions obviously have the same infinite area under the standard (Lebesgue) measure of R n + , they propose a probabilistic interpretation where the Bernoulli probability of each concept a or joint set of concepts {a, b} with corresponding vectors {x, y} is given by its volume under the exponential measure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "p(a) = exp( X i x i ) = Z z x exp( kzk 1 )dz p(a, b) = p(x^y) = exp( k max(x i , y i )k 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "since the meet of two vectors is simply the intersection of their area cones, and replacing sums with`1 norms for brevity since all coordinates are positive. 
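As a quick illustration (a minimal sketch under the stated exponential measure, not the authors' released code), these POE probabilities can be computed directly from nonnegative embedding vectors:

```python
import numpy as np

def poe_prob(x):
    # p(a): volume of the forward cone of x under the coordinate-wise
    # exponential measure, i.e. exp(-sum_i x_i), for x >= 0.
    return np.exp(-np.sum(x))

def poe_joint(x, y):
    # p(a, b): the meet of two cones is the cone of the coordinate-wise
    # maximum, so its volume is exp(-sum_i max(x_i, y_i)).
    return np.exp(-np.sum(np.maximum(x, y)))

# For any x, y >= 0 we have sum(max(x, y)) <= sum(x) + sum(y), so the
# Bernoulli covariance p(a,b) - p(a)p(b) is always nonnegative here.
x, y = np.array([0.5, 1.0]), np.array([1.5, 0.2])
assert poe_joint(x, y) - poe_prob(x) * poe_prob(y) >= 0
```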
While having the intuition of measuring the areas of cones, this also automatically gives a valid probability distribution over concepts since this is just the product likelihood under a coordinatewise exponential distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "However, they note a deficiency of their model -it can only model positive (Pearson) correlations between concepts (Bernoulli variables).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "Consider two Bernoulli variables a and b, whose probabilities correspond to the areas of cones x and y. Recall the Bernoulli covariance formula (we will deal with covariances instead of correlations when convenient, since they always have the same sign):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "cov(a, b) = p(a, b) p(a)p(b) = exp( k max(x i , y i )k 1 ) exp( kx i + y i k 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "Since the sum of two positive vectors can only be greater than the sum of their pointwise maximum, this quantity will always be nonnegative. This has real consequences for probabilistic modeling in KBs: conditioning on more concepts will only make probabilities higher (or unchanged), e.g. p(dog|plant) p(dog).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Order Embeddings (POE)", "sec_num": "3.3" }, { "text": "Relations Probabilistic models have pleasing consistency properties for modeling asymmetric transitive relations, in particular compared to density-based embeddings -a pairwise conditional probability table can almost always (in the technical sense) be asymmetrized to produce a DAG by simply taking an edge if P (a|b) > P(b|a). A matrix of pairwise Gaussian KL divergences cannot be consistently asymmetrized in this manner. These claims are proven in Appendix C. While a high P (a|b) does not always indicate an edge in an ontology due to confounding variables, existing graphical model structure learning methods can be used to further prune on the base graph without adding a cycle, such as Graphical Lasso or simple thresholding (Fattahi and Sojoudi, 2017) .", "cite_spans": [ { "start": 734, "end": 761, "text": "(Fattahi and Sojoudi, 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Asymmetric Transitive", "sec_num": "3.4" }, { "text": "We develop a probabilistic model for lattices based on hypercube embeddings that can model both positive and negative correlations. Before describing this, we first motivate our choice to abandon OE/POE type cone-based models for this purpose. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "(x) = Q n i p i (x i ) on R n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": ", where F i , the associated CDF for p i , is monotone increasing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "Proof. 
For any product measure we have Z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "z x p(z)dz = n Y i Z x i \uf8ffz i p i (z i )dz i = n Y i 1 F i (x i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "This is just the area of the unique box corresponding to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "Q n i [F i (x i ), 1] 2 [0, 1] n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": ", under the uniform measure. This box is unique as a monotone increasing univariate CDF is bijective with (0, 1)cones in R n can be invertibly mapped to boxes of equivalent measure inside the unit hypercube [0, 1] n . These boxes have only half their degrees of freedom, as they have the form [F i (x i ), 1] per dimension, (intuitively, they have one end \"stuck at infinity\" since the cone integrates to infinity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "So W.L.O.G. we can consider two transformed cones x and y corresponding to our Bernoulli variables a and b, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "letting F i (x i ) = u i and F i (y i ) = v i , their intersection in the unit hyper- cube is Q n i [max(u i , v i ), 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "Pairing terms in the right-hand product, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "p(a, b) p(a)p(b) = n Y i (1 max(u i , v i )) n Y i (1 u i )(1 v i ) 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "since the right contains all the terms of the left and can only grow smaller. This argument is easily modified to the case of the nonnegative orthant,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "mutatis mutandis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "An open question for future work is what nonproduct measures this claim also applies to. Note that some non-product measures, such as multivariate Gaussian, can be transformed into product measures easily (whitening) and the above proof would still apply. It seems probable that some measures, nonlinearly entangled across dimensions, could encode negative correlations in cone volumes. However, it is not generally tractable to integrate high-dimensional cones under arbitrary non-product measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations from Cone Measures", "sec_num": "4.1" }, { "text": "The above proof gives us intuition about the possible form of a better representation. Cones can be mapped into boxes within the unit hypercube while preserving their measure, and the lack of negative correlation seems to come from the fact that they always have an overly-large intersection due to \"pinning\" the maximum in each dimension to 1. 
To remedy this, we propose to learn representations in the space of all boxes (axis-aligned hyperrectangles), gaining back an extra degree of freedom. These representations can be learned with a suitable probability measure in R n , the nonnegative orthant R n + , or directly in the unit hypercube with the uniform measure, which we elect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "We associate each concept with 2 vectors, the minimum and maximum value of the box at each dimension. Practically for numerical reasons these are stored as a minimum, a positive offset plus an \u270f term to prevent boxes from becoming too small and underflowing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "Let us define our box embeddings as a pair of vectors in [0, 1] n , (x m , x M ), representing the maximum and minimum at each coordinate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "Then we can define a partial ordering by inclusion of boxes, and a lattice structure as x^y = ? if x and y disjoint, else", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "x^y = Y i [max(x m,i , y m,i ), min(x M,i , y M,i )] x _ y = Y i [min(x m,i , y m,i ), max(x M,i , y M,i )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "where the meet is the intersecting box, or bottom (the empty set) where no intersection exists, and join is the smallest enclosing box. This lattice, considered on its own terms as a non-probabilistic object, is strictly more general than the order embedding lattice in any dimension, which is proven in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "However, the finite sizes of all the lattice elements lead to a natural probabilistic interpretation under the uniform measure. Joint and marginal probabilities are given by the volume of the (intersection) box. For concept a with associ-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "ated box (x m , x M ), probability is simply p(a) = Q n i (x M,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "x m,i ) (under the uniform measure). p(?) is of course zero since no probability mass is assigned to the empty set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "It remains to show that this representation can represent both positive and negative correlations. Proof. Boxes can clearly model disjointness (exactly 1 correlation if the total volume of the boxes equals 1). Two identical boxes give their concepts exactly correlation 1. 
The area of the meet is continuous with respect to translations of intersecting boxes, and all other terms in correlation stay constant, so by continuity of the correlation function our model can achieve all possible correlations for a pair of variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "This proof can be extended to boxes in R n with product measures by the previous reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "Limitations: Note that this model cannot perfectly describe all possible probability distributions or concepts as embedded objects. For example, the complement of a box is not a box. However, queries about complemented variables can be calculated by the Inclusion-Exclusion principle, made more efficient by the fact that all nonnegated terms can be grouped and calculated exactly. We show some toy exact calculations with negated variables in Appendix A. Also, note that in a knowledge graph often true complements are not required -for example mortal and immortal are not actually complements, because the concept color is neither.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "Additionally, requiring the total probability mass covered by boxes to equal 1, or exactly matching marginal box probabilities while modeling all correlations is a difficult box-packing-type problem and not generally possible. Modeling limitations aside, the union of boxes having mass < 1 can be seen as an open-world assumption on our KB (not all points in space have corresponding concepts, yet).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattices", "sec_num": "4.2" }, { "text": "While inference (calculation of pairwise joint, unary marginal, and pairwise conditional probabilities) is quite straightforward by taking intersections of boxes and computing volumes (and their ratios), learning does not appear easy at first glance. While the (sub)gradient of the joint probability is well defined when boxes intersect, it is non-differentiable otherwise. Instead we optimize a lower bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "Clearly p(a _ b) p(a [ b), with equality only when a = b, so this can give us a lower bound:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "p(a^b) = p(a) + p(b) p(a [ b) p(a) + p(b) p(a _ b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "Where probabilities are always given by the volume of the associated box. This lower bound always exists and is differentiable, even when the joint is not. 
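For concreteness, a minimal sketch of this surrogate for a single pair of boxes, assuming boxes are stored as (minimum, maximum) coordinate arrays in the unit cube (an illustration, not the authors' implementation):

```python
import numpy as np

def box_volume(lo, hi):
    # Probability of a concept under the uniform measure on [0, 1]^n.
    return np.prod(np.clip(hi - lo, 0.0, None))

def joint_lower_bound(a_lo, a_hi, b_lo, b_hi):
    # p(a ^ b) >= p(a) + p(b) - p(a v b): the join is the smallest
    # enclosing box, which always exists and is differentiable, so this
    # bound can be optimized even when the two boxes do not intersect.
    join_lo = np.minimum(a_lo, b_lo)
    join_hi = np.maximum(a_hi, b_hi)
    return (box_volume(a_lo, a_hi) + box_volume(b_lo, b_hi)
            - box_volume(join_lo, join_hi))
```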
It is guaranteed to be nonpositive except when a and b intersect, in which case the true joint likelihood should be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "While a negative bound on a probability is odd, inspecting the bound we see that its gradient will push the enclosing box to be smaller, while increasing areas of the individual boxes, until they intersect, which is a sensible learning strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "Since we are working with small probabilities it is advisable to negate this term and maximize the negative logarithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "log(p(a _ b) p(a) p(b))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "This still has an unbounded gradient as the lower bound approaches 0, so it is also useful to add a constant within the logarithm function to avoid numerical problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "Since the likelihood of the full data is usually intractable to compute as a conjunction of many negations, we optimize binary conditional and unary marginal terms separately by maximum likelihood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "In this work, we parametrize the boxes as (min, = max min), with Euclidean projections after gradient steps to keep our parameters in the unit hypercube and maintain the minimum/delta constraints. Now that we have the ability to compute probabilities and (surrogate) gradients for arbitrary marginals in the model, and by extension conditionals, we will see specific examples in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.3" }, { "text": "We begin by investigating properties of our model in modeling a small toy problem, consisting of a small hand constructed ontology over 19 concepts, aggregated from atomic synthetic examples first into a probabilistic lattice (e.g. some rabbits are brown, some are white), and then a full CPD. We model it using only 2 dimensions to enable visualization of the way the model self-organizes its \"event space\", training the model by minimize weighted cross-entropy with both the unary marginals and pairwise conditional probabilities. We also conduct a parallel experiment with POE as embedded in the unit cube, where each representation is constrained to touch the faces x = 1, y = 1. In Figure 2 , we show the representation of lattice structures by POE and the box lattice model as compared to the abstract probabilistic lattice used to construct the data, shown in Figure 1 , and compare the conditional probabilities produced by our model to the ground truth, demonstrating the richer capacity of the box model in capturing strong positive and negative correlations. 
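Such queries reduce to intersecting the relevant boxes and taking ratios of volumes; a minimal sketch under the uniform measure (illustrative only, with hypothetical helper names):

```python
import numpy as np

def meet(boxes):
    # Intersection of a list of (lo, hi) boxes; empty if any side collapses.
    lo = np.max([b[0] for b in boxes], axis=0)
    hi = np.min([b[1] for b in boxes], axis=0)
    return lo, hi

def volume(lo, hi):
    return np.prod(np.clip(hi - lo, 0.0, None))

def conditional(query_boxes, given_boxes):
    # P(query | given) = vol(meet(query + given)) / vol(meet(given)),
    # e.g. P(rabbit | brown, animal) from three 2-d boxes.
    num = volume(*meet(query_boxes + given_boxes))
    den = volume(*meet(given_boxes))
    return num / den if den > 0 else 0.0
```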
In Table 1 , we perform a series of multivariable conditional queries and demonstrate intuitive results on high-order queries containing up to 4 variables, despite the model being trained on only 2-way information.", "cite_spans": [], "ref_spans": [ { "start": 687, "end": 695, "text": "Figure 2", "ref_id": null }, { "start": 867, "end": 875, "text": "Figure 1", "ref_id": "FIGREF2" }, { "start": 1073, "end": 1080, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Warmup: 2D Embedding of a Toy Lattice", "sec_num": "5.1" }, { "text": "We experiment on WordNet hypernym prediction, using the same train, development and test split as Vendrov et al. (2016) , created by randomly taking 4,000 hypernym pairs from the 837,888- Since our model is probabilistic, we would like a sensible value for P (n), where n is a node. We assign these marginal probabilities by looking at the number of descendants in the hierarchy under a node, and normalizing over all nodes, taking", "cite_spans": [ { "start": 98, "end": 119, "text": "Vendrov et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": "P (n) = | descendants(n) | | nodes | .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": "Furthermore, we use the graph structure (only of the subset of edges in the training set to avoid leaking data) to augment the data with approximate conditional probabilities P (x|y). For each leaf, we consider all of its ancestors as pairwise co-occurences, then aggregate and divide by the number of leaves to get an approximate joint probability distribution,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": "P (x, y) = | x, y co-occur in ancestor set | | leaves |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": ". With this and the unary marginals, we can create a conditional probability table, which we prune based on the difference of P (x|y) and P (y|x) and add cross entropy with these conditional \"soft edges\" to the training data. We refer to experiments using this additional data as Box + CPD in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": "We use 50 dimensions in our experiments. Since our model has 2 parameters per dimension, we also perform an apples-to-apples comparison with a 100D POE model. As seen in Table 3 , we outperform POE significantly even with this added representational power. We also observe sensible negatively correlated examples, shown in 2, in the trained box model, while POE cannot represent such relationships. We tune our models on the development set, with parameters documented in Appendix D.1. We observe that not only does our model outperform POE, it beats all previous results on WordNet, aside from the concurrent work of Athiwaratkun and Wilson (2018) (using different train/dev negative examples), the baseline POE model does as well. This indicates that probabilistic embeddings for transitive relations are a promising avenue for future work. Additionally, the ability of the model to learn from the expected \"soft edges\" improves it to state-of-the-art level. 
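For concreteness, a minimal sketch of how such marginals and approximate joints can be aggregated from the training edges (the parent-map representation and helper names are assumptions, not the released code):

```python
from collections import Counter
from itertools import combinations

def soft_probabilities(parents, nodes, leaves):
    # parents: dict node -> set of direct hypernyms, restricted to
    # training edges so that no held-out information leaks in.
    def ancestors(n):
        seen, stack = set(), [n]
        while stack:
            cur = stack.pop()
            for p in parents.get(cur, ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    # Unary marginals: P(n) = |descendants(n)| / |nodes|.
    descendant_counts = Counter()
    for n in nodes:
        for a in ancestors(n):
            descendant_counts[a] += 1

    # Approximate joints: treat each leaf's ancestor set as pairwise
    # co-occurrences, then normalize by the number of leaves.
    joint_counts = Counter()
    for leaf in leaves:
        for x, y in combinations(sorted(ancestors(leaf)), 2):
            joint_counts[(x, y)] += 1

    marginal = {n: descendant_counts[n] / len(nodes) for n in nodes}
    joint = {pair: c / len(leaves) for pair, c in joint_counts.items()}
    return marginal, joint  # conditionals follow as P(x|y) = P(x,y) / P(y)
```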
We expect that co-occurrence counts gathered from real textual corpora, rather than merely aggregating up the WordNet lattice, would further strengthen this effect. Table 4 : KL and Pearson correlation between model and gold probability.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 177, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1126, "end": 1133, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "WordNet", "sec_num": "5.2" }, { "text": "We conduct experiments on the large-scale Flickr entailment dataset of 45 million image caption pairs. We use the exactly same train/dev/test from Lai and Hockenmaier (2017) . We use a slightly different unseen word pairs and unseen words test data, obtained from the author. We include their published results and also use their published code, marked \u21e4, for comparison.", "cite_spans": [ { "start": 147, "end": 173, "text": "Lai and Hockenmaier (2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Flickr Entailment Graph", "sec_num": "5.3" }, { "text": "For these experiments, we relax our boxes from the unit hypercube to the nonnegative orthant and obtain probabilities under the exponential measure, p(x) = exp( x). We enforce the nonnegativity constraints by clipping the LSTMgenerated embedding (Hochreiter and Schmidhuber, 1997) for the box minimum with a ReLU, and parametrize our embeddings using a softplus activation to prevent dead units. As in Lai and Hockenmaier (2017) , we use 512 hidden units in our LSTM to compose sentence vectors. We then apply two single-layer feed-forward networks with 512 units applied to the final LSTM state to produce the embeddings.", "cite_spans": [ { "start": 246, "end": 280, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF11" }, { "start": 402, "end": 428, "text": "Lai and Hockenmaier (2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Flickr Entailment Graph", "sec_num": "5.3" }, { "text": "As we can see from Table 4 , we note large improvements in KL and Pearson correlation to the ground truth entailment probabilities. In further analysis, Figure 3 demonstrates that while the box model outperforms POE in nearly every regime, the highest gains come from the comparatively difficult to calibrate small entailment probabilities, indicating the greater capability of our model to produce fine-grained distinctions.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 4", "ref_id": null }, { "start": 153, "end": 161, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Flickr Entailment Graph", "sec_num": "5.3" }, { "text": "We have only scratched the surface of possible applications. An exciting direction is the incorporation of multi-relational data for general knowledge representation and inference. Secondly, more complex representations, such as 2n-dimensional products of 2-dimensional convex polyhedra, would offer greater flexibility in tiling event space. Improved inference of the latent boxes, either through better optimization or through Bayesian approaches is another natural extension. 
Our greatest interest is in the application of this powerful new tool to the many areas where other structured embeddings have shown promise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" } ], "back_matter": [ { "text": "We thank Alice Lai for making the code from her original paper public, and for providing the additional unseen pairs and unseen words data. We also thank Haw-Shiuan Chang, Laurent Dinh, and Ben Poole for helpful discussions. We also thank the anonymous reviewers for their constructive feedback.This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction., and in part by the National Science Foundation under Grant No. IIS-1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On modeling hierarchical data via probabilistic order embeddings", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun and Andrew Gordon Wilson. 2018. On modeling hierarchical data via probabilistic or- der embeddings. In International Conference on Learning Representations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Structured learning for taxonomy induction with belief propagation", "authors": [ { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Gerard", "middle": [ "De" ], "last": "Melo", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1041--1051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Bansal, David Burkett, Gerard De Melo, and Dan Klein. 2014. Structured learning for taxon- omy induction with belief propagation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 1041-1051.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garcia-Duran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. 
In Advances in neural information processing systems, pages 2787-2795.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the shortest arborescence of a directed graph", "authors": [ { "first": "Y", "middle": [ "J" ], "last": "Chu", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "Liu", "suffix": "" } ], "year": 1995, "venue": "Science Sinica", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. J. Chu and T. H. Liu. 1995. On the shortest arbores- cence of a directed graph. Science Sinica, 20.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A generalization of principal components analysis to the exponential family", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Sanjoy", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Robert", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 2002, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "617--624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. 2002. A generalization of principal com- ponents analysis to the exponential family. In Ad- vances in neural information processing systems, pages 617-624.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Representing words as regions in vector space", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09", "volume": "", "issue": "", "pages": "57--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk. 2009. Representing words as regions in vector space. In Proceedings of the Thirteenth Con- ference on Computational Natural Language Learn- ing, CoNLL '09, pages 57-65, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Graphical lasso and thresholding: Equivalence and closedform solutions", "authors": [ { "first": "Salar", "middle": [], "last": "Fattahi", "suffix": "" }, { "first": "Somayeh", "middle": [], "last": "Sojoudi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.09479" ] }, "num": null, "urls": [], "raw_text": "Salar Fattahi and Somayeh Sojoudi. 2017. Graphi- cal lasso and thresholding: Equivalence and closed- form solutions. arXiv preprint arXiv:1708.09479.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sparse inverse covariance estimation with the graphical lasso", "authors": [ { "first": "Jerome", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Tibshirani", "suffix": "" } ], "year": 2008, "venue": "Biostatistics", "volume": "9", "issue": "3", "pages": "432--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome Friedman, Trevor Hastie, and Robert Tibshi- rani. 2008. Sparse inverse covariance estimation with the graphical lasso. 
Biostatistics, 9(3):432- 441.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Understanding the difficulty of training deep feedforward neural networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "249--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neu- ral networks. In Proceedings of the Thirteenth In- ternational Conference on Artificial Intelligence and Statistics, pages 249-256.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning to represent knowledge graphs with gaussian embedding", "authors": [ { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Guoliang", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15", "volume": "", "issue": "", "pages": "623--632", "other_ids": { "DOI": [ "10.1145/2806416.2806502" ] }, "num": null, "urls": [], "raw_text": "Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to represent knowledge graphs with gaus- sian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15, pages 623- 632, New York, NY, USA. ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning bayesian networks: The combination of knowledge and statistical data", "authors": [ { "first": "David", "middle": [], "last": "Heckerman", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Geiger", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Chickering", "suffix": "" } ], "year": 1995, "venue": "Machine learning", "volume": "20", "issue": "3", "pages": "197--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Heckerman, Dan Geiger, and David M Chicker- ing. 1995. Learning bayesian networks: The com- bination of knowledge and statistical data. Machine learning, 20(3):197-243.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to predict denotational probabilities for modeling entailment", "authors": [ { "first": "Alice", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2017, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice Lai and Julia Hockenmaier. 2017. Learning to predict denotational probabilities for modeling en- tailment. 
In EACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improved representation learning for predicting commonsense ontologies", "authors": [ { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2017, "venue": "NIPS Workshop on Structured Prediction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Li, Luke Vilnis, and Andrew McCallum. 2017. Improved representation learning for predicting commonsense ontologies. NIPS Workshop on Struc- tured Prediction.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "WordNet: a lexical database for English", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Poincar\u00e9 embeddings for learning hierarchical representations", "authors": [ { "first": "Maximillian", "middle": [], "last": "Nickel And Douwe Kiela", "suffix": "" }, { "first": ";", "middle": [ "I" ], "last": "Guyon", "suffix": "" }, { "first": "U", "middle": [ "V" ], "last": "Luxburg", "suffix": "" }, { "first": "S", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "H", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "R", "middle": [], "last": "Fergus", "suffix": "" }, { "first": "S", "middle": [], "last": "Vishwanathan", "suffix": "" }, { "first": "R", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "6338--6347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximillian Nickel and Douwe Kiela. 2017. Poincar\u00e9 embeddings for learning hierarchical representa- tions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 6338-6347. Curran As- sociates, Inc.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Probabilistic reasoning in intelligent systems", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabilistic reasoning in intelli- gent systems.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The mondrian process", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Yee", "middle": [ "W" ], "last": "Roy", "suffix": "" }, { "first": "", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2009, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1377--1384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M Roy and Yee W Teh. 2009. The mondrian process. 
In Advances in neural information process- ing systems, pages 1377-1384.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Order-embeddings of images and language", "authors": [ { "first": "Ivan", "middle": [], "last": "Vendrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" } ], "year": 2016, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In ICLR.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Word representations via gaussian embedding", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word rep- resentations via gaussian embedding. In ICLR.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Introduction to Operator Theory in Riesz Spaces", "authors": [ { "first": "C", "middle": [], "last": "Adriaan", "suffix": "" }, { "first": "", "middle": [], "last": "Zaanen", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriaan C. Zaanen. 1997. Introduction to Operator Theory in Riesz Spaces. Springer Berlin Heidelberg.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "For a pair of Bernoulli variables p(a) and p(b), cov(a, b) 0 if the Bernoulli probabilities come from the volume of a cone as measured under any product (coordinate-wise) probability measure p" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "For a pair of Bernoulli variables p(a) and p(b), corr(a, b) can take on any value in [ 1, 1] if the probabilities come from the volume of associated boxes in [0, 1] n ." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Representation of the toy probabilistic lattice used in Section 5.1. Darker color corresponds to more unary marginal probability. The associated CPD is obtained by a weighted aggregation of leaf elements.(a) POE lattice (b) Box lattice (c) POE CPD (d) Box CPD Figure 2: Lattice representations and conditional probabilities from POE vs. box lattice. Note how the box lattice model's lack of \"anchoring\" to a corner allows it vastly more expressivity in matching the ground truth CPD seen in Figure 1." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "R between model and gold probabilities." }, "TABREF1": { "content": "
Table 2: Negatively correlated variables produced by the model.
Method | Test Accuracy %
transitive | 88.2
word2gauss | 86.6
OE | 90.6
Li et al. (2017) | 91.3
DOE (KL) | 92.3
POE | 91.6
POE (100 dim) | 91.7
Box | 92.2
Box + CPD | 92.3
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF2": { "content": "
Table 3: Classification accuracy on WordNet test set.
edge transitive closure of the WordNet hypernym hierarchy as positive training examples for the development set, 4,000 for the test set, and using the rest as training data. Negative training examples are created by randomly corrupting a train/development/test edge (u, v) by replacing either u or v with a randomly chosen negative node. We use their specific train/dev/test split, while Athiwaratkun and Wilson (2018) use a different train/dev split with the same test set (personal communication) to examine the effect of different negative sampling techniques. We cite their best performing model, called DOE (KL).
", "html": null, "type_str": "table", "num": null, "text": "" } } } }