{ "paper_id": "N18-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:54:44.437944Z" }, "title": "Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection", "authors": [ { "first": "Haw-Shiuan", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst", "country": "USA" } }, "email": "hschang@cs.umass.edu" }, { "first": "Ziyun", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst", "country": "USA" } }, "email": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst", "country": "USA" } }, "email": "mccallum@cs.umass.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, coreference, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces distributional inclusion vector embedding (DIVE), a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts in a low-dimensional and interpretable space. In experimental evaluations more comprehensive than any previous literature of which we are aware-evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions-we find that our method provides up to double the precision of previous unsupervised embeddings, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.", "pdf_parse": { "paper_id": "N18-1045", "_pdf_hash": "", "abstract": [ { "text": "Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, coreference, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces distributional inclusion vector embedding (DIVE), a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts in a low-dimensional and interpretable space. 
In experimental evaluations more comprehensive than any previous literature of which we are aware-evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions-we find that our method provides up to double the precision of previous unsupervised embeddings, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Numerous applications benefit from compactly representing context distributions, which assign meaning to objects under the rubric of distributional semantics. In natural language processing, distributional semantics has long been used to assign meanings to words (that is, to lexemes in the dictionary, not individual instances of word tokens). The meaning of a word in the distributional sense is often taken to be the set of textual contexts (nearby tokens) in which that word appears, represented as a large sparse bag of words (SBOW). Without any supervision, Word2Vec (Mikolov et al., 2013) , among other approaches based on matrix factorization (Levy et al., 2015a) , successfully compress the SBOW into a much lower dimensional embedding space, increasing the scalability and applicability of the embeddings while preserving (or even improving) the correlation of geometric embedding similarities with human word similarity judgments.", "cite_spans": [ { "start": 573, "end": 595, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF24" }, { "start": 651, "end": 671, "text": "(Levy et al., 2015a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While embedding models have achieved impressive results, context distributions capture more semantic information than just word similarity. The distributional inclusion hypothesis (DIH) (Weeds and Weir, 2003; Geffet and Dagan, 2005; Cimiano et al., 2005) posits that the context set of a word tends to be a subset of the contexts of its hypernyms. For a concrete example, most adjectives that can be applied to poodle can also be applied to dog, because dog is a hypernym of poodle (e.g. both can be obedient). However, the converse is not necessarily true -a dog can be straight-haired but a poodle cannot. Therefore, dog tends to have a broader context set than poodle. Many asymmetric scoring functions comparing SBOW features based on DIH have been developed for hypernymy detection (Weeds and Weir, 2003; Geffet and Dagan, 2005; Shwartz et al., 2017) .", "cite_spans": [ { "start": 186, "end": 208, "text": "(Weeds and Weir, 2003;", "ref_id": "BIBREF45" }, { "start": 209, "end": 232, "text": "Geffet and Dagan, 2005;", "ref_id": "BIBREF12" }, { "start": 233, "end": 254, "text": "Cimiano et al., 2005)", "ref_id": "BIBREF6" }, { "start": 787, "end": 809, "text": "(Weeds and Weir, 2003;", "ref_id": "BIBREF45" }, { "start": 810, "end": 833, "text": "Geffet and Dagan, 2005;", "ref_id": "BIBREF12" }, { "start": 834, "end": 855, "text": "Shwartz et al., 2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hypernymy detection plays a key role in many challenging NLP tasks, such as textual entailment (Sammons et al., 2011) , coreference (Ponzetto and Strube, 2006) , relation extraction (Demeester et al., 2016 ) and question answering (Huang et al., 2008) . 
Leveraging the variety of contexts and inclusion properties in context distributions can greatly increase the ability to discover taxonomic structure among words (Shwartz et al., 2017) . The inability to preserve these features limits the semantic representation power and downstream applicability of some popular unsupervised learning approaches such as Word2Vec.", "cite_spans": [ { "start": 95, "end": 117, "text": "(Sammons et al., 2011)", "ref_id": "BIBREF33" }, { "start": 132, "end": 159, "text": "(Ponzetto and Strube, 2006)", "ref_id": "BIBREF31" }, { "start": 182, "end": 205, "text": "(Demeester et al., 2016", "ref_id": "BIBREF8" }, { "start": 231, "end": 251, "text": "(Huang et al., 2008)", "ref_id": "BIBREF14" }, { "start": 416, "end": 438, "text": "(Shwartz et al., 2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several recently proposed methods aim to en-code hypernym relations between words in dense embeddings, such as Gaussian embedding (Vilnis and McCallum, 2015; Athiwaratkun and Wilson, 2017) , Boolean Distributional Semantic Model (Kruszewski et al., 2015) , order embedding (Vendrov et al., 2016) , H-feature detector (Roller and Erk, 2016) , HyperVec (Nguyen et al., 2017) , dual tensor (Glava\u0161 and Ponzetto, 2017) , Poincar\u00e9 embedding (Nickel and Kiela, 2017) , and LEAR (Vuli\u0107 and Mrk\u0161i\u0107, 2017) . However, the methods focus on supervised or semisupervised settings where a massive amount of hypernym annotations are available (Vendrov et al., 2016; Roller and Erk, 2016; Nguyen et al., 2017; Glava\u0161 and Ponzetto, 2017; Vuli\u0107 and Mrk\u0161i\u0107, 2017) , do not learn from raw text (Nickel and Kiela, 2017) or lack comprehensive experiments on the hypernym detection task (Vilnis and Mc-Callum, 2015; Athiwaratkun and Wilson, 2017) .", "cite_spans": [ { "start": 130, "end": 157, "text": "(Vilnis and McCallum, 2015;", "ref_id": "BIBREF40" }, { "start": 158, "end": 188, "text": "Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF0" }, { "start": 229, "end": 254, "text": "(Kruszewski et al., 2015)", "ref_id": "BIBREF18" }, { "start": 273, "end": 295, "text": "(Vendrov et al., 2016)", "ref_id": "BIBREF39" }, { "start": 317, "end": 339, "text": "(Roller and Erk, 2016)", "ref_id": "BIBREF32" }, { "start": 342, "end": 372, "text": "HyperVec (Nguyen et al., 2017)", "ref_id": null }, { "start": 387, "end": 414, "text": "(Glava\u0161 and Ponzetto, 2017)", "ref_id": "BIBREF13" }, { "start": 436, "end": 460, "text": "(Nickel and Kiela, 2017)", "ref_id": "BIBREF28" }, { "start": 472, "end": 496, "text": "(Vuli\u0107 and Mrk\u0161i\u0107, 2017)", "ref_id": "BIBREF42" }, { "start": 628, "end": 650, "text": "(Vendrov et al., 2016;", "ref_id": "BIBREF39" }, { "start": 651, "end": 672, "text": "Roller and Erk, 2016;", "ref_id": "BIBREF32" }, { "start": 673, "end": 693, "text": "Nguyen et al., 2017;", "ref_id": "BIBREF27" }, { "start": 694, "end": 720, "text": "Glava\u0161 and Ponzetto, 2017;", "ref_id": "BIBREF13" }, { "start": 721, "end": 744, "text": "Vuli\u0107 and Mrk\u0161i\u0107, 2017)", "ref_id": "BIBREF42" }, { "start": 774, "end": 798, "text": "(Nickel and Kiela, 2017)", "ref_id": "BIBREF28" }, { "start": 864, "end": 892, "text": "(Vilnis and Mc-Callum, 2015;", "ref_id": null }, { "start": 893, "end": 923, "text": "Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies (Levy et al., 
2015b; Shwartz et al., 2017) have underscored the difficulty of generalizing supervised hypernymy annotations to unseen pairs -classifiers often effectively memorize prototypical hypernyms ('general' words) and ignore relations between words. These findings motivate us to develop more accurate and scalable unsupervised embeddings to detect hypernymy and propose several scoring functions to analyze the embeddings from different perspectives.", "cite_spans": [ { "start": 15, "end": 35, "text": "(Levy et al., 2015b;", "ref_id": "BIBREF23" }, { "start": 36, "end": 57, "text": "Shwartz et al., 2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 A novel unsupervised low-dimensional embedding method via performing non-negative matrix factorization (NMF) on a weighted PMI matrix, which can be efficiently optimized using modified skip-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "\u2022 Theoretical and qualitative analysis illustrate that the proposed embedding can intuitively and interpretably preserve inclusion relations among word contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "\u2022 Extensive experiments on 11 hypernym detection datasets demonstrate that the learned embeddings dominate previous low-dimensional unsupervised embedding approaches, achieving similar or better performance than SBOW, on both existing and newly proposed asymmetric scoring functions, while requiring much less memory and compute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "The distributional inclusion hypothesis (DIH) suggests that the context set of a hypernym tends to contain the context set of its hyponyms. When representing a word as the counts of contextual co-occurrences, the count in every dimension of hypernym y tends to be larger than or equal to the corresponding count of its hyponym x:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x y \u21d0\u21d2 \u2200c \u2208 V, #(x, c) \u2264 #(y, c),", "eq_num": "(1)" } ], "section": "Method", "sec_num": "2" }, { "text": "where x y means y is a hypernym of x, V is the set of vocabulary, and #(x, c) indicates the number of times that word x and its context word c co-occur in a small window with size |W | in the corpus of interest D. Notice that the concept of DIH could be applied to different context word representations. For example, Geffet and Dagan (2005) represent each word by the set of its co-occurred context words while discarding their counts. 
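As a concrete illustration of the count-based formulation in Equation (1), the sketch below checks whether one word's context counts are dominated by another's. It is a minimal sketch: the words, context words, and counts are toy values, and the corpus-processing code that would actually collect co-occurrence counts from a windowed corpus is omitted.

```python
# Minimal sketch of the count-based DIH test in Equation (1).
# Assumes co-occurrence counts have already been collected per word
# as {context word: count}; the values below are purely illustrative.
from collections import Counter

def dih_holds(hypo_counts, hyper_counts):
    """True iff every context count of the candidate hyponym is <= the
    corresponding count of the candidate hypernym (Equation (1))."""
    return all(hyper_counts.get(c, 0) >= n for c, n in hypo_counts.items())

poodle = Counter({"obedient": 3, "fluffy": 5, "barks": 7})
dog = Counter({"obedient": 4, "fluffy": 6, "barks": 20, "straight-haired": 2})

print(dih_holds(poodle, dog))   # True: dog's contexts cover poodle's
print(dih_holds(dog, poodle))   # False: the converse does not hold
```

In real corpora the strict inequality rarely holds for every context word, which is why the scoring functions discussed later measure the degree of inclusion rather than testing it exactly.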
In this study, we define the inclusion property based on counts of context words in (1) because the counts are an effective and noise-robust feature for the hypernymy detection using only the context distribution of words (Clarke, 2009; Vuli\u0107 et al., 2016; Shwartz et al., 2017) .", "cite_spans": [ { "start": 318, "end": 341, "text": "Geffet and Dagan (2005)", "ref_id": "BIBREF12" }, { "start": 659, "end": 673, "text": "(Clarke, 2009;", "ref_id": "BIBREF7" }, { "start": 674, "end": 693, "text": "Vuli\u0107 et al., 2016;", "ref_id": "BIBREF41" }, { "start": 694, "end": 715, "text": "Shwartz et al., 2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Our goal is to produce lower-dimensional embeddings preserving the inclusion property that the embedding of hypernym y is larger than or equal to the embedding of its hyponym x in every dimension. Formally, the desired property can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "x y \u21d0\u21d2 x[i] \u2264 y[i] , \u2200i \u2208 {1, ..., L}, (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "where L is number of dimensions in the embedding space. We add additional non-negativity constraints, i.e. x[i] \u2265 0, y[i] \u2265 0, \u2200i, in order to increase the interpretability of the embeddings (the reason will be explained later in this section). This is a challenging task. In reality, there are a lot of noise and systematic biases that cause the violation of DIH in Equation (1) (i.e. #(x, c) > #(y, c) for some neighboring word c), but the general trend can be discovered by processing thousands of neighboring words in SBOW together (Shwartz et al., 2017) . After the compression, the same trend has to be estimated in a much smaller embedding space which discards most of the information in SBOW, so it is not surprising to see most of the unsupervised hypernymy detection studies focus on SBOW (Shwartz et al., 2017) and the existing unsupervised embedding methods like Gaussian embedding have degraded accuracy (Vuli\u0107 et al., 2016) .", "cite_spans": [ { "start": 536, "end": 558, "text": "(Shwartz et al., 2017)", "ref_id": "BIBREF37" }, { "start": 799, "end": 821, "text": "(Shwartz et al., 2017)", "ref_id": "BIBREF37" }, { "start": 917, "end": 937, "text": "(Vuli\u0107 et al., 2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Popular methods of unsupervised word embedding are usually based on matrix factorization (Levy et al., 2015a) . The approaches first compute a co-occurrence statistic between the wth word and the cth context word as the (w, c)th element of the matrix", "cite_spans": [ { "start": 89, "end": 109, "text": "(Levy et al., 2015a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "M [w, c]. Next, the matrix M is factorized such that M [w, c] \u2248 w T c,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "where w is the low dimension embedding of wth word and c is the cth context embedding. The statistic in M [w, c] is usually related to pointwise mutual information (Levy et al., 2015a) : , and thus larger co-occurrence count #(w, c). 
However, the derivation has two flaws: (1) c could contain negative values and (2) a lower #(w, c) could still lead to a larger PMI(w, c) as long as #(w) is small enough.", "cite_spans": [ { "start": 164, "end": 184, "text": "(Levy et al., 2015a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "Here PMI(w, c) = log( P(w,c) / (P(w) \u2022 P(c)) ), where P(w, c) = #(w,c) / |D| and |D| is the total number of co-occurring word pairs in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "To preserve DIH, we propose a novel word embedding method, distributional inclusion vector embedding (DIVE), which fixes the two flaws by performing non-negative matrix factorization (NMF) (Lee and Seung, 2001) ", "cite_spans": [ { "start": 182, "end": 203, "text": "(Lee and Seung, 2001)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "on the matrix M, where M[w, c] = log( (P(w, c) / (P(w) \u2022 P(c))) \u2022 (#(w) / (k_I \u2022 Z)) ) = log( #(w, c) |V| / (#(c) k_I) ),", "eq_num": "(3)" } ], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "where k_I is a constant which shifts the PMI value as in SGNS, Z = |D| / |V| is the average word frequency, and |V| is the vocabulary size. We call the weighting term #(w) / Z the inclusion shift. After applying the non-negativity constraint and the inclusion shift, the inclusion property in DIVE (i.e. Equation (2)) implies that Equation (1) (DIH) holds if the matrix is reconstructed perfectly. The derivation is simple: if the embedding of hypernym y is greater than or equal to the embedding of its hyponym x in every dimension (", "cite_spans": [ { "start": 163, "end": 167, "text": "#(w)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "x[i] \u2264 y[i], \u2200i), then x^T c \u2264 y^T c since the context vector c is non-negative. Then, M[x, c] \u2264 M[y, c] tends to hold because w^T c \u2248 M[w, c]. This leads to #(x, c) \u2264 #(y, c) because M[w, c] = log( #(w,c) |V| / (#(c) k_I) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "and only #(w, c) changes with w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inclusion Preserving Matrix Factorization", "sec_num": "2.1" }, { "text": "Due to its appealing scalability properties during training time (Levy et al., 2015a) , we optimize our embedding based on the skip-gram with negative sampling (SGNS) (Mikolov et al., 2013) . 
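To make Equation (3) concrete, the sketch below builds the inclusion-shifted matrix from a dense co-occurrence count matrix and factorizes it with an off-the-shelf NMF solver. This is only an illustration under simplifying assumptions: the paper never materializes the matrix but optimizes a modified skip-gram objective (given next), and standard batch NMF requires a non-negative input, so negative cells are clipped to zero here. The variable names and the toy counts are not from the paper.

```python
# Sketch of the matrix in Equation (3) plus a batch NMF factorization.
# `counts` is assumed to be a dense word-by-context co-occurrence matrix;
# k_I is the PMI-shift hyper-parameter named in the paper.
import numpy as np
from sklearn.decomposition import NMF

def dive_matrix(counts, k_I=1.0):
    V = counts.shape[1]                     # vocabulary size |V|
    context_freq = counts.sum(axis=0)       # #(c) for every context word
    with np.errstate(divide="ignore"):
        M = np.log(counts * V / (context_freq[None, :] * k_I))
    # Standard NMF needs a non-negative input, so negative / -inf cells
    # are clipped to zero (a simplification relative to Equation (5)).
    return np.maximum(M, 0.0)

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(50, 50)).astype(float)   # toy counts
M = dive_matrix(counts, k_I=1.0)
W = NMF(n_components=10, init="nndsvda", max_iter=500).fit_transform(M)
print(W.shape)   # (50, 10): one non-negative 10-dimensional embedding per word
```

In practice the embeddings are trained with the SGNS-style objective below rather than with this batch factorization.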
The objective function of SGNS is", "cite_spans": [ { "start": 65, "end": 85, "text": "(Levy et al., 2015a)", "ref_id": "BIBREF22" }, { "start": 167, "end": 189, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l_SGNS = \u2211_{w\u2208V} \u2211_{c\u2208V} #(w, c) log \u03c3(w^T c) + \u2211_{w\u2208V} k \u2211_{c\u2208V} #(w, c) E_{c_N \u223c P_D}[log \u03c3(\u2212w^T c_N)],", "eq_num": "(4)" } ], "section": "Optimization", "sec_num": "2.2" }, { "text": "where w \u2208 R^L, c \u2208 R^L, c_N \u2208 R^L, \u03c3 is the logistic sigmoid function, and k is a constant hyperparameter indicating the ratio between positive and negative samples. Levy and Goldberg (2014) demonstrate that SGNS is equivalent to factorizing a shifted PMI matrix", "cite_spans": [ { "start": 160, "end": 184, "text": "Levy and Goldberg (2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "M, where M[w, c] = log( (P(w,c) / (P(w) \u2022 P(c))) \u2022 (1 / k) ). By setting k = k_I \u2022 Z / #(w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "and applying non-negativity constraints to the embeddings, DIVE can be optimized using a similar objective function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l_DIVE = \u2211_{w\u2208V} \u2211_{c\u2208V} #(w, c) log \u03c3(w^T c) + k_I \u2211_{w\u2208V} (Z / #(w)) \u2211_{c\u2208V} #(w, c) E_{c_N \u223c P_D}[log \u03c3(\u2212w^T c_N)],", "eq_num": "(5)" } ], "section": "Optimization", "sec_num": "2.2" }, { "text": "where w \u2265 0, c \u2265 0, c_N \u2265 0, and k_I is a constant hyper-parameter. P_D is the distribution of negative samples, which we set to be the corpus word frequency distribution (not reducing the probability of drawing frequent words as SGNS does) in this paper. Equation (5) is optimized by ADAM (Kingma and Ba, 2015) , a variant of stochastic gradient descent (SGD). The non-negativity constraint is implemented by projection (Polyak, 1969) (i.e. clipping any embedding which crosses the zero boundary after an update). The optimization process provides an alternative angle to explain how DIVE preserves DIH. The gradient for the word embedding w is", "cite_spans": [ { "start": 287, "end": 308, "text": "(Kingma and Ba, 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "dl_DIVE / dw = \u2211_{c\u2208V} #(w, c) (1 \u2212 \u03c3(w^T c)) c \u2212 k_I \u2211_{c_N\u2208V} (#(c_N) / |V|) \u03c3(w^T c_N) c_N .", "eq_num": "(6)" } ], "section": "Optimization", "sec_num": "2.2" }, { "text": "Assume hyponym x and hypernym y satisfy DIH in Equation (1) and the embeddings x and y are the same at some point during the gradient ascent. At this point, the gradients coming from negative sampling (the second term) decrease the embedding values of x and y by the same amount. However, the embedding of hypernym y receives positive gradients from the first term that are greater than or equal to those of x in every dimension because #(x, c) \u2264 #(y, c). 
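A minimal sketch of one such projected update is given below: a gradient step following Equation (6) for a single word vector, followed by clipping at zero. The function names, the plain SGD step, and the single scalar weight applied to the sampled negatives are illustrative simplifications; the paper trains over the whole corpus with ADAM.

```python
# Sketch of one DIVE update for a single word vector: gradient step on the
# Equation (5) objective (in the sampled form of Equation (6)) followed by
# the non-negativity projection. Toy data; not the paper's training loop.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dive_step(w, pos_ctx, pos_counts, neg_ctx, neg_weight, lr=0.05):
    """w: (L,) word embedding; pos_ctx: (P, L) co-occurring context embeddings
    with counts pos_counts (P,); neg_ctx: (N, L) sampled negative context
    embeddings; neg_weight: stands in for the k_I * Z / #(w) factor."""
    grad = (pos_counts * (1.0 - sigmoid(pos_ctx @ w))) @ pos_ctx   # positive term
    grad -= neg_weight * (sigmoid(neg_ctx @ w) @ neg_ctx)          # negative samples
    w = w + lr * grad
    return np.maximum(w, 0.0)   # projection: clip at the zero boundary

rng = np.random.default_rng(0)
L = 8
w = rng.random(L)
pos_ctx, neg_ctx = rng.random((4, L)), rng.random((6, L))
pos_counts = np.array([3.0, 1.0, 2.0, 5.0])
print(dive_step(w, pos_ctx, pos_counts, neg_ctx, neg_weight=1.0))
```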
This means Equation (1) tends to imply Equation (2) because the hypernym has larger gradients everywhere in the embedding space. Combining the analysis from the matrix factorization viewpoint, DIH in Equation 1is approximately equivalent to the inclusion property in DIVE (i.e. Equation (2)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "2.2" }, { "text": "For a frequent target word, there must be many neighboring words that incidentally appear near the target word without being semantically meaningful, especially when a large context window size is used. The unrelated context words cause noise in both the word vector and the context vector of DIVE. We address this issue by filtering out context words c for each target word w when the PMI of the co-occurring words is too small (i.e. log( P (w,c) P (w)\u2022P (c) ) < log(k f )). That is, we set #(w, c) = 0 in the objective function. This preprocessing step is similar to computing PPMI in SBOW (Bullinaria and Levy, 2007) , where low PMI co-occurrences are removed from SBOW.", "cite_spans": [ { "start": 608, "end": 619, "text": "Levy, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "PMI Filtering", "sec_num": "2.3" }, { "text": "After applying the non-negativity constraint, we observe that each latent factor in the embedding is interpretable as previous findings suggest (Pauca et al., 2004; Murphy et al., 2012) (i.e. each dimension roughly corresponds to a topic). Furthermore, DIH suggests that a general word appears in more diverse contexts/topics. By preserving DIH using inclusion shift, the embedding of a general word (i.e. hypernym of many other words) tends to have larger values in these dimensions (topics). This gives rise to a natural and intuitive interpretation of our word embeddings: the word embeddings can be seen as unnormalized probability distributions over topics. In Figure 1 , we visualize the unnormalized topical distribution of two words, rodent and mammal, as an example. Since rodent is a kind of mammal, the embedding (i.e. unnormalized topical distribution) of mammal includes the embedding of rodent when DIH holds. More examples are illustrated in our supplementary materials.", "cite_spans": [], "ref_spans": [ { "start": 666, "end": 674, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Interpretability", "sec_num": "2.4" }, { "text": "In this section, we compare DIVE with other unsupervised hypernym detection methods. In this paper, unsupervised approaches refer to the methods that only train on plaintext corpus without using any hypernymy or lexicon annotation. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Embedding Comparison", "sec_num": "3" }, { "text": "The embeddings are tested on 11 datasets. The first 4 datasets come from the recent review of Shwartz et al. 20171 : BLESS (Baroni and Lenci, 2011), EVALution (Santus et al., 2015) , Lenci/Benotto (Benotto, 2015) , and Weeds (Weeds et al., 2014) . The next 4 datasets are downloaded from the code repository of the H-feature detector (Roller and Erk, 2016) 2 : Medical (i.e., Levy 2014) , LEDS (also referred to as ENTAILMENT or Baroni 2012) (Baroni et al., 2012) , TM14 (i.e., Turney 2014) (Turney and Mohammad, 2015), and Kotlerman 2010 (Kotlerman et al., 2010) . 
In addition, the performance on the test set of Hy-peNet (Shwartz et al., 2016 ) (using the random train/test split), the test set of WordNet (Vendrov et al., 2016) , and all pairs in HyperLex (Vuli\u0107 et al., 2016) are also evaluated. The F1 and accuracy measurements are sometimes very similar even though the quality of prediction varies, so we adopted average precision, AP@all (Zhu, 2004 ) (equivalent to the area under the precision-recall curve when the constant interpolation is used), as the main evaluation metric. The HyperLex dataset has a continuous score on each candidate word pair, so we adopt Spearman rank coefficient \u03c1 (Fieller et al., 1957) as suggested by the review study of Vuli\u0107 et al. (2016) . Any OOV (out-of-vocabulary) word encountered in the testing data is pushed to the bottom of the prediction list (effectively assuming the word pair does not have hypernym relation).", "cite_spans": [ { "start": 159, "end": 180, "text": "(Santus et al., 2015)", "ref_id": "BIBREF34" }, { "start": 197, "end": 212, "text": "(Benotto, 2015)", "ref_id": "BIBREF4" }, { "start": 225, "end": 245, "text": "(Weeds et al., 2014)", "ref_id": "BIBREF44" }, { "start": 442, "end": 463, "text": "(Baroni et al., 2012)", "ref_id": "BIBREF1" }, { "start": 539, "end": 563, "text": "(Kotlerman et al., 2010)", "ref_id": "BIBREF17" }, { "start": 623, "end": 644, "text": "(Shwartz et al., 2016", "ref_id": "BIBREF36" }, { "start": 708, "end": 730, "text": "(Vendrov et al., 2016)", "ref_id": "BIBREF39" }, { "start": 759, "end": 779, "text": "(Vuli\u0107 et al., 2016)", "ref_id": "BIBREF41" }, { "start": 946, "end": 956, "text": "(Zhu, 2004", "ref_id": "BIBREF46" }, { "start": 1202, "end": 1224, "text": "(Fieller et al., 1957)", "ref_id": "BIBREF11" }, { "start": 1261, "end": 1280, "text": "Vuli\u0107 et al. (2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.1" }, { "text": "1 https://github.com/vered1986/ UnsupervisedHypernymy 2 https://github.com/stephenroller/ emnlp2016/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.1" }, { "text": "We trained all methods on the first 51.2 million tokens of WaCkypedia corpus (Baroni et al., 2009) because DIH holds more often in this subset (i.e. SBOW works better) compared with that in the whole WaCkypedia corpus. The window size |W | of DIVE and Gaussian embedding are set as 20 (left 10 words and right 10 words). The number of embedding dimensions in DIVE L is set to be 100. The other hyper-parameters of DIVE and Gaussian embedding are determined by the training set of HypeNet. Other experimental details are described in our supplementary materials.", "cite_spans": [ { "start": 77, "end": 98, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.1" }, { "text": "If a pair of words has hypernym relation, the words tend to be similar (sharing some context words) and the hypernym should be more general than the hyponym. Section 2.4 has shown that the embedding could be viewed as an unnormalized topic distribution of its context, so the embedding of hypernym should be similar to the embedding of its hyponym but having larger magnitude. As in Hy-perVec (Nguyen et al., 2017), we score the hypernym candidates by multiplying two factors corresponding to these properties. The C\u2022\u2206S (i.e. 
the cosine similarity multiply the difference of summation) scoring function is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C \u2022 \u2206S(wq \u2192 wp) = w T q wp ||wq||2 \u2022 ||wp||2 \u2022 ( wp 1 \u2212 wq 1),", "eq_num": "(7)" } ], "section": "Results", "sec_num": "3.2" }, { "text": "where w p is the embedding of hypernym and w q is the embedding of hyponym. As far as we know, Gaussian embedding (GE) (Vilnis and McCallum, 2015) is the stateof-the-art unsupervised embedding method which can capture the asymmetric relations between a hypernym and its hyponyms. Gaussian embedding encodes the context distribution of each word as a multivariate Gaussian distribution, where the embeddings of hypernyms tend to have higher variance and overlap with the embedding of their hyponyms. In Table 1 , we compare DIVE with Gaussian embedding 3 using the code implemented by Athiwaratkun and Wilson (2017) 4 and with word cosine similarity using skip-grams. The performances of random scores are also presented for reference. As we can see, DIVE is usually significantly better than other unsupervised embedding.", "cite_spans": [ { "start": 119, "end": 146, "text": "(Vilnis and McCallum, 2015)", "ref_id": "BIBREF40" }, { "start": 584, "end": 614, "text": "Athiwaratkun and Wilson (2017)", "ref_id": "BIBREF0" }, { "start": 615, "end": 616, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 502, "end": 509, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Unlike Word2Vec, which only tries to preserve the similarity signal, the goals of DIVE cover preserving the capability of measuring not only the similarity but also whether one context distribution includes the other (inclusion signal) or being more general than the other (generality signal).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SBOW Comparison", "sec_num": "4" }, { "text": "In this experiment, we perform a comprehensive comparison between SBOW and DIVE using multiple scoring functions to detect the hypernym relation between words based on different types of signal. The window size |W | of SBOW is also set as 20, and experiment setups are the same as that described in Section 3.1. Notice that the comparison is inherently unfair because most of the information would be lost during the aggressive compression process of DIVE, and we would like to evaluate how well DIVE can preserve signals of interest using the number of dimensions which is several orders of magnitude less than that of SBOW.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SBOW Comparison", "sec_num": "4" }, { "text": "After trying many existing and newly proposed functions which score a pair of words to detect hypernym relation between them, we find that good scoring functions for SBOW are also good scoring functions for DIVE. 
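As a point of reference, the C\u2022\u2206S score of Equation (7) is straightforward to compute from two embeddings. The sketch below uses toy vectors rather than trained DIVE embeddings; since DIVE embeddings are non-negative, the L1 norm reduces to a plain sum.

```python
# Sketch of the C*DeltaS score in Equation (7): cosine similarity between
# the two embeddings multiplied by the difference of their L1 norms
# (a generality measure). Toy vectors, purely illustrative.
import numpy as np

def c_delta_s(w_q, w_p):
    """Score the claim 'w_p is a hypernym of w_q'."""
    cosine = w_q @ w_p / (np.linalg.norm(w_q) * np.linalg.norm(w_p))
    return cosine * (np.abs(w_p).sum() - np.abs(w_q).sum())

rodent = np.array([0.2, 1.5, 0.0, 0.3])
mammal = np.array([0.9, 2.0, 0.4, 0.8])
print(c_delta_s(rodent, mammal))   # positive: mammal is the more general word
print(c_delta_s(mammal, rodent))   # negative: the reversed direction is penalized
```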
Thus, in addition to C\u2022\u2206S used in Section 3.2, we also present 4 other best performing or representative scoring functions in the experiment (see our supplementary materials for more details):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "3 Note that higher AP is reported for some models in previous literature: 80 (Vilnis and McCallum, 2015) in LEDS, 74.2 (Athiwaratkun and Wilson, 2017) in LEDS, and 20.6 (Vuli\u0107 et al., 2016) in HyperLex. The difference could be caused by different train/test setup (e.g. How the hyperparameters are tuned, different training corpus, etc.). However, DIVE beats even these results.", "cite_spans": [ { "start": 77, "end": 104, "text": "(Vilnis and McCallum, 2015)", "ref_id": "BIBREF40" }, { "start": 119, "end": 150, "text": "(Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF0" }, { "start": 169, "end": 189, "text": "(Vuli\u0107 et al., 2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "4 https://github.com/benathi/word2gm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "\u2022 Inclusion: CDE (Clarke, 2009) ). CDE measures the degree of violation of equation 1. Equation (1) holds if and only if CDE is 1. Due to noise in SBOW, CDE is rarely exactly 1, but hypernym pairs usually have higher CDE. Despite its effectiveness, the good performance could mostly come from the magnitude of embeddings/features instead of inclusion properties among context distributions. To measure the inclusion properties between context distributions d p and d q (w p and w q after normalization, respectively), we use negative asymmetric L1 distance (\u2212AL 1 ) 5 as one of our scoring function, where", "cite_spans": [ { "start": 17, "end": 31, "text": "(Clarke, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "AL 1 = min a c w 0 \u2022 max(ad q [c] \u2212 d p [c], 0)+ max(d p [c] \u2212 ad q [c], 0),", "eq_num": "(8)" } ], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "and w 0 is a constant hyper-parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "\u2022 Generality: When the inclusion property in 2holds,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "||y|| 1 = i y[i] \u2265 i x[i] = ||x|| 1 . Thus, we use summation difference (||w p || 1 \u2212 ||w q || 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "as our score to measure generality signal (\u2206S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "\u2022 Similarity plus generality: Computing cosine similarity on skip-grams (i.e. 
Word2Vec + C in Table 1 ) is a popular way to measure the similarity of two words, so we multiply the Word2Vec similarity with summation difference of DIVE or SBOW (W\u2022\u2206S) as an alternative of C\u2022\u2206S.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Unsupervised Scoring Functions", "sec_num": "4.1" }, { "text": "\u2022 SBOW Freq: A word is represented by the frequency of its neighboring words. Applying PMI filter (set context feature to be 0 if its value is lower than log(k f )) to SBOW Freq only makes its performances closer to (but still much worse than) SBOW PPMI, so we omit the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "\u2022 SBOW PPMI: SBOW which uses PPMI of its neighboring words as the features (Bullinaria and Levy, 2007) . Applying PMI filter to SBOW PPMI usually makes the performances worse, especially when k f is large. Similarly, a constant log(k ) shifting to SBOW PPMI (i.e. max(P M I \u2212 log(k ), 0)) is not helpful, so we set both k f and k to be 1. Table 2 : AP@all (%) of 10 datasets. The box at lower right corner compares the micro average AP across all 10 datasets. Numbers in different rows come from different feature or embedding spaces. Numbers in different columns come from different datasets and unsupervised scoring functions. We also present the micro average AP across the first 4 datasets (BLESS, EVALution, Lenci/Benotto and Weeds), which are used as a benchmark for unsupervised hypernym detection (Shwartz et al., 2017) . IS refers to inclusion shift on the shifted PMI matrix. \u2022 SBOW PPMI w/ IS (with additional inclusion shift): The matrix reconstructed by DIVE when k I = 1. \u2022 SBOW all wiki: SBOW using PPMI features trained on the whole WaCkypedia.", "cite_spans": [ { "start": 91, "end": 102, "text": "Levy, 2007)", "ref_id": "BIBREF5" }, { "start": 805, "end": 827, "text": "(Shwartz et al., 2017)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 339, "end": 346, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "\u2022 DIVE without the PMI filter (DIVE w/o PMI)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "\u2022 NMF on shifted PMI: Non-negative matrix factorization (NMF) on the shifted PMI without inclusion shift for DIVE (DIVE w/o IS). This is the same as applying the non-negative constraint on the skip-gram model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "\u2022 K-means (Freq NMF): The method first uses Mini-batch k-means (Sculley, 2010) to cluster words in skip-gram embedding space into 100 topics, and hashes each frequency count in SBOW into the corresponding topic. If running k-means on skip-grams is viewed as an approximation of clustering the SBOW context vectors, the method can be viewed as a kind of NMF (Ding et al., 2005) .", "cite_spans": [ { "start": 63, "end": 78, "text": "(Sculley, 2010)", "ref_id": "BIBREF35" }, { "start": 357, "end": 376, "text": "(Ding et al., 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "DIVE performs non-negative matrix factorization on PMI matrix after applying inclusion shift and PMI filtering. 
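The PMI filtering step of Section 2.3 amounts to zeroing out low-PMI co-occurrence counts before factorization. A minimal sketch is shown below, assuming a dense word-by-context count matrix and using the paper's threshold hyper-parameter k_f; the function name and toy counts are illustrative.

```python
# Sketch of PMI filtering: for each target word w, zero out #(w, c)
# whenever PMI(w, c) < log(k_f), as described in Section 2.3.
import numpy as np

def pmi_filter(counts, k_f=1.0):
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total   # P(w)
    p_c = counts.sum(axis=0, keepdims=True) / total   # P(c)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    filtered = counts.copy()
    filtered[pmi < np.log(k_f)] = 0.0                 # drop low-PMI contexts
    return filtered

rng = np.random.default_rng(1)
counts = rng.poisson(1.0, size=(20, 20)).astype(float)
print(int((pmi_filter(counts, k_f=2.0) == 0).sum()), "entries removed or already zero")
```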
To demonstrate the effectiveness of each step, we show the performances of DIVE after removing PMI filtering (DIVE w/o PMI), removing inclusion shift (DIVE w/o IS), and removing matrix factorization (SBOW PPMI w/ IS, SBOW PPMI, and SBOW all wiki). The methods based on frequency matrix are also tested (SBOW Freq and Freq NMF).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "In Table 2 , we first confirm the finding of the previous review study of Shwartz et al. (2017) : there is no single hypernymy scoring function which always outperforms others. One of the main reasons is that different datasets collect negative samples differently. For example, if negative samples come from random word pairs (e.g. WordNet dataset), a symmetric similarity measure is a good scoring function. On the other hand, negative samples come from related or similar words in Hy-peNet, EVALution, Lenci/Benotto, and Weeds, so only estimating generality difference leads to the best (or close to the best) performance. The negative samples in many datasets are composed of both random samples and similar words (such as BLESS), so the combination of similarity and generality difference yields the most stable results.", "cite_spans": [ { "start": 74, "end": 95, "text": "Shwartz et al. (2017)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4.3" }, { "text": "DIVE performs similar or better on most of the scoring functions compared with SBOW consistently across all datasets in Table 2 and Table 3 , while using many fewer dimensions (see Table 4 ). This leads to 2-3 order of magnitude savings on both memory consumption and testing time. Furthermore, the low dimensional embedding makes the computational complexity independent of the vocabulary size, which drastically boosts the scalability of unsupervised hypernym detection especially with the help of GPU. It is surprising that we can achieve such aggressive compression while preserving the similarity, generality, and in-clusion signal in various datasets with different types of negative samples. Its results on C\u2022\u2206S and W\u2022\u2206S outperform SBOW Freq. Meanwhile, its results on AL 1 outperform SBOW PPMI. The fact that W\u2022\u2206S or C\u2022\u2206S usually outperform generality functions suggests that only memorizing general words is not sufficient. The best average performance on 4 and 10 datasets are both produced by W\u2022\u2206S on DIVE.", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 139, "text": "Table 2 and Table 3", "ref_id": "TABREF6" }, { "start": 181, "end": 188, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4.3" }, { "text": "SBOW PPMI improves the W\u2022\u2206S and C\u2022\u2206S from SBOW Freq but sacrifices AP on the inclusion functions. It generally hurts performance to directly include inclusion shift in PPMI (PPMI w/ IS) or compute SBOW PPMI on the whole WaCkypedia (all wiki) instead of the first 51.2 million tokens. The similar trend can also be seen in Table 3. 
Note that AL 1 completely fails in the Hy-perLex dataset using SBOW PPMI, which suggests that PPMI might not necessarily preserve the distributional inclusion property, even though it can have good performance on scoring functions combining similarity and generality signals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4.3" }, { "text": "Removing the PMI filter from DIVE slightly drops the overall precision while removing inclusion shift on shifted PMI (w/o IS) leads to poor performances. K-means (Freq NMF) produces similar AP compared with SBOW Freq but has worse AL 1 scores. Its best AP scores on different datasets are also significantly worse than the best AP of DIVE. This means that only making Word2Vec (skip-grams) non-negative or naively accumulating topic distribution in contexts cannot lead to satisfactory embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4.3" }, { "text": "Most previous unsupervised approaches focus on designing better hypernymy scoring functions for sparse bag of word (SBOW) features. They are well summarized in the recent study (Shwartz et al., 2017) . Shwartz et al. (2017) also evaluate the influence of different contexts, such as changing the window size of contexts or incorporating dependency parsing information, but neglect scalability issues inherent to SBOW methods.", "cite_spans": [ { "start": 177, "end": 199, "text": "(Shwartz et al., 2017)", "ref_id": "BIBREF37" }, { "start": 202, "end": 223, "text": "Shwartz et al. (2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "A notable exception is the Gaussian embedding model (Vilnis and McCallum, 2015) , which represents each word as a Gaussian distribution. However, since a Gaussian distribution is normalized, it is difficult to retain frequency information during the embedding process, and experiments on Hy-perLex (Vuli\u0107 et al., 2016) demonstrate that a sim-ple baseline only relying on word frequency can achieve good results. Follow-up work models contexts by a mixture of Gaussians (Athiwaratkun and Wilson, 2017) relaxing the unimodality assumption but achieves little improvement on hypernym detection tasks. Kiela et al. (2015) show that images retrieved by a search engine can be a useful source of information to determine the generality of lexicons, but the resources (e.g. pre-trained image classifier for the words of interest) might not be available in many domains.", "cite_spans": [ { "start": 52, "end": 79, "text": "(Vilnis and McCallum, 2015)", "ref_id": "BIBREF40" }, { "start": 298, "end": 318, "text": "(Vuli\u0107 et al., 2016)", "ref_id": "BIBREF41" }, { "start": 469, "end": 500, "text": "(Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF0" }, { "start": 598, "end": 617, "text": "Kiela et al. (2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Order embedding (Vendrov et al., 2016 ) is a supervised approach to encode many annotated hypernym pairs (e.g. all of the whole Word-Net (Miller, 1995) ) into a compact embedding space, where the embedding of a hypernym should be smaller than the embedding of its hyponym in every dimension. Our method learns embedding from raw text, where a hypernym embedding should be larger than the embedding of its hyponym in every dimension. 
Thus, DIVE can be viewed as an unsupervised and reversed form of order embedding.", "cite_spans": [ { "start": 16, "end": 37, "text": "(Vendrov et al., 2016", "ref_id": "BIBREF39" }, { "start": 128, "end": 151, "text": "Word-Net (Miller, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Non-negative matrix factorization (NMF) has a long history in NLP, for example in the construction of topic models (Pauca et al., 2004) . Non-negative sparse embedding (NNSE) (Murphy et al., 2012) and Faruqui et al. (2015) indicate that non-negativity can make embeddings more interpretable and improve word similarity evaluations. The sparse NMF is also shown to be effective in cross-lingual lexical entailment tasks but does not necessarily improve monolingual hypernymy detection (Vyas and Carpuat, 2016) . In our study, we show that performing NMF on PMI matrix with inclusion shift can preserve DIH in SBOW, and the comprehensive experimental analysis demonstrates its state-of-the-art performances on unsupervised hypernymy detection.", "cite_spans": [ { "start": 115, "end": 135, "text": "(Pauca et al., 2004)", "ref_id": "BIBREF29" }, { "start": 175, "end": 196, "text": "(Murphy et al., 2012)", "ref_id": "BIBREF26" }, { "start": 201, "end": 222, "text": "Faruqui et al. (2015)", "ref_id": "BIBREF10" }, { "start": 484, "end": 508, "text": "(Vyas and Carpuat, 2016)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Although large SBOW vectors consistently show the best all-around performance in unsupervised hypernym detection, it is challenging to compress them into a compact representation which preserves inclusion, generality, and similarity signals for this task. Our experiments suggest that the existing approaches and simple baselines such as Gaussian embedding, accumulating K-mean clusters, and non-negative skip-grams do not lead to satisfactory performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "To achieve this goal, we propose an interpretable and scalable embedding method called distributional inclusion vector embedding (DIVE) by performing non-negative matrix factorization (NMF) on a weighted PMI matrix. We demonstrate that scoring functions which measure inclusion and generality properties in SBOW can also be applied to DIVE to detect hypernymy, and DIVE performs the best on average, slightly better than SBOW while using many fewer dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Our experiments also indicate that unsupervised scoring functions which combine similarity and generality measurements work the best in general, but no one scoring function dominates across all datasets. 
A combination of unsupervised DIVE with the proposed scoring functions produces new state-of-the-art performances on many datasets in the unsupervised regime.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-13-2-0020, in part by Defense Advanced Research Agency (DARPA) contract number HR0011-15-2-0036, in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053 and in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government, or the other sponsors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": "7" }, { "text": "The meaning and efficient implementation of AL1 are illustrated in our supplementary materials", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multimodal word distributions", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun and Andrew Gordon Wilson. 2017. Multimodal word distributions. In ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Entailment above the word level in distributional semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Ngoc-Quynh", "middle": [], "last": "Do", "suffix": "" }, { "first": "Chung-Chieh", "middle": [], "last": "Shan", "suffix": "" } ], "year": 2012, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In EACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The WaCky wide web: a collection of very large linguistically processed 493 web-crawled corpora", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Language resources and evaluation", "volume": "43", "issue": "3", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed 493 web-crawled corpora. 
Language resources and evaluation 43(3):209-226.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "How we BLESSed distributional semantic evaluation", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2011, "venue": "Workshop on GEometrical Models of Natural Language Semantics (GEMS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni and Alessandro Lenci. 2011. How we BLESSed distributional semantic evaluation. In Workshop on GEometrical Models of Natural Lan- guage Semantics (GEMS).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Distributional models for semantic relations: A study on hyponymy and antonymy", "authors": [ { "first": "Giulia", "middle": [], "last": "Benotto", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giulia Benotto. 2015. Distributional models for semantic relations: A study on hyponymy and antonymy. PhD Thesis, University of Pisa .", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior research methods", "authors": [ { "first": "A", "middle": [], "last": "John", "suffix": "" }, { "first": "Joseph P", "middle": [], "last": "Bullinaria", "suffix": "" }, { "first": "", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2007, "venue": "", "volume": "39", "issue": "", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior re- search methods 39(3):510-526.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning concept hierarchies from text corpora using formal concept analysis", "authors": [ { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Hotho", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Staab", "suffix": "" } ], "year": 2005, "venue": "J. Artif. Intell. Res.(JAIR)", "volume": "24", "issue": "1", "pages": "305--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Cimiano, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text cor- pora using formal concept analysis. J. Artif. Intell. Res.(JAIR) 24(1):305-339.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Context-theoretic semantics for natural language: an overview", "authors": [ { "first": "Daoud", "middle": [], "last": "Clarke", "suffix": "" } ], "year": 2009, "venue": "workshop on geometrical models of natural language semantics", "volume": "", "issue": "", "pages": "112--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daoud Clarke. 2009. Context-theoretic semantics for natural language: an overview. In workshop on geometrical models of natural language semantics. 
pages 112-119.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lifted rule injection for relation embeddings", "authors": [ { "first": "Thomas", "middle": [], "last": "Demeester", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Demeester, Tim Rockt\u00e4schel, and Sebastian Riedel. 2016. Lifted rule injection for relation em- beddings. In EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the equivalence of nonnegative matrix factorization and spectral clustering", "authors": [ { "first": "Chris", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Horst D", "middle": [], "last": "Simon", "suffix": "" } ], "year": 2005, "venue": "ICDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Ding, Xiaofeng He, and Horst D Simon. 2005. On the equivalence of nonnegative matrix factoriza- tion and spectral clustering. In ICDM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sparse overcomplete word vector representations. In ACL", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcomplete word vector representations. In ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tests for rank correlation coefficients. i", "authors": [ { "first": "C", "middle": [], "last": "Edgar", "suffix": "" }, { "first": "", "middle": [], "last": "Fieller", "suffix": "" }, { "first": "O", "middle": [], "last": "Herman", "suffix": "" }, { "first": "Egon S", "middle": [], "last": "Hartley", "suffix": "" }, { "first": "", "middle": [], "last": "Pearson", "suffix": "" } ], "year": 1957, "venue": "Biometrika", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edgar C Fieller, Herman O Hartley, and Egon S Pear- son. 1957. Tests for rank correlation coefficients. i. Biometrika .", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The distributional inclusion hypotheses and lexical entailment", "authors": [ { "first": "Maayan", "middle": [], "last": "Geffet", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maayan Geffet and Ido Dagan. 2005. The distribu- tional inclusion hypotheses and lexical entailment. 
In ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dual tensor model for detecting asymmetric lexicosemantic relations", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Simone Paolo Ponzetto. 2017. Dual tensor model for detecting asymmetric lexico- semantic relations. In EMNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Question classification using head words and their hypernyms", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Thint", "suffix": "" }, { "first": "Zengchang", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Marcus Thint, and Zengchang Qin. 2008. Question classification using head words and their hypernyms. In EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploiting image generality for lexical entailment detection", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Rimell", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela, Laura Rimell, Ivan Vulic, and Stephen Clark. 2015. Exploiting image generality for lexical entailment detection. In ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ADAM: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2015. ADAM: A method for stochastic optimization. In ICLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Directional distributional similarity for lexical inference", "authors": [ { "first": "Lili", "middle": [], "last": "Kotlerman", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Maayan", "middle": [], "last": "Zhitomirsky-Geffet", "suffix": "" } ], "year": 2010, "venue": "Natural Language Engineering", "volume": "16", "issue": "4", "pages": "359--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distribu- tional similarity for lexical inference. 
Natural Lan- guage Engineering 16(4):359-389.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deriving boolean structures from distributional vectors", "authors": [ { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Paperno", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "TACL", "volume": "3", "issue": "", "pages": "375--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Germ\u00e1n Kruszewski, Denis Paperno, and Marco Baroni. 2015. Deriving boolean structures from distributional vectors. TACL 3:375-388. https: //tacl2013.cs.columbia.edu/ojs/ index.php/tacl/article/view/616.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Algorithms for non-negative matrix factorization", "authors": [ { "first": "D", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Sebastian Seung", "suffix": "" } ], "year": 2001, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel D Lee and H Sebastian Seung. 2001. Al- gorithms for non-negative matrix factorization. In NIPS.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Focused entailment graphs for open IE propositions", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open IE propositions. In CoNLL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In NIPS.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015a. Improving distributional similarity with lessons learned from word embeddings. 
Transactions of the Association for Computational Linguistics 3:211- 225.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Do supervised distributional methods really learn lexical inference relations", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Remus", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "NAACL-HTL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015b. Do supervised distributional meth- ods really learn lexical inference relations? In NAACL-HTL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39-41.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning effective and interpretable semantic models using non-negative sparse embedding", "authors": [ { "first": "Brian", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Talukdar", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "1933--1950", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Learning effective and interpretable seman- tic models using non-negative sparse embedding. COLING pages 1933-1950.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Hierarchical embeddings for hypernymy detection and directionality", "authors": [ { "first": "Maximilian", "middle": [], "last": "Kim Anh Nguyen", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "K\u00f6per", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Schulte Im Walde", "suffix": "" }, { "first": "", "middle": [], "last": "Vu", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim Anh Nguyen, Maximilian K\u00f6per, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and direction- ality. 
In EMNLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Poincar\u00e9 embeddings for learning hierarchical representations", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2017, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel and Douwe Kiela. 2017. Poincar\u00e9 embeddings for learning hierarchical representa- tions. In NIPS.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Text mining using nonnegative matrix factorizations", "authors": [ { "first": "V", "middle": [], "last": "", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Pauca", "suffix": "" }, { "first": "Farial", "middle": [], "last": "Shahnaz", "suffix": "" }, { "first": "W", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Berry", "suffix": "" }, { "first": "", "middle": [], "last": "Plemmons", "suffix": "" } ], "year": 2004, "venue": "ICDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Paul Pauca, Farial Shahnaz, Michael W Berry, and Robert J. Plemmons. 2004. Text mining using non- negative matrix factorizations. In ICDM.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Minimization of unsmooth functionals", "authors": [ { "first": "Boris", "middle": [], "last": "Teodorovich", "suffix": "" }, { "first": "Polyak", "middle": [], "last": "", "suffix": "" } ], "year": 1969, "venue": "USSR Computational Mathematics and Mathematical Physics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boris Teodorovich Polyak. 1969. Minimization of un- smooth functionals. USSR Computational Mathe- matics and Mathematical Physics .", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution", "authors": [ { "first": "Paolo", "middle": [], "last": "Simone", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ponzetto", "suffix": "" }, { "first": "", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2006, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Relations such as hypernymy: Identifying and exploiting hearst patterns in distributional vectors for lexical entailment", "authors": [ { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Roller and Katrin Erk. 2016. Relations such as hypernymy: Identifying and exploiting hearst pat- terns in distributional vectors for lexical entailment. In EMNLP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Recognizing textual entailment. 
Multilingual Natural Language Applications: From Theory to Practice", "authors": [ { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Vydiswaran", "suffix": "" }, { "first": "", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Sammons, V Vydiswaran, and Dan Roth. 2011. Recognizing textual entailment. Multilingual Natu- ral Language Applications: From Theory to Prac- tice. Prentice Hall, Jun .", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "EVALution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models", "authors": [ { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" }, { "first": "Frances", "middle": [], "last": "Yung", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Workshop on Linked Data in Linguistics (LDL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrico Santus, Frances Yung, Alessandro Lenci, and Chu-Ren Huang. 2015. EVALution 1.0: an evolving semantic dataset for training and evaluation of dis- tributional semantic models. In Workshop on Linked Data in Linguistics (LDL).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Web-scale k-means clustering", "authors": [ { "first": "David", "middle": [], "last": "Sculley", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Sculley. 2010. Web-scale k-means clustering. In WWW.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Improving hypernymy detection with an integrated path-based and distributional method", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In ACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" }, { "first": "Dominik", "middle": [], "last": "Schlechtweg", "suffix": "" } ], "year": 2017, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. 
In EACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Experiments with three approaches to recognizing lexical entailment", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" }, { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2015, "venue": "Natural Language Engineering", "volume": "21", "issue": "3", "pages": "437--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney and Saif M Mohammad. 2015. Ex- periments with three approaches to recognizing lex- ical entailment. Natural Language Engineering 21(3):437-476.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Order-embeddings of images and language", "authors": [ { "first": "Ivan", "middle": [], "last": "Vendrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" } ], "year": 2016, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In ICLR.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Word representations via gaussian embedding", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word rep- resentations via gaussian embedding. In ICLR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Hyperlex: A largescale evaluation of graded lexical entailment", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.02117" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2016. Hyperlex: A large- scale evaluation of graded lexical entailment. arXiv preprint arXiv:1608.02117 .", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Specialising word vectors for lexical entailment", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.06371" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Nikola Mrk\u0161i\u0107. 2017. Specialising word vectors for lexical entailment. 
arXiv preprint arXiv:1710.06371 .", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Sparse bilingual word representations for cross-lingual lexical entailment", "authors": [ { "first": "Yogarshi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yogarshi Vyas and Marine Carpuat. 2016. Sparse bilingual word representations for cross-lingual lex- ical entailment. In HLT-NAACL.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning to distinguish hypernyms and co-hyponyms", "authors": [ { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "Daoud", "middle": [], "last": "Clarke", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Reffin", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2014, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hyper- nyms and co-hyponyms. In COLING.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A general framework for distributional similarity", "authors": [ { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2003, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Weeds and David Weir. 2003. A general frame- work for distributional similarity. In EMNLP.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Recall, precision and average precision. Department of Statistics and Actuarial Science", "authors": [ { "first": "Mu", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2004, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mu Zhu. 2004. Recall, precision and average preci- sion. Department of Statistics and Actuarial Sci- ence, University of Waterloo, Waterloo 2:30.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": ", c) is the number of co-occurring word pairs in the corpus, P(w) = #(w)/|D|, where #(w) = \u2211_{c\u2208V} #(w, c) is the frequency of the word w times the window size |W|, and similarly for P(c). For example, M[w, c] could be set as positive PMI (PPMI), max(PMI(w, c), 0), or shifted PMI, PMI(w, c) \u2212 log(k'), which (Levy and Goldberg, 2014) demonstrate is connected to skip-grams with negative sampling (SGNS). Intuitively, since M[w, c] \u2248 w^T c, larger embedding values of w at every dimension seem to imply larger w^T c, larger M[w, c], larger PMI(w, c)", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "The embedding of the words rodent and mammal trained by the co-occurrence statistics of context words using DIVE. The index of dimensions is sorted by the embedding values of mammal and values smaller than 0.1 are neglected. 
The top 5 words (sorted by their embedding values in that dimension) tend to be more general or more representative of the topic than the top 51-105 words.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "Specifically, w[c] = max(log( P(w,c) / (P(w) * P(c)) * Z / #(w) ), 0).", "type_str": "figure" }, "TABREF0": { "type_str": "table", "content": "
Output: Embedding of every word (e.g
id | Top 1-5 words | Top 51-55 words
1 | find, specie, species, animal, bird | hunt, terrestrial, lion, planet, shark
2 | system, blood, vessel, artery, intestine | function, red, urinary, urine, tumor
3 | head, leg, long, foot, hand | shoe, pack, food, short, right
4 | may, cell, protein, gene, receptor | neuron, eukaryotic, immune, kinase, generally
5 | sea, lake, river, area, water | terrain, southern, mediterranean, highland, shallow
6 | cause, disease, effect, infection, increase | stress, problem, natural, earth, hazard
7 | female, age, woman, male, household | spread, friend, son, city, infant
8 | food, fruit, vegetable, meat, potato | fresh, flour, butter, leave, beverage
9 | element, gas, atom, rock, carbon | light, dense, radioactive, composition, deposit
10 | number, million, total, population, estimate | increase, less, capita, reach, male
11 | industry, export, industrial, economy, company | centre, chemical, construction, fish, small
", "num": null, "html": null, "text": "" }, "TABREF2": { "type_str": "table", "content": "", "num": null, "html": null, "text": "Comparison with other unsupervised embedding methods. The scores are AP@all (%) for the first 10 datasets and Spearman \u03c1 (%) for HyperLex. Avg (10 datasets) shows the micro-average AP of all datasets except HyperLex. Word2Vec+C scores word pairs using cosine similarity on skip-grams. GE+C and GE+KL compute cosine similarity and negative KL divergence on Gaussian embedding, respectively." }, "TABREF3": { "type_str": "table", "content": "
||min(wp, wq)||_1 / ||wq||_1
", "num": null, "html": null, "text": "computes the summation of element-wise minimum over the magnitude of hyponym embedding (i.e." }, "TABREF4": { "type_str": "table", "content": "
AP@all (%) CDE AL 1 SBOW Freq 6.3 7.3 PPMI 13.6 5.1 PPMI w/ IS 6.2 5.0 All wiki 12.1 5.2BLESS 5.6 5.6 5.5 6.911.0 17.2 12.4 12.55.9 15.3 5.8 13.4EVALution 35.3 32.6 36.2 33.0 30.4 27.7 34.1 31.9 36.0 27.5 36.3 32.9 28.5 27.1 30.3 29.936.3 34.3 36.4 31.0Lenci/Benotto 51.8 51.8 47.6 51.0 47.2 39.7 50.8 51.1 52.0 43.1 50.9 51.9 47.1 39.9 48.5 48.751.1 52.0 50.7 51.1
DIVEFull w/o PMI9.3 7.87.6 6.96.0 5.618.6 16.716.3 7.130.0 27.5 34.9 32.8 32.2 35.732.3 32.533.0 35.446.7 43.2 51.3 47.6 44.9 50.951.5 51.650.4 49.7
w/o IS Kmean (Freq NMF)9.0 6.56.2 7.37.3 5.66.2 10.97.3 5.824.3 25.0 22.9 33.7 27.2 36.223.5 33.023.9 36.238.8 38.1 38.2 49.6 42.5 51.038.2 51.838.4 51.2
Weeds 69.5 58.0 68.8 CDE AL 1 SBOW AP@all (%) Freq PPMI 61.0 50.3 70.3 PPMI w/ IS 67.6 52.2 69.4 All wiki 61.3 48.6 70.0 DIVE Full 59.2 55.0 69.7 w/o PMI 60.4 56.4 69.368.2 69.2 68.7 68.5 68.6 68.668.4 69.3 67.7 70.4 65.5 64.8Micro Average (4 datasets) 23.1 21.8 22.9 25.0 24.7 17.9 22.3 28.1 25.8 23.2 18.2 22.9 23.4 17.7 21.7 24.6 22.1 19.8 22.8 28.9 22.2 21.0 22.7 28.023.0 27.8 22.9 25.8 27.6 23.1Medical 19.4 19.2 14.1 23.4 8.7 13.2 22.8 10.6 13.7 22.3 8.9 12.2 11.7 9.3 13.7 10.7 8.4 13.318.4 20.1 18.6 17.6 21.4 19.815.3 24.4 17.0 21.1 19.2 16.2
w/o IS49.2 47.3 45.145.144.918.9 17.3 17.216.817.510.99.87.47.67.7
Kmean (Freq NMF)69.4 51.1 68.868.268.922.5 19.3 22.924.923.012.6 10.9 14.018.114.6
LEDS 82.7 70.4 70.7 CDE AL 1 SBOW AP@all (%) Freq PPMI 84.4 50.2 72.2 PPMI w/ IS 81.6 54.5 71.0 All wiki 83.1 49.7 67.983.3 86.5 84.7 82.973.3 84.5 73.1 81.4TM14 55.6 53.2 54.9 56.2 52.3 54.4 57.1 51.5 55.1 54.7 50.5 52.655.7 57.0 56.2 55.155.0 57.6 55.4 54.9Kotlerman 2010 37.0 35.9 40.5 34.5 37.0 39.1 30.9 33.0 37.4 31.0 34.4 37.8 38.5 31.2 32.2 35.435.4 36.3 35.9 35.3
DIVEFull w/o PMI w/o IS83.3 74.7 72.7 79.3 74.8 72.0 64.6 55.4 43.286.4 85.5 44.383.5 78.7 46.155.3 52.6 55.2 54.7 53.9 54.9 51.9 51.2 50.457.3 56.5 52.057.2 55.4 51.835.3 31.6 33.6 35.4 38.9 33.8 32.9 33.4 28.137.4 37.8 30.236.6 36.7 29.7
Kmean (Freq NMF)80.3 64.5 70.783.073.054.8 49.0 54.855.654.832.1 37.0 34.536.934.8
AP@all (%)HypeNetWordNetMicro Average (10 datasets)
SBOWFreq PPMI PPMI w/ IS 38.5 26.7 47.2 37.5 28.3 46.9 23.8 24.0 47.0 All wiki 23.0 24.5 40.535.9 32.5 35.5 30.543.4 33.1 37.6 29.756.6 55.2 55.5 57.7 53.9 55.6 57.0 54.1 55.7 57.4 53.1 56.056.2 56.8 56.6 56.455.6 57.2 55.7 57.331.1 28.2 31.5 30.1 23.0 31.1 31.8 24.1 31.5 29.0 23.1 29.231.6 32.9 32.1 30.231.2 33.5 30.3 31.1
DIVE Kmean (Freq NMF) Full w/o PMI w/o IS25.3 24.2 49.3 31.3 27.0 46.9 20.1 21.7 20.3 33.7 22.0 46.033.6 33.8 21.8 35.632.0 34.0 22.0 45.260.2 58.9 58.4 59.2 60.1 58.2 61.0 56.3 51.3 58.4 60.2 57.761.1 61.1 55.7 60.160.9 59.1 54.7 57.927.6 25.3 32.1 28.5 26.7 31.5 22.3 20.7 19.1 29.1 24.7 31.534.1 33.4 19.6 31.832.7 30.1 19.9 31.5
", "num": null, "html": null, "text": "\u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S CDE AL 1 \u2206S W\u2022\u2206S C\u2022\u2206S" }, "TABREF6": { "type_str": "table", "content": "
SBOW Freq SBOW PPMI DIVE
5799380820
", "num": null, "html": null, "text": "Spearman \u03c1 (%) in HyperLex." }, "TABREF7": { "type_str": "table", "content": "", "num": null, "html": null, "text": "The average number of non-zero dimensions across all testing words in 10 datasets." } } } }