{ "paper_id": "N18-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:49:36.886699Z" }, "title": "Can Network Embedding of Distributional Thesaurus be Combined with Word Vectors for Better Representation?", "authors": [ { "first": "Abhik", "middle": [], "last": "Jana", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur Kharagpur", "location": { "country": "India" } }, "email": "abhik.jana@iitkgp.ac.in" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur Kharagpur", "location": { "country": "India" } }, "email": "pawang@cse.iitkgp.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Distributed representations of words learned from text have proved to be successful in various natural language processing tasks in recent times. While some methods represent words as vectors computed from text using predictive model (Word2vec) or dense count based model (GloVe), others attempt to represent these in a distributional thesaurus network structure where the neighborhood of a word is a set of words having adequate context overlap. Being motivated by recent surge of research in network embedding techniques (DeepWalk, LINE, node2vec etc.), we turn a distributional thesaurus network into dense word vectors and investigate the usefulness of distributional thesaurus embedding in improving overall word representation. This is the first attempt where we show that combining the proposed word representation obtained by distributional thesaurus embedding with the state-of-the-art word representations helps in improving the performance by a significant margin when evaluated against NLP tasks like word similarity and relatedness, synonym detection, analogy detection. Additionally, we show that even without using any handcrafted lexical resources we can come up with representations having comparable performance in the word similarity and relatedness tasks compared to the representations where a lexical resource has been used.", "pdf_parse": { "paper_id": "N18-1043", "_pdf_hash": "", "abstract": [ { "text": "Distributed representations of words learned from text have proved to be successful in various natural language processing tasks in recent times. While some methods represent words as vectors computed from text using predictive model (Word2vec) or dense count based model (GloVe), others attempt to represent these in a distributional thesaurus network structure where the neighborhood of a word is a set of words having adequate context overlap. Being motivated by recent surge of research in network embedding techniques (DeepWalk, LINE, node2vec etc.), we turn a distributional thesaurus network into dense word vectors and investigate the usefulness of distributional thesaurus embedding in improving overall word representation. This is the first attempt where we show that combining the proposed word representation obtained by distributional thesaurus embedding with the state-of-the-art word representations helps in improving the performance by a significant margin when evaluated against NLP tasks like word similarity and relatedness, synonym detection, analogy detection. 
Additionally, we show that even without using any handcrafted lexical resources we can come up with representations having comparable performance in the word similarity and relatedness tasks compared to the representations where a lexical resource has been used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language understanding has always been a primary challenge in natural language processing (NLP) domain. Learning word representations is one of the basic and primary steps in understanding text and nowadays there are predominantly two views of learning word representations. In one realm of representation, words are vectors of distributions obtained from analyzing their contexts in the text and two words are considered meaningfully similar if the vectors of those words are close in the euclidean space. In recent times, attempts have been made for dense representation of words, be it using predictive model like Word2vec or count-based model like GloVe (Pennington et al., 2014) which are computationally efficient as well. Another stream of representation talks about network like structure where two words are considered neighbors if they both occur in the same context above a certain number of times. The words are finally represented using these neighbors. Distributional Thesaurus is one such instance of this type, which gets automatically produced from a text corpus and identifies words that occur in similar contexts; the notion of which was used in early work about distributional semantics (Grefenstette, 2012; Lin, 1998; Curran and Moens, 2002) . One such representation is JoBimText proposed by that contains, for each word, a list of words that are similar with respect to their bigram distribution, thus producing a network representation. Later, introduced a highly scalable approach for computing this network. We mention this representation as a DT network throughout this article. With the emergence of recent trend of embedding large networks into dense low-dimensional vector space efficiently (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016) which are focused on capturing different properties of the network like neighborhood structure, community structure, etc., we explore representing DT network in a dense vector space and evaluate its useful application in various NLP tasks.", "cite_spans": [ { "start": 666, "end": 691, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF35" }, { "start": 1215, "end": 1235, "text": "(Grefenstette, 2012;", "ref_id": "BIBREF20" }, { "start": 1236, "end": 1246, "text": "Lin, 1998;", "ref_id": "BIBREF31" }, { "start": 1247, "end": 1270, "text": "Curran and Moens, 2002)", "ref_id": "BIBREF9" }, { "start": 1729, "end": 1751, "text": "(Perozzi et al., 2014;", "ref_id": "BIBREF36" }, { "start": 1752, "end": 1770, "text": "Tang et al., 2015;", "ref_id": "BIBREF44" }, { "start": 1771, "end": 1797, "text": "Grover and Leskovec, 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been attempt (Ferret, 2017) to turn distributional thesauri into word vectors for synonym extraction and expansion but the full utilization of DT embedding has not yet been explored. 
In this paper, as a main contribution, we investigate the best way of turning a Distributional Thesaurus (DT) network into word embeddings by applying efficient network embedding methods and analyze how these embeddings generated from DT network can improve the representations generated from prediction-based model like Word2vec or dense count based semantic model like GloVe. We experiment with several combination techniques and find that DT network embedding can be combined with Word2vec and GloVe to outperform the performances when used independently. Further, we show that we can use DT network embedding as a proxy of WordNet embedding in order to improve the already existing state-of-the-art word representations as both of them achieve comparable performance as far as word similarity and word relatedness tasks are concerned. Considering the fact that the vocabulary size of WordNet is small and preparing Word-Net like lexical resources needs huge human engagement, it would be useful to have a representation which can be generated automatically from corpus. We also attempt to combine both Word-Net and DT embeddings to improve the existing word representations and find that DT embedding still has some extra information to bring in leading to better performance when compared to combination of only WordNet embedding and state-of-theart word embeddings. While most of our experiments are focused on word similarity and relatedness tasks, we show the usefulness of DT embeddings on synonym detection and analogy detection as well. In both the tasks, combined representation of GloVe and DT embeddings shows promising performance gain over state-of-the-art embeddings.", "cite_spans": [ { "start": 23, "end": 37, "text": "(Ferret, 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The core idea behind the construction of distributional thesauri is the distributional hypothesis (Firth, 1957) : \"You should know a word by the company it keeps\". The semantic neighbors of a target word are words whose contexts overlap with the context of a target word above a certain threshold. Some of the initial attempts for preparing distributional thesaurus are made by Lin (1998) , Curran and Moens (2002) , Grefenstette (2012) . The semantic relation between a target word and its neighbors can be of different types, e.g., synonymy, hypernymy, hyponymy or other relations (Adam et al., 2013; Budanitsky and Hirst, 2006 ) which prove to be very useful in different natural language tasks. Even though computation of sparse count based models used to be inefficient, in this era of high speed processors and storage, attempts are being made to streamline the computation with ease. One such effort is made by Kilgarriff et al. (2004) where they propose Sketch Engine, a corpus tool which takes as input a corpus of any language and corresponding grammar patterns, and generates word sketches for the words of that language and a thesaurus. Recently, introduce a new highly scalable approach for computing quality distributional thesauri by incorporating pruning techniques and using a distributed computation framework. 
They prepare distributional thesaurus from Google book corpus in a network structure and make it publicly available.", "cite_spans": [ { "start": 98, "end": 111, "text": "(Firth, 1957)", "ref_id": "BIBREF14" }, { "start": 378, "end": 388, "text": "Lin (1998)", "ref_id": "BIBREF31" }, { "start": 391, "end": 414, "text": "Curran and Moens (2002)", "ref_id": "BIBREF9" }, { "start": 417, "end": 436, "text": "Grefenstette (2012)", "ref_id": "BIBREF20" }, { "start": 583, "end": 602, "text": "(Adam et al., 2013;", "ref_id": "BIBREF0" }, { "start": 603, "end": 629, "text": "Budanitsky and Hirst, 2006", "ref_id": "BIBREF8" }, { "start": 918, "end": 942, "text": "Kilgarriff et al. (2004)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In another stream of literature, word embeddings represent words as dense unit vectors of real numbers, where vectors that are close together in euclidean space are considered to be semantically related. In this genre of representation, one of the captivating attempt is made by , where they propose Word2vec, basically a set of two predictive models for neural embedding whereas Pennington et al. (2014) propose GloVe, which utilizes a dense count based model to come up with word embeddings that approximate this. Comparisons have also been made between count-based and prediction-based distributional models upon various tasks like relatedness, analogy, concept categorization etc., where researchers show that prediction-based word embeddings outperform sparse count-based methods used for computing distributional semantic models. In other study, Levy and Goldberg (2014) show that dense count-based methods, using PPMI weighted co-occurrences and SVD, approximates neural word embeddings. Later, Levy et al. (2015) show the impact of various parameters and the best performing parameters for these methods. All these approaches are completely text based; no external knowledge source has been used.", "cite_spans": [ { "start": 380, "end": 404, "text": "Pennington et al. (2014)", "ref_id": "BIBREF35" }, { "start": 852, "end": 876, "text": "Levy and Goldberg (2014)", "ref_id": "BIBREF29" }, { "start": 1002, "end": 1020, "text": "Levy et al. (2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recently, a new direction of investigation has been opened up where researchers are trying to combine knowledge extracted from knowledge bases, images with distributed word representations prepared from text with the expectation of getting better representation. Some use Knowledge bases like WordNet (Miller, 1995) , FreeBase (Bollacker et al., 2008) , PPDB (Ganitkevitch et al., 2013), ConceptNet (Speer et al., 2017) , whereas others use ImageNet (Frome et al., 2013; Kiela and Bottou, 2014; for capturing visual representation of lexical items. There are various ways of combining multiple representations. Some of the works extract lists of relations from knowledge bases and use those to either modify the learning algorithms (Halawi et al., 2012; Wang et al., 2014; Tian et al., 2016; Rastogi et al., 2015) or postprocess pre-trained word representations (Faruqui et al., 2015) . Another line of literature prepares dense vector representation from each of the modes (text, knowledge bases, visual etc.) 
and tries to combine the vectors using various methods like concatenation, centroid computation, principal component analysis (Jolliffe, 1986) , canonical correlation analysis (Faruqui and Dyer, 2014) etc. One such recent attempt is made by Goikoetxea et al. (2016) where they prepare vector representation from WordNet following the method proposed by Goikoetxea et al. (2015) , which combines random walks over knowledge bases and neural network language model, and tries to improve the vector representation constructed from text using this. As in lexical knowledge bases, the number of lexical items involved is much less than the raw text and preparing such resources is a cumbersome task, our goal is to see whether we can use DT network instead of some knowledge bases like WordNet and achieve comparable performance on NLP tasks like word similarity and word relatedness. In order to prepare vector representation from DT network, we attempt to use various network embeddings like DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015) , struc2vec (Ribeiro et al., 2017), node2vec (Grover and Leskovec, 2016) etc. Some of those try to capture the neighbourhood or community structure in the network while others attempt to capture structural similarity between nodes, second order proximity, etc.", "cite_spans": [ { "start": 306, "end": 320, "text": "(Miller, 1995)", "ref_id": "BIBREF33" }, { "start": 332, "end": 356, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF5" }, { "start": 404, "end": 424, "text": "(Speer et al., 2017)", "ref_id": "BIBREF43" }, { "start": 455, "end": 475, "text": "(Frome et al., 2013;", "ref_id": "BIBREF15" }, { "start": 476, "end": 499, "text": "Kiela and Bottou, 2014;", "ref_id": "BIBREF26" }, { "start": 737, "end": 758, "text": "(Halawi et al., 2012;", "ref_id": "BIBREF22" }, { "start": 759, "end": 777, "text": "Wang et al., 2014;", "ref_id": "BIBREF49" }, { "start": 778, "end": 796, "text": "Tian et al., 2016;", "ref_id": "BIBREF46" }, { "start": 797, "end": 818, "text": "Rastogi et al., 2015)", "ref_id": "BIBREF39" }, { "start": 867, "end": 889, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF10" }, { "start": 1142, "end": 1158, "text": "(Jolliffe, 1986)", "ref_id": "BIBREF25" }, { "start": 1192, "end": 1216, "text": "(Faruqui and Dyer, 2014)", "ref_id": "BIBREF11" }, { "start": 1257, "end": 1281, "text": "Goikoetxea et al. (2016)", "ref_id": "BIBREF17" }, { "start": 1369, "end": 1393, "text": "Goikoetxea et al. (2015)", "ref_id": "BIBREF18" }, { "start": 2043, "end": 2062, "text": "(Tang et al., 2015)", "ref_id": "BIBREF44" }, { "start": 2108, "end": 2135, "text": "(Grover and Leskovec, 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our aim is to analyze the effect of integrating the knowledge of Distributional Thesaurus network with the state-of-the-art word representation models to prepare a better word representation. We first prepare vector representations from Distribu-tional Thesaurus (DT) network applying network representation learning model. 
Next, we combine this thesaurus embedding with state-of-the-art vector representations prepared using the GloVe and Word2vec models for analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methodology", "sec_num": "3" }, { "text": "Riedl and Biemann (2013) use the Google Books corpus, consisting of texts from over 3.4 million digitized English books published between 1520 and 2008, and construct a distributional thesaurus (DT) network using the syntactic n-gram data (Goldberg and Orwant, 2013). The authors first compute the lexicographer's mutual information (LMI) (Kilgarriff et al., 2004) for each bigram, which gives a measure of the collocational strength of a bigram. Each bigram is broken into a word and a feature, where the feature consists of the bigram relation and the related word. Then the top 1000 ranked features for each word are taken, and for each word pair the intersection of their corresponding feature sets is obtained. Word pairs whose number of overlapping features is above a threshold are retained in the network. In a nutshell, the DT network contains, for each word, a list of words that are similar with respect to their bigram distribution. In the network, each word is a node and there is a weighted edge between a pair of words, where the weight corresponds to the number of overlapping features. A sample snapshot of the DT network is shown in Figure 1, where each node represents a word and the weight of an edge between two words is defined as the number of context features that these two words share in common.", "cite_spans": [ { "start": 237, "end": 264, "text": "(Goldberg and Orwant, 2013)", "ref_id": "BIBREF19" }, { "start": 338, "end": 363, "text": "(Kilgarriff et al., 2004)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 1139, "end": 1147, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Distributional Thesaurus (DT) Network", "sec_num": "3.1" }, { "text": "Now, from the DT network, we prepare a vector representation for each node using network representation learning models, which produce a vector representation for each node in a network. For this purpose, we use three state-of-the-art network representation learning models as discussed below. DeepWalk: DeepWalk (Perozzi et al., 2014) learns social representations of a graph's vertices by modeling a stream of short random walks. Social representations signify latent features of the vertices that capture neighborhood similarity and community membership. LINE: LINE (Tang et al., 2015) is a network embedding model suitable for arbitrary types of networks: undirected, directed and/or weighted. The model optimizes an objective which preserves both the local and global network structures by capturing both first-order and second-order proximity between vertices. node2vec: node2vec (Grover and Leskovec, 2016) is a semi-supervised algorithm for scalable feature learning in networks which maximizes the likelihood of preserving network neighborhoods of nodes in a d-dimensional feature space. This algorithm can learn representations that organize nodes based on their network roles and/or the communities they belong to by developing a family of biased random walks, which efficiently explore diverse neighborhoods of a given node. Note that, by applying network embedding models on the DT network, we obtain 128-dimensional vectors for each word in the network. 
We only consider edges of the DT network having edge weight greater or equal to 50 for network embedding. Henceforth, we will use D2V-D, D2V-L and D2V-N to indicate vector representations obtained from DT network produced by DeepWalk, LINE and node2vec, respectively.", "cite_spans": [ { "start": 319, "end": 341, "text": "(Perozzi et al., 2014)", "ref_id": "BIBREF36" }, { "start": 575, "end": 594, "text": "(Tang et al., 2015)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Distributional Thesaurus", "sec_num": "3.2" }, { "text": "After obtaining vector representations, we also explore whether these can be combined with the pre-trained vector representation of Word2vec and GloVe to come up with a joint vector representation. For that purpose, we directly use very wellknown GloVe 1.2 embeddings (Pennington et al., 2014) trained on 840 billion words of the common crawl dataset having vector dimension of 300. As an instance of pre-trained vector of Word2vec, we use prominent pre-trained vector representations prepared by trained on 100 billion words of Google News using skip-grams with negative sampling, having dimension of 300.", "cite_spans": [ { "start": 268, "end": 293, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Distributional Thesaurus", "sec_num": "3.2" }, { "text": "In order to integrate the word vectors, we apply two strategies inspired by Goikoetxea et al. (2016) : concatenation (CC) and principal component analysis (PCA). Concatenation (CC): This corresponds to the simple vector concatenation operation. Vector representations of both GloVe and Word2vec are of 300 dimensions and word embeddings learnt form DT are of 128 dimensions. The concatenated representation we use are of 428 dimensions. Principal Component Analysis (PCA): Principal component analysis (Jolliffe, 1986 ) is a dimensionality reduction statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (linear combinations of the original variables). We apply PCA to the concatenated representations (dimension of 428) reducing these to 300 dimensions. In addition to PCA, we try with truncated singular value decomposition procedure (Hansen, 1987) as well, but as per the experiment set up, it shows negligible improvement in performance compared to simple concatenation; hence we do not continue with the truncated singular value decomposition for dimensionality reduction. After obtaining the combined representations of words, we head towards evaluating the quality of the representation.", "cite_spans": [ { "start": 76, "end": 100, "text": "Goikoetxea et al. (2016)", "ref_id": "BIBREF17" }, { "start": 502, "end": 517, "text": "(Jolliffe, 1986", "ref_id": "BIBREF25" }, { "start": 993, "end": 1007, "text": "(Hansen, 1987)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Vector Combination Methods", "sec_num": "3.3" }, { "text": "In order to evaluate the quality of the word representations, we first conduct qualitative analysis of the joint representation. 
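Before turning to the evaluation, the following is a minimal sketch of the pipeline described in Section 3, assuming the DT edge list is available as a tab-separated file (the file names here are hypothetical) and using networkx, gensim and scikit-learn. For brevity the walks are unbiased DeepWalk-style walks; node2vec would additionally bias each step with its return and in-out parameters.

```python
# Sketch: embed the DT network with truncated random walks + skip-gram,
# then combine with GloVe via concatenation followed by PCA.
import random
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

# 1. Build the DT graph, keeping only edges with weight >= 50.
G = nx.Graph()
with open("dt_edges.tsv") as f:                # hypothetical: "word1<TAB>word2<TAB>weight"
    for line in f:
        w1, w2, weight = line.rstrip("\n").split("\t")
        if float(weight) >= 50:
            G.add_edge(w1, w2, weight=float(weight))

# 2. Generate truncated random walks over the graph.
def random_walks(graph, num_walks=10, walk_length=40):
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk, cur = [start], start
            while len(walk) < walk_length:
                nbrs = list(graph.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)      # unbiased; node2vec would bias this step
                walk.append(cur)
            yield walk

# 3. Train skip-gram on the walks to obtain 128-dimensional DT embeddings.
dt_model = Word2Vec(sentences=list(random_walks(G)), vector_size=128,
                    window=5, min_count=0, sg=1, workers=4)

# 4. Combine with 300-dimensional GloVe vectors: concatenate (428-d), then PCA to 300-d.
glove = {}                                     # word -> np.ndarray of size 300
with open("glove.840B.300d.txt") as f:         # simplified parsing of the GloVe text file
    for line in f:
        parts = line.rstrip().split(" ")
        glove[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

vocab = [w for w in dt_model.wv.index_to_key if w in glove]
concat = np.stack([np.concatenate([glove[w], dt_model.wv[w]]) for w in vocab])
joint = PCA(n_components=300).fit_transform(concat)   # rows align with `vocab`
```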
Next, we follow the most acceptable way of applying on different NLP tasks like word similarity and word relatedness, synonym detection and word analogy as described next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Analysis", "sec_num": "4" }, { "text": "On qualitative analysis of some of the word pairs from the evaluation dataset, we observe that the joint representation (PCA (GloVe,D2V-N)) captures the notion of similarity much better than GloVe. For example, it gives a higher cosine similarity scores to the pairs (car, cab), (sea, ocean), (cottage,cabin), (vision, perception) etc. in com- parison to GloVe. However, in some cases, where words are not similar but are related, e.g., (airport, flight), (food, meat), (peeper, soup), (harbour, shore), the joint representation gives a lower cosine similarity score than GloVe comparatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis:", "sec_num": "4.1" }, { "text": "In the next set of evaluation experiments, we observe this utility of joint representation towards word similarity task and word relatedness task to some extent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis:", "sec_num": "4.1" }, { "text": "In this genre of tasks, the human judgment score for each word pair is given; we report the Spearman's rank correlation coefficient (\u03c1) between human judgment score and the predicted score by distributional model. Note that, we take cosine similarity between vector representations of words in a word pair as the predicted score. Datasets: We use the benchmark datasets for evaluation of word representations. Four word similarity datasets and four word relatedness datasets are used for that purpose. The descriptions of the word similarity datasets are given below. WordSim353 Similarity (WSSim) : 203 word pairs extracted from WordSim353 dataset (Finkelstein et al., 2001 ) by manual classification, prepared by Agirre et al. (2009) , which deals with only similarity. SimLex999 (SimL) : 999 word pairs rated by 500 paid native English speakers, recruited via Amazon Mechanical Turk, 1 who were asked to rate the similarity. This dataset is introduced by Hill et al.", "cite_spans": [ { "start": 649, "end": 674, "text": "(Finkelstein et al., 2001", "ref_id": "BIBREF13" }, { "start": 715, "end": 735, "text": "Agirre et al. (2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": ". RG-65 : It consists of 65 word pairs collected by Rubenstein and Goodenough (1965) . These word pairs are judged by 51 humans in a scale from 0 to 4 according to their similarity, but ig- Similarly, a brief overview of word relatedness datasets is given below: WordSim353 Relatedness (WSR) : 252 word pairs extracted from WordSim353 (Finkelstein et al., 2001 ) dataset by manual classification, prepared by Agirre et al. (2009) which deals with only relatedness. Along with these datasets we use the full Word-Sim353 (WS-353) dataset (includes both similarity and relatedness pairs) (Finkelstein et al., 2001) which contains 353 word pairs, each associated with an average of 13 to 16 human judgments in a scale of 0 to 10. 
Being inspired by , we consider only noun pairs from the SimL and MEN datasets, which will be denoted as SimL-N and MEN-N, whereas the other datasets contain only noun pairs.", "cite_spans": [ { "start": 52, "end": 84, "text": "Rubenstein and Goodenough (1965)", "ref_id": "BIBREF42" }, { "start": 335, "end": 360, "text": "(Finkelstein et al., 2001", "ref_id": "BIBREF13" }, { "start": 409, "end": 429, "text": "Agirre et al. (2009)", "ref_id": "BIBREF1" }, { "start": 585, "end": 611, "text": "(Finkelstein et al., 2001)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "We start with experiments to inspect the individual performance of each of the vector representations on each of the datasets. Table 5 : Performance (\u03c1) reported for three combined representations: GloVe and DT embedding using node2vec (D2V-N), GloVe and WordNet embedding (WN2V), and GloVe, WN2V and D2V-N. Results show that DT embedding produces performance comparable to the WordNet embedding. Combining DT embedding along with WordNet embedding helps to boost performance further in many of the cases.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "Considering only second-order proximity in the DT network while embedding has an adverse effect on performance in word similarity and word relatedness tasks, whereas the random walk based D2V-D and D2V-N, which take care of neighborhood and community structure, produce decent performance. Henceforth, we ignore the D2V-L model for the rest of our experiments. Next, we investigate whether network embeddings applied on the Distributional Thesaurus network can be combined with GloVe and Word2vec to improve the performance on the pre-specified tasks. In order to do that, we combine the vector representations using two operations: concatenation (CC) and principal component analysis (PCA). Table 2 reports the performance of combining GloVe with D2V-D and D2V-N for all the datasets using these combination strategies. In general, PCA turns out to be a better technique for vector combination than CC. Clearly, combining DT embeddings and GloVe boosts the performance for all the datasets except for the MEN-N dataset, where the combined representation produces comparable performance.", "cite_spans": [], "ref_spans": [ { "start": 668, "end": 675, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "In order to ensure that this observation is consistent, we try combining DT embeddings with Word2vec. The results are presented in Table 3 and we see very similar improvements in performance except for a few cases, indicating that combining word embeddings prepared from the DT network is helpful in enhancing performance. From Tables 1, 2 and 3, we see that GloVe proves to be better than Word2vec for most of the cases, D2V-N is the best performing network embedding, and PCA turns out to be the best combination technique. 
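All correlation numbers reported in this subsection follow the protocol stated above: the predicted score for a word pair is the cosine similarity of its two vectors, and Spearman's rank correlation is computed against the human judgments. A minimal sketch of that evaluation loop, assuming a dict-like embedding lookup and a whitespace-separated "word1 word2 score" file (a hypothetical format):

```python
# Evaluate an embedding table on a similarity/relatedness dataset:
# predicted score = cosine similarity, reported metric = Spearman's rho.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(embeddings, dataset_path):
    """embeddings: dict word -> np.ndarray; dataset: 'word1 word2 gold' per line."""
    gold, pred = [], []
    with open(dataset_path) as f:
        for line in f:
            w1, w2, score = line.split()
            if w1 in embeddings and w2 in embeddings:   # skip out-of-vocabulary pairs
                gold.append(float(score))
                pred.append(cosine(embeddings[w1], embeddings[w2]))
    rho, _ = spearmanr(gold, pred)
    return rho
```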
Henceforth, we consider PCA (GloVe, D2V-N) as our model for comparison with the baselines for the rest of the experiments.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "Further, to scrutinize that the achieved result is not just the effect of combining two different word vectors, we compare PCA (GloVe, D2V-N) against combination of GloVe and Word2vec (W2V). Table 4 shows the performance comparison on different datasets and it is evident that PCA (GloVe, D2V-N) gives better results compared to PCA (GloVe, W2V) in most of the cases. Now, as we observe that the network embedding from DT network helps to boost the performance of Word2vec and GloVe when combined with them, we further compare the performance against the case when text based embeddings are combined with embeddings from lexical resources. For that purpose, we take one baseline (Goikoetxea et al., 2016) , where authors combined the text based representation with WordNet based representation. Here we use GloVe as the text based representation and PCA as the combination method as prescribed by the author. Note that, WordNet based representation is made publicly available by Goikoetxea et al. (2016) . From the second and third columns of Table 5 , we observe that even though we do not use any manually created lexical resources like WordNet our approach achieves comparable performance. Additionally we check whether we gain in terms of performance if we integrate the three embeddings together. Fourth column of Table 5 shows that we gain for some of the datasets and for other cases, it has a negative effect. Looking at the performance, we can conclude that automatically generated DT network from corpus brings in useful additional information as far as word similarity and relatedness tasks are concerned.", "cite_spans": [ { "start": 679, "end": 704, "text": "(Goikoetxea et al., 2016)", "ref_id": "BIBREF17" }, { "start": 979, "end": 1003, "text": "Goikoetxea et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1043, "end": 1050, "text": "Table 5", "ref_id": null }, { "start": 1319, "end": 1326, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "So far, we use concatenation and PCA as methods for combining two different representations. However, as per the literature, there are different ways of infusing knowledge from different lexical sources to improve the quality of pre-trained vector embeddings. So we compare our proposed way of combination with a completely different way of integrating information from both dimensions, known as retrofitting. Retrofitting is a novel way proposed by Faruqui et al. (2015) for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations. Here instead of using semantic lexicons, we use the DT network to produce the linked words to have similar vector representation. Note that, for a target word, we consider only those words as linked words which are having edge weight greater than a certain threshold. While experimenting with various thresholds, the best results were obtained for a threshold value of 500. 
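A minimal sketch of this retrofitting step is given below, following the iterative update of Faruqui et al. (2015) with DT neighbors (edge weight above the chosen threshold) playing the role of lexicon links; the uniform neighbor weighting and the 10 iterations are assumptions taken from common practice rather than from this paper.

```python
# Retrofit pre-trained vectors (e.g., GloVe) toward their DT neighbours,
# in the spirit of Faruqui et al. (2015).
import numpy as np

def retrofit(vectors, dt_edges, weight_threshold=500, iterations=10):
    """vectors: dict word -> np.ndarray; dt_edges: iterable of (w1, w2, weight)."""
    neighbours = {w: set() for w in vectors}
    for w1, w2, weight in dt_edges:
        if weight > weight_threshold and w1 in vectors and w2 in vectors:
            neighbours[w1].add(w2)
            neighbours[w2].add(w1)

    original = {w: v.copy() for w, v in vectors.items()}
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for w, nbrs in neighbours.items():
            if not nbrs:
                continue
            # Move the vector toward the average of its DT neighbours while
            # staying close to its original (pre-trained) position.
            neighbour_sum = np.sum([new[n] for n in nbrs], axis=0)
            new[w] = (neighbour_sum + len(nbrs) * original[w]) / (2 * len(nbrs))
    return new
```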
Table 6 shows the performance of GloVe representations when retrofitted with information from DT network. Even though in very few cases it gives little improved performance, compared to other combinations presented in Table2, the correlation is not very good, indicating the fact that retrofitting is probably not the best way of fusing knowledge from a DT network.", "cite_spans": [ { "start": 450, "end": 471, "text": "Faruqui et al. (2015)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 1008, "end": 1015, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "Further, we extend our study to investigate the usefulness of DT embedding on other NLP tasks like synonym detection, SAT analogy task as will be discussed next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity and Relatedness", "sec_num": "4.2" }, { "text": "We consider two gold standard datasets for the experiment of synonym detection. The descriptions of the used datasets are given below. TOEFL: It contains 80 multiple-choice synonym questions (4 choices per question) introduced by Landauer and Dumais (1997), as a way of evaluating algorithms for measuring degree of similarity between words. Being consistent with the previous experiments, we consider only nouns for our experiment and prepare TOEFL-N which contains 23 synonym questions. ESL: It contains 50 multiple-choice synonym questions (4 choices per question), along with a sentence for providing context for each of the question, introduced by Turney (2001) . Here also we consider only nouns for our experiment and prepare ESL-N which contains 22 synonym questions. Note that, in our experimental setup we do not use the context per question provided in the dataset for evaluation. While preparing both the datasets, we also keep in mind the availability of word vectors in both downloaded GloVe representation and prepared DT embedding. For evaluation of the word embeddings using TOEFL-N and ESL-N, we consider the option as the correct answer which is having highest cosine similarity with the question and report accuracy. From the results presented in Table 7, we see that DT embedding leads to boost the performance of GloVe representation.", "cite_spans": [ { "start": 653, "end": 666, "text": "Turney (2001)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Synonym Detection", "sec_num": "4.3" }, { "text": "For analogy detection we experiment with SAT analogy dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analogy Detection", "sec_num": "4.4" }, { "text": "This dataset contains 374 multiple-choice analogy questions (5 choices per question) introduced by Turney and Bigham (2003) Table 7 : Comparison of accuracies between GloVe representation, DT embedding using node2vec and combination of both where PCA is the combination technique. 
Clearly, DT embedding helps to improve the performance of GloVe for synonym detection as well as analogy detection.", "cite_spans": [ { "start": 99, "end": 123, "text": "Turney and Bigham (2003)", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analogy Detection", "sec_num": "4.4" }, { "text": "Considering only noun questions, we prepare SAT-N, which contains 159 questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analogy Detection", "sec_num": "4.4" }, { "text": "In order to find out the correct answer from the 5 options given for each question, we adopt the score (s) metric proposed by Speer et al. (2017) : for a question 'a1 is to b1', we consider as the correct answer the option 'a2 is to b2' whose score (s) is the highest. The score is defined by the authors as follows: s = a1.a2 + b1.b2 + w1(b2 - a2).(b1 - a1) + w2(b2 - b1).(a2 - a1), where a1, b1, a2, b2 denote the vectors of the corresponding words. As mentioned by the authors, the appropriate values of w1 and w2 are optimized separately for each system using grid search, to achieve the best performance. We use accuracy as the evaluation metric. The last row of Table 7 presents the comparison of accuracies (best for each model) obtained using different embeddings, showing the same observation: the combination of GloVe and DT embeddings leads to better performance than either GloVe or DT embeddings used separately. Note that the optimized values of (w1, w2) are (0.2, 0.2), (0.8, 0.6), (6, 0.6) for GloVe, DT embedding, and the combined representation of GloVe and DT embeddings, respectively, for the analogy task.", "cite_spans": [ { "start": 126, "end": 145, "text": "Speer et al. (2017)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 647, "end": 654, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analogy Detection", "sec_num": "4.4" }, { "text": "In this paper we showed that both a dense count based model (GloVe) and a predictive model (Word2vec) lead to improved word representations when they are combined with word representations learned by applying network embedding methods on the Distributional Thesaurus (DT) network. We tried various network embedding models, among which node2vec proved to be the best in our experimental setup. We also tried different methodologies to combine vector representations, and PCA turned out to be the best among them. The combined vector representation of words yielded better performance for most of the similarity and relatedness datasets as compared to the performance of the GloVe and Word2vec representations individually. Further, we observed that we could use the information from the DT as a proxy for WordNet in order to improve the state-of-the-art vector representations, as we were getting comparable performances for most of the datasets. Similarly, for the synonym detection and analogy detection tasks, the same trend of the combined vector representation continued, showing the superiority of the combined representation over state-of-the-art embeddings. All the datasets used in our experiments which are not under any copyright protection, along with the DT embeddings, are made publicly available 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In future, we plan to investigate the effectiveness of the joint representation on other NLP tasks like text classification, the sentence completion challenge, evaluation of common sense stories, etc. 
The overall aim is to prepare a better generalized representation of words which can be used across languages in different NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "\u00c9valuer et am\u00e9liorer une ressource distributionnelle", "authors": [ { "first": "Cl\u00e9mentine", "middle": [], "last": "Adam", "suffix": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Fabre", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Muller", "suffix": "" } ], "year": 2013, "venue": "Traitement Automatique des Langues", "volume": "54", "issue": "1", "pages": "71--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cl\u00e9mentine Adam, C\u00e9cile Fabre, and Philippe Muller. 2013.\u00c9valuer et am\u00e9liorer une ressource distri- butionnelle. Traitement Automatique des Langues 54(1):71-97.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A study on similarity and relatedness using distributional and wordnet-based approaches", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Kravalova", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In Proceed- ings of Human Language Technologies: The 2009", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 19-27.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "238--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL (1). pages 238-247.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Text: Now in 2d! a framework for lexical expansion with contextual similarity", "authors": [ { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" } ], "year": 2013, "venue": "Journal of Language Modelling", "volume": "1", "issue": "1", "pages": "55--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Biemann and Martin Riedl. 2013. Text: Now in 2d! 
a framework for lexical expansion with con- textual similarity. Journal of Language Modelling 1(1):55-95.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "Kurt", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247-1250.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cross-modal knowledge transfer: Improving the word embedding of apple by looking at oranges", "authors": [ { "first": "Fabian", "middle": [], "last": "Both", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Thoma", "suffix": "" }, { "first": "Achim", "middle": [], "last": "Rettinger", "suffix": "" } ], "year": 2017, "venue": "The 9th International Conference on Knowledge Capture. International Conference on Knowledge Capture", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Both, Steffen Thoma, and Achim Rettinger. 2017. Cross-modal knowledge transfer: Improving the word embedding of apple by looking at oranges. In K-CAP2017, The 9th International Conference on Knowledge Capture. International Conference on Knowledge Capture, ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "J. Artif. Intell. Res.(JAIR)", "volume": "49", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res.(JAIR) 49(2014):1-47.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluating wordnet-based measures of lexical semantic relatedness", "authors": [ { "first": "Alexander", "middle": [], "last": "Budanitsky", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "1", "pages": "13--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. Eval- uating wordnet-based measures of lexical semantic relatedness. 
Computational Linguistics 32(1):13- 47.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improvements in automatic thesaurus extraction", "authors": [ { "first": "R", "middle": [], "last": "James", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Curran", "suffix": "" }, { "first": "", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition", "volume": "9", "issue": "", "pages": "59--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "James R Curran and Marc Moens. 2002. Improve- ments in automatic thesaurus extraction. In Pro- ceedings of the ACL-02 workshop on Unsupervised lexical acquisition-Volume 9. Association for Com- putational Linguistics, pages 59-66.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "K", "middle": [], "last": "Sujay", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Hovy", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving vector space word representations using multilingual correlation. Association for Computational Linguistics", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Turning distributional thesauri into word vectors for synonym extraction and expansion", "authors": [ { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "273--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Ferret. 2017. Turning distributional thesauri into word vectors for synonym extraction and ex- pansion. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 
volume 1, pages 273-283.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 10th international conference on World Wide Web", "volume": "", "issue": "", "pages": "406--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The con- cept revisited. In Proceedings of the 10th interna- tional conference on World Wide Web. ACM, pages 406-414.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis", "authors": [ { "first": "", "middle": [], "last": "John R Firth", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis .", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Devise: A deep visual-semantic embedding model", "authors": [ { "first": "Andrea", "middle": [], "last": "Frome", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2121--2129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. De- vise: A deep visual-semantic embedding model. In Advances in neural information processing systems. pages 2121-2129.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ppdb: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In HLT-NAACL. pages 758-764.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Single or multiple? 
combining word representations independently learned from text and wordnet", "authors": [ { "first": "Josu", "middle": [], "last": "Goikoetxea", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "2608--2614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josu Goikoetxea, Eneko Agirre, and Aitor Soroa. 2016. Single or multiple? combining word representations independently learned from text and wordnet. In AAAI. pages 2608-2614.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Random walks and neural network language models on knowledge bases", "authors": [ { "first": "Josu", "middle": [], "last": "Goikoetxea", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Basque Country", "middle": [], "last": "Donostia", "suffix": "" } ], "year": 2015, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "1434--1439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josu Goikoetxea, Aitor Soroa, Eneko Agirre, and Basque Country Donostia. 2015. Random walks and neural network language models on knowledge bases. In HLT-NAACL. pages 1434-1439.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A dataset of syntactic-ngrams over time from a very large corpus of english books", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Orwant", "suffix": "" } ], "year": 2013, "venue": "Second Joint Conference on Lexical and Computational Semantics (* SEM)", "volume": "1", "issue": "", "pages": "241--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large cor- pus of english books. In Second Joint Conference on Lexical and Computational Semantics (* SEM). volume 1, pages 241-247.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Explorations in automatic thesaurus discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2012, "venue": "", "volume": "278", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 2012. Explorations in automatic thesaurus discovery, volume 278. Springer Science & Business Media.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "node2vec: Scalable feature learning for networks", "authors": [ { "first": "Aditya", "middle": [], "last": "Grover", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "855--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on Knowledge discovery and data mining. 
ACM, pages 855-864.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Large-scale learning of word relatedness with constraints", "authors": [ { "first": "Guy", "middle": [], "last": "Halawi", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yehuda", "middle": [], "last": "Koren", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1406--1414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 1406-1414.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The truncatedsvd as a method for regularization", "authors": [ { "first": "", "middle": [], "last": "Per Christian Hansen", "suffix": "" } ], "year": 1987, "venue": "BIT Numerical Mathematics", "volume": "27", "issue": "4", "pages": "534--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Per Christian Hansen. 1987. The truncatedsvd as a method for regularization. BIT Numerical Mathe- matics 27(4):534-553.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics .", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Principal component analysis and factor analysis", "authors": [ { "first": "T", "middle": [], "last": "Ian", "suffix": "" }, { "first": "", "middle": [], "last": "Jolliffe", "suffix": "" } ], "year": 1986, "venue": "Principal component analysis", "volume": "", "issue": "", "pages": "115--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian T Jolliffe. 1986. Principal component analysis and factor analysis. In Principal component analysis, Springer, pages 115-128.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "36--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela and L\u00e9on Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In EMNLP. pages 36-45.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Itri-04-08 the sketch engine. 
Information", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Smrz", "suffix": "" }, { "first": "David", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 2004, "venue": "Technology", "volume": "105", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff, Pavel Rychly, Pavel Smrz, and David Tugwell. 2004. Itri-04-08 the sketch engine. Infor- mation Technology 105:116.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge", "authors": [ { "first": "K", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Landauer", "suffix": "" }, { "first": "", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological review", "volume": "104", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K Landauer and Susan T Dumais. 1997. A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and rep- resentation of knowledge. Psychological review 104(2):211.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in neural information processing systems. pages 2177-2185.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics 3:211-225.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th international conference on Computational linguistics", "volume": "2", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th inter- national conference on Computational linguistics- Volume 2. 
Association for Computational Linguis- tics, pages 768-774.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39- 41.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" }, { "first": "G", "middle": [], "last": "Walter", "suffix": "" }, { "first": "", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and cognitive processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes 6(1):1-28.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP). pages 1532-1543.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Deepwalk: Online learning of social representations", "authors": [ { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 
ACM, KDD '14", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "701--710", "other_ids": { "DOI": [ "10.1145/2623330.2623732" ] }, "num": null, "urls": [], "raw_text": "ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining. ACM, KDD '14, pages 701-710. https://doi.org/10. 1145/2623330.2623732.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A word at a time: computing word relatedness using temporal semantic analysis", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th international conference on World wide web", "volume": "", "issue": "", "pages": "337--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web. ACM, pages 337-346.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Multiview lsa: Representation learning via generalized cca", "authors": [ { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Raman", "middle": [], "last": "Arora", "suffix": "" } ], "year": 2015, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "556--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pushpendre Rastogi, Benjamin Van Durme, and Raman Arora. 2015. Multiview lsa: Representation learn- ing via generalized cca. In HLT-NAACL. pages 556- 566.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning node representations from structural identity", "authors": [ { "first": "", "middle": [], "last": "Leonardo Fr Ribeiro", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "Pedro", "suffix": "" }, { "first": "Daniel", "middle": [ "R" ], "last": "Saverese", "suffix": "" }, { "first": "", "middle": [], "last": "Figueiredo", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "2", "issue": "", "pages": "385--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. 2017. struc2vec: Learn- ing node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining. ACM, pages 385-394.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Scaling to large3 data: An efficient and effective method to compute distributional thesauri", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "884--890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Riedl and Chris Biemann. 2013. Scaling to large3 data: An efficient and effective method to compute distributional thesauri. In EMNLP. 
pages 884-890.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "B", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM 8(10):627-633.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "4444--4451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI. pages 4444-4451.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Line: Largescale information network embedding", "authors": [ { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Mingzhe", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee", "volume": "", "issue": "", "pages": "1067--1077", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large- scale information network embedding. In Proceed- ings of the 24th International Conference on World Wide Web. International World Wide Web Confer- ences Steering Committee, pages 1067-1077.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Knowledge fusion via embeddings from text, knowledge graphs, and images", "authors": [ { "first": "Steffen", "middle": [], "last": "Thoma", "suffix": "" }, { "first": "Achim", "middle": [], "last": "Rettinger", "suffix": "" }, { "first": "Fabian", "middle": [], "last": "Both", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.06084" ] }, "num": null, "urls": [], "raw_text": "Steffen Thoma, Achim Rettinger, and Fabian Both. 2017. Knowledge fusion via embeddings from text, knowledge graphs, and images. 
arXiv preprint arXiv:1704.06084 .", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Learning better word embedding by asymmetric low-rank projection of knowledge graph", "authors": [ { "first": "Fei", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "En-Hong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Journal of Computer Science and Technology", "volume": "31", "issue": "3", "pages": "624--634", "other_ids": { "DOI": [ "10.1007/s11390-016-1651-5" ] }, "num": null, "urls": [], "raw_text": "Fei Tian, Bin Gao, En-Hong Chen, and Tie-Yan Liu. 2016. Learning better word embedding by asymmetric low-rank projection of knowledge graph. Journal of Computer Science and Technol- ogy 31(3):624-634. https://doi.org/10. 1007/s11390-016-1651-5.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Mining the web for synonyms: Pmi-ir versus lsa on toefl", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2001, "venue": "Machine Learning: ECML", "volume": "", "issue": "", "pages": "491--502", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. Machine Learning: ECML 2001 pages 491-502.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Combining independent modules to solve multiple-choice synonym and analogy", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Bigham", "suffix": "" } ], "year": 2003, "venue": "Problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney and Jeffrey Bigham. 2003. Combin- ing independent modules to solve multiple-choice synonym and analogy. In Problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03. Citeseer.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Knowledge graph and text jointly embedding", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1591--1601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph and text jointly em- bedding. In EMNLP. volume 14, pages 1591-1601.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: A sample snapshot of Distributional Thesaurus network where each node represents a word and the weight of an edge between two words is defined as the number of context features that these two words share in common.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "www.mturk.com noring any other possible semantic relationships. 
MC-30 : 30 words judged by 38 subjects in a scale of 0 and 4 collected by Miller and Charles (1991).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "MTURK771 (M771) : 771 word pairs evaluated by Amazon Mechanical Turk workers, with an average of 20 ratings for each word pair, where each judgment task consists of a batch of 50 word pairs. Ratings are collected on a 15 scale. This dataset is introduced by Halawi et al. (2012). MTURK287 (M287) : 287 word pairs evaluated by Amazon Mechanical Turk workers, with an average of 23 ratings for each word pair. This dataset is introduced by Radinsky et al. (2011). MEN : MEN consists of 3,000 word pairs with [0, 1]-normalized semantic relatedness ratings provided by Amazon Mechanical Turk workers. This dataset was introduced by Bruni et al. (2014).", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "", "type_str": "table", "text": "Comparison of individual performances of different vector representation models w.r.t. word similarity and relatedness tasks. The performance metric is Spearman's rank correlation coefficient (\u03c1). Best result of each row in bold showing the best vector representation for each dataset.", "num": null, "html": null }, "TABREF2": { "content": "
represents indi-
", "type_str": "table", "text": "", "num": null, "html": null }, "TABREF3": { "content": "
Dataset | W2V | CC (W2V,D2V-D) | PCA (W2V,D2V-D) | CC (W2V,D2V-N) | PCA (W2V,D2V-N)
WSSim | 0.779 | 0.774 | 0.786 | 0.806 | 0.805
SimL-N | 0.454 | 0.438 | 0.456 | 0.448 | 0.493
RG-65 | 0.777 | 0.855 | 0.864 | 0.867 | 0.875
MC-30 | 0.819 | 0.866 | 0.891 | 0.903 | 0.909
WSR | 0.631 | 0.441 | 0.443 | 0.459 | 0.497
M771 | 0.655 | 0.633 | 0.637 | 0.656 | 0.676
M287 | 0.755 | 0.714 | 0.701 | 0.722 | 0.755
MEN-N | 0.764 | 0.703 | 0.717 | 0.714 | 0.747
WS-353 | 0.697 | 0.602 | 0.61 | 0.623 | 0.641
", "type_str": "table", "text": "Comparison of performances (Spearman's \u03c1) of GloVe against the combined representation of word representations obtained from DT network using network embeddings (DeepWalk, node2vec) with GloVe. Two combination methods -concatenation (CC) and PCA -are used among which PCA performs better than concatenation (CC) in most of the cases. Also the results show that the combined representation leads to better performance in almost all the cases.", "num": null, "html": null }, "TABREF4": { "content": "
Dataset | PCA (GloVe,W2V) | PCA (GloVe,D2V-N)
WSSim | 0.8 | 0.832
SimL-N | 0.476 | 0.483
RG-65 | 0.794 | 0.857
MC-30 | 0.832 | 0.874
WSR | 0.68 | 0.657
M771 | 0.717 | 0.719
M287 | 0.82 | 0.82
MEN-N | 0.829 | 0.817
WS-353 | 0.746 | 0.75
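The Spearman scores reported throughout these tables follow the standard word-similarity protocol: the cosine similarity between the two word vectors of each benchmark pair is rank-correlated with the human rating. A rough sketch of that evaluation loop, with a hypothetical `vectors` lookup and `pairs` list:

```python
# Illustrative word-similarity evaluation: cosine similarities are
# rank-correlated (Spearman) with human judgments.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(vectors, pairs):
    # vectors: dict mapping word -> np.ndarray (hypothetical embedding lookup)
    # pairs: list of (word1, word2, human_rating) tuples from a benchmark
    model_scores, gold_scores = [], []
    for w1, w2, rating in pairs:
        if w1 in vectors and w2 in vectors:  # skip out-of-vocabulary pairs
            model_scores.append(cosine(vectors[w1], vectors[w2]))
            gold_scores.append(rating)
    rho, _ = spearmanr(model_scores, gold_scores)
    return rho
```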
", "type_str": "table", "text": "A similar experiment asTable 2with Word2vec (W2V) instead of GloVe.", "num": null, "html": null }, "TABREF5": { "content": "
Dataset | PCA (GloVe, D2V-N) | PCA (GloVe, WN2V) | PCA (GloVe, WN2V, D2V-N)
WSSim | 0.832 | 0.828 | 0.853
SimL-N | 0.483 | 0.525 | 0.531
RG-65 | 0.857 | 0.858 | 0.91
MC-30 | 0.874 | 0.882 | 0.92
WSR | 0.657 | 0.699 | 0.682
M771 | 0.719 | 0.762 | 0.764
M287 | 0.82 | 0.816 | 0.81
MEN-N | 0.817 | 0.848 | 0.7993
WS-353 | 0.75 | 0.7801 | 0.7693
: Comparison of performances (Spearman's \u03c1) between GloVe combined with Word2vec (W2V) against GloVe combined with the DT embedding obtained using node2vec (D2V-N). PCA has been used as the combination method. Clearly, the DT embedding outperforms Word2vec in terms of enhancing the performance of GloVe.
D, D2V-L and D2V-N for different datasets. In most of the cases, GloVe produces the best results, although no model is a clear winner for all the datasets. Interestingly, D2V-D and D2V-N give results comparable to GloVe and Word2vec for the word similarity datasets, even surpassing GloVe and Word2vec for a few of these. D2V-L gives very poor performance, indicating that con-
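The D2V variants referred to here are dense vectors learned from the DT network with network-embedding methods (DeepWalk, LINE, node2vec). A toy DeepWalk-style sketch, assuming a small weighted DT graph, uniform truncated random walks, and gensim's skip-gram; the graph, walk settings, and dimensions are illustrative, not the paper's exact configuration:

```python
# DeepWalk-style sketch: truncated random walks over a toy DT graph are fed
# to a skip-gram model, yielding one dense vector per word (node).
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy DT network: edge weights stand for the number of shared context features.
G = nx.Graph()
G.add_weighted_edges_from([("car", "truck", 40), ("car", "vehicle", 35),
                           ("truck", "vehicle", 30), ("vehicle", "bus", 25)])

def random_walk(graph, start, length=10):
    # Uniform truncated walk, as in DeepWalk (edge weights are ignored here).
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]

# Skip-gram (sg=1) over the walks produces the node/word embeddings.
model = Word2Vec(sentences=walks, vector_size=64, window=5, min_count=1, sg=1, seed=42)
print(model.wv["car"][:5])
```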
", "type_str": "table", "text": "", "num": null, "html": null }, "TABREF7": { "content": "
: Comparison of performances (Spearman's \u03c1) between the GloVe representation and the GloVe representation retrofitted with the DT network. Clearly, DT retrofitting does not help much in improving the performance of GloVe.
", "type_str": "table", "text": "", "num": null, "html": null }, "TABREF8": { "content": "
Dataset | GloVe | D2V-N | PCA (GloVe, D2V-N)
TOEFL-N | 0.826 | 0.739 | 0.869
ESL-N | 0.636 | 0.591 | 0.682
SAT-N | 0.465 | 0.509 | 0.515
", "type_str": "table", "text": "as a way of evaluating algorithms for measuring relational similarity. Considering only", "num": null, "html": null } } } }