{ "paper_id": "D17-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:17:13.962604Z" }, "title": "Dict2vec : Learning Word Embeddings using Lexical Dictionaries", "authors": [ { "first": "Julien", "middle": [], "last": "Tissier", "suffix": "", "affiliation": { "laboratory": "UMR 5516", "institution": "UJM Saint-Etienne CNRS", "location": { "postCode": "F-42023", "settlement": "Saint-Etienne", "country": "France" } }, "email": "" }, { "first": "Christophe", "middle": [], "last": "Gravier", "suffix": "", "affiliation": { "laboratory": "UMR 5516", "institution": "UJM Saint-Etienne CNRS", "location": { "postCode": "F-42023", "settlement": "Saint-Etienne", "country": "France" } }, "email": "" }, { "first": "Amaury", "middle": [], "last": "Habrard", "suffix": "", "affiliation": { "laboratory": "UMR 5516", "institution": "UJM Saint-Etienne CNRS", "location": { "postCode": "F-42023", "settlement": "Saint-Etienne", "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words-natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "pdf_parse": { "paper_id": "D17-1024", "_pdf_hash": "", "abstract": [ { "text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words-natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning word embeddings usually relies on the distributional hypothesis -words appearing in similar contexts must have similar meanings, and thus close representations. 
Finding such representations for words and sentences has been one hot topic over the last few years in Natural Language Processing (NLP) (Mikolov et al., 2013; Pennington et al., 2014) and has led to many improvements in core NLP tasks such as Word Sense Disambiguation (Iacobacci et al., 2016) , Machine Translation (Devlin et al., 2014) , Machine Comprehension (Hewlett et al., 2016) , and Semantic Role Labeling (Zhou and Xu, 2015; Collobert et al., 2011) -to name a few.", "cite_spans": [ { "start": 307, "end": 329, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF18" }, { "start": 330, "end": 354, "text": "Pennington et al., 2014)", "ref_id": "BIBREF22" }, { "start": 440, "end": 464, "text": "(Iacobacci et al., 2016)", "ref_id": "BIBREF13" }, { "start": 487, "end": 508, "text": "(Devlin et al., 2014)", "ref_id": "BIBREF6" }, { "start": 533, "end": 555, "text": "(Hewlett et al., 2016)", "ref_id": "BIBREF12" }, { "start": 585, "end": 604, "text": "(Zhou and Xu, 2015;", "ref_id": "BIBREF32" }, { "start": 605, "end": 628, "text": "Collobert et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These methods suffer from a classic drawback of unsupervised learning: the lack of supervision between a word and those appearing in the associated contexts. Indeed, it is likely that some terms of the context are not related to the considered word. On the other hand, the fact that two words do not appear together -or more likely, not often enough together -in any context of the training corpora is not a guarantee that these words are not semantically related. Recent approaches have proposed to tackle this issue using an attentive model for context selection (Ling et al., 2015) , or by using external sources -like knowledge graphsin order to improve the embeddings . Similarities derived from such resources are part of the objective function during the learning phase (Yu and Dredze, 2014; Kiela et al., 2015) or used in a retrofitting scheme (Faruqui et al., 2015) . These approaches tend to specialize the embeddings to the resource used and its associated similarity measures -while the construction and maintenance of these resources are a set of complex, time-consuming, and error-prone tasks.", "cite_spans": [ { "start": 565, "end": 584, "text": "(Ling et al., 2015)", "ref_id": "BIBREF16" }, { "start": 777, "end": 798, "text": "(Yu and Dredze, 2014;", "ref_id": "BIBREF31" }, { "start": 799, "end": 818, "text": "Kiela et al., 2015)", "ref_id": "BIBREF15" }, { "start": 852, "end": 874, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel word embedding learning strategy, called Dict2vec, that leverages existing online natural language dictionaries. We assume that dictionary entries (a definition of a word) contain latent word similarity and relatedness information that can improve language representations. Such entries provide, in essence, an additional context that conveys general semantic coverage for most words. Dict2vec adds new co-occurrences information based on the terms occurring in the definitions of a word. This information introduces weak supervision that can be used to improve the embeddings. 
We can indeed distinguish word pairs for which each word appears in the definition of the other (strong pairs) and pairs where only one appears in the definition of the other (weak pairs) -each type having its own weight, controlled by two hyperparameters. Not only is this information useful at learning time to move the vectors of such word pairs closer, but it also makes it possible to devise a controlled negative sampling. Controlled negative sampling, as introduced in Dict2vec, filters out the random negative examples of conventional negative sampling that form a (strong or weak) pair with the target word -they are obviously not negative examples. Processing online dictionaries in Dict2vec does not require a human-in-the-loop -it is fully automated. The neural network architecture of Dict2vec (Section 3) extends the Word2vec (Mikolov et al., 2013) approach, which uses a Skip-gram model with negative sampling.", "cite_spans": [ { "start": 1454, "end": 1476, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main results are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Dict2vec exhibits a statistically significant improvement of around 12.5% over state-of-the-art solutions on eleven of the most common evaluation datasets for the word similarity task when embeddings are learned using the full Wikipedia dump.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 This edge is even larger for small training datasets (the first 50 million tokens of Wikipedia) than for the full dataset, as the average improvement reaches 30%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Since Dict2vec does significantly better than competitors for small dimensions (in the [20; 100] range) on small corpora, it can yield smaller yet efficient embeddings -even when trained on smaller corpora -which is of utmost practical interest for working natural language processing practitioners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We also show that the embeddings learned by Dict2vec perform similarly to other baselines on an extrinsic text classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Dict2vec software is an extension and an optimization of the original Word2vec framework, leading to more efficient learning. The source code to fetch dictionaries, train Dict2vec models and evaluate word embeddings is publicly available 1 (https://github.com/tca19/dict2vec) and can be used by the community as a seed for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. Section 2 presents related work, along with a special focus on Word2vec, which we later extend into our approach presented in Section 3. Our experimental setup and evaluation settings are introduced in Section 4 and we discuss the results in Section 5. 
Section 6 concludes the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the original model from Collobert and Weston (2008) , a window approach was used to feed a neural network and learn word embeddings. Since there are long-range relations between words, the window-based approach was later extended to a sentence-based approach (Collobert et al., 2011) leading to capture more semantic similarities into word vectors. Recurrent neural networks are another way to exploit the context of a word by considering the sequence of words preceding it (Mikolov et al., 2010; Sutskever et al., 2011) . Each neuron receives the current window as an input, but also its own output from the previous step. Mikolov et al. (2013) introduced the Skip-gram architecture built on a single hidden layer neural network to learn efficiently a vector representation for each word w of a vocabulary V from a large corpora of size C. Skip-gram iterates over all (target, context) pairs (w t ,w c ) from every window of the corpus and tries to predict w c knowing w t . The objective function is therefore to maximize the log-likelihood :", "cite_spans": [ { "start": 27, "end": 54, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF4" }, { "start": 262, "end": 286, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" }, { "start": 477, "end": 499, "text": "(Mikolov et al., 2010;", "ref_id": "BIBREF19" }, { "start": 500, "end": 523, "text": "Sutskever et al., 2011)", "ref_id": "BIBREF26" }, { "start": 627, "end": 648, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C t=1 n k=\u2212n log p(w t+k |w t )", "eq_num": "(1)" } ], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "where n represents the size of the window (composed of n words around the central word w t ) and the probability can be expressed as :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w t+k |w t ) = e v t+k \u2022vt w\u2208V e v\u2022vt", "eq_num": "(2)" } ], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "with v t+k (resp. v t ) the vector associated to w t+k (resp. w t ). This model relies on the principle \"You shall know a word by the company it keeps\" -Firth (1957) . Thus, words that are frequent within the context of the target word will tend to have close representations, as the model will update their vectors so that they will be closer. Two main drawbacks can be said about this approach. First, words within the same window are not always related. Consider the sentence \"Turing is widely considered to be the father of theoretical computer science and artificial intelligence.\" 2 , the words (Turing,widely) and (father,theoretical) will be moved closer while they are not semantically related. Second, strong semantic relations between words (like synonymy or meronymy) happens rarely within the same window, so these relations will not be well embedded into vectors. fastText introduced in Bojanowski et al. 
(2016) uses internal additional information from the corpus to solve the latter drawback. They train a Skipgram architecture to predict a word w c given the central word w t and all the n-grams G wt (subwords of 3 up to 6 letters) of w t . The objective function becomes :", "cite_spans": [ { "start": 159, "end": 165, "text": "(1957)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C t=1 n k=\u2212n w\u2208Gw t log p(w t+k |w)", "eq_num": "(3)" } ], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "Along learning one vector per word, fastText also learns one vector per n-gram. fastText is able to extract more semantic relations between words that share common n-gram(s) (like fish and fishing) which can also help to provide good embeddings for rare words since we can obtain a vector by summing vectors of its n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "In what follows, we report related works that leverage external resources in order to address the two raised issues about the window approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Neural Network Approach", "sec_num": "2.1" }, { "text": "Even with larger and larger text data available on the Web, extracting and encoding every linguistic relations into word embeddings directly from corpora is a difficult task. One way to add more relations into embeddings is to use external data. Lexical databases like WordNet or sets of synonyms like MyThes thesaurus can be used during learning or in a post-processing step to specialize word embeddings. For example, Yu and Dredze (2014) include prior knowledge about synonyms from WordNet and the Paraphrase Database in a joint model built upon Word2vec. Faruqui et al. (2015) introduce a graph-based retrofitting method where they post-process learned vectors with respect to semantic relationships extracted from additional lexical resources. Kiela et al. (2015) propose to specialize the embeddings either on similarity or relatedness relations in a Skip-gram joint learning approach by adding new contexts from external thesaurus or from a norm association base in the function to optimize. Bian et al.", "cite_spans": [ { "start": 420, "end": 440, "text": "Yu and Dredze (2014)", "ref_id": "BIBREF31" }, { "start": 559, "end": 580, "text": "Faruqui et al. (2015)", "ref_id": "BIBREF7" }, { "start": 749, "end": 768, "text": "Kiela et al. (2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Using External Resources", "sec_num": "2.2" }, { "text": "2 https://en.wikipedia.org/wiki/Alan_Turing (2014) combine several sources (syllables, POS tags, antonyms/synonyms, Freebase relations) and incorporate them into a CBOW model. These approaches have generally the objective to improve tasks such as document classification, synonym detection or word similarity. They rely on additional resources whose construction is a timeconsuming and error-prone task and tend generally to specialize the embeddings to the external corpus used. 
Moreover, lexical databases contain less information than dictionaries (117k entries in WordNet versus around 200k in a dictionary) and less accurate content (different words in WordNet can belong to the same synset and thus share the same definition).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using External Resources", "sec_num": "2.2" }, { "text": "Another type of external resource is the knowledge base, which contains triplets. Each triplet links two entities with a relation, for example Paris -is capital of -France. Several methods (Weston et al., 2013; Xu et al., 2014) have been proposed to use the information from knowledge bases to improve semantic relations in word embeddings and to extract relational facts from text more easily. These approaches are focused on knowledge-base-dependent tasks.", "cite_spans": [ { "start": 182, "end": 203, "text": "(Weston et al., 2013;", "ref_id": "BIBREF28" }, { "start": 204, "end": 220, "text": "Xu et al., 2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Using External Resources", "sec_num": "2.2" }, { "text": "The definition of a word is a group of words or sentences explaining its meaning. A dictionary is a set of tuples (word, definition) for several words. For example, one may find in a dictionary: car: A road vehicle, typically with four wheels, powered by an internal combustion engine and able to carry a small number of people. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dict2vec", "sec_num": "3" }, { "text": "The presence of words like \"vehicle\", \"road\" or \"engine\" in the definition of \"car\" illustrates the relevance of using word definitions to obtain weak supervision, allowing us to get semantically related pairs of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dict2vec", "sec_num": "3" }, { "text": "Dict2vec models this information by building strong and weak pairs of words ( \u00a73.1), in order to provide both a novel positive sampling objective ( \u00a73.2) and a novel controlled negative sampling objective ( \u00a73.3). These objectives contribute to the global objective function of Dict2vec ( \u00a73.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dict2vec", "sec_num": "3" }, { "text": "In a definition, each word does not have the same semantic relevance. In the definition of \"car\", the words \"internal\" or \"number\" are less relevant than \"vehicle\". We introduce the concept of strong and weak pairs in order to capture this relevance. If the word w a is in the definition of the word w b and w b is in the definition of w a , they form a strong pair; the K closest words to w a (resp. w b ) also form a strong pair with w b (resp. w a ). If the word w a is in the definition of w b but w b is not in the definition of w a , they form a weak pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strong pairs, weak pairs", "sec_num": "3.1" }, { "text": "The word \"vehicle\" is in the definition of \"car\" and \"car\" is in the definition of \"vehicle\". Hence, (car-vehicle) is a strong pair. The word \"road\" is in the definition of \"car\", but \"car\" is not in the definition of \"road\". Therefore, (car-road) is a weak pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strong pairs, weak pairs", "sec_num": "3.1" }, { "text": "Some weak pairs can be promoted to strong pairs if the two words are among the K closest neighbours of each other. 
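Leaving aside the K-closest-neighbour promotion detailed just below, the basic strong/weak rule of this subsection can be sketched as follows. This is a minimal illustration; the dictionary format and the toy entries are ours, not the paper's pipeline.

```python
from itertools import combinations

def build_pairs(definitions):
    """definitions: dict mapping a word to the set of words in its definition.
    Returns (strong_pairs, weak_pairs) following the basic rule of Section 3.1."""
    strong, weak = set(), set()
    for wa, wb in combinations(definitions, 2):
        a_in_b = wa in definitions[wb]   # wa appears in the definition of wb
        b_in_a = wb in definitions[wa]   # wb appears in the definition of wa
        if a_in_b and b_in_a:            # mutual inclusion -> strong pair
            strong.add((wa, wb))
        elif a_in_b or b_in_a:           # one-way inclusion -> weak pair
            weak.add((wa, wb))
    return strong, weak

# toy entries mirroring the car/vehicle/road example above
defs = {
    "car": {"road", "vehicle", "engine", "wheels"},
    "vehicle": {"car", "machine", "transport"},
    "road": {"way", "surface", "travel"},
}
strong, weak = build_pairs(defs)
# ('car', 'vehicle') ends up in strong, ('car', 'road') in weak
```

On these toy entries the output matches the example above: (car, vehicle) is a strong pair and (car, road) a weak pair.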
We choose the K closest words according to the cosine distance from a pretrained word embedding and find that using K = 5 is a good trade-off between the semantic and syntactic information extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strong pairs, weak pairs", "sec_num": "3.1" }, { "text": "We introduce the concept of positive sampling based on strong and weak pairs. We move the vectors of words forming either a strong or a weak pair closer together, in addition to moving the vectors of words co-occurring within the same window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "Let S(w) be the set of all words forming a strong pair with the word w and W(w) be the set of all words forming a weak pair with w. For each target w t from the corpus, we build V s (w t ), a random set of n s words drawn with replacement from S(w t ), and V w (w t ), a random set of n w words drawn with replacement from W(w t ). We compute the cost of positive sampling J pos for each target as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J_{pos}(w_t) = \\beta_s \\sum_{w_i \\in \\mathcal{V}_s(w_t)} \\ell(v_t \\cdot v_i) + \\beta_w \\sum_{w_j \\in \\mathcal{V}_w(w_t)} \\ell(v_t \\cdot v_j)", "eq_num": "(4)" } ], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "where \u2113 is the logistic loss function defined by \u2113 : x \u2192 log(1 + e^{-x}) and v t (resp. v i and v j ) is the vector associated to w t (resp. w i and w j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "The objective is to minimize this cost for all targets, thus moving words forming a strong or a weak pair closer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "The coefficients \u03b2 s and \u03b2 w , as well as the numbers of drawn pairs n s and n w , tune the importance of strong and weak pairs during the learning phase. We discuss the choice of these hyperparameters in Section 5. When \u03b2 s = 0 and \u03b2 w = 0, our model is the Skip-gram model of Mikolov et al. (2013).", "cite_spans": [ { "start": 277, "end": 298, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Positive sampling", "sec_num": "3.2" }, { "text": "Negative sampling consists in considering two random words from the vocabulary V to be unrelated. For each word w t from the vocabulary, we generate a set F(w t ) of k randomly selected words from the vocabulary:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathcal{F}(w_t) = \\{w_i\\}^k, \\; w_i \\in V \\setminus \\{w_t\\}", "eq_num": "(5)" } ], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "The model aims at separating the vectors of the words in F(w t ) from the vector of w t . 
More formally, this is equivalent to minimizing the cost J neg for each target word w t as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J_{neg}(w_t) = \\sum_{w_i \\in \\mathcal{F}(w_t)} \\ell(-v_t \\cdot v_i)", "eq_num": "(6)" } ], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "where the notation \u2113, v t and v i are the same as described in the previous subsection. However, there is a non-zero probability that w i and w t are related. In that case, the model would move their vectors further apart instead of moving them closer. With the strong/weak word pairs of Dict2vec, it becomes possible to make this less likely to occur: we prevent a negative example from being a word that forms a weak or strong pair with w t . The negative sampling objective from Equation 6 becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J_{neg}(w_t) = \\sum_{\\substack{w_i \\in \\mathcal{F}(w_t) \\\\ w_i \\notin \\mathcal{S}(w_t) \\\\ w_i \\notin \\mathcal{W}(w_t)}} \\ell(-v_t \\cdot v_i)", "eq_num": "(7)" } ], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "In our experiments, we noticed that this method discards around 2% of the generated negative pairs. The influence on evaluation depends on the nature of the corpus and is discussed in Section 5.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled negative sampling", "sec_num": "3.3" }, { "text": "Our objective function is derived from the noise-contrastive estimation, which is a more efficient objective function than the log-likelihood in Equation 1 according to Mikolov et al. (2013). We add the positive sampling and the controlled negative sampling described before and compute the cost for each (target, context) pair (w t , w c ) from the corpus as follows:", "cite_spans": [ { "start": 167, "end": 188, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Global objective function", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J(w_t, w_c) = \\ell(v_t \\cdot v_c) + J_{pos}(w_t) + J_{neg}(w_t)", "eq_num": "(8)" } ], "section": "Global objective function", "sec_num": "3.4" }, { "text": "The global objective is obtained by summing the cost of every pair over the entire corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global objective function", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J = \\sum_{t=1}^{C} \\sum_{c=-n}^{n} J(w_t, w_{t+c})", "eq_num": "(9)" } ], "section": "Global objective function", "sec_num": "3.4" }, { "text": "4 Experimental setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global objective function", "sec_num": "3.4" }, { "text": "We extract all unique words with more than 5 occurrences from a full Wikipedia dump, representing around 2.2M words. 
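To tie Sections 3.2 to 3.4 together before the experimental setup, here is a minimal numpy sketch of the per-(target, context) cost of Equations 4, 7 and 8. It only computes the forward cost (training differentiates it with respect to the vectors); the vector table, the sampling helpers and the default hyperparameter values are illustrative assumptions, not the released implementation.

```python
import numpy as np

def logistic(x):
    # l(x) = log(1 + exp(-x)), the logistic loss used throughout Section 3
    return np.log1p(np.exp(-x))

def pair_cost(vec, target, context, strong, weak, vocab, rng,
              beta_s=0.8, beta_w=0.45, n_s=4, n_w=5, n_neg=5):
    """Forward cost of one (target, context) pair (Eq. 8).
    vec: dict word -> np.ndarray; strong/weak: dict word -> set of paired words."""
    cost = logistic(vec[target] @ vec[context])            # plain Skip-gram term

    # positive sampling (Eq. 4): strong/weak partners drawn with replacement
    strong_t = sorted(strong.get(target, ()))
    weak_t = sorted(weak.get(target, ()))
    if strong_t:
        for w in rng.choice(strong_t, size=n_s):
            cost += beta_s * logistic(vec[target] @ vec[w])
    if weak_t:
        for w in rng.choice(weak_t, size=n_w):
            cost += beta_w * logistic(vec[target] @ vec[w])

    # controlled negative sampling (Eq. 7): skip negatives paired with the target
    for w in rng.choice(vocab, size=n_neg):
        if w != target and w not in strong.get(target, ()) and w not in weak.get(target, ()):
            cost += logistic(-(vec[target] @ vec[w]))
    return cost
```

Summing pair_cost over every (target, context) pair of every window, with rng = np.random.default_rng() and vocab the list of vocabulary words, gives the global objective of Equation 9.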
Since there is no dictionary that contains a definition for all existing words (the word w might be in the dictionary D i but not in D j ), we combine several dictionaries to get a definition for almost all of these words (some words are too rare to have a definition anyway).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fetching online definitions", "sec_num": "4.1" }, { "text": "We use the English version of Cambridge, Oxford, Collins and dictionary.com. For each word, we download the 4 different webpages, and use regex to extract the definitions from the HTML template specific to each website, making the process fully accurate. Our approach does not focus on polysemy, so we concatenate all definitions for each word. Then we concatenate results from all dictionaries, remove stop words and punctuation and lowercase all words. For our illustrative example in Section 3, we obtain :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fetching online definitions", "sec_num": "4.1" }, { "text": "car: road vehicle engine wheels seats small [...] platform lift.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fetching online definitions", "sec_num": "4.1" }, { "text": "Among the 2.2M unique words, only 200K does have a definition. We generate strong and weak pairs from the downloaded definitions according to the rule described in subsection 3.1 leading to 417K strong pairs (when the parameter K from 3.1 is set to 5) and 3.9M weak pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fetching online definitions", "sec_num": "4.1" }, { "text": "We train our model with the generated pairs from subsection 4.1 and the November 2016 English dump from Wikipedia 4 . After removing all XML tags and converting all words to lowercase (with the help of Mahoney's script 5 ), we separate the corpus into 3 files containing respectively the first 50M tokens, the first 200M tokens, and the full dump. Our model uses additional knowledge during training. For a fair comparison against other frameworks, we also incorporate this information into the training data and create two versions for each file : one containing only data from Wikipedia (corpus A) and one with data from Wikipedia concatenated with the definitions extracted (corpus B). We use the same hyperparameters we usually find in the literature for all models. We use 5 negatives samples, 5 epochs, a window size of 5, a vector size of 100 (resp. 200 and 300) for the 50M file (resp. 200M and full dump) and we remove the words with less than 5 occurrences. We follow the same evaluation protocol as Word2vec and fastText to provide the fairest comparison against competitors, so every other hyperparameters (K, \u03b2 s , \u03b2 w , n s , n w ) are tuned using a grid search to maximize the weighted average score. For n s and n w , we go from 0 to 10 with a step of 1 and find the optimal values to be n s = 4 and n w = 5. For \u03b2 s and \u03b2 w we go from 0 to 2 with a step of 0.05 and find \u03b2 s = 0.8 and \u03b2 w = 0.45 to be the best values for our model. 
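As a concrete illustration of the definition cleanup described in Section 4.1 (concatenating the definitions of a word, lowercasing, and removing punctuation and stop words), a rough sketch is given below; the stop-word list is a tiny illustrative subset and the input format is assumed.

```python
import re

STOP_WORDS = {"a", "an", "the", "by", "of", "to", "and", "with", "able"}  # illustrative subset

def clean_definitions(entries):
    """entries: list of raw definition strings for one word, possibly coming from
    several dictionaries. Returns the concatenated, lowercased token list with
    punctuation and stop words removed (polysemy is ignored)."""
    text = " ".join(entries).lower()
    tokens = re.findall(r"[a-z]+", text)          # drop punctuation and digits
    return [t for t in tokens if t not in STOP_WORDS]

car_defs = [
    "A road vehicle, typically with four wheels, powered by an internal "
    "combustion engine and able to carry a small number of people.",
]
print(clean_definitions(car_defs))
# ['road', 'vehicle', 'typically', 'four', 'wheels', 'powered', 'internal', ...]
```

The paper's own pipeline additionally scrapes the four dictionary websites with site-specific regexes before this cleanup step.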
Table 1 reports training times for the three models (all experiments were run on a E3-1246 v3 processor).", "cite_spans": [], "ref_spans": [ { "start": 1450, "end": 1457, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Training settings", "sec_num": "4.2" }, { "text": "Word2vec 15m30 86m 2600m fastText 8m44 66m 1870m Dict2vec 4m09 26m 642m Table 1 : Training time (in min) of Word2vec, fast-Text and Dict2vec models for several corpus.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "50M 200M Full", "sec_num": null }, { "text": "We follow the standard method for word similarity evaluation by computing the Spearman's rank correlation coefficient (Spearman, 1904) between human similarity evaluation of pairs of words, and the cosine similarity of the corresponding word vectors. A score close to 1 indicates an embedding close to the human judgement. We use MC-30 (Miller and Charles, 1991), MEN (Bruni et al., 2014) , MTurk-287 (Radinsky et al., 2011) , MTurk-771 (Halawi et al., 2012) , RG-65 (Rubenstein and Goodenough, 1965) , RW (Luong et al., 2013) , SimVerb-3500 (Gerz et al., 2016) , WordSim-353 (Finkelstein et al., 2001 ) and YP-130 (Yang and Powers, 2006) classic datasets. We follow the same protocol used by Word2vec and fastText by discarding pairs which contain a word that is not in our embedding. Since all models are trained with the same corpora, the embeddings have the same words, therefore all competitors share the same OOV rates.", "cite_spans": [ { "start": 118, "end": 134, "text": "(Spearman, 1904)", "ref_id": "BIBREF25" }, { "start": 368, "end": 388, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF3" }, { "start": 401, "end": 424, "text": "(Radinsky et al., 2011)", "ref_id": "BIBREF23" }, { "start": 437, "end": 458, "text": "(Halawi et al., 2012)", "ref_id": "BIBREF11" }, { "start": 467, "end": 500, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF24" }, { "start": 506, "end": 526, "text": "(Luong et al., 2013)", "ref_id": "BIBREF17" }, { "start": 542, "end": 561, "text": "(Gerz et al., 2016)", "ref_id": "BIBREF10" }, { "start": 576, "end": 601, "text": "(Finkelstein et al., 2001", "ref_id": "BIBREF8" }, { "start": 615, "end": 638, "text": "(Yang and Powers, 2006)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Word similarity evaluation", "sec_num": "4.3" }, { "text": "We run each experiment 3 times and report in Table 2 the average score to minimize the effect of the neural network random initialization. We compute the average by weighting each score by the number of pairs evaluated in its dataset in the same way as Iacobacci et al. (2016) . We multiply each score by 1, 000 to improve readability.", "cite_spans": [ { "start": 253, "end": 276, "text": "Iacobacci et al. (2016)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Word similarity evaluation", "sec_num": "4.3" }, { "text": "Our text classification task follows the same setup as the one for fastText in . We train a neural network composed of a single hidden layer where the input layer corresponds to the bag of words of a document and the output layer is the probability to belong to each label. The weights between the input and the hidden layer are initialized with the generated embeddings and are fixed during training, so that the evaluation score solely depends on the embedding. 
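A minimal numpy sketch of this frozen-embedding classifier follows; the averaging of word vectors, the learning rate and the training-step details are illustrative assumptions rather than the exact classifier used in the experiments.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class FrozenEmbeddingClassifier:
    """Single-hidden-layer classifier of Section 4.4: the input-to-hidden weights
    are the pre-trained embeddings and stay fixed; only the hidden-to-output
    weights are learned with gradient descent."""

    def __init__(self, embedding_matrix, n_labels, lr=0.1):
        self.E = embedding_matrix                  # (vocab_size, dim), frozen
        self.W = np.zeros((n_labels, embedding_matrix.shape[1]))  # trainable
        self.b = np.zeros(n_labels)
        self.lr = lr

    def _hidden(self, word_ids):
        # bag-of-words document -> average of its (frozen) word vectors
        return self.E[word_ids].mean(axis=0)

    def predict_proba(self, word_ids):
        return softmax(self.W @ self._hidden(word_ids) + self.b)

    def train_step(self, word_ids, label):
        h = self._hidden(word_ids)
        g = softmax(self.W @ h + self.b)
        g[label] -= 1.0                            # gradient of cross-entropy w.r.t. logits
        self.W -= self.lr * np.outer(g, h)         # the embeddings E are never updated
        self.b -= self.lr * g
```

Only W and b are updated, so the resulting test accuracy reflects the quality of the embeddings themselves.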
We update the weights of the neural network classifier with gradient descent. We use the datasets AG-News 6 , DBpedia (Auer et al., 2007) and Yelp reviews (polarity and full) 7 . We split each datasets into a training and a test file. We use the same training and test files for all models and report the classification accuracy obtained on the test file.", "cite_spans": [ { "start": 582, "end": 601, "text": "(Auer et al., 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Text classification evaluation", "sec_num": "4.4" }, { "text": "We train Word2vec 8 and fastText 9 on the same 3 files and their 2 respective versions (A and B) described in 4.2 and use the same hyperparameters also described in 4.2 for all models. We train Word2vec with the Skip-gram model since our method is based on the Skip-gram model. We also train GloVe with their respective hyperparameters described in Pennington et al. (2014) , but the results are lower than all other baselines (weighted average on word similarity task is 350 on the 50M file, 389 on the 200M file and 454 on the full dump) so we do no report GloVe's results.", "cite_spans": [ { "start": 349, "end": 373, "text": "Pennington et al. (2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "We also retrofit the learned embeddings on corpus A with the Faruqui's method to compare another method using additional resources. The retrofitting introduces external knowledge from the WordNet semantic lexicon (Miller, 1995) . We use the Faruqui's Retrofitting 10 with the W N all semantic lexicon from WordNet and 10 iterations as advised in the paper of Faruqui et al. (2015) . Furthermore, we compare the performance of our method when using WordNet additional resources instead of dictionaries.", "cite_spans": [ { "start": 213, "end": 227, "text": "(Miller, 1995)", "ref_id": "BIBREF20" }, { "start": 359, "end": 380, "text": "Faruqui et al. (2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "5 Results and model analysis 5.1 Semantic similarity Table 2 (top) reports the Spearman's rank correlation scores obtained with the method described in subsection 4.3. We observe that our model outperforms state-of-the-art approaches for most of the datasets on the 50M and 200M tokens files, and almost all datasets on the full dump (this is significant according to a two-sided Wilcoxon signedrank test with \u03b1 = 0.05). With the weighted average score, our model improves fastText's performance on raw corpus (column A) by 28.3% on the 50M file, by 17.7% on the 200M and by 12.8% on the full dump. Even when we train fastText with the same additional knowledge as ours (column B), our model improves performance by 2.9% on the 50M file, by 5.1% in the 200M and by 11.9% on the full dump.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "We notice the column B (corpus composed of Wikipedia and definitions) has better results than the column A for the 50M (+24% on average) and the 200M file (+12% on average). This demonstrates the strong semantic relations one can find in definitions, and that simply incorporating definitions in small training file can boost the performance of the embeddings. 
Moreover, when the training file is large (full dump), our supervised method with pairs is more efficient, as the boost brought by the concatenation of definitions is insignificant (+1.5% on average).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "We also note that the number of strong and weak pairs drawn must be set according to the size of the training file. For the 50M and 200M tokens files, we train our model with hyperparameters n s = 4 and n w = 5. For the full dump (20 Table 2 : Spearman's rank correlation coefficients between vectors' cosine similarity and human judgement for several datasets (top) and accuracies on text classification task (bottom). We train and evaluate each model 3 times and report the average score for each dataset, as well as the weighted average for all word similarity datasets. Table 3 : Percentage changes of word similarity scores for several datasets after the Faruqui's retrofitting method is applied. We compare each model to their own non-retrofitted version (vs self) and our nonretrofitted version (vs our). A positive percentage indicates the level of improvement of the retrofitting approach, while a negative percentage shows that the compared method is better without retrofitting. As an illustration: the +13.9% at the top left means that retrofitting Word2vec's vectors improves the initial vectors output by 13.9%, while the -7.3% below indicates that our approach without retrofitting is better than the retrofitted Word2vec's vectors.", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 2", "ref_id": null }, { "start": 574, "end": 581, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "50M 200M Full w2v R FT R our R w2v R FT R our R w2v R FT R our R MC-30 vs self +13.9% +9.2% +1.3% +5.8% +4.8% +3.0% +5.2% +2.9% +1.2% vs our -7.3% -4.4% \u2212 -3.6% -2.4% \u2212 -1.0% -0.6% \u2212 MEN-TR-3k vs self +0.9% -0.7% -0.1% +0.7% -1.9% +0.4% +1.4% -2.8% +1.6% vs our -4.2% -7.4% \u2212 -1.3% -1.6% \u2212 -1.7% -3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "times larger than the 200M tokens file), the number of windows in the corpus is largely increased, so is the number of (target,context) pairs. Therefore, we need to adjust the influence of strong and weak pairs and decrease n s and n w . We set n s = 2, n w = 3 to train on the full dump. The Faruqui's retrofitting method improves the word similarity scores on all frameworks for all datasets, except on RW and WS353 (Table 3) . But even when Word2vec and fastText are retrofitted, their scores are still worse than our non-retrofitted model (every percentage on the vs our line are negative). We also notice that our model is compatible with a retrofitting improvement method as our scores are also increased with Faruqui's method.", "cite_spans": [], "ref_spans": [ { "start": 418, "end": 427, "text": "(Table 3)", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "We also observe that, although our model is superior on each corpus size, our model trained on the 50M tokens file outperforms the other models trained on the full dump (an improvement of 17% compared to the results of fastText, our best competitor, trained on the full dump). 
This means considering strong and weak pairs is more efficient than increasing the corpus size and that using dictionaries is a good way to improve the quality of the embeddings when the training file is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "The models based on knowledge bases cited in \u00a72.2 do not provide word similarity scores on all the datasets we used. However, for the reported scores, Dict2vec outperforms these models : Kiela et al. (2015) Table 2 (bottom) reports the classification accuracy for the considered datasets. Our model achieves the same performances as Word2vec and fastText on the 50M file and slightly improves results on the 200M file and the full dump. Using supervision with pairs during training does not make our model specific to the word similarity task which shows that our embeddings can also be used in downstream extrinsic tasks.", "cite_spans": [ { "start": 187, "end": 206, "text": "Kiela et al. (2015)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "Note that for this experiment, the embeddings were fixed and not updated during learning (we only learned the classifier parameters) since our objective was rather to evaluate the capability of the embeddings to be used for another task rather than obtaining the best possible models. It is anyway possible to obtain better results by updating the embeddings and the classifier parameters with respect to the supervised information to adapt the embeddings to the classification task at hand as done in . Table 5 : Weighted average Spearman correlation score of Dict2vec vectors when trained without pairs and with WordNet or dictionary pairs.", "cite_spans": [], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Text classification accuracy", "sec_num": "5.2" }, { "text": "Raw R W N R dict 50M", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionaries vs. WordNet", "sec_num": "5.3" }, { "text": "We also trained Dict2vec with pairs from Word-Net as well as no additional pairs during training (in this case, this is the Skip-gram model from Word2vec). Results are reported in Table 5 . Training with WordNet pairs increases the scores, showing that the supervision brought by positive sampling is beneficial to the model, but lags behind the training using dictionary pairs demonstrating once again that dictionaries contain more semantic information than WordNet.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Dictionaries vs. WordNet", "sec_num": "5.3" }, { "text": "For the positive sampling, an empirical grid search shows that a 1 2 ratio between \u03b2 s and \u03b2 w is a good rule-of-thumb for tuning these hyperparameters. We also notice that when these coefficients are too low (\u03b2 s \u2264 0.5 and \u03b2 w \u2264 0.2), results get worse because the model does not take into account the information from the strong and weak pairs. On the other side, when they are too high (\u03b2 s \u2265 1.2 and \u03b2 w \u2265 0.6), the model discards too much the information from the context in favor of the information from the pairs. 
This behaviour is similar when the number of strong and weak pairs is too low or too high (n s , n w \u2264 2 or n s , n w \u2265 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive and negative sampling", "sec_num": "5.4" }, { "text": "For the negative sampling, we notice that the control brought by the pairs increases the average weighted score by 0.7% compared to the uncontrolled version. We also observe that increasing the number of negative samples does not significantly improve the results except for the RW dataset where using 25 negative samples can boost performances by 10%. Indeed, this dataset is mostly composed of rare words so the embeddings must learn to differentiate unrelated words rather than moving closer related ones. In Fig. 1 , we observe that our model is still able to outperform state-of-the-art approaches when we reduce the dimension of the embeddings to 20 or 40. We also notice that increasing the vector size does increase the performance, but only until a dimension around 100, which is the common dimen-sion used when training on the 50M tokens file for related approaches reported here.", "cite_spans": [], "ref_spans": [ { "start": 512, "end": 518, "text": "Fig. 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Positive and negative sampling", "sec_num": "5.4" }, { "text": "In this paper, we presented Dict2vec, a new approach for learning word embeddings using lexical dictionaries. It is based on a Skip-gram model where the objective function is extended by leveraging word pairs extracted from the definitions weighted differently with respect to the strength of the pairs. Our approach shows better results than state-of-the-art word embeddings methods for the word similarity task, including methods based on a retrofitting from external sources. We also provide the full source code to reproduce the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Definition from Oxford dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://dumps.wikimedia.org/enwiki/20161101/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://mattmahoney.net/dc/textdata#appendixa", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.di.unipi.it/\u02dcgulli/AG_corpus_of_ news_articles.html 7 https://www.yelp.com/dataset_challenge 8 https://github.com/dav/word2vec 9 https://github.com/facebookresearch/fastText", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/mfaruqui/retrofitting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Dbpedia: A nucleus for a web of open data. 
The semantic web pages", "authors": [ { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bizer", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Kobilarov", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Cyganiak", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Ives", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "722--735", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. The semantic web pages 722-735.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Knowledge-powered deep learning for word embedding", "authors": [ { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "132--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embed- ding. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, pages 132-148.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.04606" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606 .", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "J. Artif. Intell. Res.(JAIR)", "volume": "49", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res.(JAIR) 49(1-47).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In Pro- ceedings of the 25th international conference on Machine learning. ACM, pages 160-167.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Fast and robust neural network joint models for statistical machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Rabih", "middle": [], "last": "Zbib", "suffix": "" }, { "first": "Zhongqiang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lamar", "suffix": "" }, { "first": "M", "middle": [], "last": "Richard", "suffix": "" }, { "first": "John", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "1370--1380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In ACL (1). Cite- seer, pages 1370-1380.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies. 
Association for Computational Linguistics, pages 1606-1615.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 10th international conference on World Wide Web", "volume": "", "issue": "", "pages": "406--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The con- cept revisited. In Proceedings of the 10th interna- tional conference on World Wide Web. ACM, pages 406-414.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Papers in Linguistics 1934-1951: Repr", "authors": [ { "first": "John", "middle": [], "last": "Rupert", "suffix": "" }, { "first": "Firth", "middle": [], "last": "", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Rupert Firth. 1957. Papers in Linguistics 1934- 1951: Repr. Oxford University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Simverb-3500: A largescale evaluation set of verb similarity", "authors": [ { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.00869" ] }, "num": null, "urls": [], "raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A large- scale evaluation set of verb similarity. arXiv preprint arXiv:1608.00869 .", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Large-scale learning of word relatedness with constraints", "authors": [ { "first": "Guy", "middle": [], "last": "Halawi", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yehuda", "middle": [], "last": "Koren", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1406--1414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. 
ACM, pages 1406-1414.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Wikireading: A novel large-scale language understanding task over wikipedia", "authors": [ { "first": "Daniel", "middle": [], "last": "Hewlett", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Lacoste", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Fandrianto", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Han", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "David", "middle": [], "last": "Berthelot", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.03542" ] }, "num": null, "urls": [], "raw_text": "Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. arXiv preprint arXiv:1608.03542 .", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Embeddings for word sense disambiguation: An evaluation study", "authors": [ { "first": "Ignacio", "middle": [], "last": "Iacobacci", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "897--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceed- ings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics. volume 1, pages 897-907.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759 .", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Specializing word embeddings for similarity or relatedness", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or re- latedness. 
In Proceedings of EMNLP.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Not all contexts are created equal: Better word representations with variable attention", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Chu-Cheng", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1367--1372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, and Sil- vio Amir. 2015. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1367-1372.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "CoNLL", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. pages 104-113.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" } ], "year": 2010, "venue": "Interspeech", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Inter- speech. volume 2, page 3.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. 
Communications of the ACM 38(11):39- 41.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" }, { "first": "G", "middle": [], "last": "Walter", "suffix": "" }, { "first": "", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and cognitive processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes 6(1):1-28.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532- 43.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A word at a time: computing word relatedness using temporal semantic analysis", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th international conference on World wide web", "volume": "", "issue": "", "pages": "337--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web. ACM, pages 337-346.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "B", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM 8(10):627-633.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The proof and measurement of association between two things", "authors": [ { "first": "Charles", "middle": [], "last": "Spearman", "suffix": "" } ], "year": 1904, "venue": "The American journal of psychology", "volume": "15", "issue": "1", "pages": "72--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Spearman. 1904. The proof and measurement of association between two things. 
The American journal of psychology 15(1):72-101.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Generating text with recurrent neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "James", "middle": [], "last": "Martens", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", "volume": "", "issue": "", "pages": "1017--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, James Martens, and Geoffrey E Hin- ton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 1017-1024.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Knowledge graph and text jointly embedding", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "EMNLP. Citeseer", "volume": "", "issue": "", "pages": "1591--1601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph and text jointly em- bedding. In EMNLP. Citeseer, pages 1591-1601.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Connecting language and knowledge bases with embedding models for relation extraction", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1307.7973" ] }, "num": null, "urls": [], "raw_text": "Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for re- lation extraction. arXiv preprint arXiv:1307.7973 .", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Rcnet: A general framework for incorporating knowledge into word representations", "authors": [ { "first": "Chang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yalong", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoguang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. Rc- net: A general framework for incorporating knowl- edge into word representations. In Proceedings of the 23rd ACM International Conference on Confer- ence on Information and Knowledge Management. 
ACM, pages 1219-1228.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Verb similarity on the taxonomy of WordNet", "authors": [ { "first": "Dongqiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "M", "middle": [ "W" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Powers", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongqiang Yang and David MW Powers. 2006. Verb similarity on the taxonomy of WordNet. Masaryk University.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Improving lexical embeddings with semantic knowledge", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "545--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In ACL (2). pages 545-550.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "End-to-end learning of semantic role labeling using recurrent neural networks", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "1127--1137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural net- works. In ACL (1). pages 1127-1137.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "achieves a correlation of 0.72 on the MEN dataset (vs. 0.756); Xu et al. (2014) achieves 0.683 on the WS353-ALL dataset (vs. 0.758).", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Spearman's rank correlation coefficient for RW-STANFORD (RW) and WS-353-ALL (WS) on the fastText model (FT) and our, with different vector size. Training is done on the corpus A of 50M tokens.", "num": null, "type_str": "figure" }, "TABREF3": { "text": "", "content": "
: Weighted average Spearman correlation score of raw vectors and after retrofitting with WordNet pairs (R_WN) and dictionary pairs (R_dict).
", "type_str": "table", "html": null, "num": null }, "TABREF4": { "text": "reports the Spearman's rank correlation score for vectors obtained after training (Raw column) and the scores after we retrofit those vectors with pairs from WordNet (R W N ) and extracted pairs from dictionaries (R dict ).", "content": "
", "type_str": "table", "html": null, "num": null } } } }