{ "paper_id": "P18-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:41:22.129650Z" }, "title": "Probabilistic FastText for Multi-Sense Word Embeddings", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "", "affiliation": {}, "email": "andrew@cornell.edu" }, { "first": "Anima", "middle": [], "last": "Anandkumar", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-ofart performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.", "pdf_parse": { "paper_id": "P18-1001", "_pdf_hash": "", "abstract": [ { "text": "We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-ofart performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word embeddings are foundational to natural language processing. In order to model language, we need word representations to contain as much semantic information as possible. Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space. Following the * Work done partly during internship at Amazon. seminal work of Mikolov et al. 
(2013a) , there have been numerous works looking to learn efficient word embeddings.", "cite_spans": [ { "start": 245, "end": 268, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF21" }, { "start": 430, "end": 452, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words. To overcome this limitation, character-level word embeddings have been proposed. FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings. In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram. The benefit of this approach is that the training process can then share strength across words composed of common roots. For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding. In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g. Romance languages share latent roots.", "cite_spans": [ { "start": 291, "end": 316, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A different promising direction involves representing words with probability distributions, instead of point vectors. For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information. Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings. For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\". Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "cite_spans": [ { "start": 131, "end": 157, "text": "Vilnis and McCallum (2014)", "ref_id": "BIBREF33" }, { "start": 247, "end": 277, "text": "Athiwaratkun and Wilson (2017)", "ref_id": "BIBREF1" }, { "start": 526, "end": 556, "text": "Athiwaratkun and Wilson (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words. The resulting word embeddings are highly expressive, yet straightforward and interpretable, with simple, efficient, and intuitive training procedures. PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses. In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively. Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information. 
We also derive an efficient energybased max-margin training procedure for PFT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture). Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset. We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models. We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning. Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets. Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies. In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "cite_spans": [ { "start": 309, "end": 334, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF5" }, { "start": 1074, "end": 1105, "text": "(Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words. Our code and embeddings are publicly available. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early word embeddings which capture semantic information include Bengio et al. (2003) , Col-lobert and Weston (2008), and Mikolov et al. (2011) . Later, Mikolov et al. (2013a) developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text. Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "cite_spans": [ { "start": 65, "end": 85, "text": "Bengio et al. (2003)", "ref_id": "BIBREF4" }, { "start": 88, "end": 102, "text": "Col-lobert and", "ref_id": null }, { "start": 103, "end": 121, "text": "Weston (2008), and", "ref_id": "BIBREF8" }, { "start": 122, "end": 143, "text": "Mikolov et al. (2011)", "ref_id": "BIBREF23" }, { "start": 153, "end": 175, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF21" }, { "start": 424, "end": 449, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible. This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance. 
These works focus on modeling subword information in neural networks for tasks such as language modeling.", "cite_spans": [ { "start": 189, "end": 207, "text": "(Kim et al., 2016)", "ref_id": "BIBREF16" }, { "start": 235, "end": 254, "text": "(Kuru et al., 2016)", "ref_id": "BIBREF17" }, { "start": 281, "end": 303, "text": "(Zhao and Zhang, 2016;", "ref_id": "BIBREF35" }, { "start": 304, "end": 321, "text": "Lee et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors. The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) . Neelakantan et al. (2014) further adapts skip-gram with a non-parametric approach to learn embeddings with an arbitrary number of senses per word. Chen et al. (2014) incorporates an external dataset, WORDNET, to learn sense vectors. We compare these models with our multimodal embeddings in Section 4.", "cite_spans": [ { "start": 205, "end": 225, "text": "(Huang et al., 2012)", "ref_id": "BIBREF14" }, { "start": 301, "end": 320, "text": "(Tian et al., 2014)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure. We describe the probabilistic subword representation in Section 3.1. We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3. We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic FastText", "sec_num": "3" }, { "text": "We represent each word with a Gaussian mixture with K Gaussian components. That is, a word w is associated with a density function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "f(x) = \\sum_{i=1}^{K} p_{w,i} \\, \\mathcal{N}(x; \\mu_{w,i}, \\Sigma_{w,i})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "where {μ_{w,i}}_{i=1}^{K} are the mean vectors, {Σ_{w,i}}_{i=1}^{K} are the covariance matrices, and {p_{w,i}}_{i=1}^{K} are the component probabilities, which sum to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "The mean vectors of the Gaussian components hold much of the semantic information in density embeddings. While these models are successful on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus. We propose using subword structures to estimate the mean vectors. 
We outline the formulation below.", "cite_spans": [ { "start": 186, "end": 213, "text": "(Vilnis and McCallum, 2014;", "ref_id": "BIBREF33" }, { "start": 214, "end": 244, "text": "Athiwaratkun and Wilson, 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "For word w, we estimate the mean vector μ_w as the average of its n-gram vectors and its dictionary-level vector. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "\\mu_w = \\frac{1}{|NG_w| + 1} \\left( v_w + \\sum_{g \\in NG_w} z_g \\right) \\quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "where z_g is a vector associated with an n-gram g, v_w is the dictionary representation of word w, and NG_w is the set of n-grams of word w. Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '⟨' and end-of-word character '⟩', are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "• 3-grams: ⟨be, bea, eau, aut, uti, tif, ful, ul⟩", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "• 4-grams: ⟨bea, beau, ..., iful, ful⟩", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and train by maximizing the similarity between the words' probability densities, as described in Sections 3.2 and 3.3. Figure 1a depicts the subword structure for the mean vector. Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM). In the Gaussian case, we represent each mean vector with a subword estimation. For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based. This choice of dictionary-based mean vectors for the other components reduces the constraint imposed by the subword structure and promotes independence for meaning discovery.", "cite_spans": [ { "start": 46, "end": 71, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 407, "end": 416, "text": "Figure 1a", "ref_id": "FIGREF0" }, { "start": 468, "end": 477, "text": "Figure 1b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Probabilistic Subword Representation", "sec_num": "3.1" }, { "text": "Traditionally, if words are represented by vectors, a common similarity metric is the dot product. In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, ⟨·,·⟩_{L_2}, which is called the expected likelihood kernel (Jebara et al., 2004) .", "cite_spans": [ { "start": 280, "end": 301, "text": "(Jebara et al., 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Measure between Words", "sec_num": "3.2" }, 
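To make the subword representation concrete, below is a minimal NumPy sketch of the n-gram extraction and the subword mean vector of Equation (1). The boundary symbols '<' and '>', the function names, and the plain dictionaries dict_vectors and ngram_vectors are illustrative assumptions rather than the authors' implementation; in practice, as in FASTTEXT, n-grams are typically hashed into a fixed-size table.

```python
import numpy as np

def extract_ngrams(word, n_min=3, n_max=6, bow="<", eow=">"):
    """Character n-grams of `word`, padded with (assumed) boundary symbols."""
    padded = bow + word + eow
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def subword_mean(word, dict_vectors, ngram_vectors):
    """Equation (1): mu_w = (v_w + sum of n-gram vectors z_g) / (|NG_w| + 1)."""
    grams = [g for g in extract_ngrams(word) if g in ngram_vectors]
    vectors = [dict_vectors[word]] + [ngram_vectors[g] for g in grams]
    return np.mean(vectors, axis=0)
```

For an out-of-vocabulary word, one natural choice under this formulation is to drop the dictionary term and average the available n-gram vectors alone.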
{ "text": "We define the energy E(f, g) between two words f and g to be the logarithm of this inner product, E(f, g) = log⟨f, g⟩_{L_2} = log ∫ f(x) g(x) dx. With Gaussian mixtures f(x) = ∑_{i=1}^{K} p_i N(x; μ_{f,i}, Σ_{f,i}) and g(x) = ∑_{i=1}^{K} q_i N(x; μ_{g,i}, Σ_{g,i}), where ∑_{i=1}^{K} p_i = 1 and ∑_{i=1}^{K} q_i = 1, the energy has a closed form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E(f, g) = \\log \\sum_{j=1}^{K} \\sum_{i=1}^{K} p_i q_j e^{\\xi_{i,j}}", "eq_num": "(2)" } ], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "where ξ_{i,j} is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\xi_{i,j} \\equiv \\log \\mathcal{N}(0; \\mu_{f,i} - \\mu_{g,j}, \\Sigma_{f,i} + \\Sigma_{g,j}) = -\\frac{1}{2} \\log \\det(\\Sigma_{f,i} + \\Sigma_{g,j}) - \\frac{D}{2} \\log(2\\pi) - \\frac{1}{2} (\\mu_{f,i} - \\mu_{g,j})^{\\top} (\\Sigma_{f,i} + \\Sigma_{g,j})^{-1} (\\mu_{f,i} - \\mu_{g,j})", "eq_num": "(3)" } ], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "Figure 2: Interaction between the Gaussian mixture components of the words \"rock\" and \"pop\" (components rock:0, rock:1, pop:0, pop:1), with the partial energies ξ_{0,0}, ξ_{0,1}, ξ_{1,0}, ξ_{1,1} between component pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measure between Words", "sec_num": "3.2" }, { "text": "The model parameters that we seek to learn are v_w for each word w and z_g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than that of a negative context pair w and n by a margin m. We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome:", "cite_spans": [ { "start": 248, "end": 268, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Loss Function", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(f, g) = \\max \\left[ 0, m - E(f, g) + E(f, n) \\right]", "eq_num": "(4)" } ], "section": "Loss Function", "sec_num": "3.3" }, { "text": "We describe how to sample words as well as their positive and negative contexts in Section 3.5. This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words. That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning. 
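The following sketch, under the assumption of general covariance matrices and with illustrative function names, computes the partial energies of Equation (3), the closed-form energy of Equation (2) via a numerically stable log-sum-exp, and the max-margin loss of Equation (4); it is not the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

def partial_energy(mu_fi, Sigma_fi, mu_gj, Sigma_gj):
    """Equation (3): xi_ij = log N(0; mu_fi - mu_gj, Sigma_fi + Sigma_gj)."""
    D = mu_fi.shape[0]
    S = Sigma_fi + Sigma_gj
    diff = mu_fi - mu_gj
    _, logdet = np.linalg.slogdet(S)
    return (-0.5 * logdet
            - 0.5 * D * np.log(2 * np.pi)
            - 0.5 * diff @ np.linalg.solve(S, diff))

def energy(p, mus_f, Sigmas_f, q, mus_g, Sigmas_g):
    """Equation (2): E(f, g) = log sum_ij p_i q_j exp(xi_ij)."""
    K = len(p)
    xi = np.array([[partial_energy(mus_f[i], Sigmas_f[i], mus_g[j], Sigmas_g[j])
                    for j in range(K)] for i in range(K)])
    return logsumexp(xi + np.log(np.outer(p, q)))

def margin_loss(energy_pos, energy_neg, m=1.0):
    """Equation (4): hinge loss pushing E(w, c) above E(w, n) by margin m."""
    return max(0.0, m - energy_pos + energy_neg)
```

With the spherical covariances introduced in Section 3.4, the partial energy reduces to a scaled negative squared Euclidean distance between component means, which avoids the determinant and the matrix solve above.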
For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loss Function", "sec_num": "3.3" }, { "text": "In theory, it can be beneficial to have covariance matrices as learnable parameters. In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources. Using spherical covariances for each component, we can further simplify the energy function as follows:", "cite_spans": [ { "start": 98, "end": 128, "text": "Athiwaratkun and Wilson (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Energy Simplification", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03be i,j = \u2212 \u03b1 2 \u2022 ||\u00b5 f,i \u2212 \u00b5 g,j || 2 ,", "eq_num": "(5)" } ], "section": "Energy Simplification", "sec_num": "3.4" }, { "text": "where the hyperparameter \u03b1 is the scale of the inverse covariance term in Equation 3. We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Energy Simplification", "sec_num": "3.4" }, { "text": "To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length . We also use a word sampling technique similar to Mikolov et al. (2013b) . This subsampling procedure selects words for training with lower probabilities if they appear frequently. This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc. In particular, word w has probability P (w) = 1 \u2212 t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold. A negative context word is selected using a distribution P n (w) \u221d U (w) 3/4 where U (w) is a unigram probability of word w. The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "cite_spans": [ { "start": 163, "end": 185, "text": "Mikolov et al. (2013b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Word Sampling", "sec_num": "3.5" }, { "text": "We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach. In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "First, we describe the training details in Section 4.1. We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures. 
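Before turning to the experiments, here is a brief sketch of the word-sampling scheme of Section 3.5, with illustrative names; the discard probability follows the formula stated above, while the subsampling rule originally proposed for WORD2VEC uses 1 - sqrt(t/f(w)).

```python
import numpy as np

def subsample(words, frequency, t=1e-5, rng=np.random.default_rng(0)):
    """Drop frequent words: following the text above, word w is discarded with
    probability P(w) = 1 - t / f(w), clipped to [0, 1]."""
    kept = []
    for w in words:
        p_discard = min(1.0, max(0.0, 1.0 - t / frequency[w]))
        if rng.random() >= p_discard:
            kept.append(w)
    return kept

def negative_distribution(counts):
    """Negative-sampling distribution P_n(w) proportional to U(w)^(3/4)."""
    vocab = list(counts)
    probs = np.array([counts[w] for w in vocab], dtype=float) ** 0.75
    return vocab, probs / probs.sum()
```

The returned vocab and probs can then be used with rng.choice(vocab, p=probs) to draw negative context words.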
Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture (W2GM) (Athiwaratkun and Wilson, 2017) embeddings. We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4. Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "cite_spans": [ { "start": 358, "end": 383, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF5" }, { "start": 424, "end": 451, "text": "(Vilnis and McCallum, 2014)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We train our models on both English and foreign language datasets. For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) , which consists of 3.376 billion words. We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "cite_spans": [ { "start": 129, "end": 150, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses. We note that our model should be applicable to other languages as well. We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpuses, consisting of 1.634, 1.716, and 1.955 billion words respectively. We use the same threshold, filtering out words that occur fewer than 5 times in each corpus. We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "cite_spans": [ { "start": 247, "end": 268, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "We adjust the hyperparameters on the English corpus and use them for the foreign languages. Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5. We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ {1/(5×10⁻³), 1/10⁻³, 1/(2×10⁻⁴), 1/10⁻⁴} on our English corpus. The hyperparameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α. In particular, the learning rates used are γ = {10⁻⁴, 10⁻⁵, 10⁻⁶} for the respective α values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "Other fixed hyperparameters include the number of Gaussian components K = 2, a context window length of 10, and the subsampling threshold t = 10⁻⁵. Similar to the setup in FASTTEXT, we use n-grams with n = 3, 4, 5, 6 to estimate the mean vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors. Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell. We note that subword embeddings prefer words with overlapping characters as nearest neighbors. 
For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\". For the purpose of demonstration, we only show words with meaningful variations and omit words with small character-based variations previously mentioned. However, all words shown are in the top-100 nearest words.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 178, "end": 185, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Qualitative Evaluation -Nearest neighbors", "sec_num": "4.2" }, { "text": "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank. The single-component case also has interesting behavior. We observe that the subword embeddings of polysemous words can represent both meanings. For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Evaluation -Nearest neighbors", "sec_num": "4.2" }, { "text": "We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) . Each dataset contains a list of word pairs with a human score of how related or similar the two words are. We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set. We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness. 
We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that aims to measure the ability of embeddings to discern multiple meanings.", "cite_spans": [ { "start": 88, "end": 107, "text": "(Hill et al., 2014)", "ref_id": "BIBREF13" }, { "start": 117, "end": 143, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF10" }, { "start": 153, "end": 173, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF6" }, { "start": 182, "end": 208, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF24" }, { "start": 217, "end": 250, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF28" }, { "start": 260, "end": 283, "text": "(Yang and Powers, 2006)", "ref_id": "BIBREF34" }, { "start": 303, "end": 326, "text": "(Radinsky et al., 2011;", "ref_id": "BIBREF27" }, { "start": 327, "end": 347, "text": "Halawi et al., 2012)", "ref_id": "BIBREF12" }, { "start": 360, "end": 380, "text": "(Luong et al., 2013)", "ref_id": "BIBREF20" }, { "start": 811, "end": 831, "text": "(Huang et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Evaluation", "sec_num": "4.3" }, { "text": "Table 1. Nearest neighbors of polysemous words. Multi-component case (Word, Co. = component, Nearest Neighbors): rock 0: rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, coutcrops:0; rock 1: rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank 0: bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank 1: bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1; star 0: stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1; star 1: star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell 0: cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell 1: cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left 0: left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left 1: left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0. Single-component case (Word, Nearest Neighbors): rock: rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank: bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.; star: movie-stars, star-planet, starsailor, Star, starsign; cell: cell/tumour; left: left/joined, leaving, left, right, leftsided, lefted, leftside.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Evaluation", "sec_num": "4.3" }, { "text": "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores generated by the embeddings. The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels. The scores we use are cosine-similarity scores between the mean vectors. In the case of Gaussian mixtures, we use the pairwise maximum score:", "cite_spans": [ { "start": 39, "end": 55, "text": "(Spearman, 1904)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Evaluation", "sec_num": "4.3" }, 
{ "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(f, g) = \\max_{i \\in 1,\\dots,K} \\max_{j \\in 1,\\dots,K} \\frac{\\mu_{f,i} \\cdot \\mu_{g,j}}{||\\mu_{f,i}|| \\cdot ||\\mu_{g,j}||}", "eq_num": "(6)" } ], "section": "Word Similarity Evaluation", "sec_num": "4.3" }, { "text": "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meaning. Therefore, this similarity score yields the most related senses of a given word pair. This score reduces to the cosine similarity in the Gaussian case (K = 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Evaluation", "sec_num": "4.3" }, { "text": "We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300-dimensional mean vectors. The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) . For comparison, we use the public code 3 to train the 300-dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 . We calculate Spearman's correlations for each of the word similarity datasets. These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility. For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "cite_spans": [ { "start": 228, "end": 258, "text": "Athiwaratkun and Wilson (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "sec_num": "4.3.1" }, { "text": "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM. Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT. We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets. However, the datasets on which W2GM performs better than PFT-GM often have small sizes, such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise. Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models. 
In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 314, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "sec_num": "4.3.1" }, { "text": "In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM. We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses. We use the maximum similarity score (Equation 6), denoted as MAXSIM. AVESIM denotes the average of the similarity scores, rather than the maximum. We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information. Our model achieves state-of-the-art results, similar to that of Neelakantan et al. (2014) .", "cite_spans": [ { "start": 216, "end": 236, "text": "(Huang et al., 2012)", "ref_id": "BIBREF14" }, { "start": 688, "end": 713, "text": "Neelakantan et al. (2014)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Comparison Against Multi-Prototype Models", "sec_num": "4.3.2" }, { "text": "We evaluate the foreign-language embeddings on word similarity datasets in respective languages. We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models. For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) . For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses. We also train dictionary-level models W2G, and W2GM for comparison. Table 4 shows the Spearman's correlation results of our models. We outperform FASTTEXT on many word similarity benchmarks. Our results are also significantly better than the dictionary-based models, W2G and W2GM. We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "cite_spans": [ { "start": 146, "end": 174, "text": "(Leviant and Reichart, 2015)", "ref_id": "BIBREF19" }, { "start": 212, "end": 228, "text": "(Gurevych, 2005)", "ref_id": "BIBREF11" }, { "start": 271, "end": 297, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF10" }, { "start": 404, "end": 429, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 606, "end": 613, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation on Foreign Language Embeddings", "sec_num": "4.4" }, { "text": "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation. For example, piano in Italian can mean \"floor\" or \"slow\". These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure). 
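For reference, a minimal sketch of the MAXSIM score of Equation (6) and the AVESIM variant used in the SCWS comparison of Section 4.3.2; mus_f and mus_g stand for the lists of component mean vectors of two words, and the names are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_sim(mus_f, mus_g):
    """MAXSIM (Equation 6): maximum pairwise cosine similarity over component means."""
    return max(cosine(mu_i, mu_j) for mu_i in mus_f for mu_j in mus_g)

def ave_sim(mus_f, mus_g):
    """AVESIM: average pairwise cosine similarity over component means."""
    sims = [cosine(mu_i, mu_j) for mu_i in mus_f for mu_j in mus_g]
    return sum(sims) / len(sims)
```

Spearman's correlation between these scores and the human ratings can then be computed with, for example, scipy.stats.spearmanr.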
Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "cite_spans": [], "ref_spans": [ { "start": 421, "end": 428, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation on Foreign Language Embeddings", "sec_num": "4.4" }, { "text": "One of the motivations for using subword information is the ability to handle out-of-vocabulary words. Another benefit is the ability to improve the semantics of rare words via subword sharing. Since text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appear much less frequently. Training these words to have a good semantic representation is challenging if done at the word level alone. However, an n-gram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, which further augments both words' semantics. Figure 3 shows the contribution of n-grams to the final representation. We show only the n-grams with the top-5 and bottom-5 similarity scores. We observe that the final representations of both words align with n-grams such as \"abno\", \"bnor\", \"abnorm\", and \"anbnor\".", "cite_spans": [], "ref_spans": [ { "start": 603, "end": 611, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Numbers of Components", "sec_num": "5" }, { "text": "Figure 3: Contribution of each n-gram vector to the final representation for the words \"abnormal\" (top) and \"abnormality\" (bottom). The x-axis is the cosine similarity between each n-gram vector z_g^{(w)} and the final vector μ_w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numbers of Components", "sec_num": "5" }, { "text": "Our model can be trained with more than 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words. Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "cite_spans": [ { "start": 72, "end": 102, "text": "Athiwaratkun and Wilson (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Numbers of Components", "sec_num": "5" }, { "text": "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model. For example, the two mixture components can learn to cluster together to form a more heavy-tailed unimodal distribution, which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numbers of Components", "sec_num": "5" }, { "text": "In addition, we observe that our model with K components can capture more than K meanings. For instance, in a K = 1 model, the word pairs (\"cell\", \"jail\"), (\"cell\", \"biology\"), and (\"cell\", \"phone\") all have positive similarity scores. In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) . 
However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "cite_spans": [ { "start": 392, "end": 412, "text": "(Arora et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Numbers of Components", "sec_num": "5" }, { "text": "We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words. The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with multimodal density representations. Our models offer better semantic quality, outperforming competing models on word similarity benchmarks. Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance. This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "cite_spans": [ { "start": 338, "end": 369, "text": "(Athiwaratkun and Wilson, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Other future work involves co-training PFT on many languages. Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies. We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment.", "cite_spans": [ { "start": 163, "end": 183, "text": "(Smith et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://github.com/benathi/multisense-prob-fasttext", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The orderings of indices of the components for each word are arbitrary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/benathi/word2gm 4 https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki. 
en.zip", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/fastText.git", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Linear algebraic structure of word senses, with applications to polysemy", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yuanzhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Risteski", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. Linear al- gebraic structure of word senses, with appli- cations to polysemy. CoRR abs/1601.03764.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multimodal word distributions", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun and Andrew Gordon Wilson. 2017. Multimodal word distributions. In ACL. https://arxiv.org/abs/1704.08424.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On modeling hierarchical data via probabilistic order embeddings", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun and Andrew Gordon Wilson. 2018. On modeling hierarchical data via probabilistic or- der embeddings. ICLR .", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The wacky wide web: a collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "3", "pages": "209--226", "other_ids": { "DOI": [ "10.1007/s10579-009-9081-4" ] }, "num": null, "urls": [], "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Fer- raresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguis- tically processed web-crawled corpora. Lan- guage Resources and Evaluation 43(3):209-226. 
https://doi.org/10.1007/s10579-009-9081-4.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vin- cent, and Christian Janvin. 2003. A neu- ral probabilistic language model. Journal of Machine Learning Research 3:1137-1155. http://www.jmlr.org/papers/v3/bengio03a.html.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR abs/1607.04606. http://arxiv.org/abs/1607.04606.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam", "middle": [ "Khanh" ], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "J. Artif. Int. Res", "volume": "49", "issue": "1", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Ba- roni. 2014. Multimodal distributional se- mantics. J. Artif. Int. Res. 49(1):1-47.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A unified model for word sense representation and disambiguation", "authors": [ { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1025--1035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense represen- tation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25- 29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL. pages 1025-1035. 
http://aclweb.org/anthology/D/D14/D14-1110.pdf.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008)", "volume": "", "issue": "", "pages": "160--167", "other_ids": { "DOI": [ "http://doi.acm.org/10.1145/1390156.1390177" ] }, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A uni- fied architecture for natural language processing: deep neural networks with multitask learning. In Machine Learning, Proceedings of the Twenty- Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008. pages 160-167. http://doi.acm.org/10.1145/1390156.1390177.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [ "C" ], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for on- line learning and stochastic optimization. Jour- nal of Machine Learning Research 12:2121-2159. http://dl.acm.org/citation.cfm?id=2021068.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Placing search in context: the concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2002, "venue": "ACM Trans. Inf. Syst", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": { "DOI": [ "http://doi.acm.org/10.1145/503104.503110" ] }, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: the con- cept revisited. ACM Trans. Inf. Syst. 20(1):116-131. http://doi.acm.org/10.1145/503104.503110.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using the structure of a conceptual network in computing semantic relatedness", "authors": [ { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2005, "venue": "Natural Language Processing -IJCNLP 2005", "volume": "", "issue": "", "pages": "767--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iryna Gurevych. 2005. Using the structure of a concep- tual network in computing semantic relatedness. In Natural Language Processing -IJCNLP 2005, Sec- ond International Joint Conference, Jeju Island, Ko- rea, October 11-13, 2005, Proceedings. 
pages 767- 778.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Large-scale learning of word relatedness with constraints", "authors": [ { "first": "Guy", "middle": [], "last": "Halawi", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yehuda", "middle": [], "last": "Koren", "suffix": "" } ], "year": 2012, "venue": "The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, Beijing", "volume": "", "issue": "", "pages": "1406--1414", "other_ids": { "DOI": [ "http://doi.acm.org/10.1145/2339530.2339751" ] }, "num": null, "urls": [], "raw_text": "Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In The 18th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '12, Bei- jing, China, August 12-16, 2012. pages 1406-1414. http://doi.acm.org/10.1145/2339530.2339751.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. CoRR abs/1408.3456. http://arxiv.org/abs/1408.3456.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving word representations via global context and multiple word prototypes", "authors": [ { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference", "volume": "1", "issue": "", "pages": "873--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In The 50th Annual Meeting of the As- sociation for Computational Linguistics, Proceed- ings of the Conference, July 8-14, 2012, Jeju Island, Korea -Volume 1: Long Papers. pages 873-882. http://www.aclweb.org/anthology/P12-1092.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Probability product kernels", "authors": [ { "first": "Tony", "middle": [], "last": "Jebara", "suffix": "" }, { "first": "Risi", "middle": [], "last": "Kondor", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Howard", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "819--844", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Jebara, Risi Kondor, and Andrew Howard. 2004. Probability product kernels. 
Journal of Machine Learning Research 5:819-844.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Character-aware neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2741--2749", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2016. Character-aware neural lan- guage models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12- 17, 2016, Phoenix, Arizona, USA.. pages 2741- 2749.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Charner: Character-level named entity recognition", "authors": [ { "first": "Onur", "middle": [], "last": "Kuru", "suffix": "" }, { "first": "Ozan", "middle": [ "Arkan" ], "last": "Can", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2016, "venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16", "volume": "", "issue": "", "pages": "911--921", "other_ids": {}, "num": null, "urls": [], "raw_text": "Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. Charner: Character-level named entity recogni- tion. In COLING 2016, 26th International Con- ference on Computational Linguistics, Proceed- ings of the Conference: Technical Papers, Decem- ber 11-16, 2016, Osaka, Japan. pages 911-921.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Fully character-level neural machine translation without explicit segmentation", "authors": [ { "first": "Jason", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2017, "venue": "TACL", "volume": "5", "issue": "", "pages": "365--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without ex- plicit segmentation. TACL 5:365-378.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics", "authors": [ { "first": "Ira", "middle": [], "last": "Leviant", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ira Leviant and Roi Reichart. 2015. Judgment lan- guage matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR abs/1508.00106.
http://arxiv.org/abs/1508.00106.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Sofia, Bulgaria.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word repre- sentations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Efficient estimation of word repre- sentations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Extensions of recurrent neural network language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kombrink", "suffix": "" }, { "first": "Luk\u00e1s", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Cernock\u00fd", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "5528--5531", "other_ids": { "DOI": [ "10.1109/ICASSP.2011.5947611" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Stefan Kombrink, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2011. Exten- sions of recurrent neural network language model. In Proceedings of the IEEE International Confer- ence on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic. pages 5528-5531. https://doi.org/10.1109/ICASSP.2011.5947611.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Contextual Correlates of Semantic Similarity. 
Language & Cognitive Processes", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Walter", "middle": [ "G" ], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "", "volume": "6", "issue": "", "pages": "1--28", "other_ids": { "DOI": [ "10.1080/01690969108406936" ] }, "num": null, "urls": [], "raw_text": "George A. Miller and Walter G. Charles. 1991. Contextual Correlates of Semantic Similarity. Language & Cognitive Processes 6(1):1-28. https://doi.org/10.1080/01690969108406936.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Efficient nonparametric estimation of multiple embeddings per word in vector space", "authors": [ { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Jeevan", "middle": [], "last": "Shankar", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Passos", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1059--1069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL. pages 1059-1069. http://aclweb.org/anthology/D/D14/D14-1113.pdf.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL. pages 1532-1543. http://aclweb.org/anthology/D/D14/D14-1162.pdf.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A word at a time: Computing word relatedness using temporal semantic analysis", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th International Conference on World Wide Web. WWW '11", "volume": "", "issue": "", "pages": "337--346", "other_ids": { "DOI": [ "http://doi.acm.org/10.1145/1963405.1963455" ] }, "num": null, "urls": [], "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceed- ings of the 20th International Conference on World Wide Web. WWW '11, pages 337-346. 
http://doi.acm.org/10.1145/1963405.1963455.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Commun. ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": { "DOI": [ "http://doi.acm.org/10.1145/365628.365657" ] }, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B. Goode- nough. 1965. Contextual correlates of syn- onymy. Commun. ACM 8(10):627-633.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "Samuel", "middle": [ "L" ], "last": "Smith", "suffix": "" }, { "first": "David", "middle": [ "H", "P" ], "last": "Turban", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Hamblin", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L. Smith, David H. P. Turban, Steven Ham- blin, and Nils Y. Hammerla. 2017. Offline bilin- gual word vectors, orthogonal transformations and the inverted softmax. CoRR abs/1702.03859.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The proof and measurement of association between two things", "authors": [ { "first": "C", "middle": [], "last": "Spearman", "suffix": "" } ], "year": 1904, "venue": "American Journal of Psychology", "volume": "15", "issue": "", "pages": "88--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Spearman. 1904. The proof and measurement of association between two things. American Journal of Psychology 15:88-103.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A probabilistic model for learning multi-prototype word embeddings", "authors": [ { "first": "Fei", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hanjun", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A prob- abilistic model for learning multi-prototype word embeddings. In COLING 2014, 25th International Conference on Computational Linguistics, Proceed- ings of the Conference: Technical Papers, Au- gust 23-29, 2014, Dublin, Ireland. pages 151-160.
http://aclweb.org/anthology/C/C14/C14-1016.pdf.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Word representations via gaussian embedding", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Vilnis and Andrew McCallum. 2014. Word representations via gaussian embedding. CoRR abs/1412.6623. http://arxiv.org/abs/1412.6623.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Verb similarity on the taxonomy of wordnet", "authors": [ { "first": "Dongqiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "M", "middle": [ "W" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Powers", "suffix": "" } ], "year": 2006, "venue": "the 3rd International WordNet Conference (GWC-06)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongqiang Yang and David M. W. Powers. 2006. Verb similarity on the taxonomy of wordnet. In In the 3rd International WordNet Conference (GWC-06), Jeju Island, Korea.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "An efficient character-level neural machine translation", "authors": [ { "first": "Shenjian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhihua", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shenjian Zhao and Zhihua Zhang. 2016. An efficient character-level neural machine translation. CoRR abs/1608.04738. http://arxiv.org/abs/1608.04738.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Human behavior and the principle of least effort: an introduction to human ecology", "authors": [ { "first": "G", "middle": [ "K" ], "last": "Zipf", "suffix": "" } ], "year": 1949, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G.K. Zipf. 1949. Human behavior and the principle of least effort: an introduction to human ecology. Addison-Wesley Press.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "(1a) a Gaussian component and its subword structure. The bold arrow represents the final mean vector, estimated from averaging the grey n-gram vectors. (1b) PFT-G model: Each Gaussian component's mean vector is a subword vector. (1c) PFT-GM model: For each Gaussian mixture distribution, one component's mean vector is estimated by a subword structure whereas other components are dictionary-based vectors." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "The interactions among Gaussian components of word rock and word pop. The partial energy is the highest for the pair rock:0 (the zeroth component of rock) and pop:1 (the first component of pop), reflecting the similarity in meanings." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "time mi-temps (half-time), partiel (partial), Temps (time), annualis (annualized), horaires (schedule) (FR) voler steal envoler (fly), voleuse (thief), cambrioler (burgle), voleur (thief), violer (violate), picoler (tipple) (FR) voler fly airs (air), vol (flight), volent (fly), envoler (flying), atterrir (land)" }, "TABREF1": { "num": null, "html": null, "content": "
Dataset    D = 50                             D = 300
           W2G    W2GM   PFT-G   PFT-GM       FASTTEXT  W2G    W2GM   PFT-G   PFT-GM
SL-999     29.35  29.31  27.34   34.13        38.03     38.84  39.62  35.85   39.60
WS-353     71.53  73.47  67.17   71.10        73.88     78.25  79.38  73.75   76.11
MEN-3K     72.58  73.55  70.61   73.90        76.37     78.40  78.76  77.78   79.65
MC-30      76.48  79.08  73.54   79.75        81.20     82.42  84.58  81.90   80.93
RG-65      73.30  74.51  70.43   78.19        79.98     80.34  80.95  77.57   79.81
YP-130     41.96  45.07  37.10   40.91        53.33     46.40  47.12  48.52   54.93
MT-287     64.79  66.60  63.96   67.65        67.93     67.74  69.65  66.41   69.44
MT-771     60.86  60.82  60.40   63.86        66.89     70.10  70.36  67.18   69.68
RW-2K      28.78  28.62  44.05   42.78        48.09     35.49  42.73  50.37   49.36
AVG.       42.32  42.76  44.35   46.47        49.28     47.71  49.54  49.86   51.10
", "text": "Nearest neighbors of PFT-GM (top) and PFT-G (bottom). The notation w:i denotes the i th mixture component of the word w.", "type_str": "table" }, "TABREF2": { "num": null, "html": null, "content": "", "text": "", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "content": "
", "text": "Spearman's Correlation \u03c1 \u00d7 100 on word similarity dataset SCWS.", "type_str": "table" }, "TABREF6": { "num": null, "html": null, "content": "
", "text": "Nearest neighbors of polysemies based on our foreign language PFT-GM models.", "type_str": "table" } } } }