{ "paper_id": "P15-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:10:52.311111Z" }, "title": "Learning Word Representations by Jointly Modeling Syntagmatic and Paradigmatic Relations", "authors": [ { "first": "Fei", "middle": [], "last": "Sun", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "", "affiliation": {}, "email": "guojiafeng@ict.ac.cn" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "", "affiliation": {}, "email": "lanyanyan@ict.ac.cn" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "", "affiliation": {}, "email": "junxu@ict.ac.cn" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Vector space representation of words has been widely used to capture fine-grained linguistic regularities, and proven to be successful in various natural language processing tasks in recent years. However, existing models for learning word representations focus on either syntagmatic or paradigmatic relations alone. In this paper, we argue that it is beneficial to jointly modeling both relations so that we can not only encode different types of linguistic properties in a unified way, but also boost the representation learning due to the mutual enhancement between these two types of relations. We propose two novel distributional models for word representation using both syntagmatic and paradigmatic relations via a joint training objective. The proposed models are trained on a public Wikipedia corpus, and the learned representations are evaluated on word analogy and word similarity tasks. The results demonstrate that the proposed models can perform significantly better than all the state-of-the-art baseline methods on both tasks.", "pdf_parse": { "paper_id": "P15-1014", "_pdf_hash": "", "abstract": [ { "text": "Vector space representation of words has been widely used to capture fine-grained linguistic regularities, and proven to be successful in various natural language processing tasks in recent years. However, existing models for learning word representations focus on either syntagmatic or paradigmatic relations alone. In this paper, we argue that it is beneficial to jointly modeling both relations so that we can not only encode different types of linguistic properties in a unified way, but also boost the representation learning due to the mutual enhancement between these two types of relations. We propose two novel distributional models for word representation using both syntagmatic and paradigmatic relations via a joint training objective. The proposed models are trained on a public Wikipedia corpus, and the learned representations are evaluated on word analogy and word similarity tasks. The results demonstrate that the proposed models can perform significantly better than all the state-of-the-art baseline methods on both tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Vector space models of language represent each word with a real-valued vector that captures both semantic and syntactic information of the word. 
The representations can be used as basic features in a variety of applications, such as information retrieval (Manning et al., 2008) , named entity recognition (Collobert et al., 2011 ), question answering (Tellex et al., 2003) , disambiguation (Sch\u00fctze, 1998) , and parsing (Socher et al., 2011) .", "cite_spans": [ { "start": 255, "end": 277, "text": "(Manning et al., 2008)", "ref_id": "BIBREF14" }, { "start": 305, "end": 328, "text": "(Collobert et al., 2011", "ref_id": "BIBREF2" }, { "start": 351, "end": 372, "text": "(Tellex et al., 2003)", "ref_id": "BIBREF28" }, { "start": 390, "end": 405, "text": "(Sch\u00fctze, 1998)", "ref_id": "BIBREF25" }, { "start": 420, "end": 441, "text": "(Socher et al., 2011)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A common paradigm for acquiring such representations is based on the distributional hypothesis (Harris, 1954; Firth, 1957) , which states that words occurring in similar contexts tend to have similar meanings. Based on this hypothesis, various models on learning word representations have been proposed during the last two decades. According to the leveraged distributional information, existing models can be grouped into two categories (Sahlgren, 2008) . The first category mainly concerns the syntagmatic relations among the words, which relate the words that co-occur in the same text region. For example, \"wolf\" is close to \"fierce\" since they often co-occur in a sentence, as shown in Figure 1 . This type of models learn the distributional representations of words based on the text region that the words occur in, as exemplified by Latent Semantic Analysis (LSA) model (Deerwester et al., 1990) and Non-negative Matrix Factorization (NMF) model (Lee and Seung, 1999) . The second category mainly captures paradigmatic relations, which relate words that occur with similar contexts but may not cooccur in the text. For example, \"wolf\" is close to \"tiger\" since they often have similar context words. This type of models learn the word representations based on the surrounding words, as exemplified by the Hyperspace Analogue to Language (HAL) model (Lund et al., 1995) , Continuous Bag-of-Words (CBOW) model and Skip-Gram (SG) model (Mikolov et al., 2013a) .", "cite_spans": [ { "start": 95, "end": 109, "text": "(Harris, 1954;", "ref_id": "BIBREF6" }, { "start": 110, "end": 122, "text": "Firth, 1957)", "ref_id": "BIBREF5" }, { "start": 438, "end": 454, "text": "(Sahlgren, 2008)", "ref_id": "BIBREF24" }, { "start": 877, "end": 902, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF3" }, { "start": 953, "end": 974, "text": "(Lee and Seung, 1999)", "ref_id": "BIBREF10" }, { "start": 1356, "end": 1375, "text": "(Lund et al., 1995)", "ref_id": "BIBREF12" }, { "start": 1440, "end": 1463, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 691, "end": 699, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we argue that it is important to take both syntagmatic and paradigmatic relations into account to build a good distributional model. Firstly, in distributional meaning acquisition, it is expected that a good representation should be able to encode a bunch of linguistic properties. 
For example, it can put semantically related words close (e.g., \"microsoft\" and \"office\"), and also be able to capture syntactic regularities like \"big is to bigger as deep is to deeper\". Obviously, these linguistic properties are related to both syntagmatic and paradigmatic relations, and cannot be well modeled by either alone. Secondly, syntagmatic and paradigmatic relations are complimentary rather than conflicted in representation learning. That is relating the words that co-occur within the same text region (e.g., \"wolf\" and \"fierce\" as well as \"tiger\" and \"fierce\") can better relate words that occur with similar contexts (e.g., \"wolf\" and \"tiger\"), and vice versa. Based on the above analysis, we propose two new distributional models for word representation using both syntagmatic and paradigmatic relations. Specifically, we learn the distributional representations of words based on the text region (i.e., the document) that the words occur in as well as the surrounding words (i.e., word sequences within some window size). By combining these two types of relations either in a parallel or a hierarchical way, we obtain two different joint training objectives for word representation learning. We evaluate our new models in two tasks, i.e., word analogy and word similarity. The experimental results demonstrate that the proposed models can perform significantly better than all of the state-ofthe-art baseline methods in both of the tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The distributional hypothesis has provided the foundation for a class of statistical methods for word representation learning. According to the leveraged distributional information, existing models can be grouped into two categories, i.e., syntagmatic models and paradigmatic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Syntagmatic models concern combinatorial relations between words (i.e., syntagmatic relations), which relate words that co-occur within the same text region (e.g., sentence, paragraph or document).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For example, sentences have been used as the text region to acquire co-occurrence information by (Rubenstein and Goodenough, 1965; Miller and Charles, 1991) . However, as pointed our by Picard (1999) , the smaller the context regions are that we use to collect syntagmatic information, the worse the sparse-data problem will be for the resulting representation. Therefore, syntagmatic models tend to favor the use of larger text regions as context. Specifically, a document is often taken as a natural context of a word following the literature of information retrieval. In these methods, a words-by-documents co-occurrence matrix is built to collect the distributional information, where the entry indicates the (normalized) frequency of a word in a document. A low-rank decomposition is then conducted to learn the distributional word representations. For example, LSA (Deerwester et al., 1990 ) employs singular value decomposition by assuming the decomposed matrices to be orthogonal. 
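As an illustration of this family of syntagmatic models (a minimal sketch with a toy corpus of our own, not code from any of the cited papers), the snippet below builds a words-by-documents count matrix and obtains low-rank word vectors through a truncated SVD in the spirit of LSA:

```python
import numpy as np

# Toy corpus: one list of tokens per document (illustrative only).
docs = [['wolf', 'fierce', 'forest'],
        ['tiger', 'fierce', 'jungle'],
        ['stock', 'market', 'money']]
vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}

# Words-by-documents matrix: entry (w, d) is the frequency of word w in document d.
X = np.zeros((len(vocab), len(docs)))
for j, doc in enumerate(docs):
    for w in doc:
        X[w2i[w], j] += 1

# LSA-style low-rank decomposition: keep the top-K singular dimensions;
# each row of U[:, :K] * S[:K] is a K-dimensional word representation.
K = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
word_vectors = U[:, :K] * S[:K]
print({w: word_vectors[w2i[w]].round(2) for w in vocab})
```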
In (Lee and Seung, 1999), non-negative matrix factorization is conducted over the wordsby-documents matrix to learn the word representations.", "cite_spans": [ { "start": 97, "end": 130, "text": "(Rubenstein and Goodenough, 1965;", "ref_id": "BIBREF23" }, { "start": 131, "end": 156, "text": "Miller and Charles, 1991)", "ref_id": "BIBREF17" }, { "start": 186, "end": 199, "text": "Picard (1999)", "ref_id": "BIBREF21" }, { "start": 871, "end": 895, "text": "(Deerwester et al., 1990", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Paradigmatic models concern substitutional relations between words (i.e., paradigmatic relations), which relate words that occur in the same context but may not at the same time. Unlike syntagmatic model, paradigmatic models typically collect distributional information in a words-bywords co-occurrence matrix, where entries indicate how many times words occur together within a context window of some size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For example, the Hyperspace Analogue to Language (HAL) model (Lund et al., 1995) constructed a high-dimensional vector for words based on the word co-occurrence matrix from a large corpus of text. However, a major problem with HAL is that the similarity measure will be dominated by the most frequent words due to its weight scheme. Various methods have been proposed to address the drawback of HAL. For example, the Correlated Occurrence Analogue to Lexical Semantic (COALS) (Rohde et al., 2006) transformed the co-occurrence matrix by an entropy or correlation based normalization. Bullinaria and Levy (2007) , and Levy and Goldberg (2014b) suggested that positive pointwise mutual information (PPMI) is a good transformation. More recently, Lebret and Collobert (2014) obtained the word representations through a Hellinger PCA (HPCA) of the words-by-words co-occurrence matrix. Pennington et al. (2014) explicitly factorizes the words-by-words co-occurrence matrix to obtain the Global Vectors (GloVe) for word representation.", "cite_spans": [ { "start": 61, "end": 80, "text": "(Lund et al., 1995)", "ref_id": "BIBREF12" }, { "start": 476, "end": 496, "text": "(Rohde et al., 2006)", "ref_id": "BIBREF22" }, { "start": 584, "end": 610, "text": "Bullinaria and Levy (2007)", "ref_id": "BIBREF1" }, { "start": 744, "end": 771, "text": "Lebret and Collobert (2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Alternatively, neural probabilistic language models (NPLMs) (Bengio et al., 2003) learn word representations by predicting the next word given previously seen words. Unfortunately, the training of NPLMs is quite time consuming, since computing probabilities in such model requires normalizing over the entire vocabulary. Recently, Mnih and Teh (2012) applied Noise Contrastive Estimation (NCE) to approximately maximize the probability of the softmax in NPLM. Mikolov et al. (2013a) further proposed continuous bagof-words (CBOW) and skip-gram (SG) models, which use a simple single-layer architecture based on inner product between two word vectors. 
Both models can be learned efficiently via a simple variant of Noise Contrastive Estimation, i.e., Negative sampling (NS) (Mikolov et al., 2013b) .", "cite_spans": [ { "start": 60, "end": 81, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF0" }, { "start": 331, "end": 350, "text": "Mnih and Teh (2012)", "ref_id": "BIBREF18" }, { "start": 460, "end": 482, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF15" }, { "start": 773, "end": 796, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we argue that it is important to jointly model both syntagmatic and paradigmatic relations to learn good word representations. In this way, we not only encode different types of linguistic properties in a unified way, but also boost the representation learning due to the mutual enhancement between these two types of relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Models", "sec_num": "3" }, { "text": "We propose two joint models that learn the distributional representations of words based on both the text region that the words occur in (i.e., syntagmatic relations) and the surrounding words (i.e., paradigmatic relations). To model syntagmatic relations, we follow the previous work (Deerwester et al., 1990; Lee and Seung, 1999) to take document as a nature text region of a word. To model paradigmatic relations, we are inspired by the recent work from Mikolov et al. (Mikolov et al., 2013a; Mikolov et al., 2013b) , where simple models over word sequences are introduced for efficient and effective word representation learning.", "cite_spans": [ { "start": 285, "end": 310, "text": "(Deerwester et al., 1990;", "ref_id": "BIBREF3" }, { "start": 311, "end": 331, "text": "Lee and Seung, 1999)", "ref_id": "BIBREF10" }, { "start": 472, "end": 495, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF15" }, { "start": 496, "end": 518, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Our Models", "sec_num": "3" }, { "text": "In the following, we introduce the notations used in this paper, followed by detailed model descriptions, ending with some discussions of the proposed models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Models", "sec_num": "3" }, { "text": "Before presenting our models, we first list the notations used in this paper. Let D={d 1 , . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation", "sec_num": "3.1" }, { "text": "i\u22121 c n i+1 c n i+2 c n i\u22122 d n w n i . . . . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation", "sec_num": "3.1" }, { "text": "The framework for PDC model. Four words (\"the\", \"cat\", \"on\" and \"the\") are used to predict the center word (\"sat\"). Besides, the document in which the word sequence occurs is also used to predict the center word (\"sat\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Figure 2:", "sec_num": null }, { "text": "w n i \u2208W (i.e. i-th word in document d n ) are the words surrounding it in an L-sized window (c n i\u2212L , . . . , c n i\u22121 , c n i+1 , . . . , c n i+L ) \u2208 H, where c n j \u2208 W, j\u2208{i\u2212L, . . . , i\u22121, i+1, . . . , i+L}. 
Each doc- ument d \u2208 D, each word w \u2208 W and each con- text c \u2208 W is associated with a vector \u20d7 d \u2208 R K , \u20d7 w \u2208 R K and \u20d7 c \u2208 R K , respectively", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Figure 2:", "sec_num": null }, { "text": ", where K is the embedding dimensionality. The entries in the vectors are treated as parameters to be learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projection Figure 2:", "sec_num": null }, { "text": "The first proposed model architecture is shown in Figure 2 . In this model, a target word is predicted by its surrounding context, as well as the document it occurs in. The former prediction task captures the paradigmatic relations, since words with similar context will tend to have similar representations. While the latter prediction task models the syntagmatic relations, since words co-occur in the same document will tend to have similar representations. More detailed analysis on this will be presented in Section 3.4. The model can be viewed as an extension of CBOW model (Mikolov et al., 2013a) , by adding an extra document branch. Since both the context and document are parallel in predicting the target word, we call this model the Parallel Document Context (PDC) model. More formally, the objective function of PDC model is the log likelihood of all words", "cite_spans": [ { "start": 580, "end": 603, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "\u2113 = N \u2211 n=1 \u2211 w n i \u2208dn ( log p(w n i |h n i )+ log p(w n i |d n ) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "where h n i denotes the projection of w n i 's contexts, defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "h n i = f (c n i\u2212L , . . . , c n i\u22121 , c n i+1 , . . . , c n i+L )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "where f (\u2022) can be sum, average, concatenate or max pooling of context vectors 1 . In this paper, we use average, as that of word2vec tool. We use softmax function to define the probabilities p(w n i |h n i ) and p(w n i |d n ) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w n i |h n i ) = exp( \u20d7 w n i \u2022 \u20d7 h n i ) \u2211 w\u2208W exp( \u20d7 w \u2022 \u20d7 h n i ) (1) p(w n i |d n ) = exp( \u20d7 w n i \u2022 \u20d7 d n ) \u2211 w\u2208W exp( \u20d7 w \u2022 \u20d7 d n )", "eq_num": "(2)" } ], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "where \u20d7 h n i denotes projected vector of w n i 's contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "To learn the model, we adopt the negative sampling technique (Mikolov et al., 2013b) for efficient learning since the original objective is intractable for direct optimization. 
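To make the cost of this exact objective concrete, the toy sketch below (our own illustration; the vectors are random rather than learned) evaluates the softmax of Equations (1) and (2). Every probability requires a normalization over the entire vocabulary W, which is exactly what negative sampling avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 10000, 100                            # vocabulary size, embedding dimensionality
W_out = rng.normal(scale=0.1, size=(V, K))   # one output vector per word w in W

def softmax_prob(target, query):
    # Equations (1)/(2): p(target | query), where query is either the averaged
    # context projection h or the document vector d. The denominator sums over
    # all V words, which makes the exact objective expensive to optimize.
    scores = W_out @ query
    scores -= scores.max()                   # for numerical stability
    expd = np.exp(scores)
    return expd[target] / expd.sum()

h = rng.normal(scale=0.1, size=K)            # projected context of the target word
d = rng.normal(scale=0.1, size=K)            # vector of the containing document
print(softmax_prob(42, h), softmax_prob(42, d))
```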
The negative sampling actually defines an alternate training objective function as follows", "cite_spans": [ { "start": 61, "end": 84, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2113= N \u2211 n=1 \u2211 w n i \u2208dn ( log \u03c3( \u20d7 w n i \u2022 \u20d7 h n i )+ log \u03c3( \u20d7 w n i \u2022 \u20d7 d n ) + k \u2022 E w \u2032 \u223cPnw log \u03c3( \u20d7 w \u2032 \u2022 \u20d7 h n i ) + k \u2022 E w \u2032 \u223cPnw log \u03c3( \u20d7 w \u2032 \u2022 \u20d7 d n ) )", "eq_num": "(3)" } ], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "where \u03c3(x) = 1/(1 + exp(\u2212x)), k is the number of \"negative\" samples, w \u2032 denotes the sampled word, and P nw denotes the distribution of negative word samples. We use stochastic gradient descent (SGD) for optimization, and the gradient is calculated via back-propagation algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Context Model", "sec_num": "3.2" }, { "text": "Since the above PDC model can be viewed as an extension of CBOW model, it is natural to introduce the same document-word prediction layer into the SG model. This becomes our second 1 Note that the context window size L can be a function of the target word w n i . In this paper, we use the same strategy as word2vec tools which uniformly samples from the set ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "{1, 2, \u2022 \u2022 \u2022 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "c n i\u22121 c n i+1 c n i+2 c n i\u22122 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 Projection Projection w n i Figure 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "The framework for HDC model. The document is used to predict the target word (\"sat\"). Then, the word (\"sat\") is used to predict the surrounding words (\"the\", \"cat\", \"on\" and \"the\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "model architecture as shown in Figure 3 . Specifically, the document is used to predict a target word, and the target word is further used to predict its surrounding context words. Since the prediction is conducted in a hierarchical manner, we name this model the Hierarchical Document Context (HDC) model. 
Similar as the PDC model, the syntagmatic relation in HDC is modeled by the document-word prediction layer and the wordcontext prediction layer models the paradigmatic relation.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "Formally, the objective function of HDC model is the log likelihood of all words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "\u2113= N \u2211 n=1 \u2211 w n i \u2208dn ( i+L \u2211 j=i\u2212L j\u0338 =i log p(c n j |w n i )+ log p(w n i |d n ) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "where p(w n i |d n ) is defined the same as in Equation (2), and p(c n j |w n i ) is also defined by a softmax function as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "p(c n j |w n i ) = exp( \u20d7 c n j \u2022 \u20d7 w n i ) \u2211 c\u2208W exp(\u20d7 c \u2022 \u20d7 w n i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "Similarly, we adopt the negative sampling technique for learning, which defines the following training objective function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "\u2113 = N \u2211 n=1 \u2211 w n i \u2208dn ( i+L \u2211 j=i\u2212L j\u0338 =i ( log \u03c3( \u20d7 c n j \u2022 \u20d7 w n i ) + k \u2022 E c \u2032 \u223cPnc log \u03c3( \u20d7 c \u2032 \u2022 \u20d7 w n i ) ) + log \u03c3( \u20d7 w n i \u2022 \u20d7 d n ) + k\u2022E w \u2032 \u223cPnw log \u03c3( \u20d7 w \u2032 \u2022 \u20d7 d n ) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "where k is the number of the negative samples, c \u2032 and w \u2032 denotes the sampled context and word respectively, and P nc and P nw denotes the distribution of negative context and word samples respectively 2 . We also employ SGD for optimization, and calculate the gradient via back-propagation algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Context Model", "sec_num": "3.3" }, { "text": "In this section we first show how PDC and HDC models capture the syntagmatic and paradigmatic relations from the viewpoint of matrix factorization. We then talk about the relationship of our models with previous work. As pointed out in (Sahlgren, 2008) , to capture syntagmatic relations, the implementational basis is to collect text data in a words-by-documents cooccurrence matrix in which the entry indicates the (normalized) frequency of occurrence of a word in a document (or, some other type of text region, e.g., a sentence). While the implementational basis for paradigmatic relations is to collect text data in a words-by-words co-occurrence matrix that is populated by counting how many times words occur together within the context window. 
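A minimal sketch of these two implementational bases (our own toy example with illustrative names, not code from the paper) is given below; the first counter populates the words-by-documents matrix and the second the words-by-words matrix within an L-sized window:

```python
from collections import Counter

docs = [['the', 'wolf', 'is', 'fierce'],
        ['the', 'tiger', 'is', 'fierce']]     # toy tokenized corpus
L = 2                                         # context window size

word_doc = Counter()    # (word, document) counts: syntagmatic basis
word_word = Counter()   # (word, context word) counts within +/- L: paradigmatic basis

for n, doc in enumerate(docs):
    for i, w in enumerate(doc):
        word_doc[(w, n)] += 1
        for j in range(max(0, i - L), min(len(doc), i + L + 1)):
            if j != i:
                word_word[(w, doc[j])] += 1

print(word_doc[('wolf', 0)])          # frequency of 'wolf' in document 0
print(word_word[('wolf', 'fierce')])  # co-occurrences of 'wolf' with 'fierce'
```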
We now take the proposed PDC model as an example to show how it achieves these goals, and similar results can be shown for HDC model.", "cite_spans": [ { "start": 236, "end": 252, "text": "(Sahlgren, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "The objective function of PDC with negative sampling in Equation (3) can be decomposed into the following two parts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2113 1 = \u2211 w\u2208W \u2211 h\u2208H ( #(w, h)\u2022 log \u03c3( \u20d7 w \u2022 \u20d7 h) +k\u2022#(h)\u2022p nw (w)log \u03c3(\u2212 \u20d7 w\u2022 \u20d7 h) )", "eq_num": "(4)" } ], "section": "Discussions", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2113 2 = \u2211 d\u2208D \u2211 w\u2208W ( #(w, d)\u2022 log \u03c3( \u20d7 w \u2022 \u20d7 d) +k\u2022|d|\u2022p nw (w)log \u03c3(\u2212 \u20d7 w\u2022 \u20d7 d) )", "eq_num": "(5)" } ], "section": "Discussions", "sec_num": "3.4" }, { "text": "where #(\u2022, \u2022) denotes the number of times the pair", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "(\u2022, \u2022) appears in D, #(h)= \u2211 w\u2208W #(w, h), |d|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "2 Pnc is not necessary to be the same as Pnw.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "denotes the length of document d, the objective function \u2113 1 corresponds to the context-word prediction task and \u2113 2 corresponds to the documentword prediction task. Following the idea introduced by (Levy and Goldberg, 2014a), it is easy to show that the solution of the objective function \u2113 1 follows that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "\u20d7 w \u2022 \u20d7 h = log( #(w, h) #(h) \u2022 p nw (w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": ") \u2212 log k and the solution of the objective function \u2113 2 follows that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "\u20d7 w \u2022 \u20d7 d = log( #(w, d) |d| \u2022 p nw (w) ) \u2212 log k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "It reveals that the PDC model with negative sampling is actually factorizing both a words-bycontexts co-occurrence matrix and a words-bydocuments co-occurrence matrix simultaneously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "In this way, we can see that the implementational basis of the PDC model is consistent with that of syntagmatic and paradigmatic models. In other words, PDC can indeed capture both syntagmatic and paradigmatic relations by processing the right distributional information. 
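To see concretely what kind of matrix cell this corresponds to, the sketch below (our own illustration; k, the document lengths and the unigram counts are toy values) evaluates the quantity that the document branch implicitly targets; the analogous expression applies to each words-by-contexts cell.

```python
import math

k = 5                                          # number of negative samples
doc_len = {0: 4, 1: 4}                         # |d| for two toy documents
unigram = {'the': 2, 'wolf': 1, 'is': 2, 'fierce': 2, 'tiger': 1}

def p_nw(w, alpha=0.75):
    # negative-sampling distribution: unigram counts raised to the 3/4 power
    z = sum(c ** alpha for c in unigram.values())
    return unigram[w] ** alpha / z

def implicit_cell(count_wd, d, w):
    # value of the words-by-documents cell that PDC implicitly factorizes:
    #   log( #(w, d) / (|d| * p_nw(w)) ) - log k
    return math.log(count_wd / (doc_len[d] * p_nw(w))) - math.log(k)

print(implicit_cell(1, 0, 'wolf'))             # 'wolf' occurs once in document 0
```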
Please notice that the PDC model is not equivalent to direct combination of existing matrix factorization methods, due to the fact that the matrix entries defined in PDC model are more complicated than the simple cooccurrence frequency (Lee and Seung, 1999). When considering existing models, one may connect our models to the Distributed Memory model of Paragraph Vectors (PV-DM) and the Distributed Bag of Words version of Paragraph Vectors (PV-DBOW) (Le and Mikolov, 2014) . However, both of them are quite different from our models. In PV-DM, the paragraph vector and context vectors are averaged or concatenated to predict the next word. Therefore, the objective function of PV-DM can no longer decomposed as the PDC model as shown in Equation 4and (5). In other words, although PV-DM leverages both paragraph and context information, it is unclear how these information is collected and used in this model. As for PV-DBOW, it simply leverages paragraph vector to predict words in the paragraph. It is easy to show that it only uses the words-by-documents co-occurrence matrix, and thus only captures syntagmatic relations.", "cite_spans": [ { "start": 725, "end": 747, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "Another close work is the Global Context-Aware Neural Language Model (GCANLM for short) (Huang et al., 2012) . The model defines two scoring components that contribute to the final score of a (word sequence, document) pair. The architecture of GCANLM seems similar to our PDC model, but exhibits lots of differences as follows: (1) GCANLM employs neural networks as components while PDC resorts to simple model structure without non-linear hidden layers;", "cite_spans": [ { "start": 88, "end": 108, "text": "(Huang et al., 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "(2) GCANLM uses weighted average of all word vectors to represent the document, which turns out to model words-by-words co-occurrence (i.e., paradigmatic relations) again rather than wordsby-documents co-occurrence (i.e., syntagmatic relations); (3) GCANLM is a language model which predicts the next word given the preceding words, while PDC model leverages both preceding and succeeding contexts for prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.4" }, { "text": "In this section, we first describe our experimental settings including the corpus, hyper-parameter selections, and baseline methods. Then we compare our models with baseline methods on two tasks, i.e., word analogy and word similarity. After that, we conduct some case studies to show that our model can better capture both syntagmatic and paradigmatic relations and how it improves the performances on semantic tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We select Wikipedia, the largest online knowledge base, to train our models. We adopt the publicly available April 2010 dump 3 (Shaoul and Westbury, 2010) , which is also used by (Huang et al., 2012; Luong et al., 2013; Neelakantan et al., 2014) . The corpus in total has 3, 035, 070 articles and about 1 billion tokens. 
In preprocessing, we lowercase the corpus, remove pure digit words and non-English characters 4 .", "cite_spans": [ { "start": 127, "end": 154, "text": "(Shaoul and Westbury, 2010)", "ref_id": "BIBREF26" }, { "start": 179, "end": 199, "text": "(Huang et al., 2012;", "ref_id": "BIBREF7" }, { "start": 200, "end": 219, "text": "Luong et al., 2013;", "ref_id": "BIBREF13" }, { "start": 220, "end": 245, "text": "Neelakantan et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "Following the practice in (Pennington et al., 2014) , we set context window size as 10 and use 10 negative samples. The noise distributions for context and words are set as the same as used in (Mikolov et al., 2013a) , p nw (w) \u221d #(w) 0.75 . We also adopt the same linear learning rate strategy described in (Mikolov et al., 2013a) , where the initial learning rate of PDC model is 0.05, and We compare our models with various state-ofthe-art models including C&W (Collobert et al., 2011) , GCANLM (Huang et al., 2012) , CBOW, SG (Mikolov et al., 2013a) , GloVe (Pennington et al., 2014) , PV-DM, PV-DBOW (Le and Mikolov, 2014) and HPCA (Lebret and Collobert, 2014) . For C&W, GCANLM 6 , GloVe and HPCA, we use the word embeddings they provided. For CBOW and SG model, we reimplement these two models since the original word2vec tool uses SGD but cannot shuffle the data. Besides, we also implement PV-DM and PV-DBOW models due to (Le and Mikolov, 2014) has not released source codes. We train these four models on the same dataset with the same hyper-parameter settings as our models for fair comparison. The statistics of the corpora used in baseline models are shown in Table 1 . Moreover, since different papers report different dimensionality, to be fair, we conduct evaluations on three dimensions (i.e., 50, 100, 300) to cover the publicly available results 7 .", "cite_spans": [ { "start": 26, "end": 51, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" }, { "start": 193, "end": 216, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" }, { "start": 308, "end": 331, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" }, { "start": 464, "end": 488, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF2" }, { "start": 498, "end": 518, "text": "(Huang et al., 2012)", "ref_id": "BIBREF7" }, { "start": 530, "end": 553, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" }, { "start": 562, "end": 587, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" }, { "start": 605, "end": 627, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF8" }, { "start": 637, "end": 665, "text": "(Lebret and Collobert, 2014)", "ref_id": "BIBREF9" }, { "start": 931, "end": 953, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1173, "end": 1180, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "The word analogy task is introduced by Mikolov et al. (2013a) to quantitatively evaluate the linguistic regularities between pairs of word representations. The task consists of questions like \"a is to b as c is to \", where is missing and must be guessed from the entire vocabulary. To answer such questions, we need to find a word vector \u20d7 x, which is the closest to \u20d7 b \u2212 \u20d7 a + \u20d7 c according to the cosine similarity:", "cite_spans": [ { "start": 39, "end": 61, "text": "Mikolov et al. 
(2013a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "4.2" }, { "text": "arg max x\u2208W,x\u0338 =a x\u0338 =b, x\u0338 =c ( \u20d7 b + \u20d7 c \u2212 \u20d7 a) \u2022 \u20d7 x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "4.2" }, { "text": "The question is judged as correctly answered only if x is exactly the answer word in the evaluation set. The evaluation metric for this task is the percentage of questions answered correctly. The dataset contains 5 types of semantic analogies and 9 types of syntactic analogies 8 . The semantic analogy contains 8, 869 questions, typically about people and place like \"Beijing is to China as Paris is to France\", while the syntactic analogy contains 10, 675 questions, mostly on forms of adjectives or verb tense, such as \"good is to better as bad to worse\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "4.2" }, { "text": "Result Table 2 shows the results on word analogy task. As we can see that CBOW, SG and GloVe are much stronger baselines as compare with C&W, GCANLM and HPCA. Even so, our PDC model still performs significantly better than these state-of-the-art methods (p-value < 0.01), especially with smaller vector dimensionality. More interestingly, by only training on 1 billion words, our models can outperform the GloVe model which is trained on 6 billion words. The results demonstrate that by modeling both syntagmatic and paradigmatic relations, we can learn better word representations capturing linguistic regularities.", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "4.2" }, { "text": "Besides, CBOW, SG and PV-DBOW can be viewed as sub-models of our proposed models, since they use either context (i.e., paradigmatic relations) or document (i.e., syntagmatic relations) alone to predict the target word. By comparing with these sub-models, we can see that the PDC and HDC models can perform significantly better on both syntactic and semantic subtasks. It shows that by jointly modeling the two relations, one can boost the representation learning and better capture both semantic and syntactic regularities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "4.2" }, { "text": "Besides the word analogy task, we also evaluate our models on three different word similarity tasks, including WordSim-353 (Finkelstein et al., 2002) , Stanford's Contextual Word Similarities (SCWS) (Huang et al., 2012) and rare word (RW) (Luong et al., 2013) . These datasets contain word paris together with human assigned similarity scores. We compute the Spearman rank correlation between similarity scores based on learned word representations and the human judgements. In all experiments, we removed the word pairs that cannot be found in the vocabulary.", "cite_spans": [ { "start": 123, "end": 149, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF4" }, { "start": 199, "end": 219, "text": "(Huang et al., 2012)", "ref_id": "BIBREF7" }, { "start": 239, "end": 259, "text": "(Luong et al., 2013)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "4.3" }, { "text": "Results Figure 4 shows results on three different word similarity datasets. First of all, our proposed PDC model always achieves the best performances on the three tasks. 
Besides, if we compare the PDC and HDC models with their corresponding sub-models (i.e., CBOW and SG) respectively, we can see performance gain by adding syntagmatic information via document. This gain becomes even larger for rare words with low dimensionality as shown on RW dataset. Moreover, on the SCWS dataset, our PDC model using the single-prototype representations under dimensionality 50 can achieve a comparable result (65.63) to the state-of-the-art GCANLM (65.7 as the best performance reported in (Huang et al., 2012) ) which uses multi-prototype vectors 9 .", "cite_spans": [ { "start": 681, "end": 701, "text": "(Huang et al., 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 8, "end": 16, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "4.3" }, { "text": "Here we conduct some case studies to (1) gain some intuition on how these two relations affect To show how syntagmatic and paradigmatic relations affect the learned representations, we present the 5 most similar words (by cosine similarity with 50-dimensional vectors) to a given target word under the PDC and HDC models, as well as three sub-models, i.e., CBOW, SG, and PV-DBOW. The results are shown in table 3, where words in italic are those often co-occurred with the target word (i.e., syntagmatic relations), while words in bold are whose substitutable to the target word (i.e., paradigmatic relation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "4.4" }, { "text": "Clearly, top words from CBOW and SG models are more under paradigmatic relations, while those from PV-DBOW model are more under syn- tagmatic relations, which is quite consistent with the model design. By modeling both relations, the top words from PDC and HDC models become more diverse, i.e., more syntagmatic relations than CBOW and SG models, and more paradigmatic relations than PV-DBOW model. The results reveal that the word representations learned by PDC and HDC models are more balanced with respect to the two relations as compared with sub-models. The next question is why learning a joint model can work better on previous tasks? We first take one example from the word analogy task, which is the question \"big is to bigger as deep is to \" with the correct answer as \"deeper\". Our PDC model produce the right answer but the CBOW model fails with the answer \"shallower\". We thus embedding the learned word vectors from the two models into a 3-D space to illustrate and analyze the reason.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "4.4" }, { "text": "As shown in Figure 5 , we can see that by jointly modeling two relations, PDC model not only requires that \"deep\" to be close to \"deeper\" (in cosine similarity), but also requires that \"deep\" and \"deeper\" to be close to \"crevasses\". The additional requirements further drag these three words closer as compared with those from the CBOW model, and this make our model outperform the CBOW model on this question. As for the word similarity tasks, we find that the word pairs are either syntagmatic (e.g., \"bank\" and \"money\") or paradigmatic (e.g., \"left\" and \"abandon\"). 
It is, therefore, not surprising to see that a more balanced representation can achieve much better performance than a biased representation.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Case Study", "sec_num": "4.4" }, { "text": "Existing work on word representations models either syntagmatic or paradigmatic relations. In this paper, we propose two novel distributional models for word representation, using both syntagmatic and paradigmatic relations via a joint training objective. The experimental results on both word analogy and word similarity tasks show that the proposed joint models can learn much better word representations than the state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Several directions remain to be explored. In this paper, the syntagmatic and paradigmatic relations are equivalently important in both PDC and HDC models. An interesting question would then be whether and how we can add different weights for syntagmatic and paradigmatic relations. Besides, we may also try to learn the multi-prototype word representations for polysemous words based on our proposed models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://www.psych.ualberta.ca/\u223cwestburylab/downloads/ westburylab.wikicorp.download.html4 We ignore the words less than 20 occurrences during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Codes avaiable at http://www.bigdatalab.ac.cn/benchma rk/bm/bd?code=PDC, http://www.bigdatalab.ac.cn/benchma rk/bm/bd?code=HDC.6 Here, we use GCANLM's single-prototype embedding. 7 C&W and GCANLM only released the vectors with 50 dimensions, and HPCA released vectors with 50 and 100 dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://code.google.com/p/word2vec/source/browse/trunk /questions-words.txt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note, in Figure 4, the performance of GCANLM is computed based on their released single-prototype vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "OmerLevy and Yoav Goldberg, 2014b. Proceedings of the Eighteenth Conference on Computational Natural Language Learning, chapter Linguistic Regularities in Sparse and Explicit Word Representations, pages 171-180. Association for Computational Linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was funded by 973 Program of China under Grants No. 2014CB340401 and 2012CB316303, and the National Natural Science Foundation of China (NSFC) under Grants No. 61232010, 61433014, 61425016, 61472401 and 61203298. We thank Ronan Collobert, Eric H. Huang, R\u00e9mi Lebret, Jeffrey Pennington and Tomas Mikolov for their kindness in sharing codes and word vectors. 
We also thank the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "J. Mach. Learn. Res", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. J. Mach. Learn. Res., 3:1137-1155, March.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extracting semantic representations from word cooccurrence statistics: A computational study", "authors": [ { "first": "John", "middle": [ "A" ], "last": "Bullinaria", "suffix": "" }, { "first": "Joseph", "middle": [ "P" ], "last": "Levy", "suffix": "" } ], "year": 2007, "venue": "Behavior Research Methods", "volume": "39", "issue": "3", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A. Bullinaria and Joseph P. Levy. 2007. Ex- tracting semantic representations from word co- occurrence statistics: A computational study. Be- havior Research Methods, 39(3):510-526.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537, November.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "Scott", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "George", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. 
Jour- nal of the American Society for Information Science, 41(6):391-407.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan Andgadi Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2002, "venue": "ACM Trans. Inf. Syst", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan andGadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Trans. Inf. Syst., 20(1):116- 131, January.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A synopsis of linguistic theory 1930-55. Studies in Linguistic Analysis (special volume of the Philological Society", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "1952--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Firth. 1957. A synopsis of linguistic theory 1930- 55. Studies in Linguistic Analysis (special volume of the Philological Society), 1952-59:1-32.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distributional structure. Word", "authors": [ { "first": "Zellig", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving word representations via global context and multiple word prototypes", "authors": [ { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "873--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers -Volume 1, ACL '12, pages 873- 882, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Tony Jebara and Eric P. 
Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188-1196. JMLR Workshop and Conference Proceedings.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word embeddings through hellinger pca", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "482--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret and Ronan Collobert. 2014. Word em- beddings through hellinger pca. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 482-490. Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning the parts of objects by non-negative matrix factorization", "authors": [ { "first": "D", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "H", "middle": [ "Sebastian" ], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Seung", "suffix": "" } ], "year": 1999, "venue": "Nature", "volume": "401", "issue": "6755", "pages": "788--791", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel D. Lee and H. Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factoriza- tion. Nature, 401(6755):788-791, october.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "27", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Neural word embedding as implicit matrix factorization. In Ad- vances in Neural Information Processing Systems 27, pages 2177-2185. Curran Associates, Inc., Mon- treal, Quebec, Canada.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic and associative priming in a highdimensional semantic space", "authors": [ { "first": "Kevin", "middle": [], "last": "Lund", "suffix": "" }, { "first": "Curt", "middle": [], "last": "Burgess", "suffix": "" }, { "first": "Ruth", "middle": [ "Ann" ], "last": "Atchley", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 17th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "660--665", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Lund, Curt Burgess, and Ruth Ann Atchley. 1995. Semantic and associative priming in a high- dimensional semantic space. 
In Proceedings of the 17th Annual Conference of the Cognitive Science Society, pages 660-665.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D. Manning. 2013. Better word representa- tions with recursive neural networks for morphol- ogy. In Proceedings of the Seventeenth Confer- ence on Computational Natural Language Learning, pages 104-113. Association for Computational Lin- guistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Introduction to Information Retrieval", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In Proceedings of Workshop of ICLR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed repre- sentations of words and phrases and their compo- sitionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Contextual correlates of semantic similarity. 
Language & Cognitive Processes", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" }, { "first": "G", "middle": [], "last": "Walter", "suffix": "" }, { "first": "", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "", "volume": "6", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language & Cognitive Processes, 6(1):1-28.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A fast and simple algorithm for training neural probabilistic language models", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 29th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1751--1758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th In- ternational Conference on Machine Learning, pages 1751-1758.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient non-parametric estimation of multiple embeddings per word in vector space", "authors": [ { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Jeevan", "middle": [], "last": "Shankar", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Passos", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1059--1069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 1059- 1069, Doha, Qatar, October. Association for Com- putational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Finding content-bearing terms using term similarities", "authors": [ { "first": "Justin", "middle": [], "last": "Picard", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Ninth Conference on European Chapter of the Association for Computational Linguistics, EACL '99", "volume": "", "issue": "", "pages": "241--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Picard. 1999. Finding content-bearing terms us- ing term similarities. In Proceedings of the Ninth Conference on European Chapter of the Association for Computational Linguistics, EACL '99, pages 241-244, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An improved model of semantic similarity based on lexical co-occurence", "authors": [ { "first": "L", "middle": [ "T" ], "last": "Douglas", "suffix": "" }, { "first": "Laura", "middle": [ "M" ], "last": "Rohde", "suffix": "" }, { "first": "David", "middle": [ "C" ], "last": "Gonnerman", "suffix": "" }, { "first": "", "middle": [], "last": "Plaut", "suffix": "" } ], "year": 2006, "venue": "Communications of the ACM", "volume": "8", "issue": "", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas L. T. Rohde, Laura M. Gonnerman, and David C. Plaut. 2006. An improved model of semantic similarity based on lexical co-occurence. Communications of the ACM, 8:627-633.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Commun. ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8(10):627-633, October.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The distributional hypothesis", "authors": [ { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" } ], "year": 2008, "venue": "Italian Journal of Linguistics", "volume": "20", "issue": "1", "pages": "33--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magnus Sahlgren. 2008. The distributional hypothe- sis. Italian Journal of Linguistics, 20(1):33-54.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatic word sense discrimination", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1998, "venue": "Comput. Linguist", "volume": "24", "issue": "1", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimination. Comput. Linguist., 24(1):97-123, March.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The westbury lab wikipedia corpus", "authors": [ { "first": "Cyrus", "middle": [], "last": "Shaoul", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Westbury", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cyrus Shaoul and Chris Westbury. 2010. The westbury lab wikipedia corpus. 
Edmonton, AB: University of Alberta.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Parsing natural scenes and natural language with recursive neural networks", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Cliff", "middle": [ "C" ], "last": "Lin", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Cliff C. Lin, Chris Manning, and An- drew Y. Ng. 2011. Parsing natural scenes and nat- ural language with recursive neural networks. In Lise Getoor and Tobias Scheffer, editors, Proceed- ings of the 28th International Conference on Ma- chine Learning (ICML-11), pages 129-136, New York, NY, USA. ACM.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Quantitative evaluation of passage retrieval algorithms for question answering", "authors": [ { "first": "Stefanie", "middle": [], "last": "Tellex", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Fernandes", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Marton", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, SIGIR '03", "volume": "", "issue": "", "pages": "41--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Tellex, Boris Katz, Jimmy Lin, Aaron Fernan- des, and Gregory Marton. 2003. Quantitative eval- uation of passage retrieval algorithms for question answering. In Proceedings of the 26th Annual Inter- national ACM SIGIR Conference on Research and Development in Informaion Retrieval, SIGIR '03, pages 41-47, New York, NY, USA. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Example for syntagmatic and paradigmatic relations.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Spearman rank correlation on three datasets. Results are grouped by dimensionality.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "The 3-D embedding of learned word vectors of \"deep\", \"deeper\" and \"crevasses\" under CBOW and PDC models.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "num": null, "content": "
[Model illustration: the context words the, cat, on, the surrounding the target word sat in the fragment . . . the cat sat on the . . .]
", "html": null, "type_str": "table", "text": ". . , d N } denote a corpus of N documents over the word vocabulary W . The contexts for word sat" }, "TABREF1": { "num": null, "content": "
[Model illustration: the target word sat and its context words the, cat, on, the within document d n, from the fragment . . . the cat sat on the . . .]
", "html": null, "type_str": "table", "text": "L}." }, "TABREF2": { "num": null, "content": "
model | corpus | size
C&W | Wikipedia 2007 + Reuters RCV1 | 0.85B
HPCA | Wikipedia 2012 | 1.6B
GloVe | Wikipedia 2014 + Gigaword5 | 6B
GCANLM, CBOW, SG, PV-DBOW, PV-DM | Wikipedia 2010 | 1B
HDC is 0.025. No additional regularization is used in our models.
", "html": null, "type_str": "table", "text": "Corpora used in baseline models." }, "TABREF3": { "num": null, "content": "
model | size | dim | semantic | syntactic | total
C&W | 0.85B | 50 | 9.33 | 11.33 | 10.98
GCANLM | 1B | 50 | 2.6 | 10.7 | 7.34
HPCA | 1.6B | 50 | 3.36 | 9.89 | 7.2
GloVe | 6B | 50 | 48.46 | 45.24 | 46.22
CBOW | 1B | 50 | 54.38 | 49.64 | 52.01
SG | 1B | 50 | 53.73 | 46.12 | 49.04
PV-DBOW | 1B | 50 | 55.02 | 44.17 | 49.34
PV-DM | 1B | 50 | 45.08 | 43.22 | 44.25
PDC | 1B | 50 | 61.21 | 54.55 | 57.88
HDC | 1B | 50 | 57.8 | 49.74 | 53.41
HPCA | 1.6B | 100 | 4.16 | 15.73 | 10.79
GloVe | 6B | 100 | 65.34 | 61.51 | 63.11
CBOW | 1B | 100 | 70.73 | 63.01 | 66.87
SG | 1B | 100 | 67.66 | 59.72 | 63.45
PV-DBOW | 1B | 100 | 67.49 | 56.29 | 61.51
PV-DM | 1B | 100 | 57.72 | 58.81 | 58.45
PDC | 1B | 100 | 72.77 | 67.68 | 70.35
HDC | 1B | 100 | 69.57 | 63.75 | 66.67
GloVe | 6B | 300 | 77.44 | 67.75 | 71.7
CBOW | 1B | 300 | 76.2 | 68.44 | 72.39
SG | 1B | 300 | 78.9 | 65.72 | 71.88
PV-DBOW | 1B | 300 | 66.85 | 58.5 | 62.08
PV-DM | 1B | 300 | 56.88 | 68.35 | 63.39
PDC | 1B | 300 | 79.55 | 69.71 | 74.76
HDC | 1B | 300 | 79.67 | 67.1 | 73.13
", "html": null, "type_str": "table", "text": "Results on the word analogy task. Underlined scores are the best within groups of the same dimensionality, while bold scores are the best overall." }, "TABREF4": { "num": null, "content": "
feynman
CBOW | einstein, schwinger, bohm, bethe, relativity
SG | schwinger, quantum, bethe, einstein, semiclassical
PDC | geometrodynamics, bethe, semiclassical, schwinger, perturbative
HDC | schwinger, electrodynamics, bethe, semiclassical, quantum
PV-DBOW | physicists, spacetime, geometrodynamics, tachyons, einstein
moon
CBOW | earth, moons, pluto, sun, nebula
SG | earth, sun, mars, planet, aquarius
PDC | sun, moons, lunar, heavens, earth
HDC | earth, sun, mars, planet, heavens
PV-DBOW | lunar, moons, celestial, sun, ecliptic
", "html": null, "type_str": "table", "text": "Target words and their 5 most similar words under different representations. Words in italic often co-occur with the target words, while words in bold are substitutable to the target words." } } } }