{ "paper_id": "Y11-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:39:14.696048Z" }, "title": "Modelling Word Meaning using Efficient Tensor Representations", "authors": [ { "first": "Mike", "middle": [], "last": "Symonds", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": { "settlement": "Brisbane", "region": "Queensland", "country": "Australia" } }, "email": "m.symonds@qut.edu.au" }, { "first": "Peter", "middle": [], "last": "Bruza", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": { "settlement": "Brisbane", "region": "Queensland", "country": "Australia" } }, "email": "p.bruza@qut.edu.au" }, { "first": "Laurianne", "middle": [], "last": "Sitbon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": { "settlement": "Brisbane", "region": "Queensland", "country": "Australia" } }, "email": "l.sitbon@qut.edu.au" }, { "first": "Ian", "middle": [], "last": "Turner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queensland University of Technology", "location": { "settlement": "Brisbane", "region": "Queensland", "country": "Australia" } }, "email": "i.turner@qut.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.", "pdf_parse": { "paper_id": "Y11-1033", "_pdf_hash": "", "abstract": [ { "text": "Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Research in the area of natural language processing has demonstrated that psychologically relevant models of word meaning can be learnt from exposure to natural language (Landauer and Dumais, 1997; Lund and Burgess, 1996; McRoy, 1992; Turney, 2008) . Many of these models are based on vector representations built from word co-occurrence statistics that aim to model various semantic relationships. 
Even though these semantic space models appear to identify words with similar meanings, it has been argued that they do not incorporate syntax or achieve other basic cognitive language abilities (Perfetti, 1998) .", "cite_spans": [ { "start": 170, "end": 197, "text": "(Landauer and Dumais, 1997;", "ref_id": "BIBREF3" }, { "start": 198, "end": 221, "text": "Lund and Burgess, 1996;", "ref_id": "BIBREF4" }, { "start": 222, "end": 234, "text": "McRoy, 1992;", "ref_id": "BIBREF5" }, { "start": 235, "end": 248, "text": "Turney, 2008)", "ref_id": "BIBREF15" }, { "start": 594, "end": 610, "text": "(Perfetti, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, a number of semantic space models, that learn directly from unstructured text, have been developed that encode word order into the semantic space, hence capturing more structural information about word associations (Jones and Mewhort, 2007; Sahlgren et al., 2008) . Jones and Mewhort (2007) concluded that a model that pays attention to both context and word order while learning, stands a greater chance of matching the trends found in human data. The strength of a geometric approach to encode word order is in the ability to work within a mathematically well defined framework, including the availability of many existing operators from linear algebra, such as Kronecker products. However, to our knowledge there has been very few efficient methods for implementing uncompressed Kronecker products when encoding word order information within a semantic space.", "cite_spans": [ { "start": 225, "end": 250, "text": "(Jones and Mewhort, 2007;", "ref_id": "BIBREF1" }, { "start": 251, "end": 273, "text": "Sahlgren et al., 2008)", "ref_id": "BIBREF11" }, { "start": 276, "end": 300, "text": "Jones and Mewhort (2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contribution of this paper is to present a novel, efficient approach to using Kronecker products to encode word order information within a semantic space. The other significant contribution is to demonstrate how applications can use our single representation to access various task specific semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main areas of research that provide a theoretical framework for our model include: (i) the structuralist approaches to defining word meaning, and (ii) the use of semantic spaces to model word meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Ferdinand de Saussure (1916) argued that meaning arose from the relationships between words. He called the two types of relationships that created this meaning: (i) syntagmatic and (ii) paradigmatic associations. Saussure's structuralist ideas provide a relatively clean linguistic framework, free of psychology, sociology and anthropology, within which we can distinguish between two types of word associations that can be used to model word meaning (Holland, 1992) . 
This structuralist approach to linguistics has been used to motivate other semantic space models (Sahlgren et al., 2008) .", "cite_spans": [ { "start": 13, "end": 28, "text": "Saussure (1916)", "ref_id": null }, { "start": 451, "end": 466, "text": "(Holland, 1992)", "ref_id": "BIBREF1" }, { "start": 566, "end": 589, "text": "(Sahlgren et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Word Meaning", "sec_num": "2.1" }, { "text": "A syntagmatic association exists between two words if they co-occur more frequently than expected from chance. Some common examples may include \"coffee-drink\" and \"sunhot\". A paradigmatic association exists between two words if they can substitute for one another in a sentence without affecting the grammaticality or acceptability of the sentence. Some common examples may include \"drink-eat\" and \"quick-fast\" (Rapp, 2002) .", "cite_spans": [ { "start": 411, "end": 423, "text": "(Rapp, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Word Meaning", "sec_num": "2.1" }, { "text": "Linked to structuralist ideas of linguistics, researchers have argued that word meaning can be modelled by comparing the distributions of words within text (Sch\u00fctze, 1993) . A popular approach to representing these word distributions is to collect word occurrence frequencies and place them in high-dimensional context vectors (Turney and Pantel, 2010) . This approach allows techniques from linear algebra to be used to model relationships between objects, including semantic associations, within the geometric space.", "cite_spans": [ { "start": 156, "end": 171, "text": "(Sch\u00fctze, 1993)", "ref_id": "BIBREF13" }, { "start": 327, "end": 352, "text": "(Turney and Pantel, 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Space Models", "sec_num": "2.2" }, { "text": "Two of the most well-known semantic space models in literature are HAL (Hyperspace Analogue to Language; Lund and Burgess (1996) ) and LSA (Latent Semantic Analysis; Landauer and Dumais (1997) ). These two models differ in the way they build their context vectors. HAL builds context vectors by storing pre-and post-order word cooccurrence frequencies in a word-by-word matrix. Consider the HAL matrix, shown in table 1, created by the sentence \"a dog bit the mailman\", using a sliding context window with radius 2. The co-occurrence information preceding and post-ceding each word are recorded separately by the row and column vectors. a dog bit the dog 2 0 0 0 bit 1 2 0 0 the 0 1 2 0 mailman 0 0 1 2 LSA differs from HAL in that LSA's context vectors are formed by collecting the word occurrence frequencies within each document to create a word-document matrix. A costly technique, known as single value decomposition (SVD), is then used to reduce the dimensions of the word-document matrix to the k most significant latent concepts. Even though models based on LSA and HAL have been shown to simulate human performance on a number of cognitive tasks, it has been argued by Perfetti (1998) that these models do not capture concepts such as syntax or achieve other basic cognitive language abilities. A relevant example, includes the fact that LSA chose nurse over doctor when asked to determine the closest match to physician in a synonym judgement test. 
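For concreteness, the counts in table 1 can be reproduced with a short sliding-window sketch. This is a toy illustration with invented function names, not HAL's actual implementation; the proximity weighting (radius minus distance plus one) is assumed from the values shown in the table:

    from collections import defaultdict

    def hal_matrix(tokens, radius=2):
        # Cell [w][c] accumulates how strongly word c appears *before* word w
        # inside the sliding window, weighted by proximity (radius - d + 1).
        M = defaultdict(lambda: defaultdict(int))
        for pos, w in enumerate(tokens):
            for d in range(1, radius + 1):
                if pos - d >= 0:
                    M[w][tokens[pos - d]] += radius - d + 1
        return M

    hal = hal_matrix('a dog bit the mailman'.split())
    # hal['bit'] == {'dog': 2, 'a': 1}; hal['mailman'] == {'the': 2, 'bit': 1}

Reading a word's column in the resulting matrix then supplies the post-ceding information, as noted above.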
The lack of word order information in LSA is a result of the way in which it builds its context vectors, however, even though HAL would appear to hold word order information, it has been argued by Jones and Mewhort (2007) that HAL does not explicitly encode order information. A number of recent semantic space models have tried to increase the amount of structural information encoded within the representations. These include the Bound Encoding of the Aggregate Language Environment (BEAGLE) model (Jones and Mewhort, 2007) and a permutation model (Sahlgren et al., 2008) based on Random Indexing (RI) (Kanerva et al., 2000) . Both BEAGLE and the permuted RI model build their semantic spaces from a set of fixed length environment vectors. This approach allows the dimensionality of the semantic space to be contained. These fixed dimension approaches rely on the random assignment of environment vectors to create an approximately orthogonal basis, which is required to use many of the popular geometric distance measures.", "cite_spans": [ { "start": 105, "end": 128, "text": "Lund and Burgess (1996)", "ref_id": "BIBREF4" }, { "start": 166, "end": 192, "text": "Landauer and Dumais (1997)", "ref_id": "BIBREF3" }, { "start": 1197, "end": 1212, "text": "Perfetti (1998)", "ref_id": "BIBREF7" }, { "start": 1978, "end": 2003, "text": "(Jones and Mewhort, 2007)", "ref_id": "BIBREF1" }, { "start": 2028, "end": 2051, "text": "(Sahlgren et al., 2008)", "ref_id": "BIBREF11" }, { "start": 2082, "end": 2104, "text": "(Kanerva et al., 2000)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 637, "end": 721, "text": "a dog bit the dog 2 0 0 0 bit 1 2 0 0 the 0 1 2 0 mailman 0 0 1 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Semantic Space Models", "sec_num": "2.2" }, { "text": "In addition to forming context vectors, by summing environment vectors for terms that co-occur within the sliding context window, both BEAGLE and the permuted RI model create order vectors. To build order vectors BEAGLE binds the environment vectors using a circular convolution operation ( ), which is a mathematical function that compresses the Kronecker (outer) product of two vectors. The compression avoids the explosion in tensor order associated with Kronecker products, and is achieved by summing along the trans-diagonal elements of the outer product, giving rise to a vector dubbed a holographic reduced representation (HHR) (Plate, 1991) . The resulting HHR created by the n-grams within the context window are added to the term's order vector. Circular convolution is non-commutative, such that a b = b a for distinct vectors a, b. Non-commutativity is crucial as word order is usually not commutative.", "cite_spans": [ { "start": 635, "end": 648, "text": "(Plate, 1991)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Space Models", "sec_num": "2.2" }, { "text": "The main drawback of BEAGLE's encoding method comes from the cost of the binding process and the loss of information through compression of the Kronecker products (Mitchell and Lapata, 2010) . In the case of the permuted RI model, word order encoding is performed by rotating the coordinates of the sparse environment vectors in the direction of the co-occurrence (with preceding opposite to post-ceding) before summing the result with the order vector. This approach is much more efficient than circular convolution. 
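To make the compression step concrete, the sketch below (illustrative only, written with numpy) forms the circular convolution of two vectors by summing the trans-diagonals of their outer product. Plain circular convolution is symmetric in its arguments; order-sensitive variants, such as the one used in BEAGLE, break this symmetry by first applying distinct fixed permutations to the two operands, a detail omitted here:

    import numpy as np

    def circular_convolution(a, b):
        # Compress the outer (Kronecker) product of a and b back into a single
        # n-dimensional vector: c[j] = sum_k a[k] * b[(j - k) mod n], i.e. the
        # sum along the wrapped trans-diagonals of np.outer(a, b).
        n = len(a)
        outer = np.outer(a, b)
        c = np.zeros(n)
        for j in range(n):
            for k in range(n):
                c[j] += outer[k, (j - k) % n]
        return c

    # Equivalent, and far cheaper, frequency-domain form:
    # c = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))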
The results of both BEAGLE and the permuted RI model show that including order information improves performance on a synonym judgement task over context information alone. We now present a model that formally encodes word order and provides the ability to compute semantic associations that underpin word meaning.", "cite_spans": [ { "start": 163, "end": 190, "text": "(Mitchell and Lapata, 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Space Models", "sec_num": "2.2" }, { "text": "Our tensor encoding (TE) model builds its semantic space using an efficient binding process based on Kronecker products of theoretically unbounded unit vectors. The result contains both context and order information in a single representation we call the memory tensor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Tensor Encoding Model's Semantic Space", "sec_num": "3" }, { "text": "The way in which the TE model encodes word order is illustrated by considering our binding process for the following example sentence, \"A dog bit the mailman\", where A and the are considered to be stop words (noisy, low information terms that are ignored) and hence will not be included in the vocabulary. The resulting vocabulary includes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "Term-id Term Environment vector 1 dog e dog = (1 0 0) T 2 bit e bit = (0 1 0) T 3 mailman e mailman = (0 0 1) T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "The memory tensor for each term in the vocabulary is constructed by summing the resulting Kronecker products of the environment vectors within a sliding context window over the text. The number of environment vectors bound using Kronecker products impacts the order of the memory tensors. For this research a second order binding process was used, and results in second order tensors (matrices) being formed. Higher order TE models, which capture the co-occurrence frequencies of n-tuples, are left for future work. The second order binding process for the TE model is defined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M w = k\u227aw k\u2208CW e k \u2297 e T w + k w k\u2208CW e w \u2297 e T k ,", "eq_num": "(1)" } ], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "where w is the target term, k is a non-stop word found within the sliding context window (CW ), k \u227a w indicates that term k appears before term w in the context window, and k w indicates that term k appears after term w. Note, stop words are not bound, but they are included when determining the window boundaries. Consider the memory matrices created for the vocabulary terms using a sliding context window with radius 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "Binding Step 1: A s [dog] bit the s mailman M dog = e dog \u2297 e T bit = \uf8eb \uf8ed 1 0 0 \uf8f6 \uf8f8 (0 1 0) = \uf8eb \uf8ed 0 1 0 0 0 0 0 0 0 \uf8f6 \uf8f8 . 
Binding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "Step 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "A s dog [bit] the s mailman M bit = e dog \u2297 e T bit + e bit \u2297 e T mailman = \uf8eb \uf8ed 1 0 0 \uf8f6 \uf8f8 (0 1 0) + \uf8eb \uf8ed 0 1 0 \uf8f6 \uf8f8 (0 0 1) = \uf8eb \uf8ed 0 1 0 0 0 1 0 0 0 \uf8f6 \uf8f8 . Binding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "Step 3: A s dog bit the s [mailman]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "M mailman = e bit \u2297 e T mailman = \uf8eb \uf8ed 0 1 0 \uf8f6 \uf8f8 (0 0 1) = \uf8eb \uf8ed 0 0 0 0 0 1 0 0 0 \uf8f6 \uf8f8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "The resulting pattern is that all non-zero elements are situated on the row or column corresponding to the target term's term-id. If this vocabulary building process was performed over the entire corpus the general form of a memory matrix would be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "M w = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed 0, . . . , 0, f 1w , 0, . . . , 0 . . . 0, . . . , 0, f (w\u22121)w , 0, . . . , 0 f w1 , . . . , f w(w\u22121) , f ww , f w(w+1) , . . . , f wN 0, . . . , 0, f (w+1)w , 0, . . . , 0 . . . 0, . . . , 0, f N w , 0, . . . , 0 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "where f iw is the value in row i column w of the matrix which represents the ordered co-occurrence frequencies of term i before term w, f wj is the value in row w column j of the matrix that represents the ordered co-occurrence of term j after term w, and N is the number of unique terms in the vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TE Binding Process", "sec_num": "3.1" }, { "text": "Similar to HAL, our TE model captures stronger proximity information by weighting the strength of a co-occurrence inversely proportional to the distance between the target term and the interacting term. Formally, the binding process in equation (1) becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M w = k\u227aw k\u2208CW (R \u2212 d k + 1).e k \u2297 e T w + k w k\u2208CW (R \u2212 d k + 1).e w \u2297 e T k ,", "eq_num": "(2)" } ], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "where R is the radius of the sliding context window, and d k is the distance between term k and target term w. 
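As a dense-matrix illustration of the binding process in equation (2) (equation (1) is the special case in which every weight is one), the sketch below builds memory matrices for the running example. The function and variable names are ours, and a practical implementation would use the sparse storage described in section 3.4 rather than full N-by-N matrices:

    import numpy as np

    def build_memory_matrices(tokens, vocab, stop_words, radius=2):
        # vocab maps each non-stop word to a term-id; because the environment
        # vectors are unit (one-hot) vectors, e_k (x) e_w^T simply increments
        # cell (k, w) of the target term's memory matrix.
        N = len(vocab)
        M = {w: np.zeros((N, N)) for w in vocab}
        for pos, w in enumerate(tokens):
            if w in stop_words:
                continue
            lo, hi = max(0, pos - radius), min(len(tokens), pos + radius + 1)
            for j in range(lo, hi):
                k = tokens[j]
                if j == pos or k in stop_words:
                    continue
                weight = radius - abs(j - pos) + 1      # (R - d_k + 1) in eq. (2)
                if j < pos:                             # k precedes w
                    M[w][vocab[k], vocab[w]] += weight
                else:                                   # k succeeds w
                    M[w][vocab[w], vocab[k]] += weight
        return M

    tokens = 'a dog bit the mailman'.split()
    vocab = {'dog': 0, 'bit': 1, 'mailman': 2}
    M = build_memory_matrices(tokens, vocab, stop_words={'a', 'the'}, radius=2)

With these inputs, M['bit'] reproduces the proximity-scaled matrix computed in the worked example that follows.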
To demonstrate, consider our previous example sentence, noting bit and mailman are 2 words apart in the sentence (as stop words are included when calculating distance within the context window):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "Binding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "Step (with proximity scaling):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "A s dog [bit] the s mailman M bit = 2 \u00d7 e dog \u2297 e T bit + e bit \u2297 e T mailman = 2 \u00d7 \uf8eb \uf8ed 1 0 0 \uf8f6 \uf8f8 (0 1 0) + \uf8eb \uf8ed 0 1 0 \uf8f6 \uf8f8 (0 0 1) = \uf8eb \uf8ed 0 2 0 0 0 1 0 0 0 \uf8f6 \uf8f8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Stronger Proximity Information", "sec_num": "3.2" }, { "text": "Unlike BEAGLE and the permuted RI model, the TE model has the ability to access explicit context and order information within the one geometric representations. This means that order information can be easily ignored by combining rows and columns of the memory tensors. This can be efficiently achieved within similarity measures, as will be demonstrated in section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flexible use of Word Order", "sec_num": "3.3" }, { "text": "By using environment vectors that are unit vectors, our second order binding process creates sparse N -by-N memory matrices, with the percent sparseness proportional to 1 \u2212 2 N + 1 N 2 . This sparseness, along with the fact that no multiplication of elements is required in the binding process, allows memory matrices to be efficiently computed and stored at an implementation level. To demonstrate, consider the memory matrix for bit in the proximity scaled example above. M bit can be stored as a fixed dimensional vector of term-id (T), co-occurrence frequency (CF) pairs, (T CF):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "Storage vector for M bit = [(\u22121 2) (3 1)] ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "where parenthesis have been added to illustrate implicit grouping of (T CF) pairs, and the sign of the T component is used to capture the word order. Knowing that a context window of radius 2 was used, the storage vector above indicates that the word dog (term 1) appeared directly before (as indicated by the negative sign) the word bit, and the word mailman (term 3) occurred two words after bit. By storing the memory matrix in this way, the process of building memory matrices is achieved by searching the (T CF) pair list in the focus term's storage vector, to find a matching, ordered term-id. If a match is found then the co-occurrence frequency element of the pair is incremented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "Even for applications where the vocabulary is small and the context window radius is small, there will be a number of noisy terms that co-occur with many terms. These co-occurrences with noisy terms will quickly fill the storage vectors. 
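A minimal sketch of the (T CF) pair bookkeeping described above (names are ours; max_pairs corresponds to D_SV/2, and the replacement required when the vector is full is handled by the methods listed next):

    def update_storage_vector(storage, term_id, preceding, weight, max_pairs):
        # storage is a list of [signed term-id, co-occurrence frequency] pairs;
        # a negative term-id records an interaction *before* the target term.
        signed_id = -term_id if preceding else term_id
        for pair in storage:
            if pair[0] == signed_id:
                pair[1] += weight          # matching (T CF) pair: increment CF
                return
        if len(storage) < max_pairs:
            storage.append([signed_id, weight])
        # otherwise a (T CF) pair must be replaced (see the CFR cut-off below)

    # M_bit from the proximity-scaled example: dog (term 1) directly before bit,
    # mailman (term 3) two words after bit.
    storage_bit = []
    update_storage_vector(storage_bit, term_id=1, preceding=True, weight=2, max_pairs=500)
    update_storage_vector(storage_bit, term_id=3, preceding=False, weight=1, max_pairs=500)
    # storage_bit == [[-1, 2], [3, 1]]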
To ensure the model is scalable and these noisy terms are managed a number of methods are used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "1. Stop-list: A stop-list is used to remove common high frequency terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "2. Co-occurrence frequency ratio cut-offs: Frequency cut-offs are commonly used in semantic space models (Rohde et al., 2006) . Traditionally, the cut-off is applied to the collection frequency of a term. In contrast, our approach is to use a co-occurrence frequency ratio (CFR) cut-off, and apply it during the vocabulary building process when a storage vector is full and no match on term-id exists. The CFR is used to identify a (T CF) pair to be replaced, and is determined by comparing CF Fw , where F w is the collection frequency of the target term w, to a threshold value. If the CFR is below the threshold value the pair is moved to the end of the list and updated with the (T CF) details of this new co-occurrence. The success of these storage vector management methods can be evaluated by considering their impact on the model's performance on a synonym judgement task when the dimensionality of the storage vector is varied, as shown in figure 1. The task was taken from the synonym-finding part of the Test of English as a Foreign Language (TOEFL). TOEFL is a standardized test employed by American universities to evaluate foreign applicants' knowledge of the English language, and is further explained in section 4.2.", "cite_spans": [ { "start": 105, "end": 125, "text": "(Rohde et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "The superior performance achieved by the TE model for lower dimensionality vectors is particularly beneficial when contrasting computational complexity of the various models. Both BEAGLE and RI have been shown to achieve improved performance as the environment vector dimensionality is increased, often greater than 2000 (Sahlgren et al., 2008) . The relatively superior effectiveness for storage vectors with dimensionality between 250 and 1000, compared to those greater than 1000 may be due to our storage vector management technique removing low information items when the storage vector becomes full. At larger dimensions we predict that these low information terms are not removed and this may introduce noise into the TE model's synonym judgement.", "cite_spans": [ { "start": 321, "end": 344, "text": "(Sahlgren et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "The time complexity of the TE model's vocabulary building operation is determined by considering the worst case, in which the storage vector is full and a replacement operation is needed. 
In this case, the basic operation of the binding process becomes a full search of the (T CF) list, giving:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "T T E (n) = O( D SV", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "2 ), where D SV is the storage vector dimensionality. For the synonym judgement task, optimal performance is when D SV = 1000. The binding operation of the permuted RI model involves the summing of an environment vector with a context vector and a permuted environment vector with an order vector. Assumng the dimensionality of the vectors are D RI , the time complexity of the permuted RI model would be T RI (n) = O(2.D RI ), and from our discussion above D RI \u2265 2000. Therefore, our approach is argued to build the semantic space more efficiently than the permuted RI approach on the synonym judgement task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Implementation of Tensor Computations", "sec_num": "3.4" }, { "text": "One of the major advantages of our approach to encoding word order, compared to BEA-GLE and the permuted RI model, is that it captures explicit word co-occurrence frequencies. This allows probabilistic measures to be used in addition to geometric measures when extracting information from the semantic space. The following section outlines two features that effectively measure the strength of syntagmatic or paradigmatic associations crucial in modelling word meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing Word Meaning", "sec_num": "4" }, { "text": "When developing these measures we have tried to generalise the result to support the similarity between a sequence of priming words Q = (q 1 , . . . , q p ) and any vocabulary term w. This was done so that the TE model could be more easily applied to a wider range of information processing tasks. The memory matrix for the sequence of priming terms is formed by summing the memory matrices of these terms,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing Word Meaning", "sec_num": "4" }, { "text": "M Q = M q1 + . . . + M qp .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing Word Meaning", "sec_num": "4" }, { "text": "One of the most popular measures of similarity between two geometric representations is the cosine of the angle formed between them. 
For the unique structure of the memory matrices used in our model, two interesting results were identified when developing a cosine measure: (i) that there exists a very efficient expression for calculating the cosine of the angle between memory matrices, and (ii) the resulting expression provides an excellent measure of the strength of syntagmatic associations between the terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "For the extended general case and using linear algebra techniques, the cosine of the angle \u03b8 between memory matrices, M Q and M w , is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos \u03b8 = N j=1 w\u2208Q s 2 w f 2 jw + N j=1 j =w w\u2208Q s 2 w f 2 wj + qm i=q 1 i =w (s 2 i f 2 iw + s 2 i f 2 wi ) qm i=q 1 N j=1 s 2 i f 2 ji + N j=1 j =i s 2 i f 2 ij N j=1 f 2 jw + N j=1 j =w f 2 wj ,", "eq_num": "(3)" } ], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "where q 1 , . . . , q m are the list of m unique priming terms found in the sequence of all priming terms Q having m \u2264 p, s i is the number of times term q i appears in Q, f ab is the co-occurrence frequency of term a appearing before term b in the vocabulary, f ba is the co-occurrence frequency of term a appearing after term b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "The time complexity of this measure would appear to be linear with N , the size of the vocabulary. However, the storage vectors hold a maximum of D SV 2 (T CF) pairs, where D SV is the dimensionality of the storage vector. This means that the cosine measure has maximum time complexity when the storage vector is full and hence T (n) = O( D SV 2 .|Q|), where |Q| is the number of priming terms. An additional saving when computing the cosine scores for the vocabulary terms is gained by noting that the numerator in equation 3will only be non-zero if term w has at least one interaction with a priming term (q 1 , . . . , q p ), or is a priming term itself. Therefore, equation (3) will only need to be computed for term-ids found in the storage vectors of the priming terms, (q 1 , . . . , q p ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "Nearest neighbours: Due to the unique construction of our memory matrices, it can be seen from equation 3that the cosine measure extracts primarily syntagmatic associations of the priming terms and the focus term w. Access to syntagmatic relationships can be useful for many tasks including the identification of terms most likely to precede or succeed a target term. Within our representations, this can be achieved by isolating co-occurrence frequencies in the direction of interest, effectively setting elements to 0 on the row or column not of interest in the memory matrices, M Q and M w . 
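Equation (3) is the closed form that this cosine takes for the particular structure of the memory matrices. The generic computation it shortcuts can be sketched directly over the (T CF) storage vectors as follows (helper names are ours; the directional variant in equation (4), given next, amounts to keeping only the cells on the side of interest):

    import math
    from collections import defaultdict

    def cells(storage, target_id):
        # Expand a (T CF) storage vector into the non-zero cells of the memory
        # matrix: a negative term-id t denotes cell (|t|, target), a positive
        # one denotes cell (target, t).
        out = defaultdict(float)
        for t, f in storage:
            out[(abs(t), target_id) if t < 0 else (target_id, t)] += f
        return out

    def cosine(priming, storage_w, id_w):
        # priming is a list of (storage_vector, term_id) pairs; summing their
        # expanded cells gives M_Q = M_q1 + ... + M_qp.
        Q = defaultdict(float)
        for storage, tid in priming:
            for c, f in cells(storage, tid).items():
                Q[c] += f
        W = cells(storage_w, id_w)
        dot = sum(Q[c] * W[c] for c in Q.keys() & W.keys())
        norm_q = math.sqrt(sum(v * v for v in Q.values()))
        norm_w = math.sqrt(sum(v * v for v in W.values()))
        return dot / (norm_q * norm_w) if norm_q and norm_w else 0.0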
For example, to identify the term w that most likely precedes a sequence of priming terms Q, equation 3becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos pr \u03b8 = N j=1 w\u2208Q s 2 w f 2 jw + qm i=q 1 i =w s 2 i f 2 iw qm i=q 1 N j=1 s 2 i f 2 ji N j=1 f 2 jw ,", "eq_num": "(4)" } ], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "with an equivalent expression, using f wx instead of f xw , created to calculate most likely succeeding terms. Table 2 provides a list of most likely preceding and succeeding terms produced by the TE model for a list of target words identified in Jones and Mewhort (2007) for the BEAGLE model. The results illustrate the influence of the asymmetric nature of the memory matrices, and the effectiveness of the cosine measure to identify the strongest ordered syntagmatic associations. ", "cite_spans": [ { "start": 247, "end": 271, "text": "Jones and Mewhort (2007)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 111, "end": 118, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "A Measure of Syntagmatic Associations", "sec_num": "4.1" }, { "text": "One of the main advantages of our TE model, over BEAGLE and the permuted RI model, is the ability to capture explicit co-occurrence frequencies within the geometric representations. This result provides the model with the ability to use the element values of the geometric representations to calculate direct probabilistic measures between vocabulary terms. As an example, we developed an expression to estimate the strength of paradigmatic associations between a sequence of priming terms Q = (q 1 , . . . , q p ) and a vocabulary term w. The measure is based on enhancing terms that co-occur with the same terms as Q, and is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Paradigmatic Associations", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P par (w|Q) = 1 Z par qp j=q 1 N i=1 f ij f iw + f ji f wi f j f w ,", "eq_num": "(5)" } ], "section": "A Measure of Paradigmatic Associations", "sec_num": "4.2" }, { "text": "where f j is the vocabulary frequency of term j, f ji is the ordered co-occurrence frequency of term j before term i, N is the size of the vocabulary, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Paradigmatic Associations", "sec_num": "4.2" }, { "text": "Z par = w\u2208V k qp j=q 1 N i=1 f ij f iw +f ji f wi f j fw", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Paradigmatic Associations", "sec_num": "4.2" }, { "text": ". 
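A direct sketch of equation (5) over raw ordered co-occurrence counts (names are ours; before[a][b] plays the role of f_ab, freq[t] is the collection frequency, and a practical version would iterate only over the terms held in the storage vectors, as discussed next):

    def paradigmatic_score(w, Q, before, freq):
        # Accumulates, for each priming term q, the evidence from terms i that
        # appear before both q and w, and from terms i that appear after both.
        score = 0.0
        for q in Q:
            shared = 0.0
            for i in freq:
                shared += before.get(i, {}).get(q, 0) * before.get(i, {}).get(w, 0)
                shared += before.get(q, {}).get(i, 0) * before.get(w, {}).get(i, 0)
            score += shared / (freq[q] * freq[w])
        return score

    def paradigmatic_distribution(Q, vocab, before, freq):
        # Normalising over the vocabulary corresponds to dividing by Z_par.
        raw = {w: paradigmatic_score(w, Q, before, freq) for w in vocab}
        Z = sum(raw.values()) or 1.0
        return {w: raw[w] / Z for w in vocab}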
Since the storage vector holds a maximum of D SV", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Measure of Paradigmatic Associations", "sec_num": "4.2" }, { "text": "(T CF) pairs, the worst case time complexity of this paradigmatic measure is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "T (n) = O( D 2 SV 4 .|Q|),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "where D SV is the dimensionality of the storage vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Synonym judgements: Paradigmatic associations are heavily used in the process of making synonym judgements. Therefore, we will evaluate our paradigmatic measure on the synonym-finding task in the TOEFL, after using the TASA (Touchstone Applied Science Associates, Inc.) corpus to build our semantic space. TASA contains 12-million words, and is a collection of English text articles that are reportedly equivalent to what the average college-level student has read in his or her lifetime. It has been extensively used to learn semantic relationships within semantic space models evaluated on TOEFL (Landauer and Dumais, 1997; Jones and Mewhort, 2007; Sahlgren et al., 2008) . In the synonym-finding part of TOEFL the participant is asked to choose one of four provided words as the most similar to the question word. It was reported that for a large sample of applicants to U.S. colleges, coming from non-English speaking countries, the average result on the synonym test was 51.6 items correct out of 80 (or 64.5%) (Landauer and Dumais, 1997) .", "cite_spans": [ { "start": 598, "end": 625, "text": "(Landauer and Dumais, 1997;", "ref_id": "BIBREF3" }, { "start": 626, "end": 650, "text": "Jones and Mewhort, 2007;", "ref_id": "BIBREF1" }, { "start": 651, "end": 673, "text": "Sahlgren et al., 2008)", "ref_id": "BIBREF11" }, { "start": 1016, "end": 1043, "text": "(Landauer and Dumais, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "A review of past papers using the TOEFL synonym test as a benchmark 1 , suggests that the corpus used, preprocessing of documents and resulting vocabulary size may impact the performance achieved (Stone et al., 2008) . Therefore, comparisons of TOEFL performance between papers is likely unreliable. A more robust comparison may be achieved by evaluating the models of interest on the same data configuration, hence we built BEAGLE and the permuted RI model. 2 In our experiments a 416 word stop list 3 was used, with the exception of the words enough, often and alone, which were present as a question or answer within TOEFL. We did not use any stemming on the vocabulary, however, the TOEFL question, expeditiously, was not found in the TASA corpus, whereas expeditious was, therefore that TOEFL question was updated to use expeditious. We also chose to remove TASA terms that contained numerics. These steps resulted in a vocabulary size of 134,054 unique terms. The performance achieved by each model is shown in figure 2. Since BEAGLE and the permuted RI model use random environment vectors, a number of runs were performed to calculate the average score. The best average results were: (i) BEAGLE=61.25% (49/80) using a context window radius (cwr) of 2, and environment vector length (evl) of 2048, and (ii) permuted RI model=38% (30/80) using cwr=5 and evl=2,000. 
The best TE model result was 67.5% (54/80) using cwr=1 and a storage vector length of 1,000. The BEAGLE results were similar to those reported in Jones and Mewhort (2007) , with any improvement likely due to the reduced context window length used in our experiments. The permuted RI model result is much lower than that reported in Sahlgren et al. (2008) , possibly due to the difference in vocabulary size. Their TASA vocabulary was reduced to 74,100 terms by using stemming and high frequency cut-offs.", "cite_spans": [ { "start": 196, "end": 216, "text": "(Stone et al., 2008)", "ref_id": "BIBREF14" }, { "start": 1518, "end": 1542, "text": "Jones and Mewhort (2007)", "ref_id": "BIBREF1" }, { "start": 1704, "end": 1726, "text": "Sahlgren et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Addressing weaknesses in LSA: Landauer and Dumais (1997) indicated that some of the TOEFL errors produced by LSA, that were not made by students, may be attributed to the fact that LSA was more sensitive to paradigmatic associations, and not syntagmatic. For example, Perfetti (1998) commented that on the TOEFL, LSA chose nurse (0.47) over doctor (0.41) for the question word of physician. Even though this is Perfetti's selective example, we found that the TE model was more likely to choose doctor (P (w|Q)=0.01926) over nurse (P (w|Q) = 0.01818) for the same question.", "cite_spans": [ { "start": 30, "end": 56, "text": "Landauer and Dumais (1997)", "ref_id": "BIBREF3" }, { "start": 268, "end": 283, "text": "Perfetti (1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The aim of this paper has been to present a model of word meaning that goes beyond existing semantic space models by using Kronecker products to capture word order and co-occurrence information. Our TE model overcomes weaknesses in previous models attempting to encode greater structural information by reducing the information loss without computational cost. It also provides applications with more flexibility when extracting task specific semantic information without relying on existing knowledge or POS taggers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "The ability to extend the evaluation of this model to other information processing tasks, such as word sense disambiguation, query expansion, and document retrieval, is an area for future research. Another area for further investigation includes extending the current vocabulary binding process to form higher order tensors that would allow larger n-tuple associations to be encoded in the representations underpinning the semantic space. 
Using higher order TE models may have advantages similar to those highlighted by Baroni and Lenci (2010) .", "cite_spans": [ { "start": 520, "end": 543, "text": "Baroni and Lenci (2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "http://aclweb.org/aclwiki", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The permuted RI model functions were supplied by http://code.google.com/p/semanticvectors/ 3 Stoplist taken from the Lemur toolkit for information retrieval: http://www.lemurproject.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Distributional memory: A general framework for corpus-based semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "", "pages": "673--721", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, Marco and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36, 673-721.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Representing word meaning and order information in a composite holographic lexicon", "authors": [ { "first": "Norman", "middle": [ "N" ], "last": "Holland", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" }, { "first": "N", "middle": [], "last": "Michael", "suffix": "" }, { "first": "J", "middle": [ "K" ], "last": "Douglas", "suffix": "" }, { "first": "", "middle": [], "last": "Mewhort", "suffix": "" } ], "year": 1992, "venue": "Psychological Review", "volume": "114", "issue": "", "pages": "1--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holland, Norman N. 1992. The Critical I. Columbia University Press, New York, USA. Jones, Michael N. and Douglas J. K. Mewhort. 2007. Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114, 1-37.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Random indexing of text samples for latent semantic analysis", "authors": [ { "first": "Pentti", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Kristoferson", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Holst", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 22nd Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanerva, Pentti, Jan Kristoferson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, p. 1036.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A solution to plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landauer, T. K. and S. T. Dumais. 1997. 
A solution to plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior research methods, instruments and computers", "authors": [ { "first": "K", "middle": [], "last": "Lund", "suffix": "" }, { "first": "C", "middle": [], "last": "Burgess", "suffix": "" } ], "year": 1996, "venue": "", "volume": "28", "issue": "", "pages": "203--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lund, K. and C. Burgess. 1996. Producing high-dimensional semantic spaces from lexical co- occurrence. Behavior research methods, instruments and computers, 28, 203-208.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using multiple knowledge sources for word sense discrimination", "authors": [ { "first": "S", "middle": [ "W" ], "last": "Mcroy", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "1", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "McRoy, S. W. 1992. Using multiple knowledge sources for word sense discrimination. Computa- tional Linguistics, 18(1), 1-30.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Composition in distributional models of semantics", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Cognitive Science", "volume": "34", "issue": "8", "pages": "1388--1429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell, Jeff and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8), 1388-1429.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The limits of co-occurrence: Tools and theories in language research", "authors": [ { "first": "Charles", "middle": [ "A" ], "last": "Perfetti", "suffix": "" } ], "year": 1998, "venue": "Discourse Processes", "volume": "25", "issue": "", "pages": "363--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perfetti, Charles A. 1998. The limits of co-occurrence: Tools and theories in language research. Discourse Processes, 25, 363-377.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Holographic reduced representations: Convolution algebra for compositional distributed representations", "authors": [ { "first": "Tony", "middle": [], "last": "Plate", "suffix": "" } ], "year": 1991, "venue": "International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Plate, Tony. 1991. Holographic reduced representations: Convolution algebra for compositional distributed representations. In International Joint Conference on Artificial Intelligence, pp. 30-35. Morgan Kaufmann.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The computation of word associations: comparing syntagmatic and paradigmatic approaches", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rapp, Reinhard. 2002. The computation of word associations: comparing syntagmatic and paradigmatic approaches. 
In Proceedings of the 19th international conference on Computa- tional linguistics -Volume 1, pp. 1-7, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An improved model of semantic similarity based on lexical co-occurence", "authors": [ { "first": "Douglas", "middle": [ "L T" ], "last": "Rohde", "suffix": "" }, { "first": "M", "middle": [], "last": "Laura", "suffix": "" }, { "first": "David", "middle": [ "C" ], "last": "Gonnerman", "suffix": "" }, { "first": "", "middle": [], "last": "Plaut", "suffix": "" } ], "year": 2006, "venue": "Communications of the ACM", "volume": "8", "issue": "", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohde, Douglas L. T., Laura M. Gonnerman, and David C. Plaut. 2006. An improved model of semantic similarity based on lexical co-occurence. Communications of the ACM, 8, 627-633.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Permutations as a means to encode order in word space", "authors": [ { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Holst", "suffix": "" }, { "first": "Pentti", "middle": [], "last": "Kanerva", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 30th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahlgren, Magnus, Anders Holst, and Pentti Kanerva. 2008. Permutations as a means to encode order in word space. In V. Sloutsky, B. Love, and K. Mcrae, eds., Proceedings of the 30th", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Annual Conference of the Cognitive Science Society", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1300--1305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the Cognitive Science Society, pp. 1300-1305. Cognitive Science Society, Austin, TX.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Word space", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1993, "venue": "Advances in Neural Information Processing Systems", "volume": "5", "issue": "", "pages": "895--902", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00fctze, Hinrich. 1993. Word space. In Advances in Neural Information Processing Systems 5, pp. 895-902. Morgan Kaufmann.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A systematic comparison of semantic models on human similarity rating data: The effectiveness of subspacing", "authors": [ { "first": "Benjamin", "middle": [ "P" ], "last": "Stone", "suffix": "" }, { "first": "J", "middle": [], "last": "Simon", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Dennis", "suffix": "" }, { "first": "", "middle": [], "last": "Kwantes", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Thirteeth Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stone, Benjamin P., Simon J. Dennis, and Peter J. Kwantes. 2008. A systematic comparison of se- mantic models on human similarity rating data: The effectiveness of subspacing. In Proceedings of the Thirteeth Conference of the Cognitive Science Society. 
Cognitive Science Society.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A uniform approach to analogies, synonyms, antonyms, and associations", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "905--912", "other_ids": {}, "num": null, "urls": [], "raw_text": "Turney, Peter D. 2008. A uniform approach to analogies, synonyms, antonyms, and associations. In Proceedings of the 22nd International Conference on Computational Linguistics -Volume 1, COLING '08, pp. 905-912, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "From frequency to meaning: vector space models of semantics", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of Artificial Intelligence Research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Turney, Peter D. and Patrick Pantel. 2010. From frequency to meaning: vector space models of semantics. Journal of Artificial Intelligence Research, 37, 141-188, January.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Performance on a synonym judgement task for storage vectors of various dimensions." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "TOEFL performance for the Tensor Encoding, BEAGLE and permuted RI model." }, "TABREF0": { "num": null, "content": "", "html": null, "text": "Example HAL Space", "type_str": "table" }, "TABREF1": { "num": null, "content": "
KING | PRESIDENT | WAR | SEA
king (preceding) | king (succeeding) | president (preceding) | president (succeeding) | war (preceding) | war (succeeding) | sea (preceding) | sea (succeeding)
luther: 0.419 | jr: 0.945 | vice: 0.905 | roosevelt: 0.948 | civil: 0.989 | ii: 0.918 | mediterranean: 0.995 | level: 0.972
martin: 0.288 | midas: 0.695 | elected: 0.834 | kennedy: 0.927 | world: 0.851 | ended: 0.298 | caribbean: 0.857 | anemone: 0.315
dr: 0.185 | arthur: 0.419 | former: 0.14 | nixon: 0.876 | revolutionary: 0.524 | effort: 0.056 | baltic: 0.738 | urchins: 0.256
french: 0.146 | minos: 0.307 | new: 0.07 | johnson: 0.613 | spanish-american: 0.306 | began: 0.038 | caspian: 0.714 | captains: 0.252
rex: 0.03 | queen: 0.193 | our: 0.036 | lincoln: 0.522 | during: 0.122 | between: 0.024 | aegean: 0.675 | gull: 0.157
english: 0.025 | myron: 0.165 | twenty-seventh: 0.012 | carter: 0.386 | declare: 0.085 | broke: 0.024 | sargasso: 0.592 | gulls: 0.154
", "html": null, "text": "Top 6 lexical representations produced for a word preceding or succeeding a target word.", "type_str": "table" } } } }