{ "paper_id": "P08-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:35:34.487703Z" }, "title": "Evaluating Roget's Thesauri", "authors": [ { "first": "Alistair", "middle": [], "last": "Kennedy", "suffix": "", "affiliation": {}, "email": "akennedy@site.uottawa.ca" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Roget's Thesaurus has gone through many revisions since it was first published 150 years ago. But how do these revisions affect Roget's usefulness for NLP? We examine the differences in content between the 1911 and 1987 versions of Roget's, and we test both versions with each other and WordNet on problems such as synonym identification and word relatedness. We also present a novel method for measuring sentence relatedness that can be implemented in either version of Roget's or in WordNet. Although the 1987 version of the Thesaurus is better, we show that the 1911 version performs surprisingly well and that often the differences between the versions of Roget's and WordNet are not statistically significant. We hope that this work will encourage others to use the 1911 Roget's Thesaurus in NLP tasks.", "pdf_parse": { "paper_id": "P08-1048", "_pdf_hash": "", "abstract": [ { "text": "Roget's Thesaurus has gone through many revisions since it was first published 150 years ago. But how do these revisions affect Roget's usefulness for NLP? We examine the differences in content between the 1911 and 1987 versions of Roget's, and we test both versions with each other and WordNet on problems such as synonym identification and word relatedness. We also present a novel method for measuring sentence relatedness that can be implemented in either version of Roget's or in WordNet. Although the 1987 version of the Thesaurus is better, we show that the 1911 version performs surprisingly well and that often the differences between the versions of Roget's and WordNet are not statistically significant. We hope that this work will encourage others to use the 1911 Roget's Thesaurus in NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Roget's Thesaurus, first introduced over 150 years ago, has gone through many revisions to reach its current state. We compare two versions, the 1987 and 1911 editions of the Thesaurus with each other and with WordNet 3.0. Roget's Thesaurus has a unique structure, quite different from WordNet, of which the NLP community has yet to take full advantage. In this paper we demonstrate that although the 1911 version of the Thesaurus is very old, it can give results comparable to systems that use WordNet or newer versions of Roget's Thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main motivation for working with the 1911 Thesaurus instead of newer versions is that it is in the public domain, along with related NLP-oriented software packages. For applications that call for an NLP-friendly thesaurus, WordNet has become the de-facto standard. Although WordNet is a fine resources, we believe that ignoring other thesauri is a serious oversight. We show on three applications how useful the 1911 Thesaurus is. We ran the wellestablished tasks of determining semantic relatedness of pairs of terms and identifying synonyms (Jarmasz and Szpakowicz, 2004) . 
We also proposed a new method of representing the meaning of sentences or other short texts using either WordNet or Roget's Thesaurus, and tested it on the data set provided by Li et al. (2006) . We hope that this work will encourage others to use Roget's Thesaurus in their own NLP tasks.", "cite_spans": [ { "start": 547, "end": 577, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF8" }, { "start": 757, "end": 773, "text": "Li et al. (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research on the 1987 version of Roget's Thesaurus includes the work of Jarmasz and Szpakowicz (2004) . They propose a method of determining semantic relatedness between pairs of terms. Terms that appear closer together in the Thesaurus get higher weights than those farther apart. The experiments aimed at identifying synonyms using a modified version of the proposed semantic similarity function. Similar experiments were carried out using WordNet in combination with a variety of semantic relatedness functions. Roget's Thesaurus was found generally to outperform WordNet on these problems. We have run similar experiments using the 1911 Thesaurus.", "cite_spans": [ { "start": 76, "end": 105, "text": "Jarmasz and Szpakowicz (2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexical chains have also been developed using the 1987 Roget's Thesaurus (Jarmasz and Szpakowicz, 2003) . The procedure maps words in a text to the Head (a Roget's concept) from which they are most likely to come. Although we did not experiment with lexical chains here, they were an inspiration for our sentence relatedness function.", "cite_spans": [ { "start": 73, "end": 103, "text": "(Jarmasz and Szpakowicz, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Roget's Thesaurus does not explicitly label the relations between its terms, as WordNet does. Instead, it groups terms together with implied relations. Kennedy and Szpakowicz (2007) show how disambiguating one of these relations, hypernymy, can help improve the semantic similarity functions in (Jarmasz and Szpakowicz, 2004) . These hypernym relations were also applied to solving analogy questions.", "cite_spans": [ { "start": 152, "end": 181, "text": "Kennedy and Szpakowicz (2007)", "ref_id": "BIBREF10" }, { "start": 295, "end": 325, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This is not the first time the 1911 version of Roget's Thesaurus has been used in NLP research. Cassidy (2000) used it to build the semantic network FACTOTUM. This required significant (manual) restructuring, so FACTOTUM cannot really be considered a true version of Roget's Thesaurus.", "cite_spans": [ { "start": 96, "end": 110, "text": "Cassidy (2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The 1987 data come from Penguin's Roget's Thesaurus (Kirkpatrick, 1987) . The 1911 version is available from Project Gutenberg 1 . We use WordNet 3.0, the latest version (Fellbaum, 1998) . In the experiments we present here, we worked with an interface to Roget's Thesaurus implemented in Java 5.0 2 . 
It is built around a large index which stores the location in the thesaurus of each word or phrase; the system individually indexes all words within each phrase, as well as the phrase itself. This was shown to improve results in a few applications, which we will discuss later in the paper.", "cite_spans": [ { "start": 52, "end": 71, "text": "(Kirkpatrick, 1987)", "ref_id": null }, { "start": 170, "end": 186, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the 1987 and 1911 Thesauri are very similar in structure, there are a few differences, among them the number of levels and the number of parts-of-speech represented. For example, the 1911 version contains some pronouns as well as more sections dedicated to phrases. There are nine levels in the Roget's Thesaurus hierarchy, from Class down to Word. We show them in Table 1 along with the counts of instances of each level. An example of a Class in the 1911 Thesaurus is \"Words Expressing Abstract Relations\", a Section in that Class is \"Quantity\" with a Subsection \"Comparative Quantity\". Heads can be thought of as the heart of the Thesaurus because it is at this level that the lexical material, organized into approximately a thousand concepts, resides. Head Groups often pair up opposites; for example, Head #1 \"Existence\" and Head #2 \"Nonexistence\" are found in the same Head Group in both versions of the Thesaurus. Terms in the Thesaurus may be labelled with cross-references to other words in different Heads. We did not use these references in our experiments. The part-of-speech level is a little confusing, since clearly no such grouping contains an exhaustive list of all nouns, all verbs etc. We will write \"POS\" to indicate a structure in Roget's and \"part-of-speech\" to indicate the word category in general. The four main parts-of-speech represented in a POS are nouns, verbs, adjectives and adverbs. Interjections are also included in both the 1911 and 1987 thesauri; they are usually phrases followed by an exclamation mark, such as \"for God's sake!\" and \"pshaw!\". The Paragraph and Semicolon Group are not given names, but can often be represented by the first word.", "cite_spans": [], "ref_spans": [ { "start": 370, "end": 377, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Content comparison of the 1911 and 1987 Thesauri", "sec_num": "2" }, { "text": "The 1911 version also contains phrases (mostly quotations), prefixes and pronouns. There are only three prefixes -\"tri-\", \"tris-\", \"laevo-\" -and six pronouns -\"he\", \"him\", \"his\", \"she\", \"her\", \"hers\". Table 2 shows the frequency of paragraphs, semicolon groups and both total and unique words in a given type of POS. Many terms occur both in the 1911 and 1987 Thesauri, but many more are unique to either. Surprisingly, quite a few 1911 terms do not appear in the 1987 data, as shown in Table 3 ; many of them may have been considered obsolete and thus dropped from the 1987 version. For example \"ingrafted\" appears in the same semicolon group as \"implanted\" in the older but not the newer version. Some mismatches may be due to small changes in spelling, for example, \"Nirvana\" is capitalized in the 1911 version, but not in the 1987 version. The lexical data in Project Gutenberg's 1911 Roget's appear to have been augmented somewhat. 
For example, the citation \"Go ahead, make my day!\" from the 1971 movie Dirty Harry appears twice (in Heads #715-Defiance and #761-Prohibition) within the Phrase POS. It is not clear to what extent new terms have been added to the original 1911 Roget's Thesaurus, or what the criteria for adding such new elements could have been.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 487, "end": 494, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Content comparison of the 1911 and 1987 Thesauri", "sec_num": "2" }, { "text": "In the end, there are many differences between the 1987 and 1911 Roget's Thesauri, primarily in content rather than in structure. The 1987 Thesaurus is largely an expansion of the 1911 version, with three POSs (phrases, pronouns and prefixes) removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content comparison of the 1911 and 1987 Thesauri", "sec_num": "2" }, { "text": "In this section we consider how the two versions of Roget's Thesaurus and WordNet perform in three applications -measuring word relatedness, synonym identification, and sentence relatedness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison on applications", "sec_num": "3" }, { "text": "Relatedness can be measured by the closeness of the words or phrases -henceforth referred to as terms -in the structure of the thesaurus. Two terms in the same semicolon group score 16, in the same paragraph -14, and so on (Jarmasz and Szpakowicz, 2004) . The score is 0 if the terms appear in different classes, or if either is missing. Pairs of terms get higher scores for being closer together. When there are multiple senses of two terms A and B, we want to select senses a \u2208 A and b \u2208 B that maximize the relatedness score. We define a distance function:", "cite_spans": [ { "start": 221, "end": 251, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "semDist(A, B) = max_{a \u2208 A, b \u2208 B} 2 * depth(lca(a, b))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "lca is the lowest common ancestor and depth is the depth in the Roget's hierarchy; a Class has depth 0, Section 1, ..., Semicolon Group 8. If we think of the function as counting edges between concepts in the Roget's hierarchy, then it could also be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "semDist(A, B) = max_{a \u2208 A, b \u2208 B} (16 \u2212 edgesBetween(a, b))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "We do not count links between words in the same semicolon group, so in effect these methods find distances between semicolon groups; that is to say, these two functions will give the same results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "The 1911 and 1987 Thesauri were compared with WordNet 3.0 on three data sets containing pairs of words with manually assigned similarity scores: 30 pairs (Miller and Charles, 1991) , 65 pairs (Rubenstein and Goodenough, 1965) and 353 pairs 3 (Finkelstein et al., 2001) . We assume that all terms are nouns, so that we can have a fair comparison of the two Thesauri with WordNet. 
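To make the relatedness function above concrete, here is a minimal sketch in Python. It is not the authors' Java interface: the tuple encoding of senses, the toy lexicon and the helper names are our own assumptions. Each sense of a term is encoded as its path of category indices from Class down to Semicolon Group, so the depth of the lowest common ancestor is simply the length of the shared prefix of two paths; two senses in the same Semicolon Group share all eight levels and score 16.

```python
# A sketch of semDist, assuming each sense is a tuple of eight category
# indices, one per level: Class, Section, Subsection, Head Group, Head,
# POS, Paragraph, Semicolon Group.  The lexicon below is invented.

def lca_depth(path_a, path_b):
    """Number of leading levels on which two sense paths agree (0..8)."""
    depth = 0
    for x, y in zip(path_a, path_b):
        if x != y:
            break
        depth += 1
    return depth

def sem_dist(senses_a, senses_b):
    """max over sense pairs of 2 * depth(lca(a, b)); 16 = same Semicolon Group."""
    if not senses_a or not senses_b:
        return 0  # a missing term scores 0, as in the paper
    return max(2 * lca_depth(a, b) for a in senses_a for b in senses_b)

# Toy example: two senses of "voyage"; the first shares a Paragraph
# with the only sense of "journey", so the pair scores 14.
lexicon = {
    "journey": [(2, 1, 0, 3, 266, 0, 1, 1)],
    "voyage":  [(2, 1, 0, 3, 266, 0, 1, 4), (2, 1, 0, 3, 267, 0, 1, 1)],
}
print(sem_dist(lexicon["journey"], lexicon["voyage"]))  # prints 14
```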
We measure the correlation with Pearson's Correlation Coefficient. A preliminary experiment set out to determine whether there is any advantage to indexing the words in a phrase separately, for example, whether the phrase \"change of direction\" should be indexed only as a whole, or as all of \"change\", \"of\", \"direction\" and \"change of direction\". The outcome of this experiment appears in Table 4 . There is a clear improvement: breaking phrases up gives superior results on all three data sets, for both versions of Roget's. In the remaining experiments, we have each word in a phrase indexed.", "cite_spans": [ { "start": 158, "end": 184, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF18" }, { "start": 196, "end": 229, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF23" }, { "start": 246, "end": 272, "text": "(Finkelstein et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 772, "end": 779, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "We compare the results for the 1911 and 1987 Roget's Thesauri with a variety of WordNet-based semantic relatedness measures -see Table 5 . We consider 10 measures, noted in the table as J&C (Jiang and Conrath, 1997) , Resnik (Resnik, 1995) , Lin (Lin, 1998) , W&P (Wu and Palmer, 1994) , L&C (Leacock and Chodorow, 1998) , H&SO (Hirst and St-Onge, 1998) , Path (counts edges between synsets), Lesk (Banerjee and Pedersen, 2002) , and finally Vector and Vector Pair (Patwardhan, 2003) . The latter two work with large vectors of co-occurring terms from a corpus, so WordNet is only part of the system. We used Pedersen's Semantic Distance software package (Pedersen et al., 2004) .", "cite_spans": [ { "start": 190, "end": 215, "text": "(Jiang and Conrath, 1997)", "ref_id": "BIBREF9" }, { "start": 218, "end": 239, "text": "Resnik (Resnik, 1995)", "ref_id": "BIBREF22" }, { "start": 246, "end": 257, "text": "(Lin, 1998)", "ref_id": "BIBREF16" }, { "start": 264, "end": 285, "text": "(Wu and Palmer, 1994)", "ref_id": "BIBREF27" }, { "start": 292, "end": 320, "text": "(Leacock and Chodorow, 1998)", "ref_id": "BIBREF13" }, { "start": 328, "end": 353, "text": "(Hirst and St-Onge, 1998)", "ref_id": "BIBREF5" }, { "start": 398, "end": 427, "text": "(Banerjee and Pedersen, 2002)", "ref_id": "BIBREF0" }, { "start": 465, "end": 483, "text": "(Patwardhan, 2003)", "ref_id": "BIBREF20" }, { "start": 654, "end": 677, "text": "(Pedersen et al., 2004)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "The results suggest that neither version of Roget's is best for these data sets. In fact, the Vector method is superior on all three sets, and the Lesk algorithm performs very similarly to Roget's 1987. Even on the largest set (Finkelstein et al., 2001) , however, the differences between Roget's Thesaurus and the Vector method are not statistically significant at the p < 0.05 level for either thesaurus on a two-tailed test 4 . The difference between the 1911 Thesaurus and Vector would be statistically significant at p < 0.1. On the (Miller and Charles, 1991) and (Rubenstein and Goodenough, 1965) data sets the best system did not show a statistically significant improvement over the 1911 or 1987 Roget's Thesauri, even at p < 0.1 for a two-tailed test. 
These data sets are too small for a meaningful comparison of systems with close correlation scores.", "cite_spans": [ { "start": 225, "end": 251, "text": "(Finkelstein et al., 2001)", "ref_id": "BIBREF4" }, { "start": 514, "end": 540, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF18" }, { "start": 545, "end": 578, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Word relatedness", "sec_num": "3.1" }, { "text": "In this problem we take a term q and seek the correct synonym s from a set C. We used the system from (Jarmasz and Szpakowicz, 2004) for identifying synonyms with Roget's; it proceeds in two steps. First we find the set of terms B \u2286 C whose members have the maximum relatedness to q over all x \u2208 C:", "cite_spans": [ { "start": 126, "end": 156, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "B = {x | argmax_{x\u2208C} semDist(x, q)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "Next, we take the set of terms A \u2286 B where each a \u2208 A has the maximum number of shortest paths between a and q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "A = {x | argmax_{x\u2208B} numberShortestPaths(x, q)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "If s \u2208 A and |A| = 1, the correct synonym has been selected. Often the sets A and B will contain just one item. If s \u2208 A and |A| > 1, there is a tie. If s \u2209 A, then the selected synonyms are incorrect. If a multi-word phrase c \u2208 C of length n is not found, it is replaced by each of its words c_1, c_2, ..., c_n, and each of these words is considered in turn. The c_i that is closest to q is chosen to represent c. When searching for a word in Roget's or WordNet, we look for all forms of the word. The results of these experiments appear in Table 6. \"Yes\" indicates correct answers, \"No\" -incorrect answers, and \"Tie\" is for ties. QNF stands for \"Question word Not Found\", ANF for \"Answer word Not Found\" and ONF for \"Other word Not Found\". We used three data sets for this application: 80 questions taken from the Test of English as a Foreign Language (TOEFL) (Landauer and Dumais, 1997) , 50 questions -from the English as a Second Language test (ESL) (Turney, 2001) and 300 questions -from the Reader's Digest Word Power Game (RDWP) (Lewis, 2000 and 2001).", "cite_spans": [ { "start": 864, "end": 891, "text": "(Landauer and Dumais, 1997)", "ref_id": "BIBREF12" }, { "start": 957, "end": 970, "text": "(Turney, 2001", "ref_id": "BIBREF26" }, { "start": 1040, "end": 1056, "text": "(Lewis, 2000 and", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "Lesk and the Vector-based systems perform better than all others, including Roget's 1911 and 1987. Even so, both versions of Roget's Thesaurus performed well, and were never worse than the worst WordNet systems. In fact, six of the ten WordNet-based methods are consistently worse than the 1911 Thesaurus. Since the two Vector-based systems make use of additional data beyond WordNet, Lesk is the only completely WordNet-based system to outperform Roget's 1987. 
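The two-step selection procedure is compact enough to sketch; the code below reuses sem_dist and lca_depth from the word-relatedness sketch. One simplification is ours alone: the real system counts shortest paths in the Thesaurus, whereas here numberShortestPaths is approximated by the number of sense pairs of q and c that attain the maximum score.

```python
# A sketch of the two-step synonym selection; choose_synonym returns the
# set A of best candidates (|A| = 1: an answer, |A| > 1: a tie).

def choose_synonym(q, candidates, lexicon):
    scores = {c: sem_dist(lexicon.get(q, []), lexicon.get(c, []))
              for c in candidates}
    best = max(scores.values())
    b_set = [c for c in candidates if scores[c] == best]  # step 1: relatedness
    if len(b_set) == 1:
        return b_set
    # Step 2: break ties by counting the sense pairs of q and c that
    # reach the maximum score (a proxy for numberShortestPaths).
    def support(c):
        return sum(1 for a in lexicon.get(q, [])
                   for b in lexicon.get(c, [])
                   if 2 * lca_depth(a, b) == best)
    top = max(support(c) for c in b_set)
    return [c for c in b_set if support(c) == top]
```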
One advantage of Roget's Thesaurus is that both versions generally have fewer missing terms than WordNet, though Lesk, Hirst & St-Onge and the two vector-based methods had fewer missing terms than Roget's. This may be because the other WordNet methods will only work for nouns and verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synonym identification", "sec_num": "3.2" }, { "text": "Our final experiment concerns sentence relatedness. We worked with a data set from (Li et al., 2006) 5 . They took a subset of the term pairs from (Rubenstein and Goodenough, 1965) and chose sentences to represent these terms; the sentences are definitions from the Collins Cobuild dictionary (Sinclair, 2001) . Thirty people were then asked to assign relatedness scores to these sentences, and the average of these similarities was taken for each sentence.", "cite_spans": [ { "start": 83, "end": 100, "text": "(Li et al., 2006)", "ref_id": "BIBREF15" }, { "start": 293, "end": 309, "text": "(Sinclair, 2001)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence relatedness", "sec_num": "3.3" }, { "text": "Other methods of determining sentence semantic relatedness expand term relatedness functions to create a sentence relatedness function (Islam and Inkpen, 2007; Mihalcea et al., 2006) . We propose to approach the task by exploiting in other ways the commonalities in the structure of Roget's Thesaurus and of WordNet. We use the OpenNLP toolkit 6 for segmentation and part-of-speech tagging.", "cite_spans": [ { "start": 135, "end": 159, "text": "(Islam and Inkpen, 2007;", "ref_id": "BIBREF6" }, { "start": 160, "end": 182, "text": "Mihalcea et al., 2006)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence relatedness", "sec_num": "3.3" }, { "text": "We use a method of sentence representation that involves mapping the sentence into weighted concepts in either Roget's or WordNet. By a concept in Roget's we mean a Class, Section, ..., or Semicolon Group, while a concept in WordNet is any synset. Essentially a concept is a grouping of words from either resource. Concepts are weighted by two criteria. The first is how frequently words from the sentence appear in these concepts. The second is the depth (or specificity) of the concept itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence relatedness", "sec_num": "3.3" }, { "text": "Each word and punctuation mark w in a sentence is given a score of 1. (Naturally, only open-category words will be found in the thesaurus.) If w has n word senses w_1, ..., w_n, each sense gets a score of 1/n, so that 1/n is added to each concept in the Roget's hierarchy (semicolon group, paragraph, ..., class) or WordNet hierarchy that contains w_i. We weight concepts in this way simply because, unable to determine which sense is correct, we assume that all senses are equally probable. Each concept in Roget's Thesaurus and WordNet gets the sum of the scores of the concepts below it in its hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on word frequency", "sec_num": "3.3.1" }, { "text": "We will define the scores recursively for a concept c in a sentence s and sub-concepts c_i. For example, in Roget's if the concept c were a Class, then each c_i would be a Section. Likewise, in WordNet if c were a synset, then each c_i would be a hyponym synset of c. 
Obviously if c is a word sense w_i (a word in either a synset or a Semicolon Group), then there can be no sub-concepts c_i. When c = w_i, the score for c is the sum of all occurrences of the word w in sentence s divided by the number of senses of the word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on word frequency", "sec_num": "3.3.1" }, { "text": "score(c, s) = instancesOf(w, s) / sensesOf(w) if c = w_i; score(c, s) = \u2211_{c_i \u2208 c} score(c_i, s) otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on word frequency", "sec_num": "3.3.1" }, { "text": "See Table 7 for an example of how this sentence representation works. The sentence \"A gem is a jewel or stone that is used in jewellery.\" is represented using the 1911 Roget's. A concept is identified by a name and a series of up to 9 numbers that indicate where in the thesaurus it appears. The first number represents the Class, the second the Section, ..., the ninth the word. We only show concepts with weights greater than 1.0. Words not in the thesaurus keep a weight of 1.0, but this weight will not increase the weight of any concepts in Roget's or WordNet. Apart from the function words \"or\", \"in\", \"that\" and \"a\" and the period, only the word \"jewellery\" had a weight above 1.0. The categories labelled 6, 6.2 and 6.2.2 are the only ancestors of the word \"use\" that ended up with weights above 1.0. The words \"gem\", \"is\", \"jewel\", \"stone\" and \"used\" all contributed weight to the categories shown in Table 7 , and to some categories with weights lower than 1.0, but no sense of the words themselves had a weight greater than 1.0.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 7", "ref_id": null }, { "start": 915, "end": 922, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Weighting based on word frequency", "sec_num": "3.3.1" }, { "text": "It is worth noting that this method only relies on the hierarchies in Roget's and WordNet. We do not take advantage of other WordNet relations such as hyponymy, nor do we use any cross-reference links that exist in Roget's Thesaurus. Including such relations might improve our sentence relatedness system, but that has been left for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on word frequency", "sec_num": "3.3.1" }, { "text": "To determine sentence relatedness, one could, for example, flatten the structures like those in Table 7 into vectors and measure their closeness by some vector distance function such as cosine similarity. There is a problem with this, though. A concept inherits the weights of all its sub-concepts, so the concepts that appear closer to the root of the tree will far outweigh others. Some sort of weighting function should be used to re-adjust the weights of particular concepts. Were this an Information Retrieval task, weighting schemes such as tf.idf for each concept could apply, but for sentence relatedness we propose an ad hoc weighting scheme based on assumptions about which concepts are most important to sentence representation. This weighting scheme is the second element of our sentence relatedness function.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "We weight a concept in Roget's and in WordNet by how many words in a sentence give weight to it. 
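Before turning to the re-weighting step, the frequency-based scoring of Section 3.3.1 can be sketched as follows, under the same hypothetical path encoding as the earlier sketches. Instead of recursing from the top, the sketch adds each sense's 1/n share directly to every ancestor concept on its path; the two formulations agree because a concept's score is the sum of the scores below it.

```python
from collections import Counter

def concept_weights(tokens, lexicon):
    """Weight each concept (a path prefix) by the word senses beneath it."""
    weights = Counter()
    for token, count in Counter(tokens).items():
        senses = lexicon.get(token, [])
        if not senses:
            continue  # words outside the resource keep their own weight of 1.0
        share = count / len(senses)  # instancesOf(w, s) / sensesOf(w)
        for path in senses:
            for level in range(1, len(path) + 1):
                weights[path[:level]] += share  # every ancestor gets the share
    return weights
```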
We need to re-weight it based on how specific it is. Clearly, concepts near the leaves of the hierarchy are more specific than those close to the root of the hierarchy. We define specificity as the distance in levels between a given word and each concept found above it in the hierarchy. (Table 7 shows \"A gem is a jewel or stone that is used in jewellery.\" as represented using Roget's 1911.)", "cite_spans": [], "ref_spans": [ { "start": 364, "end": 371, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "In Roget's Thesaurus there are exactly 9 levels from the term to the class. In WordNet there will be as many levels as a word has ancestors up the hypernymy chain. In Roget's, a term has specificity 1, a Semicolon Group 2, a Paragraph 3, ..., a Class 9. In WordNet, the specificity of a word is 1, its synset -2, the synset's hypernym -3, its hypernym -4, and so on. Words not found in the Thesaurus or in WordNet get specificity 1. We seek a function that, given s, assigns to all concepts of specificity s a weight progressively larger than to their neighbours. The weights in this function should be assigned based on specificity, so that all concepts of the same specificity receive the same score. Weights will differ depending on a combination of specificity and how frequently words that signal the concepts appear in a sentence. The weight of concepts with specificity s should be the highest; of those with specificity s \u00b1 1, lower; of those with specificity s \u00b1 2, lower still; and so on. In order to achieve this effect, we weight the concepts using a normal distribution, where the mean is s:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "f(x) = (1 / (\u03c3\u221a(2\u03c0))) * e^{\u2212(x\u2212s)^2 / (2\u03c3^2)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "Since the Head is often considered the main category in Roget's, we expect a specificity of 5 to be best, but we decided to test the values 1 through 9 as a possible setting for specificity. We do not claim that this weighting scheme is optimal; other weighting schemes might do better. For the purpose of comparing the 1911 and 1987 Thesauri and WordNet, however, this method appears sufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "With this weighting scheme, we determine the distance between two sentences using cosine similarity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "cosSim(A, B) = (\u2211_i a_i b_i) / (\u221a(\u2211_i a_i^2) * \u221a(\u2211_i b_i^2))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "For this problem we used the MIT Java WordNet Interface version 1.1.1 7 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting based on specificity", "sec_num": "3.3.2" }, { "text": "We used this method of representation for Roget's of 1911 and of 1987, as well as for WordNet 3.0; see Figure 1 . 
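The following sketch, continuing the earlier ones, combines the pieces: the concept weights of Section 3.3.1 are rescaled by the normal curve centred on the chosen specificity, and two sentences are then compared with cosine similarity. Under our path encoding a prefix of length k has specificity 10 - k (a Class 9, a Semicolon Group 2), and sigma is a free parameter of the sketch, not a value taken from the paper.

```python
import math

def gaussian(x, mean, sigma=1.0):
    """Normal density used to favour concepts near the target specificity."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def reweight(weights, mean_specificity, sigma=1.0):
    # a path prefix of length k has specificity 10 - k (Class = 9, ..., Semicolon Group = 2)
    return {c: w * gaussian(10 - len(c), mean_specificity, sigma)
            for c, w in weights.items()}

def cos_sim(u, v):
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def sentence_relatedness(tokens_a, tokens_b, lexicon, mean_specificity=5):
    u = reweight(concept_weights(tokens_a, lexicon), mean_specificity)
    v = reweight(concept_weights(tokens_b, lexicon), mean_specificity)
    return cos_sim(u, v)
```

The Simple baseline described next amounts to applying cos_sim directly to raw word-count vectors, with no concept mapping at all.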
For comparison, we also implemented a baseline method that we refer to as Simple: we built vectors out of words and their counts.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sentence similarity results", "sec_num": "3.3.3" }, { "text": "It can be seen in Figure 1 that each system is superior for at least one of the nine specificities. The Simple method is best at a specificity of 1, 8 and 9, Roget's Thesaurus 1911 is best at 6, Roget's Thesaurus 1987 is best at 4, 5 and 7, and WordNet is best at 2 and 3. The systems based on Roget's and WordNet more or less followed a bell-shaped curve, with the curves of the 1911 and 1987 Thesauri following each other fairly closely and peaking close together. WordNet clearly peaked first and then fell the farthest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence similarity results", "sec_num": "3.3.3" }, { "text": "The best correlation result for the 1987 Roget's Thesaurus is 0.8725 when the mean is 4, the POS. The maximum correlation for the 1911 Thesaurus is 0.8367, where the mean is 5, the Head. The maximum for WordNet is 0.8506, where the mean is 3, or the first hypernym synset. This suggests that the POS and Head are most important for representing text in Roget's Thesaurus, while the first hypernym is most important for representing text using WordNet. For the Simple method, we found a more modest correlation of 0.6969. Several other methods have given very good scores on this data set. For the system in (Li et al., 2006) , where this data set was first introduced, a correlation of 0.816 with the human annotators was achieved. The mean of all human annotators had a score of 0.825, with a standard deviation of 0.072. In (Islam and Inkpen, 2007) , an even better system was proposed, with a correlation of 0.853.", "cite_spans": [ { "start": 608, "end": 625, "text": "(Li et al., 2006)", "ref_id": "BIBREF15" }, { "start": 827, "end": 851, "text": "(Islam and Inkpen, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence similarity results", "sec_num": "3.3.3" }, { "text": "Selecting the mean that gives the best correlation could be considered as training on test data. However, were we simply to have selected a value somewhere in the middle of the graph, as was our original intuition, it would have given an unfair advantage to either version of Roget's Thesaurus over WordNet. Our system shows good results for both versions of Roget's Thesauri and WordNet. The 1987 Thesaurus once again performs better than the 1911 version and than WordNet. Much like the (Miller and Charles, 1991) data set, the one used here is not large enough to determine if any system's improvement is statistically significant.", "cite_spans": [ { "start": 486, "end": 512, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence similarity results", "sec_num": "3.3.3" }, { "text": "The 1987 version of Roget's Thesaurus performed better than the 1911 version on all our tests, but we did not find the differences to be statistically significant. It is particularly interesting that the 1911 Thesaurus performed as well as it did, given that it is almost 100 years old. 
On problems such as semantic word relatedness, the 1911 Thesaurus's performance was fairly close to that of the 1987 Thesaurus, and was comparable to many WordNet-based measures. For problems of identifying synonyms, both versions of Roget's Thesaurus performed relatively well compared to most WordNet-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "4" }, { "text": "We have presented a new method of sentence representation that attempts to leverage the structure found in Roget's Thesaurus and similar lexical ontologies (among them WordNet). We have shown that, given this style of text representation, both versions of Roget's Thesaurus work comparably to WordNet. All three perform fairly well compared to the baseline Simple method. Once again, the 1987 version is superior to the 1911 version, but the 1911 version still works quite well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "4" }, { "text": "We hope to investigate further the representation of sentences and other short texts using Roget's Thesaurus. These kinds of measurements can help with problems such as identifying relevant sentences for extractive text summarization, or possibly paraphrase identification (Dolan et al., 2004) . Another -longer-term -direction of future work could be merging Roget's Thesaurus with WordNet.", "cite_spans": [ { "start": 273, "end": 293, "text": "(Dolan et al., 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "4" }, { "text": "We also plan to study methods of automatically updating the 1911 Roget's Thesaurus with modern words. Some work has been done on adding new terms and relations to WordNet (Snow et al., 2006) and FACTOTUM (O'Hara and Wiebe, 2003) . Similar methods could be used for identifying related terms and assigning them to a correct semicolon group or paragraph.", "cite_spans": [ { "start": 171, "end": 190, "text": "(Snow et al., 2006)", "ref_id": "BIBREF25" }, { "start": 195, "end": 228, "text": "FACTOTUM (O'Hara and Wiebe, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "4" }, { "text": "http://www.gutenberg.org/ebooks/22 2 http://rogets.site.uottawa.ca/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.cs.technion.ac.il/\u02dcgabr/resources/data/ wordsim353/wordsim353.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://faculty.vassar.edu/lowry/rdiff.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.docm.mmu.ac.uk/STAFF/D.McLean/ SentenceResults.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://opennlp.sourceforge.net", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.mit.edu/\u02dcmarkaf/projects/wordnet/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Our research is supported by the Natural Sciences and Engineering Research Council of Canada and the University of Ottawa. We thank Dr. 
Diana Inkpen, Anna Kazantseva and Oana Frunza for many useful comments on the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An adapted lesk algorithm for word sense disambiguation using wordnet", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "Proc. CICLing", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Banerjee and T. Pedersen. 2002. An adapted lesk al- gorithm for word sense disambiguation using wordnet. In Proc. CICLing 2002, pages 136-145.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An investigation of the semantic relations in the roget's thesaurus: Preliminary results", "authors": [ { "first": "P", "middle": [], "last": "Cassidy", "suffix": "" } ], "year": 2000, "venue": "Proc. CICLing", "volume": "", "issue": "", "pages": "181--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Cassidy. 2000. An investigation of the semantic rela- tions in the roget's thesaurus: Preliminary results. In Proc. CICLing 2000, pages 181-204.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources", "authors": [ { "first": "B", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "350--356", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Dolan, C. Quirk, and C. Brockett. 2004. Unsuper- vised construction of large paraphrase corpora: ex- ploiting massively parallel news sources. In Proc. COLING 2004, pages 350-356, Morristown, NJ.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A semantic network of english verbs", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "WordNet: An Electronic Lexical Database", "volume": "", "issue": "", "pages": "69--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum. 1998. A semantic network of english verbs. In C. Fellbaum, editor, WordNet: An Electronic Lexi- cal Database, pages 69-104. MIT Press, Cambridge, MA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Placing search in context: the concept revisited", "authors": [ { "first": "L", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "E", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matias", "suffix": "" }, { "first": "E", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Z", "middle": [], "last": "Solan", "suffix": "" }, { "first": "G", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "E", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2001, "venue": "Proc. 10th International Conf. on World Wide Web", "volume": "", "issue": "", "pages": "406--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2001. Plac- ing search in context: the concept revisited. In Proc. 10th International Conf. on World Wide Web, pages 406-414, New York, NY, USA. 
ACM Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Lexical chains as representation of context for the detection and correction malapropisms", "authors": [ { "first": "G", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "D", "middle": [], "last": "St-Onge", "suffix": "" } ], "year": 1998, "venue": "WordNet: An Electronic Lexical Database", "volume": "", "issue": "", "pages": "305--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Hirst and D. St-Onge. 1998. Lexical chains as rep- resentation of context for the detection and correc- tion malapropisms. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 305-322. MIT Press, Cambridge, MA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic similarity of short texts", "authors": [ { "first": "A", "middle": [], "last": "Islam", "suffix": "" }, { "first": "D", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2007, "venue": "Proc. RANLP 2007", "volume": "", "issue": "", "pages": "291--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Islam and D. Inkpen. 2007. Semantic similarity of short texts. In Proc. RANLP 2007, pages 291-297, September.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Not as easy as it seems: Automating the construction of lexical chains using roget's thesaurus", "authors": [ { "first": "M", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "S", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Proc. 16th Canadian Conf. on Artificial Intelligence", "volume": "", "issue": "", "pages": "544--549", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Jarmasz and S. Szpakowicz. 2003. Not as easy as it seems: Automating the construction of lexical chains using roget's thesaurus. In Proc. 16th Canadian Conf. on Artificial Intelligence, pages 544-549.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Roget's thesaurus and semantic similarity", "authors": [ { "first": "M", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "S", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2004, "venue": "Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003", "volume": "260", "issue": "", "pages": "111--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Jarmasz and S. Szpakowicz. 2004. Roget's thesaurus and semantic similarity. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, Current Issues in Linguistic The- ory, volume 260, pages 111-120. John Benjamins.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "D", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proc. 10th International Conf. on Research on Computational Linguistics", "volume": "", "issue": "", "pages": "19--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proc. 10th International Conf. 
on Research on Com- putational Linguistics, pages 19-33.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Disambiguating hypernym relations for roget's thesaurus", "authors": [ { "first": "A", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "S", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2007, "venue": "Proc. TSD", "volume": "", "issue": "", "pages": "66--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Kennedy and S. Szpakowicz. 2007. Disambiguating hypernym relations for roget's thesaurus. In Proc. TSD 2007, pages 66-75.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Roget's Thesaurus of English Words and Phrases", "authors": [], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Kirkpatrick, editor. 1987. Roget's Thesaurus of En- glish Words and Phrases. Penguin, Harmondsworth, Middlesex, England.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge", "authors": [ { "first": "T", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "S", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Landauer and S. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of ac- quisition, induction, and representation of knowledge. Psychological Review, 104:211-240.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining local context and wordnet sense similiarity for word sense disambiguation", "authors": [ { "first": "C", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "M", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "WordNet: An Electronic Lexical Database", "volume": "", "issue": "", "pages": "265--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Leacock and M. Chodorow. 1998. Combining local context and wordnet sense similiarity for word sense disambiguation. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 265-284. MIT Press, Cambridge, MA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Readers Digest Magazines Canada Limited", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Lewis, editor. 2000 and 2001. Readers Digest, 158(932, 934, 935, 936, 937, 938, 939, 940), 159(944, 948). Readers Digest Magazines Canada Limited.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sentence similarity based on semantic nets and corpus statistics", "authors": [ { "first": "D", "middle": [], "last": "Li", "suffix": "" }, { "first": "Z", "middle": [ "A" ], "last": "Mclean", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Bandar", "suffix": "" }, { "first": "K", "middle": [], "last": "O'shea", "suffix": "" }, { "first": "", "middle": [], "last": "Crockett", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "18", "issue": "8", "pages": "1138--1150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, D. McLean, Z. A. Bandar, J. D. O'Shea, and K. Crockett. 2006. Sentence similarity based on se- mantic nets and corpus statistics. 
IEEE Transactions on Knowledge and Data Engineering, 18(8):1138- 1150.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proc. 15th International Conf. on Machine Learning", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Lin. 1998. An information-theoretic definition of similarity. In Proc. 15th International Conf. on Ma- chine Learning, pages 296-304, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Corpus-based and knowledge-based measures of text semantic similarity", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "C", "middle": [], "last": "Corley", "suffix": "" }, { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proc. 21st National Conf. on Artificial Intelligence", "volume": "", "issue": "", "pages": "775--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea, C. Corley, and C. Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proc. 21st National Conf. on Artificial Intelligence, pages 775-780. AAAI Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Contextual correlates of semantic similarity. Language and Cognitive Process", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "W", "middle": [ "G" ], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "", "volume": "6", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. A. Miller and W. G. Charles. 1991. Contextual corre- lates of semantic similarity. Language and Cognitive Process, 6(1):1-28.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Classifying functional relations in factotum via wordnet hypernym associations", "authors": [ { "first": "T", "middle": [ "P" ], "last": "O'hara", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2003, "venue": "Proc. CICLing", "volume": "", "issue": "", "pages": "347--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. P. O'Hara and J. Wiebe. 2003. Classifying functional relations in factotum via wordnet hypernym associa- tions. In Proc. CICLing 2003), pages 347-359.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Incorporating dictionary and corpus information into a vector measure of semantic relatedness. Master's thesis", "authors": [ { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Patwardhan. 2003. Incorporating dictionary and cor- pus information into a vector measure of semantic re- latedness. Master's thesis, University of Minnesota, Duluth, August.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Wordnet::similarity -measuring the relatedness of concepts", "authors": [ { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "J", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "Proc. 
of the 19th National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1024--1025", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. Wordnet::similarity -measuring the relatedness of concepts. In Proc. of the 19th National Conference on Artificial Intelligence., pages 1024-1025.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Using information content to evaluate semantic similarity", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proc. 14th International Joint Conf. on Artificial Intelligence", "volume": "", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Resnik. 1995. Using information content to evaluate semantic similarity. In Proc. 14th International Joint Conf. on Artificial Intelligence, pages 448-453.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Contextual correlates of synonymy", "authors": [ { "first": "H", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communication of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Rubenstein and J. B. Goodenough. 1965. Contextual correlates of synonymy. Communication of the ACM, 8(10):627-633.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Collins Cobuild English Dictionary for Advanced Learners", "authors": [ { "first": "J", "middle": [], "last": "Sinclair", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Sinclair. 2001. Collins Cobuild English Dictionary for Advanced Learners. Harper Collins Pub.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semantic taxonomy induction from heterogenous evidence", "authors": [ { "first": "R", "middle": [], "last": "Snow", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Proc COLING/ACL 2006", "volume": "", "issue": "", "pages": "801--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proc COLING/ACL 2006, pages 801-808.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Mining the web for synonyms: Pmi-ir versus lsa on toefl", "authors": [ { "first": "P", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2001, "venue": "Proc. 12th European Conf. on Machine Learning", "volume": "", "issue": "", "pages": "491--502", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. In Proc. 12th European Conf. on Machine Learning, pages 491-502.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Verb semantics and lexical selection", "authors": [ { "first": "Z", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 1994, "venue": "Proc. 32nd Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "133--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Wu and M. Palmer. 1994. Verb semantics and lex- ical selection. In Proc. 
32nd Annual Meeting of the ACL, pages 133-138, New Mexico State University, Las Cruces, New Mexico.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Correlation data for all four systems." }, "TABREF1": { "type_str": "table", "num": null, "html": null, "content": "", "text": "Frequencies of each level of the hierarchy in the 1911 and 1987 Thesauri." }, "TABREF3": { "type_str": "table", "num": null, "html": null, "content": "
Frequencies of paragraphs, semicolon groups, total words and unique words by their part of speech; we omitted prefixes and pronouns.
POS | Both | Only 1911 | Only 1987
All | 35343 | 24425 | 65127
N. | 18685 | 11108 | 37502
Vb. | 8618 | 6532 | 15998
Adj. | 8584 | 4155 | 13030
Adv. | 1684 | 1332 | 2460
Int. | 68 | 416 | 315
Phr. | 0 | 2038 | 0
", "text": "" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "content": "", "text": "" }, "TABREF6": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "Pearson's coefficient values when not breaking / breaking phrases up." }, "TABREF8": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "" }, "TABREF10": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "Synonym selection experiments." } } } }