{
"paper_id": "N03-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:23.988329Z"
},
"title": "Unsupervised methods for developing taxonomies by combining syntactic and statistical information",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "dwiddows@csli.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes an unsupervised algorithm for placing unknown words into a taxonomy and evaluates its accuracy on a large and varied sample of words. The algorithm works by first using a large corpus to find semantic neighbors of the unknown word, which we accomplish by combining latent semantic analysis with part-of-speech information. We then place the unknown word in the part of the taxonomy where these neighbors are most concentrated, using a class-labelling algorithm developed especially for this task. This method is used to reconstruct parts of the existing Word-Net database, obtaining results for common nouns, proper nouns and verbs. We evaluate the contribution made by part-of-speech tagging and show that automatic filtering using the class-labelling algorithm gives a fourfold improvement in accuracy.",
"pdf_parse": {
"paper_id": "N03-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes an unsupervised algorithm for placing unknown words into a taxonomy and evaluates its accuracy on a large and varied sample of words. The algorithm works by first using a large corpus to find semantic neighbors of the unknown word, which we accomplish by combining latent semantic analysis with part-of-speech information. We then place the unknown word in the part of the taxonomy where these neighbors are most concentrated, using a class-labelling algorithm developed especially for this task. This method is used to reconstruct parts of the existing Word-Net database, obtaining results for common nouns, proper nouns and verbs. We evaluate the contribution made by part-of-speech tagging and show that automatic filtering using the class-labelling algorithm gives a fourfold improvement in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The importance of automatic methods for enriching lexicons, taxonomies and knowledge bases from free text is well-recognized. For rapidly changing domains such as current affairs, static knowledge bases are inadequate for responding to new developments, and the cost of building and maintaining resources by hand is prohibitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes experiments which develop automatic methods for taking an original taxonomy as a skeleton and fleshing it out with new terms which are discovered in free text. The method is completely automatic and it is completely unsupervised apart from using the original taxonomic skeleton to suggest possible classifications for new terms. We evaluate how accurately our methods can reconstruct the WordNet taxonomy (Fellbaum, 1998) .",
"cite_spans": [
{
"start": 426,
"end": 442,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of enriching the lexical information in a taxonomy can be posed in two complementary ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Firstly, given a particular taxonomic class (such as fruit) one could seek members of this class (such as apple, banana) . This problem is addressed by Riloff and Shepherd (1997) , Roark and Charniak (1998) and more recently by . Secondly, given a particular word (such as apple), one could seek suitable taxonomic classes for describing this object (such as fruit, foodstuff). The work in this paper addresses the second of these questions.",
"cite_spans": [
{
"start": 113,
"end": 120,
"text": "banana)",
"ref_id": null
},
{
"start": 152,
"end": 178,
"text": "Riloff and Shepherd (1997)",
"ref_id": "BIBREF12"
},
{
"start": 181,
"end": 206,
"text": "Roark and Charniak (1998)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of automatically placing new words into a taxonomy has been attempted in various ways for at least ten years (Hearst and Sch\u00fctze, 1993) . The process for placing a word w in a taxonomy T using a corpus C often contains some version of the following stages:",
"cite_spans": [
{
"start": 118,
"end": 144,
"text": "(Hearst and Sch\u00fctze, 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 For a word w, find words from the corpus C whose occurrences are similar to those of w. Consider these the 'corpus-derived neighbors' N (w) of w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Assuming that at least some of these neighbors are already in the taxonomy T , map w to the place in the taxonomy where these neighbors are most concentrated. Hearst and Sch\u00fctze (1993) added 27 words to Word-Net using a version of this process, with a 63% accuracy at assigning new words to one of a number of disjoint WordNet 'classes' produced by a previous algorithm. (Direct comparison with this result is problematic since the number of classes used is not stated.) A more recent example is the top-down algorithm of Alfonseca and Manandhar (2001) , which seeks the node in T which shares the most collocational properties with the word w, adding 42 concepts taken from The Lord of the Rings with an accuracy of 28%.",
"cite_spans": [
{
"start": 161,
"end": 186,
"text": "Hearst and Sch\u00fctze (1993)",
"ref_id": "BIBREF6"
},
{
"start": 524,
"end": 554,
"text": "Alfonseca and Manandhar (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The algorithm as presented above leaves many degrees of freedom and open questions. What methods should be used to obtain the corpus-derived neighbors N (w)? This question is addressed in Section 2. Given a collection of neighbors, how should we define a \"place in the taxonomy where these neighbors are most concentrated?\" This question is addressed in Section 3, which defines a robust class-labelling algorithm for mapping a list of words into a taxonomy. In Section 4 we describe experiments, determining the accuracy with which these methods can be used to reconstruct the WordNet taxonomy. To our knowledge, this is the first such evaluation for a large sample of words. Section 5 discusses related work and other problems to which these techniques can be adapted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Finding semantic neighbors: Combining latent semantic analysis with part-of-speech information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many empirical techniques for recognizing when words are similar in meaning, rooted in the idea that \"you shall know a word by the company it keeps\" (Firth, 1957) . It is certainly the case that words which repeatedly occur with similar companions often have related meanings, and common features used for determining this similarity include shared collocations (Lin, 1999) , co-occurrence in lists of objects and latent semantic analysis (Landauer and Dumais, 1997; Hearst and Sch\u00fctze, 1993) .",
"cite_spans": [
{
"start": 159,
"end": 172,
"text": "(Firth, 1957)",
"ref_id": "BIBREF5"
},
{
"start": 372,
"end": 383,
"text": "(Lin, 1999)",
"ref_id": "BIBREF9"
},
{
"start": 449,
"end": 476,
"text": "(Landauer and Dumais, 1997;",
"ref_id": "BIBREF7"
},
{
"start": 477,
"end": 502,
"text": "Hearst and Sch\u00fctze, 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The method used to obtain semantic neighbors in our experiments was a version of latent semantic analysis, descended from that used by Hearst and Sch\u00fctze (1993, \u00a74) . First, 1000 frequent words were chosen as column labels (after removing stopwords (Baeza-Yates and Ribiero-Neto, 1999, p. 167) ). Other words were assigned co-ordinates determined by the number of times they occured within the same context-window (15 words) as one of the 1000 column-label words in a large corpus. This gave a matrix where every word is represented by a rowvector determined by its co-occurence with frequently occuring, meaningful words. Since this matrix was very sparse, singular value decomposition (known in this context as latent semantic analysis (Landauer and Dumais, 1997) ) was used to reduce the number of dimensions from 1000 to 100. This reduced vector space is called WordSpace (Hearst and Sch\u00fctze, 1993, \u00a74) . Similarity between words was then computed using the cosine similarity measure (Baeza-Yates and Ribiero-Neto, 1999, p. 28) . Such techniques for measuring similarity between words have been shown to capture semantic properties: for example, they have been used successfully for recognizing synonymy (Landauer and Dumais, 1997) and for finding correct translations of individual terms . The corpus used for these experiments was the British National Corpus, which is tagged for parts-of-speech. This enabled us to build syntactic distinctions into WordSpace -instead of just giving a vector for the string test we were able to build separate vectors for the nouns, verbs and adjectives test. An example of the contribu-tion of part-of-speech information to extracting semantic neighbors of the word fire is shown in Table 2 . As can be seen, the noun fire (as in the substance/element) and the verb fire (mainly used to mean firing some sort of weapon) are related to quite different areas of meaning. 
Building a single vector for the string fire confuses this distinction -the neighbors of fire treated just as a string include words related to both the meaning of fire as a noun (more frequent in the BNC) and as a verb.",
"cite_spans": [
{
"start": 135,
"end": 164,
"text": "Hearst and Sch\u00fctze (1993, \u00a74)",
"ref_id": null
},
{
"start": 249,
"end": 293,
"text": "(Baeza-Yates and Ribiero-Neto, 1999, p. 167)",
"ref_id": null
},
{
"start": 738,
"end": 765,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 876,
"end": 906,
"text": "(Hearst and Sch\u00fctze, 1993, \u00a74)",
"ref_id": null
},
{
"start": 988,
"end": 1031,
"text": "(Baeza-Yates and Ribiero-Neto, 1999, p. 28)",
"ref_id": null
},
{
"start": 1208,
"end": 1235,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1724,
"end": 1731,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Part of the goal of our experiments was to investigate the contribution that this part-of-speech information made for mapping words into taxonomies. As far as we are aware, these experiments are the first to investigate the combination of latent semantic indexing with part-ofspeech information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a collection of words or multiword expressions which are semantically related, it is often important to know what these words have in common. All adults with normal language competence and world knowledge are adept at this task -we know that plant, animal and fungus are all living things, and that plant, factory and works are all kinds of buildings. This ability to classify objects, and to work out which of the possible classifications of a given object is appropriate in a particular context, is essential for understanding and reasoning about linguistic meaning. We will refer to this process as class-labelling. The approach demonstrated here uses a hand-built taxonomy to assign class-labels to a collection of similar nouns. As with much work of this nature, the taxonomy used is WordNet (version 1.6), a freely-available broadcoverage lexical database for English (Fellbaum, 1998) . Our algorithm finds the hypernyms which subsume as many as possible of the original nouns, as closely as possible 1 . The concept v is said to be a hypernym of w if w is a kind of v. For this reason this sort of a taxonomy is sometimes referred to as an 'IS A hierarchy'. For example, the possible hypernyms given for the word oak in WordNet 1.6 are oak \u21d2 wood \u21d2 plant material \u21d2 material, stuff \u21d2 substance, matter \u21d2 object, physical object \u21d2 entity, something 1 Another method which could be used for classlabelling is given by the conceptual density algorithm of Agirre and Rigau (1996) , which those authors applied to wordsense disambiguation. A different but related idea is presented by Li and Abe (1998) , who use a principle from information theory to model selectional preferences for verbs using different classes from a taxonomy. Their algorithm and goals are different from ours: we are looking for a single class-label for semantically related words, whereas for modelling selectional preferences several classes may be appropriate. 
Let S be a set of nouns or verbs. If the word w \u2208 S is recognized by WordNet, the WordNet taxonomy assigns to w an ordered set of hypernyms H(w).",
"cite_spans": [
{
"start": 880,
"end": 896,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 1361,
"end": 1362,
"text": "1",
"ref_id": null
},
{
"start": 1465,
"end": 1488,
"text": "Agirre and Rigau (1996)",
"ref_id": "BIBREF0"
},
{
"start": 1593,
"end": 1610,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "Consider the union",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "H = w\u2208S H(w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "This is the set of all hypernyms of any member of S. Our intuition is that the most appropriate class-label for the set S is the hypernym h \u2208 H which subsumes as many as possible of the members of S as closely as possible in the hierarchy. There is a trade-off here between subsuming 'as many as possible' of the members of S, and subsuming them 'as closely as possible'. This line of reasoning can be used to define a whole collection of 'classlabelling algorithms'. For each w \u2208 S and for each h \u2208 H, define the affinity score function \u03b1(w, h) between w and h to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1(w, h) = f (dist(w, h)) if h \u2208 H(w) \u2212g(w, h) if h / \u2208 H(w),",
"eq_num": "(1)"
}
],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "where dist(w, h) is a measure of the distance between w and h, f is some positive, monotonically decreasing function, and g is some positive (possibly constant) function. The function f accords 'positive points' to h if h subsumes w, and the condition that f be monotonically decreasing ensures that h gets more positive points the closer it is to w. The function g subtracts 'penalty points' if h does not subsume w. This function could depend in many ways on w and h -for example, there could be a smaller penalty if h is a very specific concept than if h is a very general concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "The distance measure dist(w, h) could take many forms, and there are already a number of distance measures available to use with WordNet (Budanitsky and Hirst, 2001 ). The easiest method for assigning a distance between words and their hypernyms is to count the number of intervening levels in the taxonomy. This assumes that the distance in specificity between ontological levels is constant, which is of course not the case, a problem addressed by Resnik (1999) .",
"cite_spans": [
{
"start": 137,
"end": 164,
"text": "(Budanitsky and Hirst, 2001",
"ref_id": "BIBREF3"
},
{
"start": 450,
"end": 463,
"text": "Resnik (1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "Given an appropriate affinity score, it is a simple matter to define the best class-label for a collection of objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "Definition 1 Let S be a set of nouns, let H = w\u2208S H(w) be the set of hypernyms of S and let \u03b1(w, h) be an affinity score function as defined in equation 1. The best class-label h max (S) for S is the node h max \u2208 H with the highest total affinity score summed over all the members of S, so h max is the node which gives the maximum score max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "h\u2208H w\u2208S \u03b1(w, h).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "Since H is determined by S, h max is solely determined by the set S and the affinity score \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "In the event that h max is not unique, it is customary to take the most specific class-label available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding class-labels: Mapping collections of words into a taxonomy",
"sec_num": "3"
},
{
"text": "A particularly simple example of this kind of algorithm is used by Hearst and Sch\u00fctze (1993) . First they partition the WordNet taxonomy into a number of disjoint sets which are used as class-labels. Thus each concept has a single 'hypernym', and the 'affinity-score' between a word w and a class h is simply the set membership function, \u03b1(w, h) = 1 if w \u2208 h and 0 otherwise. A collection of words is assigned a class-label by majority voting.",
"cite_spans": [
{
"start": 67,
"end": 92,
"text": "Hearst and Sch\u00fctze (1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
{
"text": "In theory, rather than a class-label for related strings, we would like one for related meanings -the concepts to which the strings refer. To implement this for a set of words, we alter our affinity score function \u03b1 as follows. Let C(w) be the set of concepts to which the word w could refer. (So each c \u2208 C is a possible sense of w.) Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1(w, h) = max c\u2208C(w) f (dist(c, h)) if h \u2208 H(c) \u2212g(w, c) if h / \u2208 H(c),",
"eq_num": "(2)"
}
],
"section": "Ambiguity",
"sec_num": "3.1"
},
{
"text": "This implies that the 'preferred-sense' of w with respect to the possible subsumer h is the sense closest to h. In practice, our class-labelling algorithm implements this preference by computing the affinity score \u03b1(c, h) for all c \u2208 C(w) and only using the best match. This selective approach is much less noisy than simply averaging the probability mass of the word over each possible sense (the technique used in (Li and Abe, 1998) , for example).",
"cite_spans": [
{
"start": 416,
"end": 434,
"text": "(Li and Abe, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.1"
},
{
"text": "The precise choice of class-labelling algorithm depends on the functions f and g in the affinity score function \u03b1 of equation 2. There is some tension here between being correct and being informative: 'correct' but uninformative class-labels (such as entity, something) can be obtained easily by preferring nodes high up in the hierarchy, but since our goal in this work was to classify unknown words in an informative and accurate fashion, the functions f and g had to be chosen to give an appropriate balance. After a variety of heuristic tests, the function f was chosen to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of scoring functions for the class-labelling algorithm",
"sec_num": "3.2"
},
{
"text": "f = 1 dist(w, h) 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of scoring functions for the class-labelling algorithm",
"sec_num": "3.2"
},
{
"text": "where for the distance function dist(w, h) we chose the computationally simple method of counting the number of taxonomic levels between w and h (inclusively to avoid dividing by zero). For the penalty function g we chose the constant g = 0.25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of scoring functions for the class-labelling algorithm",
"sec_num": "3.2"
},
{
"text": "The net effect of choosing the reciprocal-distancesquared and a small constant penalty function was that hypernyms close to the concept in question received magnified credit, but possible class-labels were not penalized too harshly for missing out a node. This made the algorithm simple and robust to noise but with a strong preference for detailed information-bearing class-labels. This configuration of the class-labelling algorithm was used in all the experiments described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of scoring functions for the class-labelling algorithm",
"sec_num": "3.2"
},
{
"text": "To test the success of our approach to placing unknown words into the WordNet taxonomy on a large and significant sample, we designed the following experiment. If the algorithm is successful at placing unknown words in the correct new place in a taxonomy, we would expect it to place already known words in their current position. The experiment to test this worked as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 For a word w, find the neighbors N (w) of w in WordSpace. Remove w itself from this set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 Find the best class-label h max (N (w)) for this set (using Definition 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 Test to see if, according to WordNet, h max is a hypernym of the original word w, and if so check how closely h max subsumes w in the taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "Since our class-labelling algorithm gives a ranked list of possible hypernyms, credit was given for correct classifications in the top 4 places. This algorithm was tested on singular common nouns (PoS-tag nn1), proper nouns (PoS-tag np0) and finite present-tense verbs (PoS-tag vvb). For each of these classes, a random sample of words was selected with corpus frequencies ranging from 1000 to 250. For the noun categories, 600 words were sampled, and for the finite verbs, 420. For each word w, we found semantic neighbors with and without using part-ofspeech information. The same experiments were carried out using 3, 6 and 12 neighbors: we will focus on the results for 3 and 12 neighbors since those for 6 neighbors turned out to be reliably 'somewhere in between' these two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "The best results for reproducing WordNet classifications were obtained for common nouns, and are summarized in Table 2 , which shows the percentage of test words w which were given a class-label h which was a correct hypernym according to WordNet (so for which h \u2208 H(w)). For these words for which a correct classification was found, the 'Height' columns refer to the number of levels in the hierarchy between the target word w and the class-label h. If the algorithm failed to find a class-label h which is a hypernym of w, the result was counted as 'Wrong'. The 'Missing' column records the number of words in the sample which are not in WordNet at all.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results for Common Nouns",
"sec_num": null
},
{
"text": "The following trends are apparent. For finding any correct class-label, the best results were obtained by taking 12 neighbors and using part-of-speech information, which found a correct classification for 485/591 = 82% of the common nouns that were included in Word-Net. This compares favorably with previous experiments, though as stated earlier it is difficult to be sure we are comparing like with like. Finding the hypernym which immediately subsumes w (with no intervening nodes) exactly reproduces a classification given by WordNet, and as such was taken to be a complete success. Taking fewer neighbors and using PoS-information both improved this success rate, the best accuracy obtained being 86/591 = 15%. However, this configuration actually gave the worst results at obtaining a correct classification overall. In conclusion, taking more neighbors makes the chances of obtaining some correct classification for a word w greater, but taking fewer neighbors increases the chances of 'hitting the nail on the head'. The use of partof-speech information reliably increases the chances of correctly obtaining both exact and broadly correct classifications, though careful tuning is still necessary to obtain optimal results for either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Common Nouns",
"sec_num": null
},
{
"text": "The results for proper nouns and verbs (also in Table 2 ) demonstrate some interesting problems. On the whole, the mapping is less reliable than for common nouns, at least when it comes to reconstructing WordNet as it currently stands.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "Proper nouns are rightly recognized as one of the categories where automatic methods for lexical acquisition are most important (Hearst and Sch\u00fctze, 1993, \u00a74) . It is impossible for a single knowledge base to keep up-todate with all possible meanings of proper names, and this would be undesirable without considerable filtering abilities because proper names are often domain-specific.",
"cite_spans": [
{
"start": 128,
"end": 158,
"text": "(Hearst and Sch\u00fctze, 1993, \u00a74)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "Ih our experiments, the best results for proper nouns were those obtained using 12 neighbors, where a correct classification was found for 206/266 = 77% of the proper nouns that were included in WordNet, using no part-of-speech information. Part-of-speech information still helps for mapping proper nouns into exactly the right place, but in general degrades performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "Several of the proper names tested are geographical, and in the BNC they often refer to regions of the British Isles which are not in WordNet. For example, hampshire is labelled as a territorial division, which as an English county it certainly is, but in WordNet hampshire is instead a hyponym of domestic sheep. For many of the proper names which our evaluation labelled as 'wrongly classified', the classification was in fact correct but a different meaning from those given in WordNet. The challenge for these situations is how to recognize when corpus methods give a correct meaning which is different from the meaning already listed in a knowledge base. Many of these meanings will be systematically related (such as the way a region is used to name an item or product from that region, as with the hampshire example above) by generative processes which are becoming well understood by theoretical linguists (Pustejovsky, 1995) , and linguistic theory may help our statistical algorithms considerably by predicting what sort of new meanings we might expect a known word to assume through metonymy and systematic polysemy.",
"cite_spans": [
{
"start": 914,
"end": 933,
"text": "(Pustejovsky, 1995)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "Typical first names of people such as lisa and ralph almost always have neighbors which are also first names (usually of the same gender), but these words are not represented in WordNet. This lexical category is ripe for automatic discovery: preliminary experiments using the two names above as 'seed-words' (Roark and Charniak, 1998; show that by taking a few known examples, finding neighbors and removing words which are already in WordNet, we can collect first names of the same gender with at least 90% accuracy.",
"cite_spans": [
{
"start": 308,
"end": 334,
"text": "(Roark and Charniak, 1998;",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "Verbs pose special problems for knowledge bases. The usefulness of an IS A hierarchy for pinpointing information and enabling inference is much less clear-cut than for nouns. For example, sleeping does entail breathing and arriving does imply moving, but the aspectual properties, argument structure and case roles may all be different. The more restrictive definition of troponymy is used in WordNet to describe those properties of verbs that are inherited through the taxonomy (Fellbaum, 1998, Ch 3) . In practice, the taxonomy of verbs in WordNet tends to have fewer levels and many more branches than the noun taxonomy. This led to problems for our classlabelling algorithm -class-labels obtained for the verb play included exhaust, deploy, move and behave, all of which are 'correct' hypernyms according to WordNet, while possible class-labels obtained for the verb appeal included keep, defend, reassert and examine, all of which were marked 'wrong'. For our methods, the WordNet taxonomy as it stands appears to give much less reliable evaluation criteria for verbs than for common nouns. It is also plausible that similarity measures based upon simple co-occurence are better for modelling similarity between nominals than between verbs, an observation which is compatible with psychological experiments on word-association (Fellbaum, 1998, p. 90) .",
"cite_spans": [
{
"start": 479,
"end": 501,
"text": "(Fellbaum, 1998, Ch 3)",
"ref_id": null
},
{
"start": 1332,
"end": 1355,
"text": "(Fellbaum, 1998, p. 90)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "In our experiments, the best results for verbs were clearly those obtained using 12 neighbors and no partof-speech information, for which some correct classification was found for 273/406 = 59% of the verbs that were included in WordNet, and which achieved better results than those using part-of-speech information even for finding exact classifications. The shallowness of the taxonomy for verbs means that most classifications which were successful at all were quite close to the word in question, which should be taken into account when interpreting the results in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
{
"text": "As we have seen, part-of-speech information degraded performance overall for proper nouns and verbs. This may be because combining all uses of a particular wordform into a single vector is less prone to problems of data sparseness, especially if these word-forms are semantically related in spite of part-of-speech differences 2 . It is also plausible that discarding part-of-speech information should improve the classification of verbs for the following reason. Classification using corpus-derived neighbors is markedly better for common nouns than for verbs, and most of the verbs in our sample (57%) also occur as common nouns in WordSpace. (In contrast, only 13% of our common nouns also occur as verbs, a reliable asymmetry for English.) Most of these noun senses are semantically related in some way to the corresponding verbs. Since using neighboring words for classification is demonstrably more reliable for nouns than for verbs, putting these parts-of-speech together in a single vector in WordSpace might be expected to improve performance for verbs but degrade it for nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Proper Nouns and Verbs",
"sec_num": null
},
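A minimal sketch of this data-sparseness argument, assuming a toy tagged corpus and a one-word context window (an illustration only, not the paper's WordSpace construction): keying co-occurrence vectors by bare word-form instead of word_POS pools the contexts of a word's noun and verb uses into a single, denser vector.

```python
from collections import Counter

# Toy tagged corpus; the tokens and tags here are invented for illustration.
tagged = [("play", "VB"), ("music", "NN"), ("play", "NN"), ("stage", "NN"),
          ("play", "VB"), ("game", "NN")]

def cooccurrence(tokens, window=1, use_pos=True):
    """Build co-occurrence count vectors keyed by word_POS or by bare word-form."""
    vecs = {}
    for i, (w, pos) in enumerate(tokens):
        key = f"{w}_{pos}" if use_pos else w
        vec = vecs.setdefault(key, Counter())
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vec[tokens[j][0]] += 1
    return vecs

split = cooccurrence(tagged, use_pos=True)    # separate vectors per part of speech
merged = cooccurrence(tagged, use_pos=False)  # one vector per word-form

# The merged vector for "play" pools contexts from its noun and verb uses,
# so it rests on more evidence (5 context counts instead of 3).
print(sum(split["play_VB"].values()), sum(merged["play"].values()))  # → 3 5
```

Whether the pooling helps or hurts then depends on whether the conflated senses are related, which is exactly the asymmetry between nouns and verbs observed above.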
{
"text": "One of the benefits of the class-labelling algorithm (Definition 1) presented in this paper is that it returns not just class-labels but an affinity score measuring how well each class-label describes the class of objects in question. The affinity score turns out to be signficantly correlated with the likelihood of obtaining a successful classification. This can be seen very clearly in Table 3 , which shows the average affinity score for correct class-labels of different heights above the target word, and for incorrect class-labels -as a rule, correct and informative classlabels have significantly higher affinity scores than incorrect class-labels. It follows that the affinity score can be used as an indicator of success, and so filtering out classlabels with poor scores can be used as a technique for improving accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Filtering using Affinity scores",
"sec_num": null
},
{
"text": "To test this, we repeated our experiments using 3 neighbors and this time only using class-labels with an affinity score greater than 0.75, the rest being marked 'unknown'. Without filtering, there were 1143 successful and 1380 unsuccessful outcomes: with filtering, these numbers changed to 660 and 184 respectively. Filtering discarded some 87% of the incorrect labels and kept more than half of the correct ones, which amounts to at least a fourfold improvement in accuracy. The improvement was particularly dramatic for proper nouns, where filtering removed 270 out of 283 incorrect results and still retained half of the correct ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering using Affinity scores",
"sec_num": null
},
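The arithmetic behind these figures can be checked directly from the counts quoted above. One reading of the "fourfold" claim (an interpretation we assume here, not one spelled out in the text) is the improvement in the ratio of correct to incorrect labels:

```python
# Counts reported in the text: outcomes without and with the 0.75 affinity filter.
correct_before, wrong_before = 1143, 1380
correct_after, wrong_after = 660, 184

discard_rate = 1 - wrong_after / wrong_before  # fraction of incorrect labels removed
keep_rate = correct_after / correct_before     # fraction of correct labels retained

# Ratio of correct to incorrect labels, before and after filtering.
odds_before = correct_before / wrong_before
odds_after = correct_after / wrong_after
improvement = odds_after / odds_before

print(f"discarded {discard_rate:.0%} of incorrect labels")   # → 87%
print(f"kept {keep_rate:.0%} of correct labels")             # → 58%
print(f"correct:incorrect ratio improved {improvement:.1f}x")  # → 4.3x
```

The ~4.3x improvement in this ratio is consistent with the "at least fourfold" figure in the text.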
{
"text": "For common nouns, where WordNet is most reliable, our mapping algorithm performs comparatively well, accurately classifying several words and finding some correct information about most others. The optimum number of neighbors is smaller if we want to try for an exact classification and larger if we want information that is broadly reliable. Part-of-speech information noticeably improves the process of both broad and narrow classification. For proper names, many classifications are correct, and many which are absent or incorrect according to WordNet are in fact correct meanings which should be added to the knowledge base for (at least) the domain in question. Results for verbs are more difficult to interpret: reasons for this might include the shallowness and breadth of the WordNet verb hierarchy, the suitability of our WordSpace similarity measure, and many theoretical issues which should be taken into account for a successful approach to the classification of verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "Filtering using the affinity score from the classlabelling algorithm can be used to dramatically increase performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "The experiments in this paper describe one combination of algorithms for lexical acquisition: both the finding of semantic neighbors and the process of class-labelling could take many alternative forms, and an exhaustive evaluation of such combinations is far beyond the scope of this paper. Various mathematical models and distance measures are available for modelling semantic proximity, and more detailed linguistic preprocessing (such as chunking, parsing and morphology) could be used in a variety of ways. As an initial step, the way the granularity of part-of-speech classification affects our results for lexical acquistion will be investigated. The class-labelling algorithm could be adapted to use more sensitive measures of distance (Budanitsky and Hirst, 2001) , and correlations between taxonomic distance and WordSpace similarity used as a filter.",
"cite_spans": [
{
"start": 744,
"end": 772,
"text": "(Budanitsky and Hirst, 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and future directions",
"sec_num": "5"
},
{
"text": "The coverage and accuracy of the initial taxonomy we are hoping to enrich has a great influence on success rates for our methods as they stand. Since these are precisely the aspects of the taxonomy we are hoping to improve, this raises the question of whether we can use automatically obtained hypernyms as well as the hand-built ones to help classification. This could be tested by randomly removing many nodes from WordNet before we begin, and measuring the effect of using automatically derived classifications for some of these words (possibly those with high confidence scores) to help with the subsequent classification of others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and future directions",
"sec_num": "5"
},
{
"text": "The use of semantic neighbors and class-labelling for computing with meaning go far beyond the experimental set up for lexical acquisition described in this paper -for example, Resnik (1999) used the idea of a most informative subsuming node (which can be regarded as a kind of class-label) for disambiguation, as did Agirre and Rigau (1996) with the conceptual density algorithm. Taking a whole domain as a 'context', this approach to disambiguation can be used for lexical tuning. For example, using the Ohsumed corpus of medical abstracts, the top few neighbors of operation are amputation, disease, therapy and resection. Our algorithm gives medical care, medical aid and therapy as possible classlabels for this set, which successfully picks out the sense of operation which is most important for the medical domain.",
"cite_spans": [
{
"start": 177,
"end": 190,
"text": "Resnik (1999)",
"ref_id": "BIBREF11"
},
{
"start": 318,
"end": 341,
"text": "Agirre and Rigau (1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and future directions",
"sec_num": "5"
},
{
"text": "The level of detail which is appropriate for defining and grouping terms depends very much on the domain in question. For example, the immediate hypernyms offered by WordNet for the word trout include fish, foodstuff, salmonid, malacopterygian, teleost fish, food fish, saltwater fish Many of these classifications are inappropriately finegrained for many circumstances. To find a degree of abstraction which is suitable for the way trout is used in the BNC, we found its semantic neighbors which include herring swordfish turbot salmon tuna. The highestscoring class-labels for this set are 2.911 saltwater fish 2.600 food fish 1.580 fish 1.400 scombroid, scombroid 0.972 teleost fish",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and future directions",
"sec_num": "5"
},
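The paper's actual scoring function (Definition 1) is not reproduced in this section, but the flavor of class-labelling can be sketched as a depth-discounted vote: each neighbor votes for all of its ancestors, with weight decaying by taxonomic distance. The hypernym links and the 0.5 discount below are invented for illustration (the toy taxonomy omits saltwater fish, so food fish wins here).

```python
from collections import defaultdict

# Toy hypernym links (child -> parent), loosely following WordNet's fish region.
# These links are hand-picked for this example, not extracted from WordNet.
parent = {
    "trout": "salmonid", "salmon": "salmonid", "salmonid": "food fish",
    "herring": "food fish", "tuna": "scombroid", "scombroid": "food fish",
    "turbot": "flatfish", "flatfish": "food fish",
    "swordfish": "food fish", "food fish": "fish",
}

def ancestors(word):
    """Yield (ancestor, depth) pairs walking up the toy taxonomy."""
    depth = 0
    while word in parent:
        word = parent[word]
        depth += 1
        yield word, depth

def class_labels(neighbors, discount=0.5):
    """Score candidate class-labels: each neighbor votes for its ancestors,
    with geometrically decaying weight by depth (a stand-in for the paper's
    affinity score, not Definition 1 itself)."""
    score = defaultdict(float)
    for n in neighbors:
        for anc, d in ancestors(n):
            score[anc] += discount ** d
    return sorted(score.items(), key=lambda kv: -kv[1])

labels = class_labels(["herring", "swordfish", "turbot", "salmon", "tuna"])
print(labels[0])  # → ('food fish', 1.75)
```

As in the trout example, the mid-level label food fish outscores the maximally general fish: a very general ancestor subsumes every neighbor but only at a heavily discounted depth.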
{
"text": "The preferred labels are the ones most humans would answer if asked what a trout is. This process can be used to select the concepts from an ontology which are appropriate to a particular domain in a completely unsupervised fashion, using only the documents from that domain whose meanings we wish to describe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and future directions",
"sec_num": "5"
},
{
"text": "Interactive demonstrations of the class-labelling algorithm and WordSpace are available on the web at http://infomap.stanford.edu/classes and http://infomap.stanford.edu/webdemo. An interface to WordSpace incorporating the part-of-speech information is currently under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demonstration",
"sec_num": null
},
{
"text": "This issue is reminiscent of the question of whether stemming improves or harms information retrieval(Baeza-Yates and Ribiero-Neto, 1999) -the received wisdom is that stemming (at best) improves recall at the expense of precision and our findings for proper nouns are consistent with this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by the Research Collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University, and by EC/NSF grant IST-1999-11438 for the MUCHMORE project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word sense disambiguation using conceptual density",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COL-ING'96",
"volume": "",
"issue": "",
"pages": "16--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Agirre and G. Rigau. 1996. Word sense disambigua- tion using conceptual density. In Proceedings of COL- ING'96, pages 16-22, Copenhagen, Denmark.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving an ontology refinement method with hyponymy patterns",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2001,
"venue": "Third International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "235--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Alfonseca and Suresh Manandhar. 2001. Im- proving an ontology refinement method with hy- ponymy patterns. In Third International Conference on Language Resources and Evaluation, pages 235- 239, Las Palmas, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modern Information Retrieval",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Baeza",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Berthier",
"middle": [],
"last": "Ribiero-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Baeza-Yates and Berthier Ribiero-Neto. 1999. Modern Information Retrieval. Addison Wesley / ACM press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semantic distance in wordnet: An experimental, application-oriented evaluation of five measures",
"authors": [
{
"first": "A",
"middle": [],
"last": "Budanitsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2001,
"venue": "Workshop on WordNet and Other Lexical Resources",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Budanitsky and G. Hirst. 2001. Semantic distance in wordnet: An experimental, application-oriented evalu- ation of five measures. In Workshop on WordNet and Other Lexical Resources, Pittsburgh, PA. NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet: An electronic lexical database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An elec- tronic lexical database. MIT press, Cambridge MA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A synopsis of linguistic theory 1930-1955. Studies in Linguistic Analysis",
"authors": [
{
"first": "J",
"middle": [],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Philological Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Firth. 1957. A synopsis of linguistic theory 1930- 1955. Studies in Linguistic Analysis, Philological So- ciety, Oxford, reprinted in Palmer, F. (ed. 1968) Se- lected Papers of J. R. Firth, Longman, Harlow.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Customizing a lexicon to better suit a computational task",
"authors": [
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1993,
"venue": "ACL SIGLEX Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti Hearst and Hinrich Sch\u00fctze. 1993. Customizing a lexicon to better suit a computational task. In ACL SIGLEX Workshop, Columbus, Ohio.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A solution to plato's problem: The latent semantic analysis theory of acquisition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Landauer and S. Dumais. 1997. A solution to plato's problem: The latent semantic analysis theory of acqui- sition. Psychological Review, 104(2):211-240.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generalizing case frames using a thesaurus and the mdl principle",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Abe",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "2",
"pages": "217--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the mdl principle. Computa- tional Linguistics, 24(2):217-244.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic identification of noncompositional phrases",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL:1999",
"volume": "",
"issue": "",
"pages": "317--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1999. Automatic identification of non- compositional phrases. In ACL:1999, pages 317-324.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 1995. The Generative Lexicon. MIT press, Cambridge, MA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of artificial intelligence research",
"volume": "11",
"issue": "",
"pages": "93--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of artificial intelligence research, 11:93-130.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A corpus-based approach for building semantic lexicons",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Shepherd",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "117--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Jessica Shepherd. 1997. A corpus-based approach for building semantic lexicons. In Claire Cardie and Ralph Weischedel, editors, Proceedings of the Second Conference on Empirical Methods in Natu- ral Language Processing, pages 117-124. Association for Computational Linguistics, Somerset, New Jersey.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Noun-phrase co-occurence statistics for semi-automatic semantic lexicon construction",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1998,
"venue": "COLING-ACL",
"volume": "",
"issue": "",
"pages": "1110--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark and Eugene Charniak. 1998. Noun-phrase co-occurence statistics for semi-automatic semantic lexicon construction. In COLING-ACL, pages 1110- 1116.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A graph model for unsupervised lexical acquisition",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Beate",
"middle": [],
"last": "Dorow",
"suffix": ""
}
],
"year": 2002,
"venue": "19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1093--1099",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In 19th In- ternational Conference on Computational Linguistics, pages 1093-1099, Taipei, Taiwan, August.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using parallel corpora to enrich multilingual lexical resources",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Beate",
"middle": [],
"last": "Dorow",
"suffix": ""
},
{
"first": "Chiu-Ki",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2002,
"venue": "Third International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "240--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows, Beate Dorow, and Chiu-Ki Chan. 2002. Using parallel corpora to enrich multilingual lexical resources. In Third International Conference on Language Resources and Evaluation, pages 240- 245, Las Palmas, Spain, May.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Height</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>Wrong</td></tr><tr><td colspan=\"7\">Common Nouns 0.799 0.905 0.785 0.858 0.671 0.671</td><td>0.569</td></tr><tr><td>Proper Nouns</td><td colspan=\"6\">1.625 0.688 0.350 0.581 0.683 0.430</td><td>0.529</td></tr><tr><td>Verbs</td><td colspan=\"6\">1.062 1.248 1.095 1.103 1.143 0.750</td><td>0.669</td></tr></table>",
"text": "Percentage of words which were automatically assigned class-labels which subsume them in the WordNet taxonomy, showing the number of taxonomic levels between the target word and the class-label",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table/>",
"text": "Average affinity score of class-labels for successful and unsuccessful classifications",
"type_str": "table",
"html": null
}
}
}
}