|
{ |
|
"paper_id": "N10-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:51:16.555179Z" |
|
}, |
|
"title": "Taxonomy Learning Using Word Sense Induction", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Klapaftis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of York", |
|
"location": { |
|
"postCode": "YO10 5DD", |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of York", |
|
"location": { |
|
"postCode": "YO10 5DD", |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "suresh@cs.york.ac.uk" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Taxonomies are an important resource for a variety of Natural Language Processing (NLP) applications. Despite this, the current state-of-the-art methods in taxonomy learning have disregarded word polysemy, in effect, developing taxonomies that conflate word senses. In this paper, we present an unsupervised method that builds a taxonomy of senses learned automatically from an unlabelled corpus. Our evaluation on two WordNet-derived taxonomies shows that the learned taxonomies capture a higher number of correct taxonomic relations compared to those produced by traditional distributional similarity approaches that merge senses by grouping the features of each word into a single vector.", |
|
"pdf_parse": { |
|
"paper_id": "N10-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Taxonomies are an important resource for a variety of Natural Language Processing (NLP) applications. Despite this, the current state-of-the-art methods in taxonomy learning have disregarded word polysemy, in effect, developing taxonomies that conflate word senses. In this paper, we present an unsupervised method that builds a taxonomy of senses learned automatically from an unlabelled corpus. Our evaluation on two WordNet-derived taxonomies shows that the learned taxonomies capture a higher number of correct taxonomic relations compared to those produced by traditional distributional similarity approaches that merge senses by grouping the features of each word into a single vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A concept or a sense, s, can be defined as the meaning of a word or a multiword expression. A concept s can be linguistically realised by more than one word while at the same time a word w can be the linguistic realisation of more than one concept. Given a set of concepts S, taxonomy learning is the task of hierarchically classifying the elements in S in an automatic manner. For example, consider a set of concepts linguistically realised by the words/multiword expressions LAN, computer network, internet, meshwork, gauze, snood. Taxonomy learning methods produce taxonomies, such as the ones shown in Figures 1 (a) and 1 (b).", |
|
"cite_spans": [ |
|
{ |
|
"start": 606, |
|
"end": 619, |
|
"text": "Figures 1 (a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "By observing Figure 1 (a), we can express IS-A statements, such as Internet IS-A Computer Network etc. However, the same does not apply to the Figure 1 (b), since this taxonomy is not fully labelled. Despite this, its hierarchical organisation clearly shows that the concepts are divided into groups, which are further subdivided into subgroups and so forth, until we reach a level where each concept belongs to its own group. Unlabelled taxonomies are typically produced by agglomerative hierarchical clustering algorithms (King, 1967; Sneath and Sokal, 1973) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 536, |
|
"text": "(King, 1967;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 560, |
|
"text": "Sneath and Sokal, 1973)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 21, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 151, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The knowledge encoded in taxonomies can be utilised in a range of NLP applications. For instance, taxonomies can be used in information retrieval to expand a user query with semantically related words or to enhance document representation by abstracting from plain words and adding conceptual information (Cimiano, 2006) . WordNet's (Fellbaum, 1998) taxonomic relations have also been used in Word Sense Disambiguation (WSD) (Navigli and Velardi, 2004b) . In named entity recognition, methods relying on gazetteers could make use of automatically acquired taxonomies (Cimiano, 2006) , while question answering systems have also benefited (Moldovan and Novischi, 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 320, |
|
"text": "(Cimiano, 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 349, |
|
"text": "WordNet's (Fellbaum, 1998)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 453, |
|
"text": "(Navigli and Velardi, 2004b)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 582, |
|
"text": "(Cimiano, 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 667, |
|
"text": "(Moldovan and Novischi, 2002)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite the wide uses of taxonomies, the majority of methods disregard or do not deal effectively with word polysemy, in effect, developing taxonomies that conflate the senses of words (see Section 2). In this work, we show that Word Sense Induction (WSI) can be effectively employed to address this limitation of existing methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a novel method that employs WSI to generate the different senses of a set of target words from an unlabelled corpus and then produces a taxonomy of senses using Hierarchical Agglomerative Clustering (HAC) (King, 1967; Sneath and Sokal, 1973) . We evaluate our method on two WordNetderived sub-taxonomies and show that our method leads to the development of concept hierarchies that capture a higher number of correct taxonomic relations in comparison to those generated by current distributional similarity approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 228, |
|
"text": "(King, 1967;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 252, |
|
"text": "Sneath and Sokal, 1973)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Initial research on taxonomy learning focused on identifying in a given text lexico-syntactic patterns that suggest hyponymy relations (Hearst, 1992) . For instance, the pattern N P 0 such as N P 1 ,. . . ,N P n suggests that N P 0 is a hypernym of N P i . For example, given the phrase Fruits, such as oranges, apples,..., the above pattern would suggest that fruit is a hypernym of orange and apple. These patternbased approaches operate at the word level by learning lexical relations between words rather than between senses of words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 149, |
|
"text": "(Hearst, 1992)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the same spirit, other work attempted to exploit the regularities of dictionary entries to identify hyponymy relations (Amsler, 1981) . For example in WordNet, WAN is defined as a computer network that spans . . . . Hence, one can easily induce that WAN is a hyponym of computer network by assuming that the first noun phrase in the definition is a hypernym of the target word. These approaches learn lexical relations at the sense level since dictionaries separate the senses of a word. However this would be true if and only if the glosses of the dictionaries were sense-annotated, which is not the case for the majority of electronic dictionaries (Cimiano, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 136, |
|
"text": "(Amsler, 1981)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 653, |
|
"end": 668, |
|
"text": "(Cimiano, 2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another limitation is that taxonomies are built according to the sense distinctions present in dictionaries and not according to the actual use of words in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The majority of taxonomy learning approaches are based on the distributional hypothesis (Harris, 1968) . Typically, distributional similarity methods (Cimiano et al., 2004; Cimiano et al., 2005; Faure and N\u00e9dellec, 1998; Reinberger and Spyns, 2004; Caraballo, 1999) utilise syntactic dependencies such as subject/verb, object/verb relations, conjunctive and appositive constructions and others. These dependencies are used to extract the features that serve as the dimensions of the vector space. Each target noun is then represented as a vector of extracted features where the frequency of co-occurrence of the target noun with each feature is used to calculate the weight of that feature. The constructed vectors are the input to hierarchical clustering or formal concept analysis (Ganter and Wille, 1999) to produce a taxonomy. These approaches assume that a target noun is monosemous, creating one vector of features for each target noun. This limitation can lead to a number of problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 102, |
|
"text": "(Harris, 1968)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 172, |
|
"text": "(Cimiano et al., 2004;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 194, |
|
"text": "Cimiano et al., 2005;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 220, |
|
"text": "Faure and N\u00e9dellec, 1998;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 248, |
|
"text": "Reinberger and Spyns, 2004;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 265, |
|
"text": "Caraballo, 1999)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 783, |
|
"end": 807, |
|
"text": "(Ganter and Wille, 1999)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Firstly, the constructed taxonomies might be biased towards the inclusion of taxonomic relationships between the most frequent senses of target nouns, ignoring interesting taxonomic relations where less frequent senses are present. For example, consider the word house. Current distributional similarity methods would possibly capture the hyponyms of its Most Frequent Sense (MFS 1 ), while ignoring the hyponyms of less frequent senses of house, e.g. casino, theater, etc. Given that word senses typically follow a Zipf distribution, these methods construct vectors dominated by the MFS of words. This bias significantly degrades the usefulness of learned taxonomies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Secondly, given that distributional similarity approaches rely on the computation of pairwise similarities between target words, merging their senses into a single vector might lead to unreliable similarity estimates. For example, merging the features of the different senses of house could provide a lower similarity with its monosemous hyponym beach house, since only the first sense of house is related to beach house. This problem might lead both to the inclusion of incorrect taxonomic relations and to the loss of correct ones. In our work, we aim to overcome these drawbacks by identifying the different senses with which target words appear in text and then building a hierarchy of the identified senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Soft clustering approaches (Reinberger and Spyns, 2004; Reinberger et al., 2003) have also been applied to taxonomy learning to deal with polysemy. These methods associate each verb with a vector of features, where each feature is a noun appearing as a subject or object of that verb. That way a noun can appear in different vectors, hence in different clusters during hierarchical clustering as a result of its polysemy. However, the underlying assumption is that a verb is monosemous with respect to its associated vector of nouns. This assumption is not always valid and can cause the problems mentioned above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 55, |
|
"text": "(Reinberger and Spyns, 2004;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 80, |
|
"text": "Reinberger et al., 2003)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Other work in taxonomy learning exploits the head/modifier relationships to create taxonomic relations (Buitelaar et al., 2004; Hwang, 1999; S\u00e1nchez and Moreno, 2005) . These relations are used to create: (1) a class (concept) for each head, and (2) subclasses by adding nominal or adjectival modifiers. For example, credit card IS-A card. The corresponding hyponymy relations are learned at the lexical level disregarding word polysemy. Some of these approaches identified the problem of polysemy and applied sense disambiguation with respect to WordNet in order to capture the different senses of a target term (Navigli and Velardi, 2004b; Navigli and Velardi, 2004a) . Specifically, the taxonomy built by exploiting head/modifier relations was modified according to WordNet's hyponymy relations between senses of disambiguated terms. One important deficiency of using sense disambiguation is that dictionaries miss many domain-specific senses. Additionally, the fixed-list of senses paradigm prohibits learning word senses according to their use in context. The use of sense induction we propose in this paper aims to overcome these limitations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 127, |
|
"text": "(Buitelaar et al., 2004;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 140, |
|
"text": "Hwang, 1999;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 166, |
|
"text": "S\u00e1nchez and Moreno, 2005)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 641, |
|
"text": "(Navigli and Velardi, 2004b;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 669, |
|
"text": "Navigli and Velardi, 2004a)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given a set of words W , a WSI method is applied to each w i \u2208 W (Section 3.1). The outcome of the first stage is a set of senses, S, where each s w i \u2208 S denotes the i-th sense of word w \u2208 W . This set of senses is the input to hierarchical clustering that produces a hierarchy of senses (Section 3.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "WSI is the task of identifying the senses of a target word in a given text. Recent WSI methods were evaluated under the framework of SemEval-2007 WSI task (SWSI) (Agirre and Soroa, 2007) . The evaluation framework defines two types of assessment, i.e. evaluation in: (1) a clustering and (2) a WSD setting. Based on this evaluation, we selected the method of Klapaftis & Manandhar (2008) (henceforth referred to as KM) that achieves high F-score in both evaluation schemes as compared to the systems participating in SWSI. We briefly describe KM, mentioning its parameters used in our evaluation (Section 4). Figures 2 (a) and 2 (b) describe the different steps for inducing the senses of the target words network and LAN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 186, |
|
"text": "(Agirre and Soroa, 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 387, |
|
"text": "Klapaftis & Manandhar (2008)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Corpus preprocessing: The input to KM is a base corpus bc, in which the target word w appears in each paragraph. In Figure 2 (a), the base corpus consists of the paragraphs A, B, C and D. The aim of this stage is to capture nouns contextually related to w. Initially, the target word is removed from bc, part-of-speech tagging is applied to each paragraph, only nouns are kept and lemmatised. In the next step, the distribution of each noun is compared to the distribution of the same noun in a reference corpus 2 using the log-likelihood ratio (G 2 ) (Dunning, 1993). Nouns with a G 2 below a prespecified threshold (parameter p 1 ) are removed from each paragraph. Figure 2 (a) shows the remaining nouns for each paragraph of bc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 124, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 679, |
|
"text": "Figure 2 (a)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Graph creation & clustering: In the setting of KM, a collocation is a juxtaposition of two nouns within the same paragraph. Thus, each noun is combined with any other noun, yielding a total of N(N-1)/2 collocations for a paragraph with N nouns. Each collocation, c ij , is assigned a weight that measures the relative frequency of two nouns co-occurring. This weight is the average of the conditional probabilities p(n i |n j ) and p(n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "j |n i ), where p(n i |n j ) = f (c ij ) / f (n j ), f (c ij )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is the number of paragraphs nouns n i , n j co-occur and f (n j ) is the number of paragraphs in which n j appears. Collocations are filtered with respect to their frequency (parameter p 2 ) and weight (parameter p 3 ). Each retained collocation is represented as a vertex. Edges between vertices are present, if two collocations co-occur in one or more paragraphs. Figure 2 (a) shows that this process has generated 24 collocations for the target word network. On the top right of the figure we also observe the collocations associated with each paragraph.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 374, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the next step, a smoothing technique is applied to discover new edges between vertices. The weight applied to each edge connecting vertices v i and v j (collocations c ab , c de ) is the maximum of their conditional probabilities (max(p(c ab |c de ), p(c de |c ab ))). Finally, the graph is clustered using Chinese whispers (Biemann, 2006) . The final output is a set of senses, each one represented by a set of contextually related collocations. In Figure 2 , we generated two senses for network and one sense for LAN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 342, |
|
"text": "(Biemann, 2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 461, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word sense induction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given the set of senses S, our task at this point is to hierarchically classify the senses using HAC. Consider for example the words network and LAN, and let us assume that the WSI process has generated the senses in Figures 2 (a) and 2 (b). HAC operates by treating each sense as a singleton cluster and then successively merging the most similar clusters according to a pre-defined similarity function. This process iterates until all clusters have been merged into a single cluster taken to be the root.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 230, |
|
"text": "Figures 2 (a)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hierarchical clustering of senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To calculate the pairwise similarities between senses we exploit the attributes that represent each sense, i.e. their collocations. Let BC be the corpus resulting from the union of the base corpora of all words in W . In our example, BC would consist of the paragraphs, in which the words network and LAN appear, i.e. A, B, ..., G. An induced sense tags a paragraph, if one or more of its collocations appear in that paragraph. Thus, each induced sense is associated with a set of paragraph labels that denote the paragraphs tagged by that sense. Figure 3 shows the paragraph labels tagged by each sense of our example. Finally, given two senses s a i , s b i and their corresponding sets of tagged paragraphs f a i and f b i , we use the Jaccard coefficient to calculate their similarity, i.e. JC(s", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 555, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hierarchical clustering of senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "a i , s b i ) = |f a i \u2229 f b i | / |f a i \u222a f b i |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical clustering of senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", where s k j denotes the j-th sense of word k. The resulting similarity matrix of our example is shown in Table 1 . Given that matrix, HAC would first group computer network and LAN as they have the highest similarity (Figure 3 ). In the final iteration, the remaining two clusters (Cluster 1 & meshwork) would be grouped to the root.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 228, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hierarchical clustering of senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "An important parameter of HAC is the choice of the technique for calculating cluster similarities. Note that as we move towards the higher levels of the taxonomy, clusters contain more than one set of tagged paragraphs (Figure 3 -Cluster 1) , hence the choice of the similarity function is crucial. We experiment with three techniques, i.e. single-linkage, complete-linkage and average-linkage. The first one defines the similarity between two clusters as the maximum similarity among all the pairs of their corresponding feature sets. The second considers the minimum similarity among all the pairs, while the third calculates the average similarity of all the pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "(Figure 3 -Cluster 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hierarchical clustering of senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We evaluate our method with respect to two WordNet-derived sub-taxonomies (Section 4.3). For that reason, it is necessary to map the induced senses to WordNet before applying HAC. Note that the mapping process might map more than one induced sense to the same WordNet sense. In that case, these induced senses are merged into a single sense along with their corresponding collocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The process of mapping the induced senses to Word-Net is straightforward. Let w \u2208 W be a word with n senses in WordNet. A WordNet sense i of w is denoted by ws w i , i = [1, n]. Let us also assume that the WSI method has produced m senses for w, where each sense j is denoted as s w j , j = [1, m]. Each induced sense s w j is associated with a set of features f w j as in the previous section. These features are the paragraphs (paragraph labels) of BC tagged by s w j . In the next step, each WordNet sense ws w i is associated with its WordNet signature g w i that contains the following semantic features: hypernyms/hyponyms, meronyms/holonyms and synonyms of ws w i . For example, the signature of the fifth WordNet sense of network would contain internet, cyberspace and other semantically related words. Table 2 shows partial signatures for each sense of network.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 811, |
|
"end": 818, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mapping WSI clusters to WordNet senses", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The signature g w i is used to formalise the WordNet sense ws w i as a set of features q w i . These features are the paragraphs (paragraph labels) of BC that contain one or more of the words in g w i that are semantically related to ws w i . Given an induced sense s w j , a similarity score is calculated between s w j and each WordNet sense of w. The maximum score determines the WordNet sense ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mapping WSI clusters to WordNet senses", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ") = argmax i JC(f w j , q w i ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mapping WSI clusters to WordNet senses", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where JC is the Jaccard similarity coefficient. In the example of Figure 2 (a) , the computer network sense would be mapped to the fifth WordNet sense of network, since there is a significant overlap between the paragraphs tagged by the induced and that WordNet sense.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 78, |
|
"text": "Figure 2 (a)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mapping WSI clusters to WordNet senses", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For the purposes of this section we present one gold standard taxonomy (Figure 1 (a) ) and a second derived from our method (Figure 1 (b) ). The comparison of these taxonomies is based on the semantic cotopy of a node, which has also been used in (Maedche and Staab, 2002; Cimiano et al., 2005) . In particular, the semantic cotopy of a node is defined as the set of all its super-and subnodes excluding the root and including that node. For example, the semantic cotopy of computer network in Figure 1 (a) is {computer network, internet, LAN}. There are two issues, which make the evaluation difficult.", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 272, |
|
"text": "(Maedche and Staab, 2002;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 294, |
|
"text": "Cimiano et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 84, |
|
"text": "(Figure 1 (a)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 137, |
|
"text": "(Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 506, |
|
"text": "Figure 1 (a)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The first one is that HAC produces a taxonomy in which all internal nodes are unlabelled, as opposed to the gold standard taxonomy. In Figure 1 (b) , we have manually labelled internal nodes with their IDs for clarity. For example, the semantic cotopy of the node New Cluster 1 in Figure 1 (b) is {computer network, internet, LAN, New Cluster 1, New Cluster 0}. By comparing the cotopies of nodes computer network in Figure 1 (a) and New Cluster 1 in Figure 1 (b) , we observe that the automatic method has successfully grouped all of the hypernyms and hyponyms of computer network under New Cluster 1. However, the corresponding cotopies are not identical, because the cotopy of New Cluster 1 also includes the labels produced by HAC.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 147, |
|
"text": "Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 293, |
|
"text": "Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 429, |
|
"text": "Figure 1 (a)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 463, |
|
"text": "Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To deal with this problem, we use a version of semantic cotopy for nodes in the automatically learned taxonomy which excludes nodes that do not exist in WordNet. That way the semantic cotopies of New Cluster 1 in Figure 1 (b) and computer network in Figure 1 (a) will yield maximum similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 225, |
|
"text": "Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 258, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The second issue is that the nodes that exist in the gold standard taxonomy are leaf nodes in the automatically learned taxonomy. As a result, the semantic cotopy of LAN in Figure 1 (b) is {LAN} since all of its supernodes do not exist in WordNet. In contrast, the semantic cotopy of LAN in Figure 1 (a) is {LAN, computer network}. We observe that there is an overlap between the two cotopies derived from the existence of the same concept in both taxonomies, i.e. LAN. In fact, all of the leaf nodes of a learned taxonomy will have a small overlap with the corresponding concept in the gold standard. For this problem, we observe that in our automatically learned taxonomies it does not make sense to calculate the semantic cotopy of leaf nodes. On the contrary, we need to evaluate the internal nodes that group the leaf nodes. Let us assume the following notation: T A = automatically learned taxonomy; \u03b7 i = node in a taxonomy; C(T A ) = internal nodes + leaf nodes of T A ; I(T A ) = internal nodes of T A ; T G = gold standard taxonomy; C(T G ) = internal nodes + leaf nodes of T G ; I(T G ) = internal nodes of T G ; hyper(\u03b7 i ) = supernodes of \u03b7 i excluding the root; hypo(\u03b7 i ) = subnodes of \u03b7 i including \u03b7 i . For \u03b7 i \u2208 I(T A ), the semantic cotopy is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 185, |
|
"text": "Figure 1 (b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 304, |
|
"text": "Figure 1 (a)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "SC (\u03b7 i ) = (hyper(\u03b7 i ) \u222a hypo(\u03b7 i )) \u2229 C(T G ) For \u03b7 i \u2208 C(T G )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ", the semantic cotopy is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "SC (\u03b7 i ) = (hyper(\u03b7 i ) \u222a hypo(\u03b7 i )) P (\u03b7 i , \u03b7 j ) = |SC (\u03b7 i ) \u2229 SC (\u03b7 j )| |SC (\u03b7 i )| (1) R(\u03b7 i , \u03b7 j ) = |SC (\u03b7 i ) \u2229 SC (\u03b7 j )| |SC (\u03b7 j )| (2) F (\u03b7 i , \u03b7 j ) = 2P (\u03b7 i , \u03b7 j )R(\u03b7 i , \u03b7 j ) P (\u03b7 i , \u03b7 j ) + R(\u03b7 i , \u03b7 j )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Precision, recall and harmonic mean of node \u03b7 i \u2208 I(T A ) with respect to node \u03b7 j \u2208 C(T G ) are defined in Equations 1, 2 and 3. The F-score, F S, of node \u03b7 i \u2208 I(T A ) is the maximum F attained at any", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u03b7 j \u2208 C(T G ) (F S(\u03b7 i ) = argmax j F (\u03b7 i , \u03b7 j ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ". Finally, the similarity T S of the entire taxonomy to the gold standard taxonomy is the average of the F-scores of each \u03b7 i \u2208 I(T A ) (Equation 4). The T S(T A , T G ) in Figure 1 is 0.9. All nodes of T A have a perfect match, apart from New Cluster 0 and New Cluster 2, which are matched against computer network and meshwork respectively, having a perfect precision but a lower recall since the cotopies of computer network and meshwork consist of three concepts. The automatically learned taxonomy has two redundant clusters that decrease its similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 181, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "T S(T A , T G ) = 1 |I(T A )| \u03b7 i \u2208I(T A ) F S(\u03b7 i ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The similarity measure T S(T A , T G ) provides the similarity of the automatically learned taxonomy to the gold standard one, but it is not symmetric. Calculating the taxonomic similarity one way might not provide accurate results, in cases where T A misses senses of the gold standard. This is due to the fact that we would only evaluate the internal nodes of T A , partially ignoring the fact that T A might have missed some parts of the gold standard taxonomy. For that reason, we also calculate T S(T G , T A ) which provides the similarity of the gold standard taxonomy to the automatically learned one. Finally, taxonomic similarities are combined to produce their harmonic mean (Equation 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "T xSm(T A , T G ) = 2T S(T G , T A )T S(T A , T G ) T S(T G , T A ) + T S(T A , T G )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Evaluation measures", |
|
"sec_num": "4.2" |
|
}, |
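Under the assumption that a taxonomy is stored as a child-to-parent map and that the internal nodes of interest exclude the root, Equations 1-5 can be sketched in Python as follows (an illustrative reconstruction, not the authors' code; the node names in the test data are invented toy examples):

```python
# Sketch of the taxonomic similarity measures (Equations 1-5).
# A taxonomy is a dict mapping each node to its parent; the root maps to None.

def hyper(tax, n):
    """Supernodes of n, excluding the root."""
    out, p = set(), tax[n]
    while p is not None and tax[p] is not None:  # stop before the root
        out.add(p)
        p = tax[p]
    return out

def hypo(tax, n):
    """Subnodes of n, including n itself."""
    out, frontier = {n}, [n]
    while frontier:
        node = frontier.pop()
        for child, parent in tax.items():
            if parent == node and child not in out:
                out.add(child)
                frontier.append(child)
    return out

def sc(tax, n, restrict=None):
    """Semantic cotopy; for nodes of T_A it is intersected with C(T_G)."""
    cotopy = hyper(tax, n) | hypo(tax, n)
    return cotopy & restrict if restrict is not None else cotopy

def f_measure(sc_i, sc_j):
    """Equations 1-3: precision, recall and their harmonic mean."""
    inter = len(sc_i & sc_j)
    if inter == 0:
        return 0.0
    p, r = inter / len(sc_i), inter / len(sc_j)
    return 2 * p * r / (p + r)

def ts(tax_a, internal_a, tax_g):
    """Equation 4: average best F over the internal nodes of T_A."""
    gold = set(tax_g)
    best = [max(f_measure(sc(tax_a, ni, restrict=gold), sc(tax_g, nj))
                for nj in tax_g)
            for ni in internal_a]
    return sum(best) / len(best)

def txsm(ts_ag, ts_ga):
    """Equation 5: harmonic mean of the two directed similarities."""
    return 2 * ts_ag * ts_ga / (ts_ag + ts_ga)
```

On a toy gold taxonomy network → computer network → {LAN, WAN} and a learned taxonomy whose unlabelled New Cluster 1 dominates {LAN, WAN}, the best match for New Cluster 1 is computer network, with perfect precision but recall 2/3.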
|
{ |
|
"text": "The first gold standard taxonomy is derived by extracting from WordNet all the hyponyms of the senses of the word network. The extracted taxonomy contains 29 senses linguistically realized by 24 word sets (one sense might be expressed with more than one words), since network has 5 senses and reseau has 2 senses in the gold standard taxonomy. Note that we have disregarded senses only expressed by multiword expressions. The average polysemy of words is around 1.7. The second taxonomy is derived by extracting the concepts under the senses of the word speaker. The speaker taxonomy contains 52 senses linguistically realized by 50 word sets, since speaker has 3 senses included in the taxonomy. The average polysemy of words is around 1.58. To create our datasets 3 we use the Yahoo! search api 4 . For each word w in each of the datasets, we is-Parameter Range G 2 threshold (p 1 ) 5,10 Collocation frequency (p 2 ) 4,6,8 Collocation weight (p 3 ) 0.1,0.2,0.3,0.4 sue a query to Yahoo! that contains w and we download a maximum of 1000 pages. In cases where a particular sense is expressed by more than one word, the query was formulated by including all the words and putting the keyword OR between them. For each page we extracted fragments of text (paragraphs) that occur in <p> </p> html tags. We extracted 58956 and 78691 paragraphs for the network and speaker dataset respectively. The reason we extracted on average less content for the second dataset was that Yahoo! provided a small number of results for rare words such as alliterator, anecdotist, etc. Table 3 shows the parameter ranges for the WSI method. Our method is evaluated according to these parameters. Our first baseline is RAND, which performs a random hierarchical clustering of senses to produce a binary tree. In each iteration two clusters are randomly chosen and form a new cluster, until we end up with one cluster taken to be the root. 
The performance of RAND is calculated by executing the random algorithm 10 times and then averaging the results. The second baseline is the taxonomy most frequent sense baseline (TL MFS), in which we do not perform WSI. Instead, given a parameter setting and a word w, all the collocations of w are grouped into one vector, which will possibly be dominated by collocations related to the MFS of w. WordNet mapping takes place and finally HAC with averagelinkage is applied to create the taxonomy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1566, |
|
"end": 1573, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation datasets & setting", |
|
"sec_num": "4.3" |
|
}, |
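The RAND baseline described above admits a very short sketch (an illustrative reconstruction, since the paper does not give the implementation; representing clusters as nested pairs is our assumption):

```python
import random

def rand_taxonomy(senses, seed=None):
    """RAND baseline: repeatedly pick two clusters at random and merge
    them, until a single cluster (the root) remains. The result is a
    random binary tree over the input senses, encoded as nested pairs."""
    rng = random.Random(seed)
    clusters = list(senses)
    while len(clusters) > 1:
        i, j = rng.sample(range(len(clusters)), 2)
        merged = (clusters[i], clusters[j])  # the new cluster
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0]
```

Averaging the taxonomic similarity of such trees over 10 runs, as in the evaluation, yields the reported baseline figure.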
|
{ |
|
"text": "Figures 4 (a) and 4 (b) show the performance of HAC with single-linkage (HAC SNG), averagelinkage (HAC AVG) and complete-linkage (HAC CMP) against RAND for p 1 = 5 and different combinations of p 2 and p 3 . It is clear that HAC SNG and HAC AVG outperform RAND by very large margins under all parameter combinations. In the network dataset, both of them achieve their highest distance from RAND (27.84%) at p 2 = 8 and p 3 = 0.2. In the speaker dataset, their highest distance from RAND (20.97% and 19.63% respectively) is achieved at p 2 = 4 and p 3 = 0.1. HAC CMP performs worse than the other HAC versions, yet it clearly outperforms RAND in all but one parameter combinations (p 1 = 5, p 2 = 6, p 3 = 0.4) in the speaker dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Generally, for collocation weight equal to 0.4 the performance of all HAC versions drops. At this high collocation weight the WSI method produces a larger number of small clusters than in lower thresholds. This issue negatively affects both the mapping process and HAC. For example in the speaker dataset, for p 1 = 5, p 2 = 8 and p 3 = 0.1 our taxonomies contained 86.54% of the gold standard taxonomy senses. Increasing the collocation weight to 0.2 did not have any effect, but increasing the weight to 0.3 and then 0.4 led to 71.15% and 65.38% sense coverage. Overall, our conclusion is that all HAC versions exploit the WSI method and learn useful information better than chance. The picture is the same for p 1 = 10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Figures 4 (c) and 4 (d) show the performance of HAC versions against the TL MFS baseline in the same parameter setting as above. We observe that both HAC SNG and HAC AVG perform significantly better than TL MFS apart from p 3 = 0.4, in which case all HAC versions perform worse. In the network dataset, the largest performance difference for HAC SNG is 10.12% and for HAC AVG 9.9% at p 2 = 6 and p 3 = 0.2. In the speaker dataset, the largest performance difference for HAC SNG is 10.83% and for HAC AVG 7.83% at p 2 = 8 and p 3 = 0.2. HAC CMP performs worse than TL MFS under most parameter settings in both datasets. The picture is the same for p 1 = 10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Overall, the analysis of the WSI-based taxonomy learning approach against TL MFS shows that HAC SNG and HAC AVG perform better than TL MFS under all parameter combinations for both datasets. The main reason for their superior performance is that their learned taxonomies contain a higher number of senses than TL MFS as a result of the sense induction process. This greater sense coverage leads to the discovery of a higher number of correct taxonomic relations between senses than TL MFS, hence in a better performance. To conclude, our results verify our hypothesis and suggest that the unsupervised learning of word senses contributes to producing taxonomies with a higher similarity to the gold standard ones than traditional distributional similarity methods. Despite that, our evaluation also shows that in most cases HAC CMP is unable to exploit the induced senses and performs worse than TL MFS, HAC SNG and HAC AVG. This result was not expected, since HAC SNG employs a local criterion to merge two clusters and does not consider the global structure of the clusters, in effect, being biased towards elongated clusters. The observation of the gold standard taxonomies shows that they consist both of cohyponym concepts which are expected to be contextually related, but also of cohyponyms which are not expected to appear in similar contexts. For example, someone would expect a high similarity between WAN, LAN, or between snood and tulle. However, the same does not apply for snood and cheesecloth or tulle and grillwork, because cheesecloth and grillwork appear in significantly different contexts than snood and tulle. Despite that, all of them are cohyponyms. This issue is more prevalent in the speaker dataset, where concepts such as loudspeaker, tannoy, woofer are expected to be contextually related, while cohyponyms such as whisperer, lecturer and interviewer are not. 
This means that the gold standard taxonomies include elongated clusters and explains the superior performance of HAC SNG.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "This issue is not affecting HAC AVG, but it has a significant effect on HAC CMP. Generally, HAC CMP employs a non-local criterion by considering the diameter of a candidate cluster. This results in compact clusters with small diameters, as opposed to elongated ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & discussion", |
|
"sec_num": "4.4" |
|
}, |
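The contrast between the three merge criteria can be made concrete. Given any pairwise similarity function over cluster members, the inter-cluster scores used by single-, average- and complete-linkage are (a standard textbook formulation, not code from the paper):

```python
def single_link(sim, A, B):
    # Local criterion: similarity of the closest cross-cluster pair.
    # Chains readily through elongated clusters.
    return max(sim(a, b) for a in A for b in B)

def average_link(sim, A, B):
    # Mean similarity over all cross-cluster pairs.
    return sum(sim(a, b) for a in A for b in B) / (len(A) * len(B))

def complete_link(sim, A, B):
    # Non-local criterion: similarity of the farthest pair, which bounds
    # the diameter of the merged cluster and favours compact clusters.
    return min(sim(a, b) for a in A for b in B)
```

With induced sense vectors as cluster members and a vector similarity as sim, these three criteria correspond to HAC SNG, HAC AVG and HAC CMP respectively.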
|
{ |
|
"text": "We presented an unsupervised method for taxonomy learning that employs WSI to identify the senses of target words and then builds a taxonomy of these senses using HAC. We have shown that dealing with polysemy by means of sense induction helps to develop taxonomies that capture a higher number of correct taxonomic relations than traditional distributional similarity methods, which associate each target word with one vector of features, in effect, merging its senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "WordNet: A dwelling that serves as living quarters . . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The British National Corpus, 2001, Distributed by Oxford University Computing Services.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available in http://www.cs.york.ac.uk/aig/projects/indect/taxlearn 4 http://developer.yahoo.com/search/[Accessed:10/06/2009]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work is supported by the European Commission via the EU FP7 INDECT project, Grant No. 218086, Research area: SEC-2007-1.2-01 Intelligent Urban Environment Observation System.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Agirre and A. Soroa. 2007. SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimi- nation Systems. In Proceedings of the Fourth Interna- tional Workshop on Semantic Evaluations, pages 7-12, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A Taxonomy for English Nouns and Verbs", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Amsler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "Proceedings of the 19th ACL Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. A. Amsler. 1981. A Taxonomy for English Nouns and Verbs. In Proceedings of the 19th ACL Conference, pages 133-138, Stanford, California.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Chinese Whispers -An Efficient Graph Clustering Algorithm and its Application to Natural Language Processing Problems", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Buitelaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Olejnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sintek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 1st European Semantic Web Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Biemann. 2006. Chinese Whispers -An Efficient Graph Clustering Algorithm and its Application to Natural Language Processing Problems. In Proceed- ings of TextGraphs, pages 73-80, New York,USA. P. Buitelaar, D. Olejnik, and M. Sintek. 2004. A Ptot\u00e9g\u00e9 Plug-in for Ontology Extraction from Text Based on Linguistic Analysis. In Proceedings of the 1st Euro- pean Semantic Web Symposium, pages 31-44, Crete, Greece. CEUR-WS.org.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatic Construction of a Hypernym-labeled Noun Hierarchy from Text", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Caraballo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 37th ACL Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. A. Caraballo. 1999. Automatic Construction of a Hypernym-labeled Noun Hierarchy from Text. In Pro- ceedings of the 37th ACL Conference, pages 120-126, College Park, Maryland.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Comparing Conceptual, Divisive and Agglomerative Clustering for Learning Taxonomies from Text", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hotho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Staab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 16th ECAI Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "435--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Cimiano, A. Hotho, and S. Staab. 2004. Compar- ing Conceptual, Divisive and Agglomerative Cluster- ing for Learning Taxonomies from Text. In Proceed- ings of the 16th ECAI Conference, pages 435-439, Va- lencia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning Concept Hieararchies from Text Corpora Using Formal Concept Analysis", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hotho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Staab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "305--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Cimiano, A. Hotho, and S. Staab. 2005. Learning Concept Hieararchies from Text Corpora Using For- mal Concept Analysis. Journal of Artificial Intelli- gence Research, 24:305-339.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Ontology Learning and Population from Text: Algorithms, Evaluation and Applications", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "61--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Cimiano. 2006. Ontology Learning and Population from Text: Algorithms, Evaluation and Applications. Springer-Verlag New York, Inc., Secaucus, NJ, USA. T. Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, 19(1):61-74.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Corpus-based Conceptual Clustering Method for Verb Frames and Ontology Acquisition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Faure", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "N\u00e9dellec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "LREC workshop on Adapting lexical and corpus resources to sublanguages and applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Faure and C. N\u00e9dellec. 1998. A Corpus-based Con- ceptual Clustering Method for Verb Frames and On- tology Acquisition. In LREC workshop on Adapting lexical and corpus resources to sublanguages and ap- plications, pages 5-12, Granada, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Wordnet: An Electronic Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Fellbaum. 1998. Wordnet: An Electronic Lexical Database. MIT Press, Cambridge, Massachusetts, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Formal Concept Analysis: Mathematical Foundations", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Ganter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Wille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Ganter and R. Wille. 1999. Formal Concept Anal- ysis: Mathematical Foundations. Springer-Verlag New York, Inc., Secaucus, NJ, USA. Translator-C. Franzke.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Mathematical Structures of Language", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Z. Harris. 1968. Mathematical Structures of Language. Wiley, New York, USA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automatic Acquisition of Hyponyms from Large Text Corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 14th Coling Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "539--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. A. Hearst. 1992. Automatic Acquisition of Hy- ponyms from Large Text Corpora. In Proceedings of the 14th Coling Conference, pages 539-545, Nantes, France.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Incompletely and Imprecisely Speaking: Using Dynamic Ontologies for Representing and Retrieving Information", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 6th International Workshop on Knowledge Representation Meets Databases", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "14--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Hwang. 1999. Incompletely and Imprecisely Speaking: Using Dynamic Ontologies for Represent- ing and Retrieving Information. In Proceedings of the 6th International Workshop on Knowledge Repre- sentation Meets Databases, pages 14-20, Linkoping, Sweden. CEUR-WS.org.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Step-wise Clustering Procedures. Journal of the American Statistical Association", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "69", |
|
"issue": "", |
|
"pages": "86--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. King. 1967. Step-wise Clustering Procedures. Jour- nal of the American Statistical Association, 69:86- 101.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Word Sense Induction Using Graphs of Collocations", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Klapaftis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 18th ECAI Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "298--302", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. P. Klapaftis and S. Manandhar. 2008. Word Sense In- duction Using Graphs of Collocations. In Proceedings of the 18th ECAI Conference, pages 298-302, Patras, Greece. IOS Press.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Measuring Similarity between Ontologies", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Maedche", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Staab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the European Conference on Knowledge Acquisition and Management (EKAW)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "251--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Maedche and S. Staab. 2002. Measuring Similarity between Ontologies. In Proceedings of the European Conference on Knowledge Acquisition and Manage- ment (EKAW), pages 251-263, London,UK. Springer- Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Lexical Chains for Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Novischi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 19th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Moldovan and A. Novischi. 2002. Lexical Chains for Question Answering. In Proceedings of the 19th", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Coling Conference", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Coling Conference, pages 1-7, Taipei, Taiwan.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Learning Domain Ontologies from Document Warehouses and Dedicated web Sites", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computational Linguistics", |
|
"volume": "30", |
|
"issue": "2", |
|
"pages": "151--179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Navigli and P. Velardi. 2004a. Learning Domain On- tologies from Document Warehouses and Dedicated web Sites. Computational Linguistics, 30(2):151-179.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Structural Semantic Interconnection: a Knowledge-based Approach to Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "179--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Navigli and P. Velardi. 2004b. Structural Semantic In- terconnection: a Knowledge-based Approach to Word Sense Disambiguation. In Proceedings of Senseval- 3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 179- 182, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Discovering Knowledge in Texts for the Learning of Dogmainspired Ontologies", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Reinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Spyns", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the ECAI Workshop on Ontology Learning and Population", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.L. Reinberger and P. Spyns. 2004. Discovering Knowledge in Texts for the Learning of Dogma- inspired Ontologies. In Proceedings of the ECAI Workshop on Ontology Learning and Population, pages 19-24, Valencia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Mining for Lexons: Applying Unsupervised Learning Methods to create ontology bases", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Reinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Spyns", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Meersman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "CoopIS/DOA/ODBASE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "803--819", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. L. Reinberger, P. Spyns, W. Daelemans, and R. Meers- man. 2003. Mining for Lexons: Applying Unsuper- vised Learning Methods to create ontology bases. In CoopIS/DOA/ODBASE, pages 803-819.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Web-scale Taxonomy Learning", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "S\u00e1nchez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moreno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Learning and Extending Ontologies by using Machine Learning methods", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. S\u00e1nchez and A. Moreno. 2005. Web-scale Taxon- omy Learning. In Proceedings of the Workshop on Learning and Extending Ontologies by using Machine Learning methods, pages 53-60, Bonn, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Numerical Taxonomy, The Principles and Practice of Numerical Classification", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"H A" |
|
], |
|
"last": "Sneath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Sokal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. H. A. Sneath and R. R. Sokal. 1973. Numerical Taxon- omy, The Principles and Practice of Numerical Clas- sification. W. H. Freeman, San Francisco, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "A labelled and an unlabelled concept taxonomy taxonomy in" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "WSI for network & LAN" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "WSI & HAC example" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Performance analysis of the proposed method for p 1 = 5 and different combinations of p 2 and p 3 ." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Similarity matrix for HAC.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "Semantically related words/phrases to network", |
|
"html": null, |
|
"content": "<table><tr><td>label that will be assigned to s w j , i.e. label(s w j</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Chosen parameters for the KM WSI method.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |