|
{ |
|
"paper_id": "N01-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:48:29.323783Z" |
|
}, |
|
"title": "Tree-cut and A Lexicon based on Systematic Polysemy", |
|
"authors": [ |
|
{ |
|
"first": "Noriko", |
|
"middle": [], |
|
"last": "Tomuro", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "DePaul University", |
|
"location": { |
|
"addrLine": "243 S. Wabash Ave. Chicago", |
|
"postCode": "60604", |
|
"region": "IL" |
|
} |
|
}, |
|
"email": "tomuro@cs.depaul.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes a lexicon organized around systematic polysemy: a set of word senses that are related in systematic and predictable ways. The lexicon is derived by a fully automatic extraction method which utilizes a clustering technique called tree-cut. We compare our lexicon to WordNet cousins, and the inter-annotator disagreement observed between WordNet Semcor and DSO corpora.", |
|
"pdf_parse": { |
|
"paper_id": "N01-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes a lexicon organized around systematic polysemy: a set of word senses that are related in systematic and predictable ways. The lexicon is derived by a fully automatic extraction method which utilizes a clustering technique called tree-cut. We compare our lexicon to WordNet cousins, and the inter-annotator disagreement observed between WordNet Semcor and DSO corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, the granularity of word senses for computational lexicons has been discussed frequently in Lexical Semantics (for example, (Kilgarri , 1998a Palmer, 1998 ). This issue emerged as a prominent problem after previous studies and exercises in Word Sense Disambiguation (WSD) reported that, when ne-grained sense de nitions such as those in WordNet (Miller, 1990) were used, entries became very similar and indistinguishable to human annotators, thereby causing disagreement o n correct tags (Kilgarri , 1998b Veronis, 1998 Ng et al., 1999 . In addition to WSD, the selection of sense inventories is fundamentally critical in other Natural Language Processing (NLP) tasks such as Information Extraction (IE) and Machine Translation (MT), as well as in Information Retrieval (IR), since the di erence in the correct sense assignments a ects recall, precision and other evaluation measures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 157, |
|
"text": "(Kilgarri , 1998a", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 170, |
|
"text": "Palmer, 1998", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 375, |
|
"text": "(Miller, 1990)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 521, |
|
"text": "(Kilgarri , 1998b", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 535, |
|
"text": "Veronis, 1998", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 551, |
|
"text": "Ng et al., 1999", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In response to this, several approaches have been proposed which group ne-grained word senses in various ways to derive coarse-grained sense groups. Some approaches utilize an abstraction hierarchy d ened in a dictionary (Kilgarri , 1998b) , while others utilize surface syntactic patterns of the functional structures (such as predicate-argument structure for verbs) of words (Palmer, 1998) . Also, the current version of WordNet (1.6) encodes groupings of similar/related word senses (or synsets) by a relation called cousin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 239, |
|
"text": "(Kilgarri , 1998b)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 391, |
|
"text": "(Palmer, 1998)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Another approach to grouping word senses is to utilize a linguistic phenomenon called systematic polysemy: a set of word senses that are related in sys-tematic and predictable ways. 1 For example, ANIMAL and MEAT meanings of the word \\chicken\" are related because chicken as meat refers to the esh of a c hicken as a bird that is used for food. 2 This relation is systematic, since many ANIMAL words such a s \\duck\" and \\lamb\" have a MEAT meaning. Another example is the relation QUANTITY-PROCESS observed in nouns such as \\increase\" and \\supply\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sense grouping based on systematic polysemy is lexico-semantically motivated in that it expresses general human knowledge about the relatedness of word meanings. Such sense groupings have advantages compared to other approaches. First, related senses of a word often exist simultaneously in a discourse (for example the QUANTITY and PROCESS meanings of \\increase\" above). Thus, systematic polysemy can be e ectively used in WSD (and WSD evaluation) to accept multiple or alternative sense tags (Buitelaar, personal communication) . Second, many systematic relations are observed between senses which belong to di erent semantic categories. So if a lexicon is de ned by a collection of separate trees/hierarchies (such as the case of Word-Net), systematic polysemy can express similarity b etween senses that are not hierarchically proximate. Third, by explicitly representing (inter-)relations between senses, a lexicon based on systematic polysemy can facilitate semantic inferences. Thus it is useful in knowledge-intensive NLP tasks such as discourse analysis, IE and MT. More recently, (Gonzalo et al., 2000) also discusses potential usefulness of systematic polysemy for clustering word senses for IR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 529, |
|
"text": "(Buitelaar, personal communication)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1090, |
|
"end": 1112, |
|
"text": "(Gonzalo et al., 2000)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, extracting systematic relations from large sense inventories is a di cult task. Most often, this procedure is done manually. For example, WordNet cousin relations were identi ed manually by t h e W ordNet lexicographers. A similar e ort was also made in the EuroWordnet project (Vossen et al., 1999) . The problem is not only that manual inspection of a large, complex lexicon is very timeconsuming, it is also prone to inconsistencies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 308, |
|
"text": "(Vossen et al., 1999)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we describes a lexicon organized around systematic polysemy. The lexicon is derived by a fully automatic extraction method which utilizes a clustering technique called tree-cut (Li and Abe, 1998) . In our previous work (Tomuro, 2000) , we applied this method to a small subset of Word-Net nouns and showed potential applicability. In the current work, we applied the method to all nouns and verbs in WordNet, and built a lexicon in which word senses are partitioned by systematic polysemy. We report results of comparing our lexicon with the WordNet cousins as well as the inter-annotator disagreement observed between two semantically annotated corpora: WordNet Semcor (Landes et al., 1998) and DSO (Ng and Lee, 1996) . The results are quite promising: our extraction method discovered 89% of the WordNet cousins, and the sense partitions in our lexicon yielded better values (Carletta, 1996) than arbitrary sense groupings on the agreement data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 210, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 248, |
|
"text": "(Tomuro, 2000)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 706, |
|
"text": "(Landes et al., 1998)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 733, |
|
"text": "(Ng and Lee, 1996)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 908, |
|
"text": "(Carletta, 1996)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The tree-cut technique is an unsupervised learning technique which partitions data items organized in a tree structure into mutually-disjoint clusters. It was originally proposed in (Li and Abe, 1998) , and then adopted in our previous method for automatically extracting systematic polysemy ( T omuro, 2000) . In this section, we give a brief summary of this tree-cut technique using examples from (Li and Abe, 1998) 's original work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 200, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 308, |
|
"text": "( T omuro, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 417, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Tree-cut Technique", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The tree-cut technique is applied to data items that are organized in a structure called a thesaurus tree. A thesaurus tree is a hierarchically organized lexicon where leaf nodes encode lexical data (i.e., words) and internal nodes represent abstract semantic classes. A tree-cut is a partition of a thesaurus tree. It is a list of internal/leaf nodes in the tree, and each node represents a set of all leaf nodes in a s u b t r e e rooted by the node. Such a set is also considered as a cluster. 3 Clusters in a tree-cut exhaustively cover all leaf nodes of the tree, and they are mutually disjoint. For instance, Figure 1 shows an example thesaurus tree and one possible tree-cut AIRCRAFT, ball, kite, puzzle], which is indicated by a thick curve in the gure. There are also four other possible tree-cuts for this tree: airplane, helicopter, ball, kite, puzzle], airplane, helicopter, TOY], AIRCRAFT, TOY] a n d ARTIFACT].", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 615, |
|
"end": 623, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree-cut Models", |
|
"sec_num": "2.1" |
|
}, |
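The five tree-cuts above can be enumerated mechanically: a cut of a subtree is either its root, or a concatenation of one cut from each child subtree. The following short Python sketch (ours, not part of the paper) encodes the Figure 1 tree as nested dicts and reproduces exactly those five cuts.

```python
from itertools import product

# The example thesaurus tree of Figure 1, as nested dicts (leaves map to None).
TREE = ("ARTIFACT", {"AIRCRAFT": {"airplane": None, "helicopter": None},
                     "TOY": {"ball": None, "kite": None, "puzzle": None}})

def tree_cuts(name, children):
    """All tree-cuts of the subtree rooted at `name`: either the root node
    itself, or a concatenation of one cut from each child subtree."""
    cuts = [[name]]
    if children:
        per_child = [tree_cuts(n, c) for n, c in children.items()]
        for combo in product(*per_child):
            cuts.append([node for cut in combo for node in cut])
    return cuts

for cut in tree_cuts(*TREE):
    print(cut)
# Prints the five cuts: [ARTIFACT], [AIRCRAFT, TOY],
# [AIRCRAFT, ball, kite, puzzle], [airplane, helicopter, TOY],
# and the all-leaf cut.
```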
|
{ |
|
"text": "In (Li and Abe, 1998) , the tree-cut technique was applied to the problem of acquiring general-ized case frame patterns from a corpus. Thus, each node/word in the tree received as its value the number of instances where the word occurred as a case role (subject, object etc.) of a given verb. Then the acquisition of a generalized case frame was viewed as a problem of selecting the best tree-cut model that estimates the true probability distribution, given a sample corpus data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 21, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree-cut Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Formally, a tree-cut model M is a pair consisting of a tree-cut ; and a probability parameter vector of the same length, M = ( ; )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree-cut Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) where ; and are: ; = C 1 : : C k ] = P(C 1 ) : : P (C k )] (2) where C i (1 i k) is a cluster in the treecut, P(C i ) is the probability of a cluster C i , and P k i=1 P(C i ) = 1. Note that P(C) is the probability of cluster C = fn 1 :: n m g as a whole, that is, P(C) = P m j=1 P(n j ). For example, suppose a corpus contains 10 instances of verb-object relation for the verb \\ y\", and the frequencies of object nouns n, denoted f(n), are as follows: f(airplane) = 5 f (helicopter) = 3 f (ball) = 0 f (kite) = 2 f (puzzle) = 0 . Then, the set of treecut models for the example thesaurus tree shown in Figure 1 includes ( airplane, helicopter, TOY], .5, .3, .2]) and ( AIRCRAFT, TOY], .8, .2]).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 607, |
|
"end": 615, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree-cut Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "To select the best tree-cut model, (Li and Abe, 1998) uses the Minimal Description Length (MDL). The MDL is a principle of data compression in Information Theory which states that, for a given dataset, the best model is the one which requires the minimum length (often measured in bits) to encode the model (the model description length) and the data (the data description length) (Rissanen, 1978) . Thus, the MDL principle captures the trade-o between the simplicity of a model, which is measured by t h e n umber of clusters in a tree-cut, and the goodness of t to the data, which is measured by the estimation accuracy of the probability distribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 53, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 397, |
|
"text": "(Rissanen, 1978)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MDL Principle", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The calculation of the description length for a tree-cut model is as follows. Given a thesaurus tree T and a sample S consisting of the case frame instances, the total description length L(M S) for a tree-cut model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MDL Principle", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "M = ( ; ) is L(M S) = L(;) + L( j;) + L(Sj; ) (3) where L(;) is the model description length, L( j;)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MDL Principle", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "is the parameter description length (explained shortly), and L(Sj; ) is the data description length. Note that L(;) + L( j;) essentially corresponds to the usual notion of the model description length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MDL Principle", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "airplane helicopter ball kite puzzle 4where G is the set of all cuts in T, a n d jGj denotes the size of G. This value is a constant f o r a l l m o dels, thus it is omitted in the calculation of the total length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u0393 L(\u0398|\u0393) L(S|\u0393,\u0398) L(M,S) [A] 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The parameter description length L( j;) indicates the complexity o f t h e model. It is the length required to encode the probability distribution of the clusters in the tree-cut ;. It is calculated as L( j;) = k 2 log 2 jSj 5where k is the length of , and jSj is the size of S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, the data description length L(Sj; ) is the length required to encode the whole sample data. It is calculated as L(Sj; ) = ; X n2S log 2 P(n)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where, for each n 2 C and each C 2 ;, P(n) = P(C) jCj and P(C) = f(C) jSj 7Note the equation 7essentially computes the Maximum Likelihood Estimate (MLE) for all n. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
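To make equations (5)-(7) concrete, here is a small Python sketch (ours, not the authors' code) that computes L(Θ|Γ) and L(S|Γ, Θ) for any tree-cut of the Figure 1 tree, using the \"fly\" frequencies above. Exact totals depend on conventions the paper leaves implicit (for instance, whether k counts clusters or free parameters), so the authoritative numbers are those in the Figure 1 table; the sketch only shows the arithmetic.

```python
import math

FREQ = {"airplane": 5, "helicopter": 3, "ball": 0, "kite": 2, "puzzle": 0}
S = sum(FREQ.values())  # |S| = 10 case-frame instances for "fly"

# Each named cluster of the Figure 1 tree, as the set of its leaves.
CLUSTERS = {
    "ARTIFACT": set(FREQ),
    "AIRCRAFT": {"airplane", "helicopter"},
    "TOY": {"ball", "kite", "puzzle"},
    **{leaf: {leaf} for leaf in FREQ},
}

def lengths(cut):
    """Return (L(Theta|Gamma), L(S|Gamma,Theta)) for a tree-cut, per eqs. 5-7."""
    param_len = len(cut) / 2 * math.log2(S)            # eq. (5), k = len(cut)
    data_len = 0.0
    for c in cut:
        members = CLUSTERS[c]
        p_cluster = sum(FREQ[n] for n in members) / S  # P(C) = f(C)/|S|, eq. (7)
        for n in members:
            if FREQ[n]:  # each of the f(n) instances costs -log2 P(n), eq. (6)
                data_len -= FREQ[n] * math.log2(p_cluster / len(members))
    return param_len, data_len

for cut in (["ARTIFACT"], ["AIRCRAFT", "TOY"],
            ["AIRCRAFT", "ball", "kite", "puzzle"],
            ["airplane", "helicopter", "TOY"], list(FREQ)):
    p, d = lengths(cut)
    print(cut, round(p, 2), round(d, 2), round(p + d, 2))
```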
|
{ |
|
"text": "A table in Figure 1 shows the MDL lengths for all ve tree-cut models. The best model is the one with the tree-cut AIRCRAFT, ball, kite, puzzle].", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 19, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ARTIFACT AIRCRAFT TOY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using the tree-cut technique described above, our previous work (Tomuro, 2000) extracted systematic polysemy f r o m W ordNet. In this section, we g i v e a summary of this method, and describe the cluster pairs obtained by the method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 78, |
|
"text": "(Tomuro, 2000)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Systematic Polysemy", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In our previous work, systematically related word senses are derived as binary cluster pairs, by applying the extraction procedure to a combination of two WordNet (sub)trees. This process is done in the following three steps. In the rst step, all leaf nodes of the two trees are assigned a v alue of either 1, if a node/word appears in both trees, or 0 otherwise. 6 In the second step, the tree-cut technique is applied to each tree separately, a n d t wo tree-cuts (or sets of clusters) are obtained. To search t h e best tree-cut for a t r e e (i.e., the model which requires the minimum total description length), a greedy algorithm called Find-MDL described in (Li and Abe, 1998) is used to speed up the search. Finally in the third step, clusters in those two tree-cuts are matched up, and the pairs which h a ve s u b s t a n tial overlap (more than three overlapping words) are selected as systematic polysemies. Figure 2 shows parts of the nal tree-cuts for the ARTIFACT and MEASURE classes. Note in the gure, bold letters indicate words which are polysemous in the two trees (i.e., assigned a value 1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 365, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 683, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 920, |
|
"end": 928, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction Method", |
|
"sec_num": "3.1" |
|
}, |
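The third, matching step is simple set intersection. The sketch below is ours, not the authors' code; the cluster contents are invented purely for illustration, and Find-MDL itself is not shown. It pairs up clusters from two tree-cuts whose overlap exceeds three words, as required above.

```python
# Two hypothetical tree-cuts, each a list of clusters (sets of value-1 words).
artifact_cut = [
    {"acetate", "nylon", "acrylic", "polyester", "fleece"},
    {"hammer", "anvil", "chisel"},
]
substance_cut = [
    {"acetate", "nylon", "acrylic", "polyester", "resin"},
    {"iron", "lead", "zinc"},
]

def match_cluster_pairs(cut1, cut2, min_common=4):
    """Select cluster pairs with substantial overlap (the paper requires
    more than three overlapping words) as candidate systematic polysemies."""
    return [(c1, c2, c1 & c2)
            for c1 in cut1 for c2 in cut2
            if len(c1 & c2) >= min_common]

for c1, c2, common in match_cluster_pairs(artifact_cut, substance_cut):
    print(sorted(common))  # ['acetate', 'acrylic', 'nylon', 'polyester']
```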
|
{ |
|
"text": "In the current w ork, we made a minor modi cation to the extraction method described above, by removing nodes that are assigned a value 0 from the trees. The purpose was to make the tree-cut technique less sensitive to the structure of a tree and produce more speci c clusters de ned at deeper levels. 7 The MDL principle inherently penalizes a complex tree-cut by assigning a long parameter length. Therefore, shorter tree-cuts partitioned at abstract levels are often preferred. This causes a problem when the tree is bushy, which is the case with Word-Net trees. Indeed, many tree-cut clusters obtained in our previous work were from nodes at depth 1 (counting the root as depth 0) { around 88% (122 Figure 2 : Parts of the nal tree-cuts for ARTIFACT and MEASURE out of total 138 clusters) obtained for 5 combinations of WordNet noun trees. Note that we did not allow a cluster at the root of a tree thus, depth 1 i s the highest level for any cluster. After the modi cation above, the proportion of depth 1 clusters decreased to 49% (169 out of total 343 clusters) for the same tree combinations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 303, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 703, |
|
"end": 711, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modi cation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We applied the modi ed method described above t o all nouns and verbs in WordNet. We rst partitioned words in the two categories into basic classes. A basic class is an abstract semantic concept, and it corresponds to a (sub)tree in the WordNet hierarchies. We c hose 24 basic classes for nouns and 10 basic classes for verbs, from WordNet Top categories for nouns and lexicographers' le names for verbs respectively. Those basic classes exhaustively cover all words in the two categories encoded in Word-Net. For example, basic classes for nouns include ARTIFACT, SUBSTANCE and LOCATION, while basic classes for verbs include CHANGE, MOTION and STATE. For each part-of-speech category, we applied our extraction method to all combinations of two basic classes. Here, a combined class, for instance ARTIFACT-SUBSTANCE, represents an underspeci ed semantic class. We obtained 2,377 cluster pairs in 99 underspeci ed classes for nouns, and 1,710 cluster pairs in 59 underspeci ed classes for verbs. Table 1 shows a summary of the number of basic and underspeci ed classes and cluster pairs extracted by our method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 997, |
|
"end": 1004, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extracted Cluster Pairs", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Although the results vary among category combinations, the accuracy (precision) of the derived cluster pairs was rather low: 50 to 60% on average, based on our manual inspection using around 5% randomly chosen samples. 8 This means our automatic method over-generates possible relations. We speculate that this is because in general, there are many homonymous relations that are 'systematic' in the English language. For example, in the ARTIFACT-GROUP class, a pair LUMBER, SOCIAL GROUP] w as extracted. Words which are common in the two clusters are \\picket\", \\board\" and \\stock\". Since there are enough numb e r o f s u c h w ords (for our purpose), our automatic method could not di erentiate them from true systematic polysemy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracted Cluster Pairs", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To test our automatic extraction method, we c o mpared the cluster pairs derived by our method to WordNet cousins. The cousin relation is relatively new in WordNet, and the coverage is still incomplete. Currently a total of 194 unique relations are encoded. A cousin relation in WordNet is de ned between two synsets, and it indicates that senses of a w ord that appear in both of the (sub)trees rooted by those synsets are related. 9 The cousins were man-8 Note that the relatedness between clusters was determined solely by our subjective judgement. That is because there is no existing large-scale lexicon which encodes related senses completely for all words in the lexicon. (Note that WordNet cousin relation is encoded only for some words). Although the distinction between related vs. unrelated meanings is sometimes unclear, systematicity of the related senses among words is quite intuitive and has been well studied in Lexical Semantics (for example, (Apresjan, 1973 Nunberg, 1995 Copestake and Briscoe, 1995 ). A comparison with WordNet cousin is discussed in the next section 4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 961, |
|
"end": 976, |
|
"text": "(Apresjan, 1973", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 977, |
|
"end": 990, |
|
"text": "Nunberg, 1995", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 1018, |
|
"text": "Copestake and Briscoe, 1995", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "9 Actually, cousin is one of the three relations which indicate the grouping of related senses of a word. Others are sister and twin. In this paper, we use cousin to refer to all relations listed in \\cousin.tps\" le (available in a WordNet distribution). ually identi ed by the WordNet lexicographers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To compare the automatically derived cluster pairs to WordNet cousins, we used the hypernymhyponym relation in the trees, instead of the number or ratio of the overlapping words. This is because the levels at which the cousin relations are de ned di er quite widely, from depth 0 to depth 6, thus the number of polysemous words covered in each cousin relation signi cantly varies. Therefore, it was dicult to decide on an appropriate threshold value for either criteria.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Using the hypernym-hyponym relation, we checked, for each cousin relation, whether there was at least one cluster pair that subsumed or was subsumed by the cousin. More speci cally, for a cousin relation de ned between nodes c1 and c2 in trees T1 and T2 respectively and a cluster pair de ned between nodes r1 and r2 in the same trees, we d ecided on the correspondence if c1 i s a h ypernym or hyponym of r1, and c2 i s a h ypernym or hyponym r2 at the same time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Based on this criteria, we obtained a result indicating that 173 out of the 194 cousin relations had corresponding cluster pairs. This makes the recall ratio 89%, which w e consider to be quite high.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In addition to the WordNet cousins, our automatic extraction method discovered several interesting relations. Table 2 shows some examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation: Comparison with WordNet Cousins", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Using the extracted cluster pairs, we partitioned word senses for all nouns and verbs in WordNet, and produced a lexicon. Recall from the previous section that our cluster pairs are generated for all possible binary combinations of basic classes, thus one sense could appear in more than one cluster pair. For example, Ta b l e 3 s h o ws the cluster pairs (and a set of senses covered by each pair, which we call a sense cover) extracted for the noun \\table\" (which has 6 senses in WordNet). Also as we h a ve mentioned earlier in section accuracy-result, our cluster pairs contain many false positives ones. For those reasons, we took a conservative a p p r o a c h, by disallowing transitivity of cluster pairs. To partition senses of a word, we rst assign each sense cover a value which w e c a l l a connectedness. It is de ned as follows. For a given word w which h a s n senses, let S be the set of all sense covers generated for w. Let c ij denote the numb e r o f s e n s e c o vers in which sense i (s i ) and sense j (s j ) occurred together in S (where c ii = 0 for all 1 i n), and d ij = Table 3 shows the connectedness values for all sense covers for \\table\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1101, |
|
"end": 1108, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Lexicon based on Systematic Relations", |
|
"sec_num": "5" |
|
}, |
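The connectedness computation can be checked mechanically. The following Python sketch (ours, not the authors' code) implements c_ij, d_ij and CN as defined above on the sense covers of Table 3, and reproduces the CN values shown there (e.g., 7.714 for (2 3 5)).

```python
from collections import defaultdict
from itertools import combinations

# Sense covers extracted for the noun "table" (Table 3).
SENSE_COVERS = [(1, 4), (1, 5), (2, 3), (2, 3, 4), (2, 3, 5), (2, 3, 6),
                (4, 5), (5, 6)]
SENSES = sorted({s for sc in SENSE_COVERS for s in sc})

# c[i, j]: number of sense covers in which senses i and j co-occur.
c = defaultdict(int)
for sc in SENSE_COVERS:
    for i, j in combinations(sc, 2):
        c[i, j] += 1
        c[j, i] += 1
C = sum(c[i, j] for i, j in combinations(SENSES, 2))  # total direct weight

def d(i, j):
    # Indirect (one-level transitive) weight between senses i and j.
    return sum(c[i, k] + c[k, j] for k in SENSES
               if k not in (i, j) and c[i, k] > 0 and c[k, j] > 0) / C

def connectedness(sc):
    return sum(c[i, j] + d(i, j) for i, j in combinations(sc, 2))

for sc in SENSE_COVERS:
    print(sc, round(connectedness(sc), 3))
# (2, 3, 5) -> 7.714 and (2, 3, 4) -> 7.429, matching Table 3.
```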
|
{ |
|
"text": "Then, we partition the senses by selecting a set of non-overlapping sense covers which maximizes the total connectedness value. So in the example above, the set f(1 4),(2 3 5)g yields the maximum connectedness. Finally, senses that are not covered by any sense covers are taken as singletons, and added to the nal sense partition. So the sense partition for \\table\" becomes f(1 4),(2 3 5),(6)g. Table 4 shows the comparison between Word-Net and our new lexicon. As you can see, our lexicon contains much less ambiguity: the ratio of monosemous words increased from 84% (88,650/105,461 .84) to 92% (96,964/105,461 .92) , and the average number of senses for polysemous words decreased from 2.73 to 2.52 for nouns, and from 3.57 to 2.82 for verbs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 617, |
|
"text": "(88,650/105,461 .84) to 92% (96,964/105,461 .92)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 402, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Lexicon based on Systematic Relations", |
|
"sec_num": "5" |
|
}, |
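Continuing the sketch above, partitioning then becomes a small combinatorial search: choose mutually disjoint sense covers maximizing total connectedness, and add leftover senses as singletons. The brute force below is ours; the paper does not specify its search procedure or how ties are broken, so take it as one possible reading.

```python
from itertools import combinations as subsets

def best_partition(covers, senses, cn):
    """Pick a set of mutually disjoint sense covers with maximum total
    connectedness (cn maps cover -> CN); uncovered senses become singletons.
    Ties are broken by enumeration order."""
    best, best_score = (), 0.0
    for r in range(1, len(covers) + 1):
        for chosen in subsets(covers, r):
            flat = [s for sc in chosen for s in sc]
            if len(flat) == len(set(flat)):          # non-overlapping covers
                score = sum(cn[sc] for sc in chosen)
                if score > best_score:
                    best, best_score = chosen, score
    covered = {s for sc in best for s in sc}
    return list(best) + [(s,) for s in senses if s not in covered]

cn = {sc: connectedness(sc) for sc in SENSE_COVERS}
print(best_partition(SENSE_COVERS, SENSES, cn))
# -> [(1, 4), (2, 3, 5), (6,)], the partition given in the text.
```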
|
{ |
|
"text": "As a note, our lexicon is similar to CORELEX (Buitelaar, 1998) (or CORELEX-II presented in (Buitelaar, 2000) ), in that both lexicons share the same motivation. However, our lexicon di ers from CORELEX in that CORELEX looks at all senses of a w ord and groups words that have the same sense distribution pattern, whereas our lexicon groups \\drop\", \\circle\", \\intersection\", \\dig\", \\crossing\", \\bull's eye\" ARTIFACT-GROUP STRUCTURE, PEOPLE] \\house\", \\convent\", \\market\", \\center\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 108, |
|
"text": "(Buitelaar, 2000)", |
|
"ref_id": "BIBREF3" |
|
}

],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Lexicon based on Systematic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ARTIFACT-SUBSTANCE FABRIC, CHEMICAL COMPOUND] \\acetate\", \\nylon\", \\acrylic\", \\polyester\" COMMUNICATION-PERSON VOICE, SINGER] \\soprano\", \\alto\", \\tenor\", \\baritone\" WRITING, RELIGIOUS PERSON] \\John\", \\Matthew\", \\Jonah\", \\Joshua\", \\Jeremiah\" word senses that have the same systematic relation. Thus, our lexicon represents systematic polysemy a t a n e r l e v el than CORELEX, by pinpointing related senses within each w ord.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 124, |
|
"text": "COMMUNICATION-PERSON VOICE, SINGER]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 190, |
|
"text": "RELIGIOUS PERSON]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Lexicon based on Systematic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To test if the sense partitions in our lexicon constitute an appropriate (or useful) level of granularity, we applied it to the inter-annotator disagreement observed in two semantically annotated corpora: WordNet Semcor (Landes et al., 1998) and DSO (Ng and Lee, 1996) . The agreement b e t ween those corpora is previously studied in (Ng et al., 1999) . In our current work, we rst re-produced their agreement data, then used our sense partitions to see whether or not they yield a better agreement. In this experiment, we extracted 28,772 sentences/instances for 191 words (consisting of 121 nouns and 70 verbs) tagged in the intersection of the two corpora. This constitutes the base data set. Table 5 shows the breakdown of the number of instances where tags agreed and disagreed. 10 As you 10 Note that the numbers reported in (Ng et al., 1999) are slightly more than the ones reported in this paper. For instance, the number of sentences in the intersected corpus reported in (Ng et al., 1999) is 30,315. We speculate the discrepancies are due to the di erent s e n tence alignment meth- This low agreement ratio is also re ected in a measure called the statistic (Carletta, 1996 Bruce and Wiebe, 1998 Ng et al., 1999 . measure takes into account c hance agreement, thus better representing the state of disagreement. A value is calculated for each word, on a confusion matrix where rows represent the senses assigned by judge 1 (DSO) and columns represent the senses assigned by judge 2 (Semcor). Table 6 shows an example matrix for the noun \\table\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 241, |
|
"text": "(Landes et al., 1998)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 268, |
|
"text": "(Ng and Lee, 1996)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 352, |
|
"text": "(Ng et al., 1999)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 797, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 849, |
|
"text": "(Ng et al., 1999)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 982, |
|
"end": 999, |
|
"text": "(Ng et al., 1999)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1185, |
|
"text": "(Carletta, 1996", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1186, |
|
"end": 1207, |
|
"text": "Bruce and Wiebe, 1998", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1208, |
|
"end": 1223, |
|
"text": "Ng et al., 1999", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 697, |
|
"end": 704, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1504, |
|
"end": 1511, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation: Inter-annotator Disagreement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A value for a word is calculated as follows. We use the notation and formula used in (Bruce and Wiebe, 1998) . Let n ij denote the number of instances where the judge 1 assigned sense i and the judge 2 assigned sense j to the same instance, and n i+ and n +i denote the marginal totals of rows and columns respectively. The formula is: k = P i P ii ; P i P i+ P +i 1 ; P i P i+ P +i 9where P ii = nii n++ (i.e., proportion of n ii , the number of instances where both judges agreed on sense i, t o the total instances), P i+ = ni+ n++ and P +i = n+i n++ . The value is 1.0 when the agreement is perfect (i.e., values in the o -diagonal cells are all 0, that is, P i P ii = 1 ) , or 0 when the agreement is purely ods used in the experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 108, |
|
"text": "(Bruce and Wiebe, 1998)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Inter-annotator Disagreement", |
|
"sec_num": "6" |
|
}, |
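As a worked check of formula (9), the short Python function below (ours, not from the paper) computes κ from a confusion matrix; applied to the reduced matrix for \"table\" in Table 7 it returns .699, the value reported there.

```python
def kappa(matrix):
    """Kappa per formula (9): (sum P_ii - sum P_i+ P_+i) / (1 - sum P_i+ P_+i)."""
    n = float(sum(map(sum, matrix)))
    agree = sum(matrix[i][i] for i in range(len(matrix))) / n
    chance = sum((sum(matrix[i]) / n) * (sum(row[i] for row in matrix) / n)
                 for i in range(len(matrix)))
    return (agree - chance) / (1 - chance)

# Reduced confusion matrix for "table" (Table 7); rows: judge 1 (DSO),
# columns: judge 2 (Semcor), sense groups (1,4), (2,3,5), (6).
reduced = [[44, 0, 0],
           [6, 20, 0],
           [2, 3, 0]]
print(round(kappa(reduced), 3))  # 0.699
```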
|
{ |
|
"text": "11 (Ng et al., 1999) reports a higher agreement of 57%. We speculate the discrepancy might be from the version of Word-Net senses used in DSO, which w as slightly di erent from the standard delivery version (as noted in (Ng et al., 1999) ). (Semcor) 1 2 3 4 5 6 Total 1 43 0 0 0 0 0 43 (= n 1+ by c hance (i.e., values in a row (or column) are uniformly distributed across rows (or columns), that is, P ii = P i+ P +i for all 1 i M, where M is the number of rows/columns). also takes a negative value when there is a systematic disagreement between the two judges (e.g., some values in the diagonal cells are 0, that is, P ii = 0 for some i). Normally, :8 is considered a good agreement (Carletta, 1996) . By using the formula above, the average for the 191 words was .264, as shown in Table 5 . 12 This means the agreement between Semcor and DSO is quite low.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 20, |
|
"text": "(Ng et al., 1999)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 237, |
|
"text": "(Ng et al., 1999)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 718, |
|
"text": "(Carletta, 1996)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 813, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
|
{ |
|
"start": 801, |
|
"end": 808, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation: Inter-annotator Disagreement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We selected the same 191 words from our lexicon, and used their sense partitions to reduce the size of the confusion matrices. For each w ord, we computed the for the reduced matrix, and compared it with the for a random sense grouping of the same partition pattern. 13 For example, the partition pattern of f(1 4),(2 3 5),(6)g for \\table\" mentioned earlier (where Table 7 shows its reduced matrix) is a multinomial combination ; 6 2 3 1 . The value for a random grouping is obtained by generating 5,000 random partitions which h a ve the same pattern as the corresponding sense partition in our lexicon, then taking the mean of their 's. Then we measured the possible increase in by our lexicon by taking the di erence between the paired values for all words (i.e., w by our sense partitionw by random partition, for a word w), and performed a signi cance 12 (Ng et al. 1999 )'s result is slightly higher: = :317. 13 For this comparison, we excluded 23 words whose sense partitions consisted of only 1 sense cover. This is re ected in the total number of instances in Table 8 . test, with a null hypothesis that there was no significant increase. The result showed that the P-values were 4.17 and 2.65 for nouns and verbs respectively, which w ere both statistically signi cant. Therefore, the null hypothesis was rejected, and we concluded that there was a signi cant increase in by using our lexicon. As a note, the average 's for the 191 words from our lexicon and their corresponding random partitions were .260 and .233 respectively. Those values are in fact lower than that for the original WordNet lexicon. There are two major reasons for this. First, in general, combining any arbitrary senses does not always increase . In the given formula 9, actually decreases when the increase in P i P ii (i.e., the diagonal sum) in the reduced matrix is less than the increase in P i P i+ P +i (i.e., the marginal product sum) by some factor. 14 This situation typically happens when senses combined are well distinguished in the original matrix, in the sense that, for senses i and j, n ij and n ji are 0 or very small (relative to the total frequency). Second, some systematic relations are in fact easily distinguishable. Senses in such relations often denote di erent objects in a context, for instance ANIMAL and MEAT senses of \\chicken\". Since our lexicon groups those senses together, the 's for the reduce matrices decrease for the reason we mentioned above. Table 8 s h o ws the breakdown of the average for our lexicon and random groupings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 860, |
|
"end": 875, |
|
"text": "(Ng et al. 1999", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 915, |
|
"end": 917, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 372, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 1069, |
|
"end": 1076, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 2466, |
|
"end": 2473, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation: Inter-annotator Disagreement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "both P i P ii and P i P i+ P +i always increase when any arbitrary senses are combined. The factor mentioned here is 1; P i P ii 1; P i P i+ P +i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation: Inter-annotator Disagreement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As we reported in previous sections, our tree-cut extraction method discovered 89% of the Word-Net cousins. Although the precision was relatively low (50-60%), this is an encouraging result. As for the lexicon, our sense partitions consistently yielded better values than arbitrary sense groupings. We consider these results to be quite promising. Our data is available at www.depaul.edu/ ntomuro/research/naacl-01.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It is signi cant to note that cluster pairs and sense partitions derived in this work are domain independent. Such information is useful in broad-domain applications, or as a background lexicon (Kilgarri , 1997) in domain speci c applications or text categorization and IR tasks. For those tasks, we a n ticipate that our extraction methods may be useful in deriving characteristics of the domains or given corpus, as well as customizing the lexical resource. This is our next future research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 211, |
|
"text": "(Kilgarri , 1997)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For other future work, we plan to investigate an automatic way of detecting and ltering unrelated relations. We are also planning to compare our sense partitions with the systematic disagreement obtained by (Wiebe, et al., 1998) 's automatic classi er.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 228, |
|
"text": "(Wiebe, et al., 1998)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Systematic polysemy (in the sense we use in this paper) is also referred to as regular polysemy(Apresjan, 1973) or logical polysemy (Pustejovsky, 1 9 9 5 ) .2 Note that systematic polysemy should be contrasted with homonymy, which refers to words which have more than one unrelated sense (e.g. FINANCIAL INSTITUTION and SLOPING LAND meanings of the word \\bank\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A leaf node is also a cluster whose cardinality i s 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For justi cation and detailed explanation of these formulas, see(Li and Abe, 1998).5 In our previous work, we u s e d e n tropy instead of MLE. That is because the lexicon represents true population, not samples thus there is no additional data to estimate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Prior to this, each W ordNet (sub)tree is transformed into a thesaurus tree, since WordNet tree is a graph rather than a tree, and internal nodes as well as leaf nodes carry data. In the transformation, all internal nodes in a WordNet tree are copied as leaf nodes, and shared subtrees are duplicated.7 Removing nodes with 0 is also warranted since we are not estimating values for those nodes (as explained in footnote 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is because P i P i+ P +i is subtracted in both the numerator and the denominator in the formula. Note that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author wishes to thank Steve Lytinen at DePaul University and the anonymous reviewers for very useful comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Regular Polysemy. Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Apresjan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Apresjan, J. (1973). Regular Polysemy. Linguistics, (142).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word-sense Distinguishability and Inter-coder Agreement", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bruce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the COLING/ACL-98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce, R. and Wiebe, J. (1998). Word-sense Dis- tinguishability and Inter-coder Agreement. In Proceedings of the COLING/ACL-98, Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "CORELEX: Systematic Polysemy and Underspeci cation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Buitelaar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buitelaar, P. (1998). CORELEX: Systematic Poly- semy and Underspeci cation. Ph.D. dissertation, Department of Computer Science, Brandeis Uni- versity.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reducing Lexical Semantic Complexity with Systematic Polysemous Classes and Underspeci cation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Buitelaar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the ANLP/NAACL-00 Workshop on Syntactic and Semantic Complexity in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buitelaar, P. (2000). Reducing Lexical Semantic Complexity with Systematic Polysemous Classes and Underspeci cation. In Proceedings of the ANLP/NAACL-00 Workshop on Syntactic and Semantic Complexity in Natural Language Pro- cessing, S e a t t l e , W A.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Assessing Agreement o n C l a ssi cation Tasks: The Kappa Statistic", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carletta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carletta, J. (1996). Assessing Agreement o n C l a s - si cation Tasks: The Kappa Statistic, Computa- tional Linguistics, 22(2).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semiproductive P olysemy and Sense Extension", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Copestake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Copestake, A. and Briscoe, T. (1995). Semi- productive P olysemy and Sense Extension. Jour- nal of Semantics, 12.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Sense Clusters for Information Retrieval: Evidence from Semcor and the InterLingual Index", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Chugur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Verdejo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the ACL-2000 Workshop on Word Senses and Multilinguality", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gonzalo, J., Chugur, I. and Verdejo, F. (2000). Sense Clusters for Information Retrieval: Evi- dence from Semcor and the InterLingual Index. In Proceedings of the ACL-2000 Workshop on Word Senses and Multilinguality, Hong-Kong.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Foreground and Background Lexicons and Word Sense Disambiguation for Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the International Workshop on Lexically Driven Information Extraction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarri , A. (1997). Foreground and Background Lexicons and Word Sense Disambiguation for In- formation Extraction. In Proceedings of the In- ternational Workshop on Lexically Driven Infor- mation Extraction.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "SENSEVAL: An Exercise in Evaluating Word Sense Disambiguation Programs", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarri , A. (1998a). SENSEVAL: An Exercise in Evaluating Word Sense Disambiguation Pro- grams. In Proceedings of the LREC.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Inter-tagger Agreement", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Advanced Papers of the SENSEVAL Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarri , A. (1998b). Inter-tagger Agreement. In Advanced Papers of the SENSEVAL Workshop, Sussex, UK.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building Semantic Concordance", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Landes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Tengi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "WordNet: An Electronic Lexical Database", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Landes, S., Leacock, C. and Tengi, R. (1998). Building Semantic Concordance. In WordNet: An Electronic Lexical Database, The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Generalizing Case Frames Using a Thesaurus and the MDL Principle", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Abe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, H. and Abe, N. (1998). Generalizing Case Frames Using a Thesaurus and the MDL Prin- ciple, Computational Linguistics, 24(2).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "WORDNET: An Online Lexical Database", |
|
"authors": [], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miller, G. (eds.) (1990). WORDNET: An Online Lexical Database. International Journal of Lex- icography, 3(4).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A Case Study on Inter-Annotator Agreement for Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"B ;" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Cruz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Foo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the ACL SIGLEX Workshop on Standardizing Lexical Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ng, H.T., and Lee, H.B. (1996). Integrating Mul- tiple Knowledge Sources to Disambiguate Word Sense. In Proceed i n g s o f t h e A CL-96, S a n ta Cruz, CA. Ng, H.T., Lim, C. and Foo, S. (1999). A Case Study on Inter-Annotator Agreement for Word Sense Disambiguation. In Proceedings of the ACL SIGLEX Workshop on Standardizing Lexi- cal Resources, C o l l e g e P ark, MD.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Transfers of Meaning", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Nunberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nunberg,G. (1995). Transfers of Meaning. Journal of Semantics, 12.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Are Wordnet sense distinctions appropriate for computational lexicons?", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Advanced P a p ers of the SENSEVAL Workshop, Sussex", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Palmer, M. (1998). Are Wordnet sense distinctions appropriate for computational lexicons? In Ad- vanced P a p ers of the SENSEVAL Workshop, Sus- sex, UK.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The Generative Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pustejovsky, J. (1995). The Generative Lexicon, The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Modeling by Shortest Data Description. Automatic", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Rissanen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rissanen, J. (1978). Modeling by Shortest Data Description. Automatic, 14.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatic Extraction of Systematic Polysemy Using Tree-cut", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Tomuro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the ANLP/NAACL-00 Workshop on Syntactic and Semantic Complexity in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomuro, N. (2000). Automatic Extraction of Sys- tematic Polysemy Using Tree-cut. In Proceedings of the ANLP/NAACL-00 Workshop on Syntactic and Semantic Complexity in Natural Language Processing, Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A Study of Polysemy Judgements and Inter-annotator Agreement", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Veronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Advanced P a p ers of the SENSEVAL Workshop, Sussex", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronis, J. (1998). A Study of Polysemy Judge- ments and Inter-annotator Agreement. In Ad- vanced P a p ers of the SENSEVAL Workshop, Sus- sex, UK.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Towards a Universal Index of Meaning", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the ACL SIGLEX Workshop on Standardizing Lexical Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vossen, P., Peters, W. and Gonzalo, J. (1999). To- wards a Universal Index of Meaning. In Proceed- ings of the ACL SIGLEX Workshop on Standard- izing Lexical Resources, College Park, MD.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Development and Use of a Gold-Standard Data Set for Subjectivity Classi cations", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bruce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "O'hara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the ACL-99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wiebe, J., Bruce, R. and O'Hara, T. (1999). De- velopment and Use of a Gold-Standard Data Set for Subjectivity Classi cations. In Proceedings of the ACL-99, College Park, MD.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "The MDL lengths and the nal tree-cut Each length in L(M S) is calculated as follows.4 The model description length L(;) is L(;) = log 2 jGj", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "kj C , where k 6 = i, k 6 = j, c ik > 0, c kj > 0, and C = P i j c ij . A connectedness of a sense cover sc 2 S, denoted C N sc , where sc = ( s l : : s m ) (1", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "ij represents the weight o f a direct relation, and d ij represents the weight o f an indirect relation between any t wo senses i and j. The idea behind this connectedness measure is to favor sense covers that have strong intra-relations. This measure also e ectively takes into account a one-level transitivity i n d ij . As an example, the connectedness of (2 3 4) is the summation of c 23 c 34 c 24 d 23 d 34 and d 24 . Here, c 23 = 4 because sense 2 and 3 cooccur in four sense covers, and c 34 = c 24 = 1 where either or both c ik and c kj are zero), and similarly d 34 = :5 and d 24 = :5. Thus, C N (234) = 4 + 1 + 1 + :429 + :5 + :5 = 7 :429.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Automatically Extracted Cluster Pairs", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Category Basic Underspeci ed Cluster classes classes pairs Nouns 24 99 2,377 Verbs 10 59 1,710 Total 34 158 4,077</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"text": "Extracted Relations for \\table\"", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Sense Cover Cluster Pair (1 4) ARRANGEMENT, NAT OBJ] (1 5) ARRANGEMENT, SOC GROUP] (2 3) FURNITURE] (2 3 4) FURNITURE, NAT OBJ] (2 3 5) FURNITURE, SOC GROUP] (2 3 6) FURNITURE, FOOD] (4 5) NAT OBJ, SOC GROUP]</td><td>C N 1.143 1.143 4.429 7.429 7.714 7.429 1.429</td></tr><tr><td>(5 6)</td><td>SOC GROUP, FOOD]</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Examples of Automatically Extracted Systematic Polysemy", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Underspeci ed Class</td><td>Cluster Pair</td><td>Common Words</td></tr><tr><td>ACTION-LOCATION</td><td>ACTION, POINT]</td><td/></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">: WordNet vs. the New Lexicon</td></tr><tr><td>Category Nouns Verbs Total</td><td>Monosemous Polysemous Total words Ave # senses Monosemous Polysemous Total words Ave # senses Monosemous Polysemous Total words</td><td>WordNet 82,892 88,977 New 12,243 6,158 95,135 95,135 2.73 2.52 5,758 7,987 4,568 2,339 10,326 10,326 3.57 2.82 88,650 96,964 16,811 8,497 105,461 105,461</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>: Agreement b e t ween Semcor and DSO</td></tr><tr><td>Category Agree Disagree Total Ave. Nouns 6,528 5,815 12,343 .268 Verbs 7,408 9,021 16,429 .260 Total 13,936 14,836 28,772 .264 (%) (48.4) (51.6) (100.0)</td></tr><tr><td>can see, the agreement i s n o t v ery high: only around 48%. 11</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"text": "Confusion Matrix for the noun \\table\" ( = :611)", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Judge 2</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>: Reduced Matrix for \\table\" ( = :699)</td></tr><tr><td>1,4 2,3,5 6 Total 1,4 44 0 0 44 2,3,5 6 20 0 26 6 2 3 0 5 Total 52 23 0 75</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"text": "Our Lexicon vs. Random Partitions", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Category Total Our Lexicon Random Ave. Ave. Nouns 10,980 .247 .217 Verbs 14,392 .283 .262 Total 25,372 .260 .233</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |