{ "paper_id": "J06-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:01:15.512369Z" }, "title": "Automatic Discovery of Part-Whole Relations", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "postCode": "61801", "settlement": "Urbana", "region": "IL" } }, "email": "girju@uiuc.edu" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Language Computer Corporation An important problem in knowledge discovery from text is the automatic extraction of semantic relations. This paper presents a supervised, semantically intensive, domain independent approach for the automatic detection of part-whole relations in text. First an algorithm is described that identifies lexico-syntactic patterns that encode part-whole relations. A difficulty is that these patterns also encode other semantic relations, and a learning method is necessary to discriminate whether or not a pattern contains a part-whole relation. A large set of training examples have been annotated and fed into a specialized learning system that learns classification rules. The rules are learned through an iterative semantic specialization (ISS) method applied to noun phrase constituents. Classification rules have been generated this way for different patterns such as genitives, noun compounds, and noun phrases containing prepositional phrases to extract part-whole relations from them. The applicability of these rules has been tested on a test corpus obtaining an overall average precision of 80.95% and recall of 75.91%. The results demonstrate the importance of word sense disambiguation for this task. 
They also demonstrate that different lexico-syntactic patterns encode different semantic information and should be treated separately in the sense that different classification rules apply to different patterns.", "pdf_parse": { "paper_id": "J06-1005", "_pdf_hash": "", "abstract": [ { "text": "Language Computer Corporation An important problem in knowledge discovery from text is the automatic extraction of semantic relations. This paper presents a supervised, semantically intensive, domain-independent approach for the automatic detection of part-whole relations in text. First, an algorithm is described that identifies lexico-syntactic patterns that encode part-whole relations. A difficulty is that these patterns also encode other semantic relations, and a learning method is necessary to discriminate whether or not a pattern contains a part-whole relation. A large set of training examples has been annotated and fed into a specialized learning system that learns classification rules. The rules are learned through an iterative semantic specialization (ISS) method applied to noun phrase constituents. Classification rules have been generated this way for different patterns such as genitives, noun compounds, and noun phrases containing prepositional phrases to extract part-whole relations from them. The applicability of these rules has been tested on a test corpus, obtaining an overall average precision of 80.95% and recall of 75.91%. The results demonstrate the importance of word sense disambiguation for this task. They also demonstrate that different lexico-syntactic patterns encode different semantic information and should be treated separately in the sense that different classification rules apply to different patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The identification of semantic relations in text is at the core of Natural Language Processing and many of its applications. 
Detecting semantic relations between various text segments, such as phrases, sentences, and discourse spans, is important for automatic text understanding (Rosario, Hearst, and Fillmore 2002; Lapata 2002; Morris and Hirst 2004) . Furthermore, semantic relations represent the core elements in the organization of lexical semantic knowledge bases intended for inference purposes. Recently, there has been a renewed interest in text semantics as evidenced by the international participation in the Senseval 3 Semantic Roles competition, 1 the associated workshops, 2 and numerous other workshops.", "cite_spans": [ { "start": 280, "end": 316, "text": "(Rosario, Hearst, and Fillmore 2002;", "ref_id": "BIBREF33" }, { "start": 317, "end": 329, "text": "Lapata 2002;", "ref_id": "BIBREF18" }, { "start": 330, "end": 352, "text": "Morris and Hirst 2004)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "An important semantic relation for many applications is the part-whole relation, or meronymy. Let us notate the part-whole relation as PART(X, Y) , where X is part of Y. For example, the compound nominal door knob contains the part-whole relation PART(knob, door) . Part-whole relations occur frequently in text and are expressed by a variety of lexical constructions as illustrated in the text below.", "cite_spans": [ { "start": 135, "end": 145, "text": "PART(X, Y)", "ref_id": null }, { "start": 247, "end": 263, "text": "PART(knob, door)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) The car's mail messenger is busy at work in the mail car as the train moves along. Through the open side door of the car, moving scenery can be seen. The worker is alarmed when he hears an unusual sound. He peeks through the door's keyhole leading to the tender and locomotive cab and sees the two bandits trying to break through the express car door. 
3 There are several part-whole relations in this text: 1) the mail car is part of the train, 2) the side door is part of the car, 3) the keyhole is part of the door, 4) the cab is part of the locomotive, 5) the tender is part of the train, 6) the locomotive is part of the train, 7) the door is part of the car, and 8) the car is part of the express train (in the compound noun express car door).", "cite_spans": [ { "start": 356, "end": 357, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper provides a supervised, knowledge-intensive method for the automatic detection of part-whole relations in English texts. Based on a set of positive (encoding meronymy) and negative (not encoding meronymy) training examples provided and annotated by us, the algorithm creates a decision tree and a set of rules that classify new data. The rules produce semantic conditions that the noun constituents matched by the patterns must satisfy in order to exhibit a part-whole relation. For the discovery of classification rules we used C4.5 decision tree learning (Quinlan 1993) . The learned function is represented by a decision tree transformed into a set of if-then rules. The decision tree learning searches a complete hypothesis space from simple to complex hypotheses until it finds a hypothesis consistent with the data. Its bias is a preference for the shorter tree that places high information gain attributes closer to the root.", "cite_spans": [ { "start": 567, "end": 581, "text": "(Quinlan 1993)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For training purposes we used WordNet, and the LA Times (TREC9) 4 and SemCor 1.7 5 text collections. 
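The C4.5-style bias just described (prefer shorter trees that test high information gain attributes near the root) can be made concrete with a small, self-contained sketch. The feature names and toy instances below are invented for illustration only; they are not the authors' actual feature set or training data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Entropy reduction obtained by splitting the examples on one attribute."""
    labels = [label for _, label in examples]
    n = len(examples)
    by_value = {}
    for features, label in examples:
        by_value.setdefault(features[attr], []).append(label)
    remainder = sum(len(subset) / n * entropy(subset) for subset in by_value.values())
    return entropy(labels) - remainder

# Invented toy instances: (features of a pattern match, does it encode part-whole?)
data = [
    ({"x_class": "artifact", "pattern": "genitive"}, True),
    ({"x_class": "artifact", "pattern": "compound"}, True),
    ({"x_class": "person",   "pattern": "genitive"}, False),
    ({"x_class": "person",   "pattern": "compound"}, False),
]

# x_class separates the classes perfectly (gain 1.0) while pattern carries no
# information here (gain 0.0), so a C4.5-style learner would test x_class first.
print(information_gain(data, "x_class"))  # 1.0
print(information_gain(data, "pattern"))  # 0.0
```

The same preference is what pushes the semantic class of the noun constituents, rather than the surface pattern itself, toward the root of the learned tree.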
From these we formed a large corpus of 27,963 negative examples and 29,134 positive examples of well-distributed subtypes of part-whole relationships, which provided a comprehensive set of classification rules. The rules were tested on two different text collections (LA Times and Wall Street Journal) obtaining an overall average precision of 80.95% and recall of 75.91%.", "cite_spans": [ { "start": 64, "end": 65, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper we do not distinguish between situations when whole objects consist of parts that are always present, or parts that are only sometimes present. For example, it might be relatively easy to pin down the parts of a car (e.g., four wheels, one engine, as ever present parts of a car irrespective of its type) as compared to enumerating all the components of a sandwich (e.g., two layers of cheese and/or salami, two slices of bread, that depend on the type of sandwich). In our experiments we focus only on part-whole instances that are mentioned in the corpus employed and on those provided by general-purpose lexical knowledge bases such as WordNet, 6 whether the parts are just sometimes constituents of the entity considered or are always present. We do not check for the validity of these instances (e.g., whether the instance \"wood is part of a sandwich\" is true or not). Based on a large training corpus of positive and negative part-whole examples, our system infers what type of objects are parts and wholes. Also, our system does not take into consideration modality information such as knowledge about the possibility, certainty, or probability of existence of part-whole relations.", "cite_spans": [ { "start": 661, "end": 662, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is organized as follows. 
Section 2 presents a summary of previous work on meronymy from several perspectives. Section 3 gives a detailed classification of the lexico-syntactic patterns used to express meronymy in English texts and a procedure for finding these patterns. Section 4 describes a method for learning semantic classification rules, while Section 5 shows the results obtained for discovering the part-whole relations by applying the classification rules on two distinct test corpora. Section 6 comments on the method's limitations and extensions, and Section 7 discusses the relevance of the task to NLP applications. Conclusions are offered in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Historically, part-whole or meronymy relations have played an important role in linguistics, philosophy, and psychology mainly because a clear understanding of part-whole relations requires a deep interaction of logic, semantics, and pragmatics as they provide the tools needed for our understanding of the world. The part-whole relation has been considered a fundamental ontological relation since the atomists (Plato, Aristotle, and the Scholastics). They were the first to give a systematic characterization of parts and wholes, the relation between them, and the inheritance properties of this relation. However, most of the investigations of part-whole relations have been made since the beginning of the 20th century.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The logical/philosophical studies of meronymy were concerned with formal theories of parts (mereologies), wholes, and their relation in the context of formal ontology. This school of thought advocates a single, universal, and transitive part-of relation used for modeling various domains such as time and space. 
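The transitivity just mentioned can be illustrated over the PART(X, Y) pairs drawn from example (1) in the Introduction. The helper below is our own sketch, not part of the paper's system; it computes the transitive closure of a set of part-of pairs (if A is part of B and B is part of C, then A is part of C) and makes no attempt to enforce the other axioms, such as asymmetry.

```python
def transitive_closure(pairs):
    """Repeatedly apply: if (a, b) and (b, c) hold, add (a, c), until fixpoint."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# PART(X, Y) pairs taken from example (1): keyhole-door, door-car, car-train, ...
parts = {("keyhole", "door"), ("door", "car"), ("car", "train"),
         ("cab", "locomotive"), ("locomotive", "train")}

closed = transitive_closure(parts)
print(("keyhole", "train") in closed)  # True: keyhole -> door -> car -> train
print(("cab", "train") in closed)      # True: cab -> locomotive -> train
```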
Simons (1986) criticized this standard extensional view and proposed a more adequate account that offers an axiomatic representation of the part-of relation as a strict partial-ordering relation. The axioms considered were: existence (if A is a part of B then both A and B exist), asymmetry (if A is a part of B then B is not a part of A), supplementarity (if A is a part of B then B has a part C disjoint of A), and transitivity (if A is a part of B and B is a part of C then A is a part of C). In 1991, Simons (1991) added two more axioms: extensionality (objects with the same parts are identical) and existence of mereological sum (for any number of objects there exists a whole that consists exactly of those objects).", "cite_spans": [ { "start": 312, "end": 325, "text": "Simons (1986)", "ref_id": null }, { "start": 817, "end": 830, "text": "Simons (1991)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "Linguistics and cognitive psychology researchers focused on different part-whole relations and their role as semantic primitives. Since there are different ways in which something can be expressed as part of something else, many researchers have claimed that meronymy is a complex relation that \"should be treated as a collection of relations, not as a single relation\" (Iris, Litowitz, and Evens 1988) .", "cite_spans": [ { "start": 370, "end": 402, "text": "(Iris, Litowitz, and Evens 1988)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." 
}, { "text": "Based on psycholinguistic experiments and the way in which the parts contribute to the structure of the wholes, Winston, Chaffin, and Hermann (1987) determined six types of part-whole relations: (1) COMPONENT-INTEGRAL OBJECT, (2) MEMBER-COLLECTION, (3) PORTION-MASS, (4) STUFF-OBJECT, (5) FEATURE-ACTIVITY, and (6) PLACE-AREA.", "cite_spans": [ { "start": 112, "end": 148, "text": "Winston, Chaffin, and Hermann (1987)", "ref_id": "BIBREF43" }, { "start": 199, "end": 225, "text": "COMPONENT-INTEGRAL OBJECT,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "They also proposed three relation elements ( functional, homeomerous, and separable) to further classify the six types of meronymy relations. The functional relational element indicates that the part has a function with respect to its whole, whereas homeomerous means that the part is identical to the other parts making up the whole. The separable relational element shows that the part can be separated from the whole. For example, the relation wheel-car is a COMPONENT-INTEGRAL part-whole relation that is functional, non-homeomerous and separable. This means that the wheel has a specific function with respect to the car, does not resemble the other parts of the car, and can be separated from the car.", "cite_spans": [ { "start": 43, "end": 84, "text": "( functional, homeomerous, and separable)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The COMPONENT-INTEGRAL relation is the relation between components and the objects to which they belong. Integral objects have a structure, their components are separable and have a functional relation with their wholes. 
For example, kitchen-apartment and aria-opera are COMPONENT-INTEGRAL relations.", "cite_spans": [ { "start": 4, "end": 22, "text": "COMPONENT-INTEGRAL", "ref_id": null }, { "start": 271, "end": 289, "text": "COMPONENT-INTEGRAL", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The MEMBER-COLLECTION relation represents membership in a collection. Members are parts, but they cannot be separated from their collections and do not play any functional role with respect to their whole. For example, soldier-army, professor-faculty, and tree-forest are MEMBER-COLLECTION relations. PORTION-MASS captures the relations between portions and masses, extensive objects, and physical dimensions. The parts are separable and similar to each other and to the wholes which they comprise, and do not play any functional role with respect to their whole. For example, slice-pie and meter-kilometer are PORTION-MASS relations.", "cite_spans": [ { "start": 4, "end": 21, "text": "MEMBER-COLLECTION", "ref_id": null }, { "start": 272, "end": 300, "text": "MEMBER-COLLECTION relations.", "ref_id": null }, { "start": 611, "end": 623, "text": "PORTION-MASS", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The STUFF-OBJECT category encodes the relations between an object and the stuff of which it is partly or entirely made. The parts are not similar to the wholes that they comprise, cannot be separated from the whole, and have no functional role. For example, steel-car and alcohol-wine are STUFF-OBJECT relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The FEATURE-ACTIVITY relation captures the semantic links within features or phases of various activities or processes. The parts have a functional role, but they are not similar or separable from the whole. 
For example, paying-shopping and chewing-eating are FEATURE-ACTIVITY relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "PLACE-AREA captures the relation between areas and special places and locations within them. The parts are similar to their wholes, but they are not separable from them. For example, oasis-desert and Guadalupe Mountains National Park-Texas are PLACE-AREA relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "In this paper we use the Winston, Chaffin, and Hermann classification as a criterion for building the training corpus to provide a wide coverage of such subtypes of part-whole relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "In computational linguistics, although a considerable amount of work has been done on semantic relation detection, 7 the work most similar to the task of identifying part-whole semantic relations is that of Hearst (1992) and Berland and Charniak (1999) .", "cite_spans": [ { "start": 207, "end": 220, "text": "Hearst (1992)", "ref_id": "BIBREF13" }, { "start": 225, "end": 252, "text": "Berland and Charniak (1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "Hearst developed a method for the automatic acquisition of hypernymy relations by identifying a set of frequently used and mostly unambiguous lexico-syntactic patterns. For example, countries, such as England indicates a hypernymy relation between the words countries and England. In her paper, she mentions that she tried applying the same method to meronymy, but without much success, as the patterns detected also expressed other semantic relations. 
This is consistent with our study of part-whole lexico-syntactic patterns presented in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "In 1999, Berland and Charniak applied statistical methods to a very large corpus 8 to find part-whole relations. Using Hearst's method, they focused on a small set of genitive patterns and a list of six seeds representing whole objects (book, building, car, hospital, plant, and school) . Their system's output was an ordered list of possible parts according to some statistical metrics (e.g., the log-likelihood metric (Dunning 1993) ). Although the training corpus used is very large, the coverage of the algorithm is small due to the limited number of patterns used and the small number of wholes allowed. Moreover, certain words, such as those ending in -ing, -ness, or -ity, were ruled out. Their accuracy is 55% for the first 50 ranked parts and 70% for the first 20 ranked parts. As a baseline, they considered as potential parts the head nouns immediately surrounding the target whole object and ranked them based on the same statistical metric. The baseline accuracy was 8%.", "cite_spans": [ { "start": 236, "end": 286, "text": "(book, building, car, hospital, plant, and school)", "ref_id": null }, { "start": 420, "end": 434, "text": "(Dunning 1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "While Berland and Charniak's method focuses solely on identifying parts given a whole, our task targets the identification of both parts and wholes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "Hearst, and Berland and Charniak observed that for ambiguous whole words, such as plant, the method produces the weakest part list of the six seeds considered. 
Although they don't provide a one-to-one comparison, Berland and Charniak mention that their method outperforms Hearst's pattern matching algorithm mainly due to the very large corpus used. However, neither approach addresses the pattern ambiguity problem, i.e., patterns such as genitives that can express different semantic relations in different contexts (the dress of silk encodes a part-whole relation, but the dress of my girl does not). The ambiguity of these patterns explains our rationale for choosing an approach based on a machine learning method to discover discriminating rules automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Meronymy", "sec_num": "2." }, { "text": "The automatic discovery of any semantic relation must start with a thorough understanding of the lexical and syntactic forms used to express that relation. Since there are many ways in which something can be part of something else, there is a variety of lexico-syntactic structures that can express a meronymy semantic relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "7 Besides the work on semantic roles (Charniak 2000; Gildea and Jurafsky 2002; Thompson, Levy, and Manning 2003) , considerable interest has been shown in the automatic interpretation of various noun phrase-level constructions, such as noun compounds. The focus here is to determine the semantic relations that link the two noun constituents. The best-performing noun compound interpretation systems have employed either symbolic (Finin 1980; Vanderwende 1994) or statistical techniques (Pustejovsky, Bergler, and Anick 1993; Lauer and Dras 1994; Lapata 2002 ) relying on rather ad hoc, domain-specific, hand-coded semantic taxonomies, or on statistical patterns in a large corpus of examples, respectively. 
8 The North American News Corpus (NANC) of 1 million words.", "cite_spans": [ { "start": 37, "end": 52, "text": "(Charniak 2000;", "ref_id": "BIBREF2" }, { "start": 53, "end": 78, "text": "Gildea and Jurafsky 2002;", "ref_id": "BIBREF9" }, { "start": 79, "end": 112, "text": "Thompson, Levy, and Manning 2003)", "ref_id": "BIBREF39" }, { "start": 430, "end": 442, "text": "(Finin 1980;", "ref_id": "BIBREF7" }, { "start": 443, "end": 460, "text": "Vanderwende 1994)", "ref_id": "BIBREF41" }, { "start": 487, "end": 525, "text": "(Pustejovsky, Bergler, and Anick 1993;", "ref_id": "BIBREF28" }, { "start": 526, "end": 546, "text": "Lauer and Dras 1994;", "ref_id": "BIBREF20" }, { "start": 547, "end": 558, "text": "Lapata 2002", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "There are unambiguous lexical expressions that always convey a part-whole relation. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "The substance consists of three ingredients.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." }, { "text": "The cloud was made of dust.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Syntactic Patterns that Express Meronymy", "sec_num": "3." 
}, { "text": "Iceland is a member of NATO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "In these cases the simple detection of the patterns leads to the discovery of part-whole relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "On the other hand, there are many ambiguous expressions that are explicit but convey part-whole relations only in some contexts. The detection of meronymy in these cases is based on extracting semantic features of constituents and checking whether or not these features match the classification rules. For example, The horn is part of the car is meronymic whereas He is part of the game is not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "In the case of meronymy, since there are numerous unambiguous and ambiguous patterns, we devised a method to find these patterns and rank them in the order of their frequency of use. Our intention is to detect the most frequently occurring patterns that express meronymy and provide an algorithm for their automatic detection and disambiguation in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "In order to identify lexico-syntactic forms that express part-whole relations and determine their distribution over a very large corpus, we used the following algorithm inspired by Hearst's (1998) work:", "cite_spans": [ { "start": 181, "end": 196, "text": "Hearst's (1998)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "An Algorithm for Finding Lexico-Syntactic Patterns", "sec_num": "3.1" }, { "text": "Step 1. Pick pairs of concepts C i , C j among which there is a part-whole relation For this task, we used the information provided by WordNet 1.7 (Fellbaum 1998) . 
In WordNet, the nouns are organized into nine hierarchies, each hierarchy being identified by its corresponding root concept: {abstraction}, {act}, {entity}, {event}, {group}, {phenomenon}, {possession}, {psychological feature}, and {state}. The nouns are grouped in concepts or synsets; a concept consisting of a list of synonymous word senses. For example, {mother#1, female parent#1} is a WordNet concept. Besides concepts, WordNet contains 11 semantic relations: HYPONYMY (IS-A), HYPERNYMY (REVERSE IS-A), MERONYMY (PART-WHOLE), HOLONYMY (REVERSE PART-WHOLE), ENTAIL, CAUSE-TO, ATTRIBUTE, PERTAINYMY, ANTONYMY, SYNSET (SYNONYMY), and SIMILARITY. The part-whole relations in WordNet are further classified into three basic types: MEMBER-OF (e.g., UK#1 IS-MEMBER-OF NATO#1), STUFF-OF (e.g., carbon#1 IS-STUFF-OF coal#1), and PART-OF (e.g., leg#3 IS-PART-OF table#2) which includes all the other part-whole relations described in the Winston, Chaffin and Hermann (WCH) classification.", "cite_spans": [ { "start": 147, "end": 162, "text": "(Fellbaum 1998)", "ref_id": "BIBREF6" }, { "start": 282, "end": 406, "text": "concept: {abstraction}, {act}, {entity}, {event}, {group}, {phenomenon}, {possession}, {psychological feature}, and {state}.", "ref_id": null }, { "start": 1100, "end": 1134, "text": "Winston, Chaffin and Hermann (WCH)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "An Algorithm for Finding Lexico-Syntactic Patterns", "sec_num": "3.1" }, { "text": "Since the part and whole concepts provided by WordNet can belong to almost any WordNet noun hierarchy, we randomly selected 100 pairs of part-whole concepts that were well distributed over all nine WordNet noun hierarchies, the three WordNet meronymic relations, and the six types of part-whole relations of WCH. Two annotators with computational linguistic knowledge classified the WordNet meronymic relations into the WCH's six part-whole types. 
According to our annotations, the MEMBER-OF WordNet relations correspond to Winston, Chaffin, and Hermann's MEMBER-COLLECTION relations, STUFF-OF relations correspond to WCH's STUFF-OBJECT relations, and the PART-OF relations correspond to the other four WCH relations. The annotators obtained 100% agreement in mapping MEMBER-OF to MEMBER-COLLECTION and STUFF-OF to STUFF-OBJECT. The PART-OF relations were mapped to the other four types of WCH relations with an average agreement of 98%. A third judge (one of the authors) checked the correctness of all the mappings and adjudicated the instances on which the annotators disagreed. This mapping ensures that the 100 general-purpose WordNet pairs cover most of the possible types of part-whole relations in text. Table 1 shows only 50 pairs from the set of 100 WordNet part-whole pairs and their distribution among the WordNet hierarchies and the part-whole types provided by WordNet and the WCH taxonomy. For example, the pair Bucharest#1-Romania#1 is a PART-OF relation in WordNet, but based on the Winston, Chaffin, and Hermann classification it can be further classified as a more specific meronymy relation, PLACE-AREA. For the purpose of this research, we lumped together all part-whole types in the classification of Winston et al. 9 However, the method presented in the paper is applicable to extracting subtypes of part-whole relations; separate annotations for each type would be necessary.", "cite_spans": [ { "start": 1581, "end": 1592, "text": "PLACE-AREA.", "ref_id": null }, { "start": 1692, "end": 1708, "text": "Winston et al. 9", "ref_id": null } ], "ref_spans": [ { "start": 1181, "end": 1188, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "An Algorithm for Finding Lexico-Syntactic Patterns", "sec_num": "3.1" }, { "text": "Step 2. 
Search a corpus and extract lexico-syntactic patterns that link a pair of part-whole concepts. For each pair of part-whole noun concepts determined above, search the Internet or any other large collection of documents and retain only the sentences containing that pair. Since our intention is to demonstrate that the automatic procedure proposed here is domain independent, we chose two distinct text collections: SemCor 1.7 and the LA Times from TREC-9. From each collection we randomly selected 10,000 sentences, which were searched for the pair of concepts selected. Since the LA Times collection is not word-sense disambiguated, we searched for sentences containing the pair of nouns without considering their senses. Out of these sentences, only some contained the part-whole pairs selected in Step 1. We manually inspected these sentences and picked only those in which the pairs involved meronymy. For example, the sentence I can feel my fingers and close my hand contains the meronymic pair finger-hand, but in this context the relationship is not expressed. From these sentences we manually extracted meronymic lexico-syntactic patterns. Table 2 shows for each collection the number of sentences used, the number of sentences that contain the studied concept pairs, the number of sentences that contain part-whole relations, and the number of unique patterns discovered from those sentences. Seven of the unique patterns occurred in both SemCor and the LA Times.", "cite_spans": [], "ref_spans": [ { "start": 1152, "end": 1159, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "An Algorithm for Finding Lexico-Syntactic Patterns", "sec_num": "3.1" }, { "text": "In order to extract the patterns from the SemCor collection we used its gold standard word sense annotations to our advantage and looked for the occurrences of concepts (word with the sense) in the corpus. This explains the large difference between the number of sentences discovered in the two corpora. 
The SemCor patterns thus extracted did not need manual validation, since the noun concept pairs were always in a part-whole relation.
A frequent such pattern is NP Y verb NP X , where NP X is the noun phrase that contains the part, NP Y is the noun phrase that contains the whole, and the verb is restricted (see Table 2 of Appendix A). For instance, the cars have doors is an instance of this pattern. An extension of this pattern is NP X verb NP Z PP Y , with NP Z containing the word part or member. An example is The engine is a part of the car, where NP X is the engine, PP Y is of the car, and the verb is to be.
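Two of these sentence-level patterns can be illustrated with deliberately minimal regular-expression matchers. These toy matchers are ours, not the paper's (which operates over parsed noun phrases rather than raw strings): one covers the restricted-verb pattern NP Y have NP X, the other the extension NP X is a part/member of NP Y.

```python
import re

# Toy matchers for two sentence-level patterns (illustrative only):
#   "the Y(s) have X(s)"            -> PART(X, Y)
#   "the X is a part/member of the Y" -> PART(X, Y)
HAVE_PAT = re.compile(r"^the (\w+?)s? have (\w+?)s?$", re.I)
PART_OF_PAT = re.compile(r"^the (\w+) is a (?:part|member) of the (\w+)$", re.I)

def match_part_whole(sentence):
    """Return (part, whole) if one of the two toy patterns matches."""
    text = sentence.strip().rstrip(".")
    m = HAVE_PAT.match(text)
    if m:
        whole, part = m.groups()   # the whole precedes the verb here
        return part, whole
    m = PART_OF_PAT.match(text)
    return m.groups() if m else None
```

For example, the cars have doors yields the pair (door, car), with the part and whole positions reversed relative to the surface order.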
Thus, for the purpose of this research we considered only the first three clusters of lexico-syntactic patterns expressing meronymy. This pattern classification criterion is justified, in part, by our desire to verify whether or not the automatic approach proposed here is generally applicable not only to the genitive cluster patterns (cluster 1) (Girju, Badulescu, and Moldovan 2003) , but also to more complex types, such as noun compounds (cluster 2) and prepositional constructions (cluster 3). Our intuition that the proposed patterns have different semantic behavior, and thus have to be treated separately in distinct clusters, is partially justified by a linguistic analysis summarized in Section 3.3 and supported by our empirical results from Section 5.3. In the remainder of the paper, we refer to these clusters as the genitive (cluster 1), noun compound (cluster 2), preposition (cluster 3), and other (cluster 4) clusters.
In the preposition cluster patterns, for the preposition in, the part is usually in the first position (e.g., door in the car), while for the preposition with, the positions are reversed (e.g., car with four doors).
Sentence-level patterns", "sec_num": null }, { "text": "From the list of lexico-syntactic patterns thus extracted, we noticed that some of these part-whole constructions always refer to meronymy, but most of them are ambiguous, Table 4 The patterns used by Berland and Charniak and the corresponding cluster patterns used by us.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Ambiguity of Part-Whole Lexico-Syntactic Patterns", "sec_num": "3.3" }, { "text": "Our cluster patterns Example", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Berland and Charniak patterns", "sec_num": null }, { "text": "NN whole 's NN part NP Y 's NP X girl's mouth NN part of (the|a) (JJ|NN) NN whole NP X of NP Y eyes of the baby NN part in (the|a) (JJ|NN) NN whole NP X PP Y ball in red box NN parts of NN wholes NP X of NP Y doors of cars NN parts in NN wholes NP X PP Y quotations in articles", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Berland and Charniak patterns", "sec_num": null }, { "text": "in the sense that they express a part-whole relation only in some particular contexts and only between specific pairs of nouns. For example, NP 1 is member of NP 2 always refers to meronymy, but this is not true for NP 1 has NP 2 . In most cases, the verb to have has the sense of to possess, and only in some particular contexts refers to meronymy. 
Table 5 presents a summary of some of the most frequent part-whole lexico-syntactic patterns we observed, classified based on their ambiguity.
Sometimes world knowledge or more contextual information is necessary to identify the correct semantic relation (e.g., Mary's novel might mean the novel written by Mary, read by Mary, or dreamed about by Mary).", "cite_spans": [ { "start": 206, "end": 235, "text": "(Moldovan and Badulescu 2005)", "ref_id": "BIBREF23" }, { "start": 291, "end": 301, "text": "PART-WHOLE", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Ambiguity of Genitive Constructions", "sec_num": null }, { "text": "(Mary's hand), POSSESSION (Mary's car), KINSHIP (Mary's sister), PROPERTY/ATTRIBUTE HOLDER (Mary's beauty), DEPICTION- DEPICTED (Mary's painting -if it depicts her), SOURCE-FROM (Mary's birth city), or", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Ambiguity of Genitive Constructions", "sec_num": null }, { "text": "According to WordNet 1.7, the verb to have in transitive constructions has 21 different senses, such as to possess, feature, need, get, undergo, be confronted with, accept, suffer from, and many others. Although the senses enumerated in WordNet represent a rather disparate set with no well defined semantic connection among them, the verb to have can participate in many different semantic structures and has been studied extensively in the linguistics community (Freeze 1992; Schafer 1995; Jensen and Vikner 1996) . The semantic relations encoded by the verb to have are quite similar to those realized by genitive constructions. Some researchers (Jensen and Vikner 1996) offered a detailed analysis for the purpose of capturing the most important semantic features of the verb to have. Their hypothesis is based on the idea that, semantically, the verb to have has a sense of its own derived from the semantic interpretation of the close context or the sentence in which it occurs. Let's consider the following sentences: (a) Kate has a sister (KINSHIP), (b) Kate has a cat (POSSESSION), and (c) Kate has green eyes (PART-WHOLE). 
The meaning of the verb to have in these situations is derived from the semantic information encoded in both the subject and the object.", "cite_spans": [ { "start": 464, "end": 477, "text": "(Freeze 1992;", "ref_id": "BIBREF8" }, { "start": 478, "end": 491, "text": "Schafer 1995;", "ref_id": "BIBREF34" }, { "start": 492, "end": 515, "text": "Jensen and Vikner 1996)", "ref_id": "BIBREF16" }, { "start": 649, "end": 673, "text": "(Jensen and Vikner 1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Ambiguity of the Verb To Have", "sec_num": null }, { "text": "Noun compounds (NCs) are noun sequences of the type N 1 N 2 .. N n that have a particular meaning as a whole. NCs have been studied intensively in linguistics (Levi 1978) , psycholinguistics (Downing 1977) , and computational linguistics (Sp\u00e4rck Jones 1983; Lauer and Dras 1994; Rosario and Hearst 2001) for a long time. The interpretation of NCs focuses on the detection and classification of a comprehensive set of semantic relations between the noun constituents. This task has proved to be very difficult due to the complex semantic aspect of noun compounds:", "cite_spans": [ { "start": 159, "end": 170, "text": "(Levi 1978)", "ref_id": "BIBREF21" }, { "start": 191, "end": 205, "text": "(Downing 1977)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Ambiguity of Noun Compounds", "sec_num": null }, { "text": "NCs have implicit semantic relations: for example, spoon handle (PART-WHOLE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "NCs' interpretation is knowledge intensive and can be idiosyncratic: For example, GM car (in order to correctly interpret this compound we have to know that GM is a car-producing company).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "3. 
There can be many possible semantic relations between a given pair of word constituents. For example, linen bag can mean bag made of linen (PART-WHOLE), as well as bag for linen (PURPOSE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Interpretation of NCs can be highly context-dependent. For example, apple juice seat can be defined as \"seat with apple juice on the table in front of it\" (Downing 1977) .", "cite_spans": [ { "start": 155, "end": 169, "text": "(Downing 1977)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "In English and various other natural languages, prepositions play a very important role both syntactically and semantically in the phrases, clauses, and sentences in which they occur. Semantically speaking, prepositional constructions can encode various semantic relations, their interpretations being provided most of the time by the underlying context. For instance, in the following examples the preposition with encodes different semantic relations: (a) It was the girl with blue eyes (MERONYMY), (b) The baby with the red ribbon is cute (POSSESSION), and (c) The woman with triplets received a lot of attention (KINSHIP). The variety and ambiguity of these constructions show the complexity and importance of our task. We have seen that the interpretation of these constructions depends heavily on the meaning of the two noun constituents. To get the meaning of the nouns we rely on a word sense disambiguation system that takes into consideration surrounding contexts of the words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Ambiguity of Prepositional Constructions", "sec_num": null }, { "text": "In this section we propose a method for the automatic discovery of rules that discriminate whether or not a selected pattern instance is meronymic. 
First, a corpus is prepared and patterns from clusters C1-C3 are identified. The approach relies on the assumption that the semantic relation between two noun constituents representing the part and the whole can be detected based on the nouns' semantic features.
For each set of unambiguous positive and negative examples at each level in the downward descent, we apply Quinlan's C4.5 algorithm and learn classification rules of the form if X is/is not of a WordNet semantic class A and Y is/is not of WordNet semantic class B, then the instance is/is not a part-whole relation.", "cite_spans": [ { "start": 301, "end": 316, "text": "(Fellbaum 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "Since our discovery procedure is based on the semantic information provided by Word-Net, we need to preprocess the noun phrases (NPs) extracted by the three clusters considered and identify the potential part and the whole concepts. For each NP we keep only the largest sequence of words (from left to right) defined in WordNet. For example, from the noun phrase brown carving knife the procedure retains only carving knife, since this concept is defined in WordNet. For each such sequence of words, we manually annotate it with its WordNet sense in context. For the example above we annotated the noun phrase with sense #1 (carving knife#1), since in that context it had sense #1 in WordNet (for this concept WordNet lists only one sense, defined as \"a large knife used to carve cooked meat\"). Table 6 shows a few examples of patterns from different clusters and the results of this preprocessing step.", "cite_spans": [], "ref_spans": [ { "start": 795, "end": 802, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Preprocessing Part-Whole Lexico-Syntactic Patterns", "sec_num": "4.2" }, { "text": "In order to learn the classification rules, we used the SemCor 1.7 and TREC 9 text collections, and the part-whole information provided by WordNet. From the SemCor collection we selected 19,000 sentences. Another 100,000 sentences were randomly extracted from the LA Times articles of TREC 9. 
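The split of generalized examples into unambiguous and ambiguous sets at each iteration can be sketched as follows. This is a toy version of ours: examples are triples of the two WordNet top classes and the target, and in the real system the unambiguous sets are handed to C4.5 rather than emitted directly.

```python
from collections import defaultdict

def split_examples(examples):
    """Group generalized examples by (X class, Y class). Groups whose
    targets disagree are ambiguous and must be specialized further;
    pure groups are fed to the rule learner (C4.5 in the paper)."""
    groups = defaultdict(set)
    for x_cls, y_cls, target in examples:
        groups[(x_cls, y_cls)].add(target)
    unambiguous = {k: next(iter(t)) for k, t in groups.items() if len(t) == 1}
    ambiguous = [k for k, t in groups.items() if len(t) > 1]
    return unambiguous, ambiguous

# toy generalized examples: a positive (e.g., hand#1, woman#1) and a
# negative (e.g., apartment#1, woman#1) collide at the top level
examples = [
    ("entity#1", "entity#1", True),
    ("entity#1", "entity#1", False),
    ("abstraction#6", "entity#1", False),
]
unambiguous, ambiguous = split_examples(examples)
```

Here the entity#1-entity#1 group is ambiguous and would be specialized with next-level WordNet concepts, while the abstraction#6-entity#1 group is unambiguous.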
As SemCor 1.7 is already annotated with part-of-speech tags and WordNet senses, we part-of-speech tagged only the LA Times collection, using Brill's tagger (1995) . A corpus "A" was thus created from the selected sentences of each text collection. Each sentence in this corpus was then parsed using the syntactic parser developed by Charniak (2000) . Focusing only on sentences containing the lexico-syntactic patterns in each cluster C1-C3, we manually annotated the nouns in the patterns with their corresponding WordNet senses (with the exception of those from SemCor), as shown in Section 4.2, and marked as positive all candidate instances that encoded a part-whole relation and as negative all others. In the corpus, 66% of the annotated instances were PART-OF relations, 14% STUFF-OF, and 20% MEMBER-OF.
Then, we retrieved all the sentences that contained the two words in at least one of the target patterns. As a result, we obtained sentences containing the pair of words linked by patterns such as door of car, car's door, car has door, car with four doors, car door, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Training Corpus", "sec_num": "4.3" }, { "text": "Overall, the 27,636 WordNet pairs were linked by the genitive cluster patterns, while the noun compound and preposition clusters extracted only some subsets of these pairs. Some part-whole pairs were linked by patterns that belong to more than one cluster. For instance, door knob is a pair that usually belongs to the noun compound cluster, but it can also be selected by the genitive cluster (e.g., knob of the door) and the preposition cluster (e.g., the door with the iron knob). Corpus \"B\" was used only to convince us that the part-whole pairs selected from WordNet were representative, ie., present in the patterns considered. Indeed corpus \"B\" pairs were found in at least one of the cluster patterns. While corpus \"A\" consists of positive and negative examples from LA Times and SemCor collections, corpus \"B\" contains only positive instances as they are WordNet part-whole pair concepts. Moreover, although corpus \"B\" has a different distribution than corpus \"A\", the noun pairs from WordNet are general-purpose and always encode a part-whole relation. Table 7 shows the statistics for the positive and negative training examples for each cluster. In the genitive cluster, for example, there were 18,936 such pattern instances, of which 325 encoded part-whole relations, while 18,611 did not. 
Thus, for the genitive cluster we used a training corpus of 27,961 positive examples (325 pairs of concepts in a part-whole relation extracted from corpus \"A\" and 27,636 extracted from WordNet as selected pairs) and 18,611 negative examples (the non-part-whole relations extracted from corpus \"A\").", "cite_spans": [], "ref_spans": [ { "start": 1063, "end": 1070, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Building the Training Corpus", "sec_num": "4.3" }, { "text": "The part-whole relation discovery procedure proposed in this paper was trained and tested on a large corpus of human annotated examples (a part of the LA Times collection for both training and testing, and a part of the Wall Street Journal (WSJ) collection for testing). The annotators, two researchers in computational semantics, decided whether an example pair encoded a part-whole relation or not. The examples were disambiguated in context: the annotators were given the pairs and the sentence in which they occurred. The two annotators' task was to determine the correct senses of the two noun constituents and then decide if the relation is meronymic or not. A third researcher decided on the non-agreed word senses and relations. The annotators were also provided with the list of subtypes of meronymy relations proposed by (Winston, Chaffin, and Hermann 1987) as a guideline for detecting part-whole relations. 
If an example contained one of the six meronymy subtypes, the annotators tagged that example as positive (part-whole); otherwise they tagged it as a negative example.", "cite_spans": [ { "start": 831, "end": 867, "text": "(Winston, Chaffin, and Hermann 1987)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "4.4" }, { "text": "The annotators' agreement was measured using the kappa statistic (Siegel and Castellan 1988) , one of the most frequently used measures of inter-annotator agreement for classification tasks:", "cite_spans": [ { "start": 65, "end": 92, "text": "(Siegel and Castellan 1988)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K = Pr(A) \u2212 Pr(E) 1 \u2212 Pr(E) ,", "eq_num": "( 1 )" } ], "section": "Inter-Annotator Agreement", "sec_num": "4.4" }, { "text": "where Pr(A) is the proportion of times the raters agree and Pr(E) is the probability of agreement by chance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "4.4" }, { "text": "Training corpora statistics for each of the three clusters considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 7", "sec_num": null }, { "text": "Cluster from WordNet as from Corpus \"A\" from Corpus \"A\" evidenced by corpus \"B\" The K coefficient is 1 if there is total agreement among the annotators, and 0 if there is no agreement other than that expected to occur by chance. This coefficient measures how well annotators agree at identifying both positive and negative instances of meronymic relations. 
Table 8 shows the inter-annotator agreement on the part-whole classification task for each of the three clusters considered in both training and test phases of the part-whole relation discovery procedure.
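The kappa coefficient in equation (1) can be computed directly from two annotators' label sequences. A minimal two-rater sketch (Cohen's variant, with chance agreement estimated from each rater's marginal label frequencies; the function and data are illustrative):

```python
def kappa(labels_a, labels_b):
    """Kappa statistic: K = (Pr(A) - Pr(E)) / (1 - Pr(E)), where Pr(A)
    is the observed agreement and Pr(E) the chance agreement derived
    from the two raters' marginal label frequencies."""
    n = len(labels_a)
    pr_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pr_e = sum((labels_a.count(lab) / n) * (labels_b.count(lab) / n)
               for lab in set(labels_a) | set(labels_b))
    return (pr_a - pr_e) / (1 - pr_e)

# two annotators labeling four candidate part-whole instances
a = ["yes", "yes", "yes", "no"]
b = ["yes", "yes", "no", "no"]
```

With these toy labels, Pr(A) = 0.75 and Pr(E) = 0.5, giving K = 0.5; total agreement yields K = 1, as noted above.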
As will be shown in Section 5.3, the algorithm is applied separately to each of the three clusters considered for optimal results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Semantic Specialization (ISS) Learning", "sec_num": "4.5" }, { "text": "Input: Positive and negative meronymic examples of pairs of concepts. The concepts are WordNet words semantically disambiguated in context (tagged with their corresponding WordNet senses).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Semantic Specialization (ISS) Learning Algorithm", "sec_num": null }, { "text": "Output: Classification rules in the form of semantic selectional restrictions on the modifier and head concepts using WordNet IS-A hierarchy information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Iterative Semantic Specialization (ISS) Learning Algorithm", "sec_num": null }, { "text": "The inter-annotator agreement on the part-whole classification task for each of the three clusters considered in both training and test phases of the part-whole relation discovery procedure. Step 1. Generalizing the training examples Initially, the training corpus consists of examples that have the format part#sense; whole#sense; target , where target can be either Yes or No, depending whether the relation between the part and whole is meronymy or not: for example, aria#1, opera#1, Yes . From this initial set of examples an intermediate corpus was created by expanding each example using the following format: part#sense, class part#sense, whole#sense, class whole#sense; target , where class part and class whole correspond to the WordNet top semantic classes of the part and whole concepts, respectively. For instance, the previous example becomes aria#1, entity#1, opera#1, abstraction#6, Yes . From this intermediate corpus a generalized set of training examples is built, retaining only the semantic classes and the target value. 
At this point, the generalized training corpus contains three types of examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 8", "sec_num": null }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "Positive examples X hierarchy#sense, Y hierarchy#sense, Yes The third situation occurs when the training corpus contains both positive and negative examples for the same hierarchy types. For example, both the relationships apartment#1, woman#1, No and hand#1, woman#1, Yes are mapped into the more general type entity#1, entity#1, Yes/No . However, the first example is negative (a POS-SESSION relation), while the second one is a positive example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "Step 2. Learning classification rules for unambiguous examples For the unambiguous examples in the generalized training corpus (those that are either positive or negative), rules are determined using C4.5. In this context, the features are the components of the relation (the part and, respectively the whole) and the values of the features are the corresponding WordNet semantic classes (the furthest ancestor in WordNet of the corresponding concept).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "With the first two types of examples, the unambiguous ones, a new training corpus was created on which we applied C4.5 using a 10-fold cross validation. The corpus is split in ten permutations, 9/10 training and 1/10 testing, and the output is represented by 10 sets of rules and default values generated from these unambiguous examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "The rules obtained are if-then rules with the part and whole noun semantic senses as preconditions. 
The default value is the most probable target value and is used to classify unseen instances of that type when no other rule applies. It can be either Yes or No, corresponding to the possible values of the target attribute (part-whole relation or not).
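The rule-plus-default behavior can be sketched as follows. This is a toy illustration of ours: preconditions are shown as exact class pairs, whereas the learned rules also admit negated ("is not of class") tests, and the rule shown is hypothetical.

```python
def classify(x_class, y_class, rules, default):
    """Apply the learned if-then rules in order; fall back to the
    default target value when no precondition matches."""
    for (rule_x, rule_y), target in rules:
        if (rule_x, rule_y) == (x_class, y_class):
            return target
    return default

# hypothetical learned rule with default target "No"
rules = [(("substance#1", "object#1"), "Yes")]
```

Because rules that agree with the default are discarded, the retained rule set only has to encode the exceptions to the default classification.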
Specializing ambiguous examples. Since C4.5 cannot be applied to ambiguous examples, we recursively specialize them to eliminate the ambiguity. The specialization procedure is based on the IS-A information provided by WordNet. Initially, each semantic class represented the root of one of the noun hierarchies in WordNet. By specialization, the semantic class is replaced with the corresponding hyponym for that particular sense, i.e., the concept immediately below in the hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "For this task, we again considered the intermediate training corpus of examples. For instance, the examples leg#2, entity#1, bee#1, entity#1, Yes and beehive#1, entity#1, bee#1, entity#1, No that caused the ambiguity entity#1, entity#1, Yes/No , were replaced with leg#2, thing#12, bee#1, organism#1, Yes and beehive#1, object#1, bee#1, organism#1, No , respectively. This intermediate example is thus generalized into the less ambiguous examples thing#12, organism#1, Yes and object#1, organism#1, No . This way, we specialize the ambiguous examples with more specific values for the attributes. The specialization process for this particular example is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 662, "end": 670, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "Although this specialization procedure eliminates a proportion of the ambiguous examples, there is no guarantee it will work for all the ambiguous examples of this type. This is because the specialization splits the initial hierarchy into smaller distinct subhierarchies, with the examples distributed over this new set of subhierarchies. 
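The generalization and specialization operations can be sketched as movements along a concept's IS-A path from the WordNet root; the paths below are abbreviated and illustrative, not the full WordNet chains.

```python
# Each concept is represented by its IS-A path from the WordNet root down
# to the sense itself (paths abbreviated and illustrative).
ISA_PATH = {
    "leg#2":     ["entity#1", "thing#12", "part#7", "leg#2"],
    "beehive#1": ["entity#1", "object#1", "beehive#1"],
    "bee#1":     ["entity#1", "organism#1", "bee#1"],
}


def generalize(concept, level=0):
    """Replace a concept by its ancestor `level` steps below the root;
    level=0 is the top semantic class used in Step 1 of ISS, and each
    specialization iteration increases `level` by one."""
    path = ISA_PATH[concept]
    return path[min(level, len(path) - 1)]


# Step 1 maps both examples to the ambiguous pair (entity#1, entity#1):
assert generalize("leg#2") == generalize("beehive#1") == "entity#1"
# One specialization step separates them, as in Figure 1:
assert generalize("leg#2", 1) == "thing#12"
assert generalize("beehive#1", 1) == "object#1"
```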
For the examples described above, the procedure eliminates the ambiguity through specialization of the semantic classes into new ones: thing#12-organism#1 and object#1-organism#1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "However, not all the examples can be disambiguated after only one specialization. For the examples leg#2, bee#1, Yes and world#7, bee#1, No , the procedure generalizes them into the ambiguous example entity#1, entity#1, Yes/No and then specializes it into the ambiguous example part#7, organism#1, Yes/No . After one specialization the ambiguity still remains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "Steps 2 and 3 are repeated until there are no more ambiguous examples. The general architecture of this procedure is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "The list of rules for the iteration generated by the unambiguous subset of the ambiguous example abstraction#6, abstraction#6, Yes/No . 'Yes' means part-whole relation, while 'No' means non-part-whole relation. The global default target value of this unambiguous node is No. Note that rules 3 and 4 are discarded as their frequency is below 7, and rules 1 and 5 were also discarded as incorporated in the default class No. 
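The rule-filtering policy applied here (discard rules that share the default value, occur in fewer than 7 of the 10 rule sets, or fall below 50% average accuracy) can be sketched as follows; the frequency and accuracy figures are illustrative placeholders, not the actual Table 9 values.

```python
def filter_rules(rules, default, min_freq=7, min_acc=0.5):
    """Keep only rules that occur in at least `min_freq` of the 10 rule sets,
    have average accuracy >= `min_acc`, and do not share the default value
    (those are subsumed by the default rule)."""
    return [r for r in rules
            if r["freq"] >= min_freq
            and r["acc"] >= min_acc
            and r["value"] != default]


# Five rules mirroring the Table 9 situation (numbers illustrative):
# rules 1 and 5 share the default, rules 3 and 4 miss the frequency
# threshold, so only rule 2 survives.
rules = [
    {"no": 1, "value": "No",  "freq": 10, "acc": 0.9},
    {"no": 2, "value": "Yes", "freq": 8,  "acc": 0.7},
    {"no": 3, "value": "Yes", "freq": 5,  "acc": 0.6},
    {"no": 4, "value": "Yes", "freq": 3,  "acc": 0.8},
    {"no": 5, "value": "No",  "freq": 9,  "acc": 0.8},
]
assert [r["no"] for r in filter_rules(rules, default="No")] == [2]
```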
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 9", "sec_num": null }, { "text": "The specialization of examples leg#2, entity#1, bee#1, entity#1, Yes , beehive#1, entity#1, bee#1, entity#1, No , and world#7, entity#1, bee#1, entity#1, No with the corresponding WordNet semantic classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "We observed that after the first generalization, 99.72% of the examples were ambiguous. After each specialization, the percentage decreases. For instance, after one level of specialization, 97.36% of the examples for entity#1-entity#1, 96.05% for abstraction#6abstraction#6, and 97.56% for entity#1-group#1 were ambiguous. Table 10 presents a sample of the iterations produced by the program to specialize the genitive cluster ambiguous example abstraction#6-abstraction#6. Each indentation corresponds to a specialization iteration.", "cite_spans": [], "ref_spans": [ { "start": 323, "end": 331, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "The training corpus considered for this research required on average 2.5 and at most five levels of specialization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "The next section describes the construction of classification rules, the experiments, and the results obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Diagram of the ISS system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "A sample iteration produced by the ISS procedure for the genitive cluster abstraction#6-abstraction#6 ambiguous example. The italicized examples are unambiguous. 
abstraction#6-abstraction#6 attribute#2-attribute#2 attribute#2-measure#3 attribute#2-relation#1 relation#1-attribute#2 measure#3-measure#3 measure#3-relation#1 time#5-time#5 relation#1-time#5 relation# ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 10", "sec_num": null }, { "text": "Part-Whole Relations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formulating Classification Rules and Applying them to Discover", "sec_num": "5." }, { "text": "The ISS learning procedure presented in the previous section builds a learning tree by recursively splitting the training corpus into unambiguous and ambiguous examples, based on the semantic information provided by the WordNet noun hierarchies. The learning tree is built top-down, one level at a time, each level corresponding to a specialization iteration. The internal nodes represent ambiguous examples at various levels of specialization, while the leaves contain sets of unambiguous examples. For instance, Figure 3 shows the learning tree corresponding to the specialization from Table 10 . Initially, the learning tree contains only a dummy root node that provides no information. After the generalization done in step 1 of the ISS learning procedure, all the initial examples are mapped into corresponding pairs of top noun semantic classes in WordNet and split into unambiguous and ambiguous sets based on their target function. 
All these new sets of examples form the first level of the learning tree.", "cite_spans": [], "ref_spans": [ { "start": 514, "end": 522, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 588, "end": 596, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Building the Learning Tree", "sec_num": "5.1" }, { "text": "The learning tree has two types of nodes: unambiguous nodes, corresponding to the sets of unambiguous examples from each iteration (e.g., nodes 1.1, 1.3.1, and 1.4.1 from Figure 3 ) and ambiguous nodes, corresponding to each ambiguous example from each iteration (e.g., nodes 1.2, 1.3, 1.4, and 1.4.2 from Figure 3) . Each node has associated with it a pair {R, D} representing a set of rules and a default value. The set of rules represents the rules to be used for classifying the new instances and the default value represents the target value (Yes if an instance is a part-whole relation and No if the instance is not a part-whole relation) that should be returned if none of the rules classify the new instances. A snapshot of the learning subtree abstraction#6-abstraction#6, on which the combination and propagation algorithm is exemplified, is shown in Figure 3 ; each node has an associated set of rules and a default value, and the rule number references are for the "No." column from Table 11 .", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 179, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 306, "end": 315, "text": "Figure 3)", "ref_id": "FIGREF2" }, { "start": 867, "end": 875, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Building the Learning Tree", "sec_num": "5.1" }, { "text": "After learning the classification rules in Step 2 of the ISS procedure, all the unambiguous nodes have default values and some have rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Learning Tree", "sec_num": "5.1" }, { "text": "In order to generate an overall set of classification rules, we traverse the learning tree in a bottom-up fashion, applying the rules generated at each level in this order. The rationale of this approach is that the rules closer to the bottom are more specific, and thus more accurate. At each level, the idea is to combine the rules associated with each sibling node and propagate the result to the parent. The combination and propagation steps are applied recursively until the root is reached. The combination phase guarantees that the rules to be combined are applied in a particular order at each level. Figure 4 shows a typical tree corresponding to one iteration of the ISS procedure on which we will explain the combination and propagation algorithm. Node L represents an internal node containing an ambiguous example. Through specialization, the learning procedure generated a set of unambiguous examples represented by the leaf L_U, and a sequence of n ambiguous examples represented by the internal nodes L_A1, L_A2, .., 
L_An.", "cite_spans": [], "ref_spans": [ { "start": 609, "end": 617, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Formulating the Classification Rules", "sec_num": "5.2" }, { "text": "The rules and default value learned for the genitive cluster for the abstraction#6-abstraction#6 ambiguous example. "Val." is the target value, "Acc." is the rules' accuracy, and "Fr." is their occurrence frequency. The numbering style used in the "No." column is intended to indicate rules at different specialization levels. The values associated with the ambiguous nodes (rules and default values) are generated through propagation from lower levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 11", "sec_num": null }, { "text": "Input: Pairs of rules and associated default values for each unambiguous and ambiguous node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "{R_U, D_U}, {R_A1, D_A1}, {R_A2, D_A2}, ..., {R_An, D_An};", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "Output: A pair of rules and default value for parent node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "{R_L, D_L}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "Step 1. 
Propagating the default value to the parent node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "D_L \u2190 D_U", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "The default value of the unambiguous examples (D_U) will be directly propagated to the parent as the global default value of the subtree L (D_L). For example, the default value for the unambiguous node 1.1 from Figure 3 is No and it will propagate to the parent node abstraction#6-abstraction#6 (node 1 in Figure 3 ).", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 221, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 308, "end": 316, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "If there is no unambiguous node L_U (and therefore no default value D_U), the default value for the first ambiguous example is propagated to L. For instance, for the ambiguous node 1.4.2 (social relation#1-social relation#1), there were no unambiguous examples; and therefore the default value from the node 1.4.2.1 (written communication#2-written communication#2) will be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule combination and propagation algorithm:", "sec_num": null }, { "text": "A part of the learning tree generated by the ISS learning procedure. The pairs of rules and default value associated with the parent node are generated through propagation of the combined pairs of rules and default values of the children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Step 2. 
Propagating the rules from an ambiguous node with the same default value to the parent node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "R_L \u2190 {R_Ai | D_Ai = D_L, 1 \u2264 i \u2264 n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The ambiguous nodes are the first to be tested. All the rules associated with the ambiguous nodes having the same default value as the global one are applied with the highest priority. For instance, all the ambiguous nodes for abstraction#6-abstraction#6 (nodes 1.2-1.8 in Figure 3 ) received a default value of No through propagation from their descendants. Since the default value for this node is No, it will receive all their rules (Rules 1 and 2 from Table 12 ).", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 281, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 456, "end": 464, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Step 3. Propagating the rules from an ambiguous node with the opposite default value to the parent node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "R_L \u2190 R_L \u222a {if A_j then R_Aj \u222a D_Aj | D_Aj \u2260 D_L, 1 \u2264 j \u2264 n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The remaining ambiguous nodes have associated with them a different default value (a non-default value). Since the two nodes have opposite default values, the default value (D_Aj) needs to be used when the rules for the child node (A_j) do not hold. 
Therefore, a new rule, specific to the example A_j, needs to be created for handling all the instances of A_j: if A_j then R_Aj \u222a D_Aj.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "For example, the ambiguous node 1.4.2.1.2 (written communication#1-written communication#1) has the default value No. Its only ambiguous node (node 1.4.2.1.2.2 : writing#2-writing#2) has the default value Yes. Therefore, a specific rule (Rule 2.1.1 from Table 12 ) needs to be created for the example (Part=writing#2 and Whole=writing#2). Step 4. Propagating the rules learned from an unambiguous node to the parent node:", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 262, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "R_L \u2190 R_L \u222a R_U", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Last, the rules learned from the unambiguous examples propagate to the parent node. They are applied last, since they are more general than the other rules. For example, after running C4.5 on the unambiguous set for the abstraction#6-abstraction#6 ambiguous example (Node 1 in Figure 3) , and eliminating the non-satisfactory rules (see Table 9 ), we obtained only Rule 3: if Part is time#5 and Whole is abstraction#6 then Yes and the default value No (see Table 11 ). The rule is propagated to the parent node abstraction#6-abstraction#6 and applied last.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 286, "text": "Figure 3)", "ref_id": "FIGREF2" }, { "start": 337, "end": 344, "text": "Table 9", "ref_id": null }, { "start": 457, "end": 465, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "In the end, the rules learned from the unambiguous examples are propagated to the parent node L. 
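For concreteness, one level of the four-step combination and propagation can be sketched in Python. This is a minimal sketch under assumed data shapes; the function name, rule representation, and the node names in the call are illustrative, not taken from the original system.

```python
def combine_and_propagate(unambiguous, ambiguous_children):
    """One level of the bottom-up rule combination and propagation.
    `unambiguous` is a (rules, default) pair for the leaf L_U, or None;
    `ambiguous_children` is a list of (example, rules, default) triples,
    one per ambiguous child node L_A1..L_An."""
    # Step 1: the parent's default comes from the unambiguous leaf,
    # or from the first ambiguous child if there is no such leaf.
    if unambiguous is not None:
        leaf_rules, default = unambiguous
    else:
        leaf_rules, default = [], ambiguous_children[0][2]
    rules = []
    # Step 2: rules of ambiguous children that share the default go first.
    for example, child_rules, child_default in ambiguous_children:
        if child_default == default:
            rules.extend(child_rules)
    # Step 3: children with the opposite default are wrapped in a guard
    # rule so that their own default fires when none of their rules do.
    for example, child_rules, child_default in ambiguous_children:
        if child_default != default:
            rules.append(("if-example", example, child_rules, child_default))
    # Step 4: the more general rules of the unambiguous leaf go last.
    rules.extend(leaf_rules)
    return rules, default


# Illustrative call loosely mirroring the abstraction#6-abstraction#6
# subtree (rule contents are placeholders, not the learned rules):
rules, default = combine_and_propagate(
    unambiguous=([("time#5", "abstraction#6", "Yes")], "No"),
    ambiguous_children=[
        ("attribute#2-attribute#2", [("rule-1",)], "No"),
        ("writing#2-writing#2", [("rule-2",)], "Yes"),
    ],
)
# The resulting rule order: same-default child rules, then the guarded
# opposite-default child, then the unambiguous leaf's rules.
```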
The procedure repeats until the top node of the tree is reached. After the combination and propagation procedure finishes, the root node contains the complete set of rules. The default value is added as a last rule, for classifying the instances that are not captured by the rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "A sample of the rules obtained using the ISS procedure for the genitive cluster is shown in Table 11 in the order in which they were applied and propagated to the abstraction#6-abstraction#6 node. Table 12 shows a translation of these rules into if-then-else rules.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 100, "text": "Table 11", "ref_id": "TABREF0" }, { "start": 197, "end": 205, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The meaning of a rule Part Class, Whole Class, Val. is: if Part is Part Class and Whole is Whole Class, then it is a part-whole relation (Val. = Yes) or not (Val. = No). For example, Rule 1 is: if Part is a linear measure#3 and Whole is a measure#3, then it is a part-whole relation.", "cite_spans": [ { "start": 153, "end": 164, "text": "(Val. = No)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "In this section we present the classification rules learned for each cluster using the ISS learning procedure. We also performed various experiments to study the similarities and differences among clusters, especially to determine whether or not the classification rules learned for a particular cluster can be applied with high accuracy to other clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Rules for Each Cluster", "sec_num": "5.3" }, { "text": "The most frequently used set of part-whole lexico-syntactic patterns is represented by the genitive cluster. 
Table 13 shows some of the classification rules learned for this cluster by the ISS learning procedure in the order provided by the combination and propagation algorithm. The full list of classification rules is shown in Tables 1 and 2 from Appendix B. The unambiguous set at level 1 of the learning tree did not generate any rules. The rule labeled Default in Table 13 shows the learning tree global default value (No). The tables of classification rules show only the frequency and accuracy of the rules generated at the unambiguous nodes.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 345, "text": "Tables 1 and 2", "ref_id": "TABREF0" }, { "start": 471, "end": 479, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "A. Experiments with the genitive cluster", "sec_num": null }, { "text": "A sample of the rules learned for the genitive cluster. The full list is provided in Table 1 , Appendix B. "Val." means target value (No or Yes), "Acc." is the rules' accuracy, and "Fr." is the frequency with which they occurred. The numbering style used in the "No." column is intended to indicate rules learned at different specialization levels.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Table 13", "sec_num": null }, { "text": "Overall, for the genitive cluster the ISS procedure obtained 27 complex sets of classification rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 13", "sec_num": null }, { "text": "Taking into consideration the results already obtained for the genitive cluster, there are three possible approaches for detecting part-whole relations using the Y X and X Y patterns: a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B. 
Experiments with the noun compound cluster", "sec_num": null }, { "text": "[C1] Use the classification rules obtained for the genitive cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B. Experiments with the noun compound cluster", "sec_num": null }, { "text": "b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B. Experiments with the noun compound cluster", "sec_num": null }, { "text": "[C1 + C2] Determine new classification rules collectively for the genitive and noun compound clusters (Y's X; X of Y; Y have X; and Y X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B. Experiments with the noun compound cluster", "sec_num": null }, { "text": "c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B. Experiments with the noun compound cluster", "sec_num": null }, { "text": "[C2] Determine classification rules only for the noun compound cluster (Y X; X Y). Table 14 shows the results obtained for the noun compound cluster using these three approaches. As one can observe, the best approach is to use only the classification rules generated by the noun compound cluster training examples. The recall increases significantly when new classification rules are learned for both the genitive and noun compound clusters, while the precision jumps considerably when the classification rules are learned only from the noun compound cluster examples. These statistics indicate that the genitive and noun compound clusters encode different semantic information, and consequently should be treated separately. Table 15 shows the classification rules learned only for the noun compound cluster.", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 91, "text": "Table 14", "ref_id": "TABREF0" }, { "start": 726, "end": 734, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "B. 
Experiments with the noun compound cluster", "sec_num": null }, { "text": "Taking into consideration the results obtained for the previous two clusters, there are five possible approaches for detecting part-whole relations using X prep Y and Y prep X patterns: a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "[C1] Use the classification rules obtained for the genitive cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "[C2] Use the classification rules obtained for the noun compound cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "[C1 + C3] Determine new classification rules for all the patterns in the genitive and preposition clusters (Y's X; X of Y; and Y have X; Y prep X; and X prep Y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. Experiments with the preposition cluster", "sec_num": null }, { "text": "The results obtained for each of the three approaches for the Y X; X Y patterns applied on the LA Times test corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 14", "sec_num": null }, { "text": "Genitives Genitives + Noun compounds Noun compounds (C1) (C1 + C2) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "The semantic classification rules learned for the noun compound cluster. 
\"Val.\" means target value (No or Yes), \"Acc.\" is the rules' accuracy, and \"Fr.\" is the frequency with which they occurred. No. Table 16 The results obtained for each of the five approaches for the Y prep X and X prep Y patterns in the preposition cluster applied on the LA Times test corpus. C1 refers to the genitive cluster, C2 to the noun compound cluster, and C3 to the preposition cluster. d.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 208, "text": "Table 16", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Table 15", "sec_num": null }, { "text": "[C2 + C3] Determine new classification rules for all the patterns in the noun compound and preposition clusters (Y X, Y prep X and X prep Y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1", "sec_num": null }, { "text": "e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1", "sec_num": null }, { "text": "[C3] Determine classification rules only for the preposition cluster patterns (Y prep X and X prep Y patterns).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1", "sec_num": null }, { "text": "f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1", "sec_num": null }, { "text": "[C1 + C2 + C3] Determine new classification rules for all the patterns in all three clusters (Y's X; X of Y; X Y; Y X, and Y have X; Y prep X and X prep Y). Table 16 shows the results obtained for the preposition cluster patterns in each of the five approaches used. One can observe that the preposition cluster alone provides the best results over all other combinations. These statistics are also consistent with the results obtained for the noun compound cluster experiments. The best approach is to use only the classification rules generated by the preposition cluster training examples. 
Table 17 shows the classification rules learned only for the preposition cluster patterns.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 165, "text": "Table 16", "ref_id": "TABREF0" }, { "start": 593, "end": 601, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "C1", "sec_num": null }, { "text": "In order to test the classification rules for the extraction of part-whole relations, we selected two different text collections: the LA Times news articles from TREC 9 and the Wall Street Journal (WSJ) articles from Treebank2. From each collection we randomly selected 10,000 sentences that formed two distinct test corpora. These corpora were parsed and disambiguated using a state-of-the-art domain independent Word Sense Disambiguation system that has an accuracy of 71% when disambiguating nouns in texts (Novischi et al. 2004) . In cases in which the noun constituents were not in WordNet, we used an in-house Named Entity Recognizer (NERD) that has a 96% F-measure on MUC6 data.", "cite_spans": [ { "start": 510, "end": 532, "text": "(Novischi et al. 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "The part-whole relations extracted by the ISS system were validated by comparing them with the gold standard for the test set obtained through inter-annotator agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "We define the precision, recall, and F-measure performance metrics in this context: The semantic classification rules learned for the preposition cluster. 
\"Val.\" means target value (No or Yes), \"Acc.\" is the rules' accuracy, and \"Fr.\" is the frequency with which they occurred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Precision = Number of correctly retrieved relations Number of relations retrieved", "eq_num": "(2)" } ], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "No. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F \u2212 measure = 2 1 Precision + 1 Recall", "eq_num": "(4)" } ], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "Tables 18 and 19 show the overall results obtained by the ISS system on the Wall Street Journal (WSJ) and on the LA Times collections of news articles, respectively. The results obtained for each cluster are summarized in Tables 1 and 2 in Appendix C. Overall, on the WSJ test set the system obtained 82.87% precision and 79.09% recall on these three clusters. Besides the 373 relations corresponding to the three clusters, 33 other meronymy relations (406 \u2212 373) were found in the corpus corresponding to partwhole lexico-syntactic patterns that were not studied in this paper, giving us a global part-whole relation coverage (recall) of 72.66%.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 251, "text": "Tables 1 and 2 in Appendix C.", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "The ISS system's results were compared to four baseline measures. 
Baseline1 shows the results obtained by the system with no word sense disambiguation (WSD), using only sense#1 (the most frequent sense in WordNet) for the pair of concepts. In Baseline2, the system considered WSD and applied the specialization algorithm, but ran C4.5 only once on all the unambiguous sets of specialized training examples representing all the leaves of the learning tree. Baseline3 shows the results obtained without generalizing the concepts; and Baseline4 shows the results obtained with automatic word sense disambiguation (WSD) on the training corpus as opposed to the manual word sense disambiguation used for ISS training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "From the baselines' results for both the WSJ and LA Times text collections, one can see the importance of the WSD and IS-A generalization/specialization features to the extraction of the part-whole relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Discovering Part-Whole Relations", "sec_num": "5.4" }, { "text": "The number of part-whole relations obtained and the accuracy in the WSJ collection. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 18", "sec_num": null }, { "text": "The number of part-whole relations obtained and the accuracy in the LA Times collection. Figure 5 shows the learning curve where the classifier is trained on an incrementally increasing number of training data instances. The learning curve was determined by applying the training rules obtained through specialization on the LA Times test corpus annotated with automatic WSD. While for 1,000 positive and 1,000 negative examples the F-measure is only 35%, for 5,000 it increases to 70%, for 10,000 to 74%, for 15,000 to 77%, and it stabilizes at 87% for 20,000 examples. 
The learning curve shows that the ISS system obtains an F-measure of about 75% with only 16.8% of the training data.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Table 19", "sec_num": null }, { "text": "In this section we compare our work with two other approaches most similar to our task of part-whole semantic relation detection. Berland and Charniak (1999) limit their approach to single words denoting some entities that have recognizable parts, such as car and building. As they also observe, this approach causes errors, such as the detection of conditioner is part of car instead of air conditioner is part of car. Our system is considerably more knowledge intensive, but more general in the sense that it relies on WordNet and NERD to detect both single word and multiple word concepts in context. Moreover, their system was tested only on a working list of predefined highly probable wholes for their corpus based on the genitive syntactic patterns. In contrast, the ISS system can disambiguate any pair of concepts, provided they are in WordNet or can be classified by NERD.", "cite_spans": [ { "start": 130, "end": 157, "text": "Berland and Charniak (1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Previous Work", "sec_num": "5.5" }, { "text": "In order to eliminate a part of the data ambiguities, Berland and Charniak apply an ad hoc filtering procedure to eliminate those instances that represent properties or qualities of objects, such as those ending in -ing, -ity, and -ness. Our procedure is general enough to treat both positive and negative example instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Previous Work", "sec_num": "5.5" }, { "text": "Using the genitive patterns they find parts of a predefined list of wholes from a large text collection. 
Our method, however, determines whether or not two noun concepts are in a part-whole relation. Generalizing their method to all the parts and wholes in our test corpus would cause its accuracy to fall. On the other hand, to test our system on their six whole concepts we would need thousands of positive and negative examples for each such word. For instance, for the word book, Berland and Charniak had almost 2,000 examples for the top 50 ranked parts. Unfortunately, in our LA Times test corpus we could not find more than ten parts for each of their proposed whole objects. Therefore, we are unable to replicate their work using our text collection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Previous Work", "sec_num": "5.5" }, { "text": "The learning curve for the number of learning examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The ISS algorithm is based on an iterative semantic specialization method that allows us to address in more depth the semantic complexity of the patterns considered. To the best of our knowledge, ISS is the only noun-phrase interpretation system that uses word sense disambiguation. One other noun compound interpretation system, SENS (Vanderwende 1995), used IS-A generalizations, but considered only the first sense of the noun constituents. The current state-of-the-art approaches to the automatic detection of semantic roles (Gildea and Jurafsky 2002) have tried to use lexico-semantic hierarchies, such as WordNet, to generalize from lexical noun features. However, they also rely on the first sense listed for each noun occurring in the training data. 
Our experiments indicate the importance of WSD in extracting part-whole semantic relations.", "cite_spans": [ { "start": 332, "end": 350, "text": "(Vanderwende 1995)", "ref_id": "BIBREF42" }, { "start": 523, "end": 549, "text": "(Gildea and Jurafsky 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The difficulty of detecting part-whole relations is due to a variety of factors ranging from syntactic analysis to semantic and pragmatic information. In this section we analyze the sources of errors occurring in our experiments and present some possible improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "To arrive at an interpretation of the pair of words selected by the cluster patterns, it is first necessary to verify that both words are nouns, and not other parts of speech. For example, if Brill's tagger mis-tags an adjective or verb as a noun, then the ISS system will also be affected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "Our classification rule learning approach is based on the WordNet semantic classes of the two concepts that represent the part and the whole, respectively. Thus, if the WSD system fails to annotate the concepts with the correct senses, the ISS system can generate wrong semantic classes, which leads to wrong conclusions. For example, the WordNet concept end has 14 senses corresponding to 6 semantic classes (entity, abstraction, event, psychological feature, state, and act) (see Table 20). However, not all the senses refer to a part-whole relation (e.g., senses 4, 6, 8, 9, 11, and 14 do not). Some senses corresponding to both positive and negative examples are mapped into the same semantic class (e.g., senses 7 and 8). 
In this case, the classification error will not affect the final result, as it is eliminated in the specialization phase. However, when a part-whole sense of end is mapped erroneously into a semantic class that is representative of negative examples, the error might propagate to the final classification rule.", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 490, "text": "Table 20", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "For some words, WordNet does not list all their senses. For example, the concepts import and export are not listed in WordNet as denoting the act of importing/exporting commodities from a foreign country. Thus, relations such as import of sweater and export of milk are misclassified. Similar examples are participant and beneficiary, for which WordNet lists only the senses corresponding to people and not to other entities, such as countries (e.g., a country can be one of the participants at a NATO meeting).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "When a noun is too specific to be found in WordNet, we rely on a named entity recognizer (NERD). NERD identifies people, organizations, and other information extraction categories and annotates them accordingly. However, NERD does not always provide the correct annotation. For example, in the phrase attorney of York, it identifies York as the name of a person, which leads the ISS system to a wrong semantic class. Moreover, the discourse context in which a pair of nouns occurs can favor, for instance, the PURPOSE interpretation (bag for cotton clothes) of the noun compound cotton bag over the PART-WHOLE interpretation (bag made of cotton). Encoding discourse knowledge is thus necessary. However, this is an open research problem and involves considerable manual annotation effort. Furthermore, our experiments focused on the detection of part-whole relations in compositional constructions. A more general approach would consider lexicalized instances as well. 
Pragmatic knowledge is particularly important for the interpretation of lexicalized constructions, such as soap opera. The meaning of lexicalized instances is usually captured by semantic lexicons and dictionaries.", "cite_spans": [ { "start": 443, "end": 453, "text": "PART-WHOLE", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "Finally, the approach presented here can be extended to other semantic relations encoded by the cluster patterns considered. The only part-whole-specific elements used in this algorithm were the patterns and the examples. Thus, the learning and validation procedures are generally applicable, and we intend to generalize the method to the detection of other semantic relations, such as KINSHIP and PURPOSE. So far, we have obtained encouraging results for a list of 35 general-purpose semantic relations encoded by genitives (Moldovan and Badulescu 2005), by noun compounds (Girju et al. 2005), and by different noun phrase-level patterns including genitives, noun compounds, and the preposition patterns (Moldovan et al. 2004).", "cite_spans": [ { "start": 519, "end": 548, "text": "(Moldovan and Badulescu 2005)", "ref_id": "BIBREF23" }, { "start": 569, "end": 588, "text": "(Girju et al. 2005)", "ref_id": "BIBREF12" }, { "start": 698, "end": 720, "text": "(Moldovan et al. 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "The drawback of the method presented here, as for other very precise learning methods, is that the number of training examples needs to be very large. If a certain class of negative or positive examples is not seen in the training data (and therefore is not captured by the classification rules), the system cannot classify its instances. 
Thus, the larger and more diverse the training data, the better the classification rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Extensions", "sec_num": "6." }, { "text": "The components of the AH-64A Apache Helicopter found on Web documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 22", "sec_num": null }, { "text": "Hellfire air-to-surface missile millimeter wave seeker 70mm Folding Fin Aerial rocket 30mm Cannon camera armaments General Electric 1700-GE engine 4-rail launchers four-bladed main rotor anti-tank laser guided missile Longbow millimetre wave fire control radar integrated radar frequency interferometer rotating turret tandem cockpit Kevlar seats", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AH-64A Apache Helicopter", "sec_num": null }, { "text": "Since part-whole semantic relations occur frequently in text and have been recognized as fundamental ontological relations since ancient times, their discovery is paramount for applications such as Question Answering, automatic ontology construction, textual inferencing, and others. For questions like What parts does General Electric manufacture?, What are the components of X?, What is Y made of?, and many more, the discovery of part-whole relations is necessary to assemble the right answer. The concepts and part-whole relations acquired from a collection of documents can be useful in answering difficult questions that normally cannot be handled based solely on keyword matching and proximity. As the level of difficulty increases, Question Answering systems need richer semantic resources, including ontologies and larger knowledge bases. Consider the question What does the AH-64A Apache helicopter consist of? For questions like this, the system must extract all the components the attack helicopter has. 
Unless an ontology of such army attack helicopter parts exists in the knowledge base, which in an open-domain setting is highly unlikely, the system must first acquire from the document collection all the pieces the helicopter is made of. These parts can be scattered all over the text collection, so the Question Answering system has to gather these partial answers into a single, concise hierarchy of parts. This technique is called answer fusion (Girju 2001).", "cite_spans": [ { "start": 1475, "end": 1487, "text": "(Girju 2001)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Importance to NLP Applications", "sec_num": "7." }, { "text": "Using a state-of-the-art Question Answering system (Moldovan et al. 2002) adapted for answer fusion and including the ISS system as a module, the question presented above was answered by searching the Internet (the Defence Industries - Army website at www.army-technology.com). The QA system started with the question focus helicopter and extracted and disambiguated all the meronymy relations using the ISS module. Table 22 shows the taxonomic ontology created for this question (presenting all the parts of a whole).", "cite_spans": [ { "start": 51, "end": 72, "text": "(Moldovan et al. 2002", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 422, "end": 430, "text": "Table 22", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Importance to NLP Applications", "sec_num": "7." }, { "text": "For example, the relation \"AH-64 Apache helicopter has part Hellfire air-to-surface missile\" was determined from the sentence AH-64 Apache helicopter has a Longbow millimetre wave fire control radar and a Hellfire air-to-surface missile. Only the heads of the noun phrases were considered, as they occur in WordNet (i.e., helicopter and air-to-surface missile, respectively).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Importance to NLP Applications", "sec_num": "7." 
}, { "text": "Ontologies 12 are increasingly used to boost the accuracy of natural language application systems (Moldovan and Girju 2001). Semantically richer ontologies can be built by incorporating more semantic relations in addition to the traditional IS-A relation. Part-whole is an excellent example of such a relation. Recently, Tatu and Moldovan (2005) have shown that semantic relations such as part-whole can be combined with other relations using a semantic calculus to improve the performance of a textual inference system.", "cite_spans": [ { "start": 108, "end": 133, "text": "(Moldovan and Girju 2001)", "ref_id": "BIBREF25" }, { "start": 331, "end": 355, "text": "Tatu and Moldovan (2005)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Importance to NLP Applications", "sec_num": "7." }, { "text": "In this paper we presented a supervised, knowledge-intensive approach to the automatic detection of part-whole relations encoded by the three most frequent clusters of syntactic constructions: (1) genitives and NP have NP clauses, (2) noun compounds, and (3) other NP PP phrases. The detection of part-whole relations is difficult due to the highly ambiguous nature of these syntactic constructions, as they can encode relations other than meronymy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Our method for the detection of part-whole relations semi-automatically discovers the part-whole lexico-syntactic patterns and automatically learns the semantic classification rules needed to disambiguate these patterns. We defined the task as a binary classification problem and used an approach that relies on the assumption that the semantic relation between two constituent nouns representing the part and the whole can be detected based on the components' semantic classification rules. 
The classification rules are learned automatically through an iterative semantic specialization (ISS) procedure applied to the noun constituents' semantic classes provided by WordNet. We successfully combined the results of decision tree learning with the WordNet IS-A hierarchy specialization for more accurate learning. We also showed that the method is domain independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "The classification rules learned by our method and listed in several tables can be easily implemented to extract part-whole relations from text. However, to apply these rules, a word sense disambiguation system for nouns is necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Our experiments revealed the importance of word sense disambiguation and WordNet IS-A specialization. We have directly compared and contrasted the results of our system with a variety of baselines and have shown impressive results. Combining word sense disambiguation with IS-A semantic information in WordNet yields better performance than either WSD or IS-A specialization alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Our experiments also showed that the three cluster patterns considered are not alternative ways of encoding part-whole information. This observation is very important for various text understanding applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Moreover, the approach presented can be extended to other semantic relations since the learning procedures are generally applicable and yield good results for sufficiently large training corpora. 
compose, compound, confine, constitute, dwell, embrace, encompass, fall in, form, get together, infiltrate, inhere, involve, join, let in, lie, lie in, make, make up, pertain, rejoin, repose, represent, reside, rest, sign up, take; verbs: carry, combine, comprehend, comprise, consist, contain, enclose, feature, have, hold, hold in, house, include, incorporate, inherit, integrate, receive, retain. Tables 1 and 2 show the full list of semantic classification rules learned for the genitive cluster from all the ambiguous nodes.", "cite_spans": [ { "start": 196, "end": 426, "text": "compose, compound, confine, constitute, dwell, embrace, encompass, fall in, form, get together, infiltrate, inhere, involve, join, let in, lie, lie in, make, make up, pertain, rejoin, repose, represent, reside, rest, sign up, take", "ref_id": null }, { "start": 427, "end": 594, "text": "verbs: carry, combine, comprehend, comprise, consist, contain , enclose, feature, have, hold, hold in, house, include, incorporate, inherit, integrate, receive, retain", "ref_id": null } ], "ref_spans": [ { "start": 595, "end": 609, "text": "Tables 1 and 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "The semantic classification rules learned for the genitive cluster from all ambiguous nodes (default value No). \"Val.\" is the target value, \"Acc.\" is the rules' accuracy, and \"Fr.\" is their occurrence frequency. The indentations in the \"No.\" column refer to rules at different specialization levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Table 2 The semantic classification rules learned for the genitive cluster from all the ambiguous nodes with default value Yes. \"Val.\" means target value (No or Yes), \"Acc.\" is the rules' accuracy, and \"Fr.\" is the frequency with which they occurred. 
The numbering style used in the \"No.\" column indicates rules learned at different specialization levels. Table 1 The number of part-whole relations obtained and the accuracy for each cluster and for all the clusters in the WSJ collection. Table 2 The number of part-whole relations obtained and the accuracy for each cluster and for all the clusters in the LA Times collection. ", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 378, "end": 385, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 512, "end": 519, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "http://www.senseval.org/senseval3. 2 The Computational Lexical Semantics Workshop at the 2004 Human Language Technology (HLT/NAACL) conference; the first and second Workshops on Text Meaning and Interpretation at the HLT/NAACL-03 and the 2004 Association for Computational Linguistics conference (ACL), respectively; the first and second Workshops on Multiword Expressions at ACL 2003 and 2004; and the ACL 2005 Workshop on Deep Lexical Acquisition. 3 This example is an excerpt from a review of the 1903 movie \"The Great Train Robbery\" (http://filmsite.org/grea.html). 4 TREC 9 is a text collection provided by NIST for the Question Answering competition (TREC-QA) at the Text REtrieval Conference in 2000. It contains 3 GBytes of news articles from the Wall Street Journal, Financial Times, LA Times, Financial Report, AP Newswire, San Jose Mercury News, and Foreign Broadcast Information Center from 1989 to 1994. 
5 The SemCor collection (Miller et al., 1993) is a subset of the Brown Corpus and consists of 352 news articles distributed into three sets in which the nouns, verbs, adverbs, and adjectives have been manually tagged with their corresponding WordNet senses and part-of-speech tags using Brill's tagger (1995).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, in WordNet 1.7 the only part listed for the concept sandwich is bread.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The WCH categories were also used by the annotators to better distinguish between positive and negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Treebank2 is a text collection developed at UPenn consisting of a million words of 1989 Wall Street Journal material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These sentences were introduced in (Lapata 2002).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Gartner Group identified Ontologies as one of the leading IT technologies, ranked 3rd in its list of the top 10 technologies forecast for 2005.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Matthew Jones for his help in providing the gold-standard annotations for the training and test corpora used in this research. We are grateful for the constructive comments made by Robert Dale and the anonymous reviewers, which helped considerably to clarify and improve the presentation. 
This work was partially supported by the Advanced Research and Development Activity / Disruptive Technology Office.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The semantic classes and part-whole status for all the senses of the concept end in WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 20", "sec_num": null }, { "text": "NERD identifies York as the name of a person and tags it with sense#1. However, York#1 is defined in WordNet as the House of York, the English royal house that reigned from 1461 to 1485. Consequently, the ISS system will consider York#1 a group instead of an entity, yielding an erroneous result. The WSD tool identifies noun compounds and annotates them with the corresponding WordNet sense. For instance, in the sentence \"... by/IN simply/RB/1 redesigning/ VBG/1 how/ WRB/1 a/ DT car door/ NN/1 is/ VBZ assembled/ VBN/1\" the system annotated the concept car door with its WordNet sense (sense 1). This way, the ISS system considers the two words a single concept and not a noun compound encoding a part-whole relation. The majority of noun compounds from the test corpus are names of people (e.g., Andrea West, Mr. Moore), dates (e.g., Oct 12, Monday afternoon), names of institutions (e.g., Bank of America, Planters Corp., Research Inc., Johnson & Johnson), or numbers (e.g., six days, five years). After analyzing the ambiguous pairs of nouns in noun compound instances, we noticed that only a few of them were positive examples. This error can easily be fixed by disabling the labeling of noun compounds with word senses. Another class of errors involves the position of the part and whole concepts. For example, the part-whole instance band#1 of people#1 is detected by the pattern NP X of NP Y, and the system erroneously classifies band as the part and people as the whole. 
One way to overcome this is to further classify the patterns based on selectional restrictions on their constituent nouns (e.g., group nouns in of-genitives have different positions for the part and whole concepts). We present in Table 21 the types of errors and their frequency of occurrence for each cluster and overall. Although our approach takes context into account through the use of word sense disambiguation, it does so in a limited way, without access to the general discourse and pragmatic context within which a pair of nouns is embedded. Various researchers (Sp\u00e4rck Jones 1983; Lascarides and Copestake 1998; Lapata 2002) showed that the interpretation of noun compounds, for example, may be influenced by discourse and pragmatic knowledge. For instance, the discourse context in which a noun compound occurs may favor one interpretation over another. Tables 1, 2, and 3 present a summary of phrase-level and sentence-level meronymic patterns and their possible extensions.", "cite_spans": [ { "start": 887, "end": 945, "text": "America, Planters Corp., Research Inc., Johnson & Johnson)", "ref_id": null }, { "start": 2025, "end": 2044, "text": "(Sp\u00e4rck Jones 1983;", "ref_id": "BIBREF37" }, { "start": 2045, "end": 2075, "text": "Lascarides and Copestake 1998;", "ref_id": "BIBREF19" }, { "start": 2076, "end": 2088, "text": "Lapata 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 1685, "end": 1693, "text": "Table 21", "ref_id": null }, { "start": 2270, "end": 2281, "text": "Tables 1, 2", "ref_id": null } ], "eq_spans": [], "section": "Sense Semantic Class", "sec_num": null }, { "text": "The phrase-level patterns determined with the pattern identification procedure in Section 3. 
\"Fr.\" means frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Tables 1 and 2 show the performance results obtained for each cluster considered on the LA Times and WSJ test corpora.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 14, "text": "Tables 1 and 2", "ref_id": null } ], "eq_spans": [], "section": "No. Pattern", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Finding parts in very large corpora", "authors": [ { "first": "Matthew", "middle": [], "last": "Berland", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999)", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berland, Matthew and Eugene Charniak. 1999. Finding parts in very large corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999), pages 57-64, University of Maryland.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "543--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. 
Computational Linguistics, 21(4):543-566.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2000)", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2000), pages 132-139, Seattle, WA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the creation and use of English compound nouns", "authors": [ { "first": "Pamela", "middle": [], "last": "Downing", "suffix": "" } ], "year": 1977, "venue": "Language", "volume": "53", "issue": "4", "pages": "810--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Downing, Pamela. 1977. On the creation and use of English compound nouns. Language, 53(4):810-842.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. 
Computational Linguistics, 19:61-74.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Lexical-semantic relations: A comparative survey", "authors": [ { "first": "Martha", "middle": [ "W" ], "last": "Evens", "suffix": "" }, { "first": "C", "middle": [], "last": "Bonnie", "suffix": "" }, { "first": "Judith", "middle": [ "A" ], "last": "Litowitz", "suffix": "" }, { "first": "Raoul", "middle": [ "N" ], "last": "Markowitz", "suffix": "" }, { "first": "Oswald", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Werner", "suffix": "" } ], "year": 1980, "venue": "Linguistic Research", "volume": "", "issue": "", "pages": "187--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evens, Martha W., Bonnie C. Litowitz, Judith A. Markowitz, Raoul N. Smith, and Oswald Werner. 1980. Lexical-semantic relations: A comparative survey. Linguistic Research, pages 187-219.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "WordNet-An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane. 1998. WordNet-An Electronic Lexical Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Semantic Interpretation of Compound Nominals", "authors": [ { "first": "Timothy", "middle": [ "W" ], "last": "Finin", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finin, Timothy W. 1980. The Semantic Interpretation of Compound Nominals. Ph.D. thesis, University of Illinois at Urbana-Champaign.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Existentials and other locatives. 
Language", "authors": [ { "first": "Ray", "middle": [], "last": "Freeze", "suffix": "" } ], "year": 1992, "venue": "", "volume": "68", "issue": "", "pages": "553--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Freeze, Ray. 1992. Existentials and other locatives. Language, 68:553-595.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gildea, Daniel and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3): 245-288.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Answer fusion with on-line ontology development", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2001) -Student Research Workshop", "volume": "", "issue": "", "pages": "23--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Girju, Roxana. 2001. Answer fusion with on-line ontology development. 
In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2001) - Student Research Workshop, pages 23-28, Pittsburgh, PA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning semantic constraints for the automatic discovery of part-whole relations", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 3rd Human Language Technology Conference/ 4th Meeting of the North American Chapter of the Association for Computational Linguistics Conference (HLT-NAACL 2003)", "volume": "", "issue": "", "pages": "80--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Girju, Roxana, Adriana Badulescu, and Dan Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of the 3rd Human Language Technology Conference/ 4th Meeting of the North American Chapter of the Association for Computational Linguistics Conference (HLT-NAACL 2003), pages 80-87, Edmonton, Canada.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "On the semantics of noun compounds", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Tatu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Antohe", "suffix": "" } ], "year": 2005, "venue": "Computer Speech and Language-Special Issue on Multiword Expressions", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Girju, Roxana, Dan Moldovan, Marta Tatu, and Daniel Antohe. 2005. On the semantics of noun compounds. 
Computer Speech and Language-Special Issue on Multiword Expressions (in press).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Acquisition of hyponyms from large text corpora", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING-92)", "volume": "", "issue": "", "pages": "539--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, Marti. 1992. Acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539-545, Nantes, France.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An Electronic Lexical Database and Some of Its Applications", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "131--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, Marti. 1998. Automated discovery of WordNet relations. In Christiane Fellbaum, editor, An Electronic Lexical Database and Some of Its Applications. MIT Press, Cambridge, MA, pages 131-151.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Problems with part-whole relation", "authors": [ { "first": "Madelyn", "middle": [], "last": "Iris", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Litowitz", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Evens", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris, Madelyn, Bonnie Litowitz, and Martha Evens. 1988. Problems with part-whole relation. In M. W. 
Evens, editor, Relational Models of the Lexicon: Representing Knowledge in Semantic Networks.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The double nature of the verb have", "authors": [ { "first": "Per", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Anker", "suffix": "" }, { "first": "", "middle": [], "last": "Vikner", "suffix": "" } ], "year": 1996, "venue": "LAMBDA", "volume": "21", "issue": "", "pages": "25--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jensen, Per Anker and Carl Vikner. 1996. The double nature of the verb have. LAMBDA, 21:25-37.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adding semantic annotation to the Penn Treebank", "authors": [ { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Mitch", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2nd Human Language Technology Conference (HLT 2002)", "volume": "", "issue": "", "pages": "252--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingsbury, Paul, Martha Palmer, and Mitch Marcus. 2002. Adding semantic annotation to the Penn Treebank. In Proceedings of the 2nd Human Language Technology Conference (HLT 2002), pages 252-256, San Diego, CA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The disambiguation of nominalisations", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "357--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lapata, Mirella. 2002. The disambiguation of nominalisations. 
Computational Linguistics, 28(3):357-388.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pragmatics and word meaning", "authors": [ { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 1998, "venue": "Journal of Linguistics", "volume": "34", "issue": "2", "pages": "387--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lascarides, Alex and Ann Copestake. 1998. Pragmatics and word meaning. Journal of Linguistics, 34(2):387-414.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A probabilistic model of compound nouns", "authors": [ { "first": "Mark", "middle": [], "last": "Lauer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dras", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 7th Australian Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "474--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lauer, Mark and Mark Dras. 1994. A probabilistic model of compound nouns. In Proceedings of the 7th Australian Joint Conference on Artificial Intelligence, pages 474-481, Armidale, Australia.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Syntax and Semantics of Complex Nominals", "authors": [ { "first": "Judith", "middle": [], "last": "Levi", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levi, Judith. 1978. The Syntax and Semantics of Complex Nominals. 
Academic Press, New York.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A semantic scattering model for the automatic interpretation of genitives", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005)", "volume": "", "issue": "", "pages": "891--898", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, Dan and Adriana Badulescu. 2005. A semantic scattering model for the automatic interpretation of genitives. 
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005), pages 891-898, Vancouver, BC, Canada.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Models for the semantic classification of noun phrases", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Tatu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Antohe", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Human Language Technology Conference (HLT-NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, Dan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. 2004. Models for the semantic classification of noun phrases. In Proceedings of the Human Language Technology Conference (HLT-NAACL) 2004, Computational Lexical Semantics Workshop, Boston, MA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An interactive tool for the rapid development of knowledge bases", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2001, "venue": "International Journal on Artificial Intelligence Tools", "volume": "10", "issue": "1-2", "pages": "65--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, Dan and Roxana Girju. 2001. An interactive tool for the rapid development of knowledge bases.
International Journal on Artificial Intelligence Tools, 10(1-2):65-86.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "LCC tools for question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Morarescu", "suffix": "" }, { "first": "Finley", "middle": [], "last": "Lacatusu", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Novischi", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Orest", "middle": [], "last": "Bolohan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 11th Meeting of the Text Retrieval Conference", "volume": "", "issue": "", "pages": "388--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, Dan, Sanda Harabagiu, Roxana Girju, Paul Morarescu, Finley Lacatusu, Adrian Novischi, Adriana Badulescu, and Orest Bolohan. 2002. LCC tools for question answering. In Proceedings of the 11th Meeting of the Text Retrieval Conference (TREC 2002), pages 388-397, Gaithersburg, MD.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Non-classical lexical semantic relations", "authors": [ { "first": "Jane", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th Human Language Technology Conference / of the 5th Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004) -Workshop on Computational Lexical Semantics", "volume": "", "issue": "", "pages": "46--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morris, Jane and Graeme Hirst. 2004. Non-classical lexical semantic relations. 
In Proceedings of the 4th Human Language Technology Conference / of the 5th Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004) -Workshop on Computational Lexical Semantics, pages 46-51, Boston, MA. Novischi, Adrian, Dan Moldovan, Paul Parker, Adriana Badulescu, and Bob Hauser. 2004. LCC's WSD systems for Senseval 3. In Proceedings of Senseval 3 (ACL 2004), Barcelona, Spain.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Lexical semantic techniques for corpus analysis", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Anick", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "331--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, James, Sabine Bergler, and Peter Anick. 1993. Lexical semantic techniques for corpus analysis. Computational Linguistics, 19(2): 331-358.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "Ross", "middle": [ "J" ], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, Ross. J. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco, CA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Selectional constraints: An information-theoretic model and its computational realization", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "61", "issue": "", "pages": "127--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1996. Selectional constraints: An information-theoretic model and its computational realization. 
Cognition, 61:127-159.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Structural ambiguity and conceptual relations", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Meeting of the Association for Computational Linguistics (ACL 1993)-1st Workshop on Very Large Corpora: Academic and Industrial Perspectives", "volume": "", "issue": "", "pages": "58--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip and Marti Hearst. 1993. Structural ambiguity and conceptual relations. In Proceedings of the 31st Meeting of the Association for Computational Linguistics (ACL 1993)- 1st Workshop on Very Large Corpora: Academic and Industrial Perspectives, pages 58-64, Ohio State University, Columbus, OH.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Classifying the semantic relations in noun compounds via a domain-specific lexical hierarchy", "authors": [ { "first": "Barbara", "middle": [], "last": "Rosario", "suffix": "" }, { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "82--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosario, Barbara and Marti Hearst. 2001. Classifying the semantic relations in noun compounds via a domain-specific lexical hierarchy. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2001), pages 82-90, Pittsburgh, PA.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The descent of hierarchy, and selection in relational semantics", "authors": [ { "first": "Barbara", "middle": [], "last": "Rosario", "suffix": "" }, { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Fillmore", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "247--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosario, Barbara, Marti Hearst, and Charles Fillmore. 2002. The descent of hierarchy, and selection in relational semantics. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 247-254, University of Pennsylvania.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The SLP/ILP distinction in have-predication", "authors": [ { "first": "Robin", "middle": [], "last": "Schafer", "suffix": "" } ], "year": 1995, "venue": "Proceedings from Semantics and Linguistic Theory V", "volume": "", "issue": "", "pages": "292--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schafer, Robin. 1995. The SLP/ILP distinction in have-predication. In M. Simons and T. Galloway, editors, Proceedings from Semantics and Linguistic Theory V. Cornell University Department of Linguistics, pages 292-309, Ithaca.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Nonparametric Statistics for the Behavioral Science", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "John", "middle": [], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, Sidney and John Castellan. 1988. 
Nonparametric Statistics for the Behavioral Science. McGraw-Hill, New York.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Part/whole II: Mereology since 1900", "authors": [ { "first": "Peter", "middle": [], "last": "Simons", "suffix": "" } ], "year": 1987, "venue": "Handbook of Metaphysics and Ontology", "volume": "", "issue": "", "pages": "672--675", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simons, Peter. 1987. Parts. A Study in Ontology. Clarendon Press, Oxford. Simons, Peter. 1991. Part/whole II: Mereology since 1900. In H. Burkhardt and B. Smith, editors, Handbook of Metaphysics and Ontology. Philosophia, Munich, pages 672-675.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Compound noun interpretation problems", "authors": [ { "first": "Sp\u00e4rck", "middle": [], "last": "Jones", "suffix": "" }, { "first": "K", "middle": [], "last": "", "suffix": "" } ], "year": 1983, "venue": "Computer Speech Processing", "volume": "", "issue": "", "pages": "363--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sp\u00e4rck Jones, K. 1983. Compound noun interpretation problems. In F. Fallside and W. A. Woods, editors, Computer Speech Processing. Prentice-Hall, Englewood Cliffs, NJ, pages 363-380.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A semantic approach to recognizing textual entailment", "authors": [ { "first": "Marta", "middle": [], "last": "Tatu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005)", "volume": "", "issue": "", "pages": "371--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatu, Marta and Dan Moldovan. 2005. A semantic approach to recognizing textual entailment.
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005), pages 371-378, Vancouver, BC, Canada.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A generative model for Framenet semantic role labeling", "authors": [ { "first": "Cynthia", "middle": [ "A" ], "last": "Thompson", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 14th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thompson, Cynthia A., Roger Levy, and Christopher Manning. 2003. A generative model for Framenet semantic role labeling. In Proceedings of the 14th", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "European Conference on Machine Learning", "authors": [], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "397--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "European Conference on Machine Learning (ECML 2003), pages 397-408, Cavtat-Dubrovnik, Croatia.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Algorithm for automatic interpretation of noun sequences", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING 1994)", "volume": "", "issue": "", "pages": "782--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanderwende, Lucy. 1994. Algorithm for automatic interpretation of noun sequences. 
In Proceedings of the 15th International Conference on Computational Linguistics (COLING 1994), pages 782-788, Kyoto, Japan.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "The Analysis of Noun Sequences using Semantic Information Extracted from On-Line Dictionaries", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanderwende, Lucy. 1995. The Analysis of Noun Sequences using Semantic Information Extracted from On-Line Dictionaries. Ph.D. thesis, Georgetown University.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A taxonomy of part-whole relations", "authors": [ { "first": "Morton", "middle": [], "last": "Winston", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Chaffin", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Hermann", "suffix": "" } ], "year": 1987, "venue": "Cognitive Science", "volume": "11", "issue": "4", "pages": "417--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Winston, Morton, Roger Chaffin, and Douglas Hermann. 1987. A taxonomy of part-whole relations. Cognitive Science, 11(4):417-444.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "WN Type is the part-whole type from WordNet and WCH Type is the part-whole type from the Winston, Chaffin and Hermann taxonomy.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Figure 3 A snapshot of the learning subtree abstraction#6-abstraction#6 on which the combination and propagation algorithm is exemplified. Each node has an associated set of rules and a default value. The rule number references are for the \"No.\" column from Table 11.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "meaning (bag made of cotton) (cf. (Lapata 2002)): (5) Mary sorted her clothes into various bags made from plastic. 
(6) She put her skirt into the cotton bag. 11", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
WCH Type
WN Type
The Whole
The Part
Whole concept
Part concept (cont.)
" }, "TABREF1": { "text": "Number of sentences and patterns containing the 100 part-whole pairs in each text collection considered.", "type_str": "table", "num": null, "html": null, "content": "
Collection | Number of sentences | Sentences containing the pairs | Sentences containing part-whole relations | Number of patterns
SemCor | 10,000 | 874 | 81 | 2
LA Times | 10,000 | 1,988 | 487 | 30
" }, "TABREF2": { "text": "Clusters of lexico-syntactic patterns classified based on their semantic similarity and their frequency of occurrence in the 20,000 sentence corpus used in the part-whole pattern identification procedure.", "type_str": "table", "num": null, "html": null, "content": "
Cluster | Patterns | Freq. | Coverage | Examples
C1. genitives and verb to have | NP X of NP Y; NP Y 's NP X; NP Y have NP X | 282 | 52.71% | eyes of the baby; girl's mouth; The table has four legs.
C2. noun compounds | NP XY; NP YX | 86 | 16.07% | door knob; turkey pie
C3. prepositions | NP Y PP X; NP X PP Y | 133 | 24.86% | A bird without wings cannot fly.; A room in the house.
C4. other | others | 34 | 6.36% | The Supreme Court is a branch of the Government.
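For illustration, the surface cues behind clusters C1 and C3 can be sketched as candidate extractors. This is a minimal sketch, not the paper's system: the function name candidate_pairs and the simplified regexes are assumptions of this example, and real detection operates on parsed noun phrases before the learned semantic rules filter the candidates.

```python
import re

# Hedged sketch: flag surface strings matching the C1/C3 cues above.
# These only produce CANDIDATE pairs; most matches encode other relations.
CLUSTER_PATTERNS = [
    ("C1-of",   re.compile(r"\b(\w+) of (?:the |a )?(\w+)")),          # eyes of the baby
    ("C1-gen",  re.compile(r"\b(\w+)'s (\w+)")),                       # girl's mouth
    ("C1-have", re.compile(r"\b(\w+) (?:has|have) (?:\w+ )?(\w+)")),   # The table has four legs.
    ("C3-in",   re.compile(r"\b(\w+) in (?:the |a )?(\w+)")),          # a room in the house
]

def candidate_pairs(sentence):
    """Return (cluster, part, whole) candidates -- not yet confirmed relations."""
    found = []
    for name, rx in CLUSTER_PATTERNS:
        for m in rx.finditer(sentence):
            x, y = m.group(1), m.group(2)
            if name in ("C1-of", "C3-in"):   # the part precedes the preposition
                found.append((name, x, y))
            else:                            # genitive / have: the whole comes first
                found.append((name, y, x))
    return found
```

On "The table has four legs." this yields the candidate ("C1-have", "legs", "table"), which the learned classification rules would then accept or reject.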
" }, "TABREF3": { "text": "Examples of meronymic expressions based on their ambiguity. One of the air's constituents is oxygen. The cloud was made of dust. Iceland is a member of NATO.AmbiguousThe horn is part of the car.", "type_str": "table", "num": null, "html": null, "content": "
Types ofPositive Examples (part-whole) Negative Examples (not part-whole)
Part-Whole
Expressions
Unambiguous The parts of an airplane include
the engine, ..
The substance consists of
three ingredients.
" }, "TABREF4": { "text": "Examples of identifying the potential Part and Whole concepts for different clusters.", "type_str": "table", "num": null, "html": null, "content": "
Cluster | Example | Potential Part(X) and Whole(Y) concepts | Positive or negative example
C1. genitives | the door of the car | the [door#4] X of the [car#1] Y | positive
C1. genitives | my friend's car | my [friend#1] Y 's [car#1] X | negative
C2. noun compounds | car door company | [car door#1] X [company#1] Y | negative
C2. noun compounds | car door company | [car#1] X [door#4] Y | positive
C3. prepositions | window from the car | [window#2]
" }, "TABREF11": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
The list of rules obtained for the ambiguous example abstraction#6-abstraction#6 for the genitive cluster.
1if Part is linear measure#3 and Whole Class is measure#3
then It is a part-whole relation
2if Part is communication#2 and Whole is communication#2
then
2.1if Part is written communication#1 and Whole is written communication#1
then
2.1.1if Part is writing#2 and Whole is writing#2
then
2.1.1.1if Part is matter#6
then It is not a part-whole relation
else It is a part-whole relation
else It is not a part-whole relation
else
2.2if Part is indication#1 and Whole is message#2
then It is not a part-whole relation
else
2.3if Part is message#2 and Whole is communication#2
then It is not a part-whole relation
else It is a part-whole relation
3if Part is time#5 and Whole is abstraction#6
then It is a part-whole relation
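Read procedurally, the nested rules above form a decision list over WordNet hypernym classes. The sketch below encodes rules 1-3 directly; the helper is_a and the toy HYPERNYMS map are illustrative stand-ins for a real WordNet hierarchy lookup (the word#sense labels follow the table's notation, and the sample synsets in the map are assumptions of this sketch, not drawn from the paper).

```python
# Toy hypernym closure (illustrative only; a real system queries WordNet).
HYPERNYMS = {
    "paragraph#1":    {"writing#2", "written communication#1", "communication#2"},
    "essay#1":        {"writing#2", "written communication#1", "communication#2"},
    "answer#1":       {"message#2", "communication#2"},
    "conversation#1": {"communication#2"},
}

def is_a(concept, cls):
    """True if `concept` equals `cls` or has it among its hypernyms."""
    return concept == cls or cls in HYPERNYMS.get(concept, set())

def is_part_whole(part, whole):
    """Decision list for the abstraction#6-abstraction#6 subtree (rules 1-3)."""
    # Rule 1
    if is_a(part, "linear measure#3") and is_a(whole, "measure#3"):
        return True
    # Rule 2 and its specializations 2.1-2.3
    if is_a(part, "communication#2") and is_a(whole, "communication#2"):
        if is_a(part, "written communication#1") and is_a(whole, "written communication#1"):
            if is_a(part, "writing#2") and is_a(whole, "writing#2"):
                return not is_a(part, "matter#6")     # rule 2.1.1.1
            return False                              # else-branch of 2.1.1
        if is_a(part, "indication#1") and is_a(whole, "message#2"):
            return False                              # rule 2.2
        if is_a(part, "message#2") and is_a(whole, "communication#2"):
            return False                              # rule 2.3
        return True
    # Rule 3
    if is_a(part, "time#5") and is_a(whole, "abstraction#6"):
        return True
    return False  # default for pairs no rule covers
```

Under the toy map, a paragraph#1-essay#1 pair reaches rule 2.1.1.1 and is accepted, while answer#1-conversation#1 is rejected by rule 2.3 — mirroring how iterative semantic specialization refines a coarse class pair into finer subclasses.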
" }, "TABREF16": { "text": "", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF20": { "text": "Error types statistics measured on the Wall Street Journal corpus for the ISS system.", "type_str": "table", "num": null, "html": null, "content": "
Clusters
" }, "TABREF21": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
(cont.)
No. | Pattern | Fr. | Example
22 | NP X1X2 PP Y | 8 | between the executive and legislative branches of government
     - NP X1X2 ends with branches
     - NP X1X2 contains and or or
     - PP Y begins with of
23 | NP X1X2 PP Y | 1 | the memory and other features of IBM-compatible personal computers.
     - NP X1X2 contains and or or
     - PP Y begins with of
24 | NP Y (NP X1X2) | 1 | the three states of Southern New England (Massachusetts, Connecticut, and Rhode Island)
     - NP X1X2 contains and or or
25 | NP X (NP Y) | 1 | red-bellied snake (Storeria)
26 | NP Y NP X | 42 | He sells car doors
27 | NP Z-NP X NP Y | 26 | a one-act ballet
28 | NP Y NP X NP Z | 12 | faulty garage door lock; computer memory chip; four-door compact car
29 | NP X NP Y | 3 | membership organization; power window buildings
30 | NP X-NP Y NP Z | 3 | a play-act universe
" }, "TABREF22": { "text": "The sentence-level patterns determined with the pattern identification procedure in Section 3. \"Fr.\" means frequency.", "type_str": "table", "num": null, "html": null, "content": "
No. | Pattern | Fr. | Example
1 | NP Y verb NP X | 18 | A car has wheels. The cake contains fresh fruits. Any car includes a spare tire. The patient received a new heart.
     - verbs: carry, combine, comprehend, comprise, consist, contain, enclose, feature, have, hold, hold in, house, include, incorporate, inherit, integrate, receive, retain, subsume
2 | NP Z verb NP X PP Y | 13 | They constructed the car from engine, doors, wheels. The price includes a membership in a good club. The system administrator connects the computers into a computer network. The state uses these soldiers as the main army. The colonel organized the soldiers into an elite army. The user inserts the file into his directory. The programmer includes the main procedure in the C source file.
     - PP Y starts with in, into, as or from
     - verbs: assemble, build, build in, carry, combine, compose, compound, comprehend, comprise, connect, consist, construct, contain, coordinate, create, embrace, enclose, enter, fabricate, feature, file, form, have, hold, hold in, house, include, incorporate, infix, inherit, insert, integrate, introduce, join, link, make, manufacture, merge, observe, organize, overlap, receive, retain, subsume, unify, unite, use
3 | NP Z verb PP X PP Y | 2 | They dragged him out of the car through the window.
     - PP Y starts with through
     - verbs: drag, exit, leave
4 | NP X verb NP Y | 1 | The member joined the organization in 1976. The oxygen composes the air. The man infiltrates the organization in 1999.
     - verbs: accommodate, add, admit, affiliate, appertain, be, bear, belong, build in, colligate,
" }, "TABREF23": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
(cont.)
No. | Pattern | Fr. | Example
5 | NP X verb PP Y (PP Y starts with in or to) | 1 | The cytoplasm inheres in a cell.
     - verbs: attached to, inhere in
6 | NP X verb NP Z PP Y | 2 | The engine is a part of a car. A rose is a member of genus Rosa.
     - PP Y starts with of
     - NP Z is part or member
7 | NP X NP Y verb | 1 | The headlights the car had were blue.
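As a toy illustration of pattern 1 (NP Y verb NP X), the matcher below scans tokens for the whole-verb-part verbs listed above. The adjacent-token heuristic and the added inflection "has" are assumptions of this sketch; the actual procedure matches full noun phrases, not single tokens.

```python
# Verbs from pattern 1 above (whole verb part), plus the inflected form
# "has" added for this sketch.
PART_WHOLE_VERBS = {
    "carry", "combine", "comprehend", "comprise", "consist", "contain",
    "enclose", "feature", "have", "has", "hold", "house", "include",
    "incorporate", "inherit", "integrate", "receive", "retain", "subsume",
}

def match_whole_verb_part(tokens):
    """Return (whole, verb, part) triples where a listed verb sits between
    two tokens; each hit is only a candidate for semantic validation."""
    triples = []
    for i in range(1, len(tokens) - 1):
        if tokens[i].lower() in PART_WHOLE_VERBS:
            triples.append((tokens[i - 1], tokens[i], tokens[i + 1]))
    return triples
```

For the tokenized sentence "A car has wheels" this returns [("car", "has", "wheels")]; as the pattern-identification discussion notes, such matches still need the learned WordNet-based rules to rule out non-meronymic readings of have, contain, and the other verbs.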
" }, "TABREF25": { "text": "Extensions for lexico-syntactic patterns discovered in the 20,000 sentence corpus used in the pattern identification procedure in Section 3. PP Y X inside YThe walls inside the building had better colors. -PP Y starts with inside", "type_str": "table", "num": null, "html": null, "content": "
No.PatternExample
1N P Y PP XA bird without wings cannot fly.
-PP Y starts with without
2N P X
" }, "TABREF28": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
Fr. Example Val. Acc.67.62 10 colors#1-paint#1 No91.79 10 continent#1-Atlantis#1 Nostock#7-artillery#1 Noslab#1-fat#2 Noaddition#1-sodium nitrate#1 Nosupplier#1-cocaine#1 Noambassador#1-iraq#1 Noauthor#1-book#1 Noassassin#1-Kennedy#1 No94.64 10 something#1-America#1 No68.18 8 sea#1-interaction#1 Nodoor#4-car#1 Yesacademician#1-academy#2 Yes87.90 10 river#1-ecosystem#1 Norostrum#1-congress#2 Noacademician#1-academy#2 YesTuamotu Archipelago#1-YesFrench Polynesia#185.50 8 manager#1-investment funds#1 No85.30 8 buyer#1-life insurance#1 NoTuamotu Archipelago#1-YesFrench Polynesia#1genus amoeba#1-amoebida#1 Yesdictatorship#1-proletariat#1 No83.86 8 demi-monde#1-high society#1 No82.22 10 classification#2-family#4 No82.22 10 circle#2-law#2 Nogenus amoeba#1-amoebida#1 YesNo
Whole Classcovering#2island#1instrumentality#3part#7unit#6causal agent#1location#1object#1organism#1entity#1entity#1g r o u p #1system#1gathering#1possession#2possession#2possession#2g r o u p #1people#1people#1collection#1collection#1
Part Classdesign#4land#3appendage#3object#1object#1organism#1organism#1organism#1organism#1thing#12body of water#1Defaultentity#1entity#1artifact#1Defaultentity#1organism#1causal agent#1Defaultgroup#1social group#1group#1arrangement#2social group#1DefaultDefault
(cont.)No.23.2423.2523.2623.2723.2823.2923.3023.3123.3223.3323.3423.2424.124.224.2525.125.225.2626.126.226.326.426.
" } } } }