{ "paper_id": "P13-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:36:14.886975Z" }, "title": "Automatic Interpretation of the English Possessive", "authors": [ { "first": "Stephen", "middle": [], "last": "Tratz", "suffix": "", "affiliation": {}, "email": "stephen.c.tratz.civ@mail.mil" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "", "affiliation": {}, "email": "hovy@cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The English 's possessive construction occurs frequently in text and can encode several different semantic relations; however, it has received limited attention from the computational linguistics community. This paper describes the creation of a semantic relation inventory covering the use of 's, an inter-annotator agreement study to calculate how well humans can agree on the relations, a large collection of possessives annotated according to the relations, and an accurate automatic annotation system for labeling new examples. Our 21,938 example dataset is by far the largest annotated possessives dataset we are aware of, and both our automatic classification system, which achieves 87.4% accuracy in our classification experiment, and our annotation data are publicly available.", "pdf_parse": { "paper_id": "P13-1037", "_pdf_hash": "", "abstract": [ { "text": "The English 's possessive construction occurs frequently in text and can encode several different semantic relations; however, it has received limited attention from the computational linguistics community. This paper describes the creation of a semantic relation inventory covering the use of 's, an inter-annotator agreement study to calculate how well humans can agree on the relations, a large collection of possessives annotated according to the relations, and an accurate automatic annotation system for labeling new examples. Our 21,938 example dataset is by far the largest annotated possessives dataset we are aware of, and both our automatic classification system, which achieves 87.4% accuracy in our classification experiment, and our annotation data are publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The English 's possessive construction occurs frequently in text-approximately 1.8 times for every 100 hundred words in the Penn Treebank 1 (Marcus et al., 1993) -and can encode a number of different semantic relations including ownership (John's car), part-of-whole (John's arm), extent (6 hours' drive), and location (America's rivers). Accurate automatic possessive interpretation could aid many natural language processing (NLP) applications, especially those that build semantic representations for text understanding, text generation, question answering, or information extraction. These interpretations could be valuable for machine translation to or from languages that allow different semantic relations to be encoded by \u2020 The authors were affiliated with the USC Information Sciences Institute at the time this work was performed.", "cite_spans": [ { "start": 140, "end": 161, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the possessive/genitive. This paper presents an inventory of 17 semantic relations expressed by the English 's-construction, a large dataset annotated according to the this inventory, and an accurate automatic classification system. 
The final inter-annotator agreement study achieved a strong level of agreement, 0.78 Fleiss' Kappa (Fleiss, 1971), and the dataset is easily the largest manually annotated dataset of possessive constructions created to date. We show that our automatic classification system is highly accurate, achieving 87.4% accuracy on a held-out test set.", "cite_spans": [ { "start": 332, "end": 346, "text": "(Fleiss, 1971)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the linguistics field has devoted significant effort to the English possessive ( \u00a76.1), the computational linguistics community has given it limited attention. By far the most notable exception to this is the line of work by Moldovan and Badulescu (Moldovan and Badulescu, 2005; Badulescu and Moldovan, 2009) , who define a taxonomy of relations, annotate data, calculate interannotator agreement, and perform automatic classification experiments. Badulescu and Moldovan (2009) investigate both 's-constructions and of-constructions in the same context using a list of 36 semantic relations (including OTHER). They take their examples from a collection of 20,000 randomly selected sentences from Los Angeles Times news articles used in TREC-9. For the 960 extracted 's-possessive examples, only 20 of their semantic relations are observed, including OTHER, with 8 of the observed relations occurring fewer than 10 times. They report a 0.82 Kappa agreement (Siegel and Castellan, 1988) for the two computational semantics graduates who annotate the data, stating that this strong result \"can be explained by the instructions the annotators received prior to annotation and by their expertise in Lexical Semantics.\"", "cite_spans": [ { "start": 257, "end": 287, "text": "(Moldovan and Badulescu, 2005;", "ref_id": "BIBREF23" }, { "start": 288, "end": 317, "text": "Badulescu and Moldovan, 2009)", "ref_id": "BIBREF0" }, { "start": 457, "end": 486, "text": "Badulescu and Moldovan (2009)", "ref_id": "BIBREF0" }, { "start": 965, "end": 993, "text": "(Siegel and Castellan, 1988)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Moldovan and Badulescu experiment with several different classification techniques. They find that their semantic scattering technique significantly outperforms their comparison systems with its F-measure score of 78.75. Their SVM system performs the worst with only 23.25% accuracy, surprisingly low, especially considering that 220 of the 960 's examples have the same label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Unfortunately, Badulescu and Moldovan (2009) have not publicly released their data 2 . Also, it is sometimes difficult to understand the meaning of the semantic relations, partly because most relations are only described by a single example and, to a lesser extent, because the bulk of the given examples are of-constructions. For example, why President of Bolivia warrants a SOURCE/FROM relation but University of Texas is assigned to LOCATION/SPACE is unclear. Their relations and provided examples are presented below in Table 1 .
Badulescu and Moldovan (2009) along with their examples.", "cite_spans": [ { "start": 15, "end": 44, "text": "Badulescu and Moldovan (2009)", "ref_id": "BIBREF0" }, { "start": 536, "end": 565, "text": "Badulescu and Moldovan (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 526, "end": 533, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We created the dataset used in this work from three different sources, each representing a distinct genre: newswire, non-fiction, and fiction. Of the 21,938 total examples, 15,330 come from sections 2-21 of the Penn Treebank (Marcus et al., 1993) . Another 5,266 examples are from The History of the Decline and Fall of the Roman Empire (Gibbon, 1776), a non-fiction work, and 1,342 are from The Jungle Book (Kipling, 1894) , a collection of fictional short stories. For the Penn Treebank, we extracted the examples using the provided gold standard parse trees, whereas, for the latter cases, we used the output of an open source parser (Tratz and Hovy, 2011) .", "cite_spans": [ { "start": 224, "end": 245, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF21" }, { "start": 407, "end": 422, "text": "(Kipling, 1894)", "ref_id": null }, { "start": 636, "end": 658, "text": "(Tratz and Hovy, 2011)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Creation", "sec_num": "3" }, { "text": "The initial semantic relation inventory for possessives was created by first examining some of the relevant literature on possessives, including work by Badulescu and Moldovan (2009) , Barker (1995) , Quirk et al. (1985) , Rosenbach (2002) , and Taylor (1996) , and then manually annotating the large dataset of examples. Similar examples were grouped together to form initial categories, and groups that were considered more difficult were later reexamined in greater detail. Once all the examples were assigned to initial categories, the process of refining the definitions and annotations began. In total, 17 relations were created, not including OTHER. They are shown in Table 3 along with approximate (best guess) mappings to relations defined by others, specifically those of Quirk et al. (1985) , whose relations are presented in Table 2 , as well as Badulescu and Moldovan's (2009).", "cite_spans": [ { "start": 153, "end": 182, "text": "Badulescu and Moldovan (2009)", "ref_id": "BIBREF0" }, { "start": 185, "end": 198, "text": "Barker (1995)", "ref_id": "BIBREF1" }, { "start": 201, "end": 220, "text": "Quirk et al. (1985)", "ref_id": "BIBREF31" }, { "start": 223, "end": 239, "text": "Rosenbach (2002)", "ref_id": "BIBREF32" }, { "start": 246, "end": 259, "text": "Taylor (1996)", "ref_id": "BIBREF34" }, { "start": 782, "end": 801, "text": "Quirk et al. (1985)", "ref_id": "BIBREF31" }, { "start": 858, "end": 889, "text": "Badulescu and Moldovan's (2009)", "ref_id": "BIBREF0" }, { "start": 890, "end": 909, "text": "Quirk et al. (1985)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 675, "end": 682, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 837, "end": 844, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Semantic Relation Inventory", "sec_num": "4" }, { "text": "The semantic relation inventory was refined using an iterative process, with each iteration involving the annotation of a random set of 50 examples. Table 3 caption fragment: mappings to Quirk et al. (1985) and Badulescu and Moldovan (2009) , respectively.
HDFRE, JB, PTB: The History of the Decline and Fall of the Roman Empire, The Jungle Book, and the Penn Treebank, respectively.", "cite_spans": [ { "start": 100, "end": 119, "text": "Quirk et al. (1985)", "ref_id": "BIBREF31" }, { "start": 124, "end": 153, "text": "Badulescu and Moldovan (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Refinement and Inter-annotator Agreement", "sec_num": "4.1" }, { "text": "Each set of examples was extracted such that no two examples had an identical possessee word. For a given example, annotators were instructed to select the most appropriate option but could also record a second-best choice to provide additional feedback. Figure 1 presents a screenshot of the HTML-based annotation interface. After the annotation was complete for a given round, agreement and entropy figures were calculated and changes were made to the relation definitions and dataset. The number of refinement rounds was arbitrarily limited to five. To measure agreement, in addition to calculating simple percentage agreement, we computed Fleiss' Kappa (Fleiss, 1971 ), a measure of agreement that incorporates a correction for agreement due to chance, similar to Cohen's Kappa (Cohen, 1960) , but which can be used to measure agreement involving more than two annotations per item. The agreement and entropy figures for these five intermediate annotation rounds are given in Table 4 . In all the possessive annotation tables, Annotator A refers to the primary author and the labels B and C refer to two additional annotators.", "cite_spans": [ { "start": 708, "end": 721, "text": "(Fleiss, 1971", "ref_id": "BIBREF8" }, { "start": 833, "end": 846, "text": "(Cohen, 1960)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 306, "end": 314, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1031, "end": 1038, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Refinement and Inter-annotator Agreement", "sec_num": "4.1" }, { "text": "To calculate a final measure of inter-annotator agreement, we randomly drew 150 examples from the dataset not used in the previous refinement iterations, with 50 examples coming from each of the three original data sources. All three annotators initially agreed on 82 of the 150 examples, leaving 68 examples with at least some disagreement, including 17 for which all three annotators disagreed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Refinement and Inter-annotator Agreement", "sec_num": "4.1" }, { "text": "Annotators then engaged in a new task in which they re-annotated these 68 examples, in each case being able to select only from the definitions previously chosen for each example by at least one annotator. No indication of who or how many people had previously selected the definitions was given 3 . Annotators were instructed not to choose a definition simply because they thought they had chosen it before or because they thought someone else had chosen it. After the revision process, all three annotators agreed in 109 cases and all three disagreed in only 6 cases. During the revision process, Annotator A made 8 changes, B made 20 changes, and C made 33 changes. Annotator A likely made the fewest changes because he, as the primary author, spent a significant amount of time thinking about, writing, and re-writing the definitions used for the various iterations.
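For reference, the following is a minimal sketch of the Fleiss' Kappa computation used for the agreement figures in this section. It is an illustrative re-implementation of the standard Fleiss (1971) definition, not the authors' code, and the toy counts at the end are invented.

```python
# Illustrative sketch of Fleiss' kappa (Fleiss, 1971); not the authors' code.
# counts[i][j] = number of annotators who assigned category j to item i;
# every row must sum to the same number of annotators n (here, 3 per example).
def fleiss_kappa(counts):
    N = len(counts)                 # number of annotated items
    n = sum(counts[0])              # annotations per item
    k = len(counts[0])              # number of categories (relations)

    # Per-item observed agreement, then its mean over all items.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Expected (chance) agreement from the overall category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Invented toy data: 4 items, 3 annotators, 3 candidate relations.
toy = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1]]
print(round(fleiss_kappa(toy), 3))
```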
Annotator C's annotation work tended to be less consistent in general than Annotator B's throughout this work as well as in a different task not discussed within this paper, which is probably why Annotator C made more changes than Annotator B. Prior to this revision process, the three-way Fleiss' Kappa score was 0.60 but, afterwards, it was 0.78. The inter-annotator agreement and entropy figures for before and after this revision process, including pairwise scores between individual annotators, are presented in Tables 5 and 6 .", "cite_spans": [], "ref_spans": [ { "start": 1388, "end": 1402, "text": "Tables 5 and 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Refinement and Inter-annotator Agreement", "sec_num": "4.1" }, { "text": "The distribution of semantic relations varies somewhat by the data source. The Jungle Book's distribution is significantly different from the others, with a much larger percentage of PARTITIVE and KINSHIP relations. The Penn Treebank and The History of the Decline and Fall of the Roman Empire were substantially more similar, although there are notable differences. For instance, the LOCATION and TEMPORAL relations almost never occur in The History of the Decline and Fall of the Roman Empire. Whether these differences are due to variations in genre, time period, and/or other factors would be an interesting topic for future study. The distribution of relations for each data source is presented in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 704, "end": 712, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Distribution of Relations", "sec_num": "4.2" }, { "text": "Though it is harder to compare across datasets using different annotation schemes, there are at least a couple of notable differences between the distribution of relations for Badulescu and Moldovan's (2009) dataset and the distribution of relations used in this work. One such difference is the much higher percentage of examples labeled as TEMPORAL: 11.35% vs only 2.84% in our data. Another difference is a higher incidence of the KINSHIP relation (6.31% vs 3.39%), although it is far less frequent than it is in The Jungle Book (11.62%).", "cite_spans": [ { "start": 173, "end": 204, "text": "Badulescu and Moldovan's (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Distribution of Relations", "sec_num": "4.2" }, { "text": "One of the problems with creating a list of relations expressed by 's-constructions is that some examples can potentially fit into multiple categories. For example, Joe's resentment encodes both the SUBJECTIVE and MENTAL EXPERIENCER relations and UK's cities encodes both the PARTITIVE and LOCATION relations. A representative list of these types of issues along with examples designed to illustrate them is presented in Table 7 .", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 430, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Encountered Ambiguities", "sec_num": "4.3" }, { "text": "For the automatic classification experiments, we set aside 10% of the data for test purposes, and used the remaining 90% for training.
We used 5-fold cross-validation performed using the training data to tweak the included feature templates and optimize training parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The LIBLINEAR (Fan et al., 2008) package was used to train linear Support Vector Machines (SVMs) for all the experiments in the one-against-the-rest style. All training parameters took their default values with the exception of the C parameter, which controls the tradeoff between margin width and training error and which was set to 0.02, the point of highest performance in the cross-validation tuning.", "cite_spans": [ { "start": 14, "end": 32, "text": "(Fan et al., 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Approach", "sec_num": "5.1" }, { "text": "For feature generation, we conflated the possessive pronouns 'his', 'her', 'my', and 'your' to 'person.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Generation", "sec_num": "5.2" }, { "text": "Similarly, every term matching the case-insensitive regular expression (corp|co|plc|inc|ag|ltd|llc)\\\\.? was replaced with the word 'corporation.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Generation", "sec_num": "5.2" }, { "text": "All the features used are functions of the following five words. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Generation", "sec_num": "5.2" }, { "text": "The system predicted correct labels for 1,962 of the 2,247 test examples, or 87.4%. The accuracy figures for the test instances from the Penn Treebank, The Jungle Book, and The History of the Decline and Fall of the Roman Empire were 88.8%, 84.7%, and 80.6%, respectively. The fact that the score for The Jungle Book was the lowest is somewhat surprising considering it contains a high percentage of body part and kinship terms, which tend to be straightforward, but this may be because the other sources comprise approximately 94% of the training examples. Given that human agreement typically represents an upper bound on machine performance in classification tasks, the 87.4% accuracy figure may be somewhat surprising. One explanation is that the examples pulled out for the inter-annotator agreement study each had a unique possessee word. For example, \"expectations\", as in \"analyst's expectations\", occurs 26 times as the possessee in the dataset, but, for the inter-annotator agreement study, at most one of these examples could be included. More importantly, when the initial relations were being defined, the data were first sorted based upon the possessee and then the possessor in order to create blocks of similar examples. Doing this allowed multiple examples to be assigned to a category more quickly because one can decide upon a category for the whole lot at once and then just extract the few, if any, that belong to other categories. This is likely to be both faster and more consistent than examining each example in isolation. This advantage did not exist in the inter-annotator agreement study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "To evaluate the importance of the different types of features, the same experiment was re-run multiple times, each time including or excluding exactly one feature template. Before each variation, the C parameter was retuned using 5-fold cross-validation on the training data.
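As a rough, self-contained illustration of the setup described in Sections 5.1 and 5.2, the sketch below uses scikit-learn's LinearSVC (a wrapper around LIBLINEAR that is one-against-the-rest by default), tunes the C parameter with 5-fold cross-validation on the training portion, and applies the two lexical normalizations mentioned above. The feature extraction is deliberately stubbed out (the real system uses many WordNet-based templates), and the function names, the anchored regular expression, and the C grid are assumptions for illustration rather than the authors' actual code.

```python
# Illustrative sketch only; not the authors' implementation.
import re
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC  # LIBLINEAR-backed, one-vs-rest by default

PERSON_PRONOUNS = {"his", "her", "my", "your"}
# Assumed anchoring of the paper's case-insensitive company-designator pattern.
CORP_RE = re.compile(r"^(corp|co|plc|inc|ag|ltd|llc)\.?$", re.IGNORECASE)

def normalize(token):
    # Conflate possessive pronouns to 'person' and company designators to 'corporation'.
    if token.lower() in PERSON_PRONOUNS:
        return "person"
    if CORP_RE.match(token):
        return "corporation"
    return token

def features(possessor, possessee):
    # Stub: only the two words themselves; the full system adds hypernyms,
    # gloss terms, lexnames, affixes, governor and context-word features, etc.
    return {"L=" + normalize(possessor): 1, "R=" + normalize(possessee): 1}

def train_and_evaluate(pairs, labels):
    X = [features(p, q) for p, q in pairs]
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.1, random_state=0)
    model = Pipeline([("vec", DictVectorizer()), ("svm", LinearSVC())])
    # Retune C on the 90% training split; the paper reports 0.02 as its best value.
    search = GridSearchCV(model, {"svm__C": [0.01, 0.02, 0.05, 0.1, 1.0]}, cv=5)
    search.fit(X_tr, y_tr)
    return search.best_params_, search.score(X_te, y_te)
```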
The results for these runs are shown in Table 8 . Based upon the leave-one-out and only-one feature evaluation experiment results, it appears that the possessee word is more important to classification than the possessor word. The possessor word is still valuable, though, and is likely more valuable for certain categories (e.g., TEMPORAL and LOCATION) than for others (e.g., KINSHIP). Hypernym and gloss term features proved to be about equally valuable. Curiously, although hypernyms are commonly used as features in NLP classification tasks, gloss terms, which are rarely used for these tasks, are approximately as useful, at least in this particular context. This would be an interesting result to examine in greater detail.", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 323, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Feature Ablation Experiments", "sec_num": "5.4" }, { "text": "6 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Ablation Experiments", "sec_num": "5.4" }, { "text": "Semantic relation inventories for the English 's-construction have been around for some time; Taylor (1996) mentions a set of 6 relations enumerated by Poutsma (1914-1916) . Curiously, there is not a single dominant semantic relation inventory for possessives. A representative example of semantic relation inventories for 's-constructions is the one given by Quirk et al. (1985) (presented earlier in Section 2).", "cite_spans": [ { "start": 151, "end": 164, "text": "Poutsma (1914", "ref_id": null }, { "start": 165, "end": 180, "text": "Poutsma ( -1916", "ref_id": null }, { "start": 369, "end": 388, "text": "Quirk et al. (1985)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistics", "sec_num": "6.1" }, { "text": "Interestingly, the set of relations expressed by possessives varies by language. For example, Classical Greek permits a standard of comparison relation (e.g., \"better than Plato\") (Nikiforidou, 1991) , and, in Japanese, some relations are expressed in the opposite direction (e.g., \"blue eye's doll\") while others are not (e.g., \"Tanaka's face\") (Nishiguchi, 2009) .", "cite_spans": [ { "start": 180, "end": 199, "text": "(Nikiforidou, 1991)", "ref_id": "BIBREF26" }, { "start": 346, "end": 364, "text": "(Nishiguchi, 2009)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistics", "sec_num": "6.1" }, { "text": "To explain how and why such seemingly different relations as whole+part and cause+effect are expressed by the same linguistic phenomenon, Nikiforidou (1991) pursues an approach of metaphorical structuring in line with the work of Lakoff and Johnson (1980) and Lakoff (1987) . She thus proposes a variety of such metaphors as THINGS THAT HAPPEN (TO US) ARE (OUR) POSSESSIONS and CAUSES ARE ORIGINS to explain how the different relations expressed by possessives extend from one another.", "cite_spans": [ { "start": 138, "end": 156, "text": "Nikiforidou (1991)", "ref_id": "BIBREF26" }, { "start": 230, "end": 255, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF16" }, { "start": 260, "end": 273, "text": "Lakoff (1987)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistics", "sec_num": "6.1" }, { "text": "Certainly, not all, or even most, of the linguistics literature on English possessives focuses on creating lists of semantic relations.
Much of the work covering the semantics of the 's construction in English, such as Barker's (1995) work, dwells on the split between cases of relational nouns, such as sister, that, by their very definition, hold a specific relation to other real or conceptual things, and non-relational, or sortal, nouns (L\u00f6bner, 1985) , such as car. Vikner and Jensen's (2002) approach for handling these disparate cases is based upon Pustejovsky's (1995) generative lexicon framework. They coerce sortal nouns (e.g., car) into being relational, purporting to create a uniform analysis. They split lexical possession into four types: inherent, part-whole, agentive, and control, with agentive and control encompassing many, if not most, of the cases involving sortal nouns. A variety of other issues related to possessives considered by the linguistics literature include adjectival modifiers that significantly alter interpretation (e.g., favorite and former), double genitives (e.g., book of John's), bare possessives (i.e., cases where the possessee is omitted, as in \"Eat at Joe's\"), possessive compounds (e.g., driver's license), the syntactic structure of possessives, definiteness, changes over the course of history, and differences between languages in terms of which relations may be expressed by the genitive. Representative work includes that by Barker (1995) , Taylor (1996) , Heine (1997) , Partee and Borschev (1998) , Rosenbach (2002) , and Vikner and Jensen (2002) .", "cite_spans": [ { "start": 219, "end": 234, "text": "Barker's (1995)", "ref_id": "BIBREF1" }, { "start": 441, "end": 455, "text": "(L\u00f6bner, 1985)", "ref_id": null }, { "start": 471, "end": 497, "text": "Vikner and Jensen's (2002)", "ref_id": "BIBREF37" }, { "start": 1482, "end": 1495, "text": "Barker (1995)", "ref_id": "BIBREF1" }, { "start": 1498, "end": 1511, "text": "Taylor (1996)", "ref_id": "BIBREF34" }, { "start": 1514, "end": 1526, "text": "Heine (1997)", "ref_id": "BIBREF11" }, { "start": 1529, "end": 1555, "text": "Partee and Borschev (1998)", "ref_id": "BIBREF29" }, { "start": 1558, "end": 1574, "text": "Rosenbach (2002)", "ref_id": "BIBREF32" }, { "start": 1577, "end": 1605, "text": "and Vikner and Jensen (2002)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistics", "sec_num": "6.1" }, { "text": "Though the relation between nominals in the English possessive construction has received little attention from the NLP community, there is a large body of work that focuses on similar problems involving noun-noun relation interpretation/paraphrasing, including interpreting the relations between the components of noun compounds (Butnariu et al., 2010) , disambiguating preposition senses (Litkowski and Hargraves, 2007) , or annotating the relation between nominals in more arbitrary constructions within the same sentence (Hendrickx et al., 2009) .", "cite_spans": [ { "start": 329, "end": 352, "text": "(Butnariu et al., 2010)", "ref_id": "BIBREF4" }, { "start": 389, "end": 420, "text": "(Litkowski and Hargraves, 2007)", "ref_id": "BIBREF19" }, { "start": 524, "end": 548, "text": "(Hendrickx et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics", "sec_num": "6.2" }, { "text": "Whereas some of these lines of work use fixed inventories of semantic relations (Lauer, 1995; Nastase and Szpakowicz, 2003; Kim and Baldwin, 2005; Girju, 2009; \u00d3 S\u00e9aghdha and Copestake, 2009; Tratz and Hovy, 2010) , other work allows for a nearly infinite number
of interpretations (Butnariu and Veale, 2008; Nakov, 2008) . Recent SemEval tasks (Butnariu et al., 2009; Hendrickx et al., 2013) pursue this more open-ended strategy. In these tasks, participating systems recover the implicit predicate between the nouns in noun compounds by creating potentially unique paraphrases for each example. For instance, a system might generate the paraphrase made of for the noun compound pepperoni pizza. (Table 8 caption: Results for leave-one-out and only-one feature template ablation experiments for all feature templates, sorted by the only-one case. L, R, C, G, B, and N stand for left word (possessor), right word (possessee), pairwise combination of outputs for possessor and possessee, syntactic governor of possessee, all tokens between possessor and possessee, and the word next to the possessee (on the right), respectively. The C parameter value used to train the SVMs is shown in parentheses.)", "cite_spans": [ { "start": 80, "end": 93, "text": "(Lauer, 1995;", "ref_id": "BIBREF18" }, { "start": 94, "end": 123, "text": "Nastase and Szpakowicz, 2003;", "ref_id": "BIBREF25" }, { "start": 124, "end": 146, "text": "Kim and Baldwin, 2005;", "ref_id": "BIBREF14" }, { "start": 147, "end": 159, "text": "Girju, 2009;", "ref_id": "BIBREF10" }, { "start": 160, "end": 191, "text": "\u00d3 S\u00e9aghdha and Copestake, 2009;", "ref_id": "BIBREF28" }, { "start": 192, "end": 213, "text": "Tratz and Hovy, 2010)", "ref_id": "BIBREF35" }, { "start": 282, "end": 308, "text": "(Butnariu and Veale, 2008;", "ref_id": "BIBREF2" }, { "start": 309, "end": 321, "text": "Nakov, 2008)", "ref_id": "BIBREF24" }, { "start": 345, "end": 368, "text": "(Butnariu et al., 2009;", "ref_id": "BIBREF3" }, { "start": 369, "end": 392, "text": "Hendrickx et al., 2013)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 676, "end": 683, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Computational Linguistics", "sec_num": "6.2" }, { "text": "Computer-generated results are scored against a list of human-generated options in order to rank the participating systems. This approach could be applied to possessives interpretation as well. Concurrent with the lack of NLP research on the subject is the absence of available annotated datasets for training, evaluation, and analysis. The NomBank project (Meyers et al., 2004) provides coarse annotations for some of the possessive constructions in the Penn Treebank, but only those that meet their criteria.", "cite_spans": [ { "start": 380, "end": 401, "text": "(Meyers et al., 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics", "sec_num": "6.2" }, { "text": "In this paper, we present a semantic relation inventory for 's-possessives consisting of 17 relations expressed by the English 's construction, the largest available manually-annotated collection of possessives, and an effective method for automatically assigning the relations to unseen examples. We explain our methodology for building this inventory and dataset and report a strong level of inter-annotator agreement, reaching 0.78 Kappa overall. The resulting dataset is quite large, at 21,938 instances, and crosses multiple domains, including news, fiction, and historical non-fiction. It is the only large fully-annotated publicly available collection of possessive examples that we are aware of.
The straightforward SVM-based automatic classification system achieves 87.4% accuracy, the highest automatic possessive interpretation accuracy figure reported to date. These high results suggest that SVMs are a good choice for automatic possessive interpretation systems, in contrast to the findings of Moldovan and Badulescu (2005) . The data and software presented in this paper are available for download at http://www.isi.edu/publications/licensedsw/fanseparser/index.html.", "cite_spans": [ { "start": 991, "end": 1020, "text": "Moldovan and Badulescu (2005)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Going forward, we would like to examine the various ambiguities of possessives described in Section 4.3. Instead of trying to find the one-best interpretation for a given possessive example, we would like to produce a list of all appropriate interpretations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "8" }, { "text": "Another avenue for future research is to study variation in possessive use across genres, including scientific and technical genres. Similarly, one could automatically process large volumes of text from various time periods to investigate changes in the use of the possessive over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "8" }, { "text": "Possessive pronouns such as his and their are treated as 's constructions in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Email requests asking for relation definitions and the data were not answered, and, thus, we are unable to provide an informative comparison with their work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Of course, if three definitions were present, it could be inferred that all three annotators had initially disagreed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Charles Zheng and Sarah Benzel for all their annotation work and valuable feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Semantic Scattering Model for the Automatic Interpretation of English Genitives", "authors": [ { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2009, "venue": "Natural Language Engineering", "volume": "15", "issue": "", "pages": "215--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriana Badulescu and Dan Moldovan. 2009. A Se- mantic Scattering Model for the Automatic Interpre- tation of English Genitives. Natural Language En- gineering, 15:215-239.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Possessive Descriptions", "authors": [ { "first": "Chris", "middle": [], "last": "Barker", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Barker. 1995. Possessive Descriptions. 
CSLI Publications, Stanford, CA, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Concept-Centered Approach to Noun-Compound Interpretation", "authors": [ { "first": "Cristina", "middle": [], "last": "Butnariu", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristina Butnariu and Tony Veale. 2008. A Concept- Centered Approach to Noun-Compound Interpreta- tion. In Proceedings of the 22nd International Con- ference on Computational Linguistics, pages 81-88.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions", "authors": [ { "first": "Cristina", "middle": [], "last": "Butnariu", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "\u00d3", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Stan", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Szpakowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2009, "venue": "DEW '09: Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions", "volume": "", "issue": "", "pages": "100--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristina Butnariu, Su Nam Kim, Preslav Nakov, Di- armuid \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2009. SemEval-2010 Task 9: The Inter- pretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. In DEW '09: Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 100- 105.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions", "authors": [ { "first": "Cristina", "middle": [], "last": "Butnariu", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Diarmuid", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "39--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristina Butnariu, Su Nam Kim, Preslav Nakov, Di- armuid. \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2010. SemEval-2010 Task 9: The Interpreta- tion of Noun Compounds Using Paraphrasing Verbs and Prepositions. In Proceedings of the 5th Inter- national Workshop on Semantic Evaluation, pages 39-44.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Coefficient of Agreement for Nominal Scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. 
Educational and Psychological Measurement, 20(1):37-46.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "LIBLINEAR: A Library for Large Linear Classification", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Rong-En Fan", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiang-Rui", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "1871--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871-1874.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "76", "issue": "5", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bul- letin, 76(5):378-382.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The History of the Decline and Fall of the Roman Empire, volume I of The History of the Decline and Fall of the Roman Empire", "authors": [ { "first": "Edward", "middle": [], "last": "Gibbon", "suffix": "" }, { "first": ". ; T", "middle": [], "last": "Cadell", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Gibbon. 1776. The History of the Decline and Fall of the Roman Empire, volume I of The His- tory of the Decline and Fall of the Roman Empire. Printed for W. Strahan and T. Cadell.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Syntax and Semantics of Prepositions in the Task of Automatic Interpretation of Nominal Phrases and Compounds: A Cross-linguistic Study", "authors": [ { "first": "Roxanna", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics -Special Issue on Prepositions in Application", "volume": "35", "issue": "2", "pages": "185--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxanna Girju. 2009. The Syntax and Seman- tics of Prepositions in the Task of Automatic In- terpretation of Nominal Phrases and Compounds: A Cross-linguistic Study. Computational Linguis- tics -Special Issue on Prepositions in Application, 35(2):185-228.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Possession: Cognitive Sources, Forces, and Grammaticalization", "authors": [ { "first": "Bernd", "middle": [], "last": "Heine", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Heine. 1997. 
Possession: Cognitive Sources, Forces, and Grammaticalization. Cambridge Uni- versity Press, United Kingdom.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semeval-2010 task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "\u00d3", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Lorenza", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Romano", "suffix": "" }, { "first": "", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)", "volume": "", "issue": "", "pages": "94--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 94-99.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Task Description: SemEval-2013 Task 4: Free Paraphrases of Noun Compounds", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "\u00d3", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Stan", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Szpakowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2013. Task Description: SemEval- 2013 Task 4: Free Paraphrases of Noun Com- pounds. http://www.cs.york.ac.uk/ semeval-2013/task4/. [Online; accessed 1-May-2013].", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic Interpretation of Noun Compounds using Word-Net::Similarity. Natural Language Processing-IJCNLP 2005", "authors": [ { "first": "Nam", "middle": [], "last": "Su", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "945--956", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim and Timothy Baldwin. 2005. Automatic Interpretation of Noun Compounds using Word- Net::Similarity. Natural Language Processing- IJCNLP 2005, pages 945-956.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "1894. 
The Jungle Book", "authors": [ { "first": "Rudyard", "middle": [], "last": "Kipling", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudyard Kipling. 1894. The Jungle Book. Macmillan, London, UK.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Metaphors We Live by", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors We Live by. The University of Chicago Press, Chicago, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Women, Fire, and Dangerous Things: What Categories Reveal about the Mind", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Lakoff. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. The University of Chicago Press, Chicago, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Corpus Statistics Meet the Noun Compound: Some Empirical Results", "authors": [ { "first": "Mark", "middle": [], "last": "Lauer", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "47--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Lauer. 1995. Corpus Statistics Meet the Noun Compound: Some Empirical Results. In Proceed- ings of the 33rd Annual Meeting on Association for Computational Linguistics, pages 47-54.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SemEval-2007 Task 06: Word-Sense Disambiguation of Prepositions", "authors": [ { "first": "Ken", "middle": [], "last": "Litkowski", "suffix": "" }, { "first": "Orin", "middle": [], "last": "Hargraves", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "24--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ken Litkowski and Orin Hargraves. 2007. SemEval- 2007 Task 06: Word-Sense Disambiguation of Prepositions. In Proceedings of the 4th Interna- tional Workshop on Semantic Evaluations, pages 24-29.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beat- rice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. 
Computa- tional Linguistics, 19(2):330.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The NomBank Project: An Interim Report", "authors": [ { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Young", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the NAACL/HLT Workshop on Frontiers in Corpus Annotation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank Project: An Interim Report. In Proceedings of the NAACL/HLT Workshop on Frontiers in Corpus An- notation.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Semantic Scattering Model for the Automatic Interpretation of Genitives", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "891--898", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Moldovan and Adriana Badulescu. 2005. A Se- mantic Scattering Model for the Automatic Interpre- tation of Genitives. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Process- ing, pages 891-898.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 13th International Conference on Artificial Intelligence: Methodology, Systems, and Applications", "volume": "", "issue": "", "pages": "103--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov. 2008. Noun Compound Interpreta- tion Using Paraphrasing Verbs: Feasibility Study. In Proceedings of the 13th International Conference on Artificial Intelligence: Methodology, Systems, and Applications, pages 103-117.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Exploring Noun-Modifier Semantic Relations", "authors": [ { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Fifth International Workshop on Computational Semantics (IWCS-5)", "volume": "", "issue": "", "pages": "285--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivi Nastase and Stan Szpakowicz. 2003. Explor- ing Noun-Modifier Semantic Relations. 
In Fifth In- ternational Workshop on Computational Semantics (IWCS-5), pages 285-301.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Meanings of the Genitive: A Case Study in the Semantic Structure and Semantic Change", "authors": [ { "first": "Kiki", "middle": [], "last": "Nikiforidou", "suffix": "" } ], "year": 1991, "venue": "Cognitive Linguistics", "volume": "2", "issue": "2", "pages": "149--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiki Nikiforidou. 1991. The Meanings of the Gen- itive: A Case Study in the Semantic Structure and Semantic Change. Cognitive Linguistics, 2(2):149- 206.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Qualia-Based Lexical Knowledge for the Disambiguation of the Japanese Postposition No", "authors": [ { "first": "Sumiyo", "middle": [], "last": "Nishiguchi", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Eighth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumiyo Nishiguchi. 2009. Qualia-Based Lexical Knowledge for the Disambiguation of the Japanese Postposition No. In Proceedings of the Eighth Inter- national Conference on Computational Semantics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Using Lexical and Relational Similarity to Classify Semantic Relations", "authors": [ { "first": "\u00d3", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Ann", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "621--629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diarmuid \u00d3 S\u00e9aghdha and Ann Copestake. 2009. Us- ing Lexical and Relational Similarity to Classify Se- mantic Relations. In Proceedings of the 12th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 621-629.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "tegrating Lexical and Formal Semantics: Genitives, Relational Nouns, and Type-Shifting", "authors": [ { "first": "H", "middle": [], "last": "Barbara", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Partee", "suffix": "" }, { "first": "", "middle": [], "last": "Borschev", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "229--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara H. Partee and Vladimir Borschev. 1998. In- tegrating Lexical and Formal Semantics: Genitives, Relational Nouns, and Type-Shifting. In Proceed- ings of the Second Tbilisi Symposium on Language, Logic, and Computation, pages 229-241.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Generative Lexicon", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Pustejovsky. 1995. The Generative Lexicon. 
MIT Press, Cambridge, MA, USA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A Comprehensive Grammar of the English Language", "authors": [ { "first": "Randolph", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Sidney", "middle": [], "last": "Greenbaum", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Svartvik", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Gram- mar of the English Language. Longman Inc., New York.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Genitive Variation in English: Conceptual Factors in Synchronic and Diachronic Studies. Topics in English linguistics", "authors": [ { "first": "Anette", "middle": [], "last": "Rosenbach", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Rosenbach. 2002. Genitive Variation in En- glish: Conceptual Factors in Synchronic and Di- achronic Studies. Topics in English linguistics. Mouton de Gruyter.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Nonparametric statistics for the behavioral sciences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N. John", "middle": [], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sidney Siegel and N. John Castellan. 1988. Non- parametric statistics for the behavioral sciences. McGraw-Hill.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Possessives in English", "authors": [ { "first": "John", "middle": [ "R" ], "last": "Taylor", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Taylor. 1996. Possessives in English. Oxford University Press, New York.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation", "authors": [ { "first": "Stephen", "middle": [], "last": "Tratz", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "678--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Tratz and Eduard Hovy. 2010. A Taxonomy, Dataset, and Classifier for Automatic Noun Com- pound Interpretation. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 678-687.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A Fast, Accurate, Non-Projective, Semantically-Enriched Parser", "authors": [ { "first": "Stephen", "middle": [], "last": "Tratz", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1257--1268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Tratz and Eduard Hovy. 2011. A Fast, Accu- rate, Non-Projective, Semantically-Enriched Parser. 
In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 1257-1268.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A Semantic Analysis of the English Genitive. Interation of Lexical and Formal Semantics", "authors": [ { "first": "Carl", "middle": [], "last": "Vikner", "suffix": "" }, { "first": "Per", "middle": [], "last": "Anker Jensen", "suffix": "" } ], "year": 2002, "venue": "Studia Linguistica", "volume": "56", "issue": "2", "pages": "191--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Vikner and Per Anker Jensen. 2002. A Seman- tic Analysis of the English Genitive. Interation of Lexical and Formal Semantics. Studia Linguistica, 56(2):191-226.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Screenshot of the HTML template page used for annotation.", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Semantic relation distribution for the dataset presented in this work. HDFRE: History of the Decline and Fall of the Roman Empire; JB: Jungle Book; PTB: Sections 2-21 of the Wall Street Journal portion of the Penn Treebank.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "The possessor word \u2022 The possessee word \u2022 The syntactic governor of the possessee word \u2022 The set of words between the possessor and possessee word (e.g., first in John's first kiss) \u2022 The word to the right of the possessee The following feature templates are used to generate features from the above words. Many of these templates utilize information from WordNet (Fellbaum, 1998). \u2022 WordNet link types (link type list) (e.g., attribute, hypernym, entailment) \u2022 Lexicographer filenames (lexnames)-top level categories used in WordNet (e.g., noun.body, verb.cognition) \u2022 Set of words from the WordNet definitions (gloss terms) \u2022 The list of words connected via WordNet part-of links (part words) \u2022 The word's text (the word itself) \u2022 A collection of affix features (e.g., -ion, -er, -ity, -ness, -ism) \u2022 The last {2,3} letters of the word \u2022 List of all possible parts-of-speech in Word-Net for the word \u2022 The part-of-speech assigned by the part-ofspeech tagger \u2022 WordNet hypernyms \u2022 WordNet synonyms \u2022 Dependent words (all words linked as children in the parse tree) \u2022 Dependency relation to the word's syntactic governor", "num": null }, "TABREF1": { "type_str": "table", "html": null, "content": "", "text": "", "num": null }, "TABREF3": { "type_str": "table", "html": null, "content": "
", "text": "The semantic relations proposed by", "num": null }, "TABREF5": { "type_str": "table", "html": null, "content": "
", "text": "", "num": null }, "TABREF7": { "type_str": "table", "html": null, "content": "
Portion | Agreement (%): A vs B | A vs C | B vs C | Fleiss' \u03ba: A vs B | A vs C | B vs C | All | Entropy: A | B | C
PTB | 0.62 | 0.62 | 0.54 | 0.56 | 0.56 | 0.46 | 0.53 | 3.22 | 3.17 | 3.13
HDFRE | 0.82 | 0.78 | 0.72 | 0.77 | 0.71 | 0.64 | 0.71 | 2.73 | 2.75 | 2.73
JB | 0.74 | 0.56 | 0.54 | 0.70 | 0.50 | 0.48 | 0.56 | 3.17 | 3.11 | 3.17
All | 0.73 | 0.65 | 0.60 | 0.69 | 0.61 | 0.55 | 0.62 | 3.43 | 3.35 | 3.51
", "text": "Intermediate results for the possessives refinement work.", "num": null }, "TABREF8": { "type_str": "table", "html": null, "content": "
Source | Agreement (%): A vs B | A vs C | B vs C | Fleiss' \u03ba: A vs B | A vs C | B vs C | All | Entropy: A | B | C
PTB | 0.78 | 0.74 | 0.74 | 0.75 | 0.70 | 0.70 | 0.72 | 3.30 | 3.11 | 3.35
HDFRE | 0.78 | 0.76 | 0.76 | 0.74 | 0.72 | 0.72 | 0.73 | 3.03 | 2.98 | 3.17
JB | 0.92 | 0.90 | 0.86 | 0.90 | 0.87 | 0.82 | 0.86 | 2.73 | 2.71 | 2.65
All | 0.83 | 0.80 | 0.79 | 0.80 | 0.77 | 0.76 | 0.78 | 3.37 | 3.30 | 3.48
", "text": "Final possessives annotation agreement figures before revisions.", "num": null }, "TABREF9": { "type_str": "table", "html": null, "content": "
First Relation | Second Relation | Example
PARTITIVE | CONTROLLER/... | BoA's Mr. Davis
PARTITIVE | LOCATION | UK's cities
PARTITIVE | OBJECTIVE | BoA's adviser
PARTITIVE | OTHER RELATIONAL NOUN | BoA's chairman
PARTITIVE | PRODUCER'S PRODUCT | the lamb's wool
CONTROLLER/... | PRODUCER'S PRODUCT | the bird's nest
CONTROLLER/... | OBJECTIVE | his assistant
CONTROLLER/... | LOCATION | Libya's oil company
CONTROLLER/... | ATTRIBUTE | Joe's strength
CONTROLLER/... | MEMBER'S COLLECTION | the colonel's unit
CONTROLLER/... | RECIPIENT | Joe's trophy
RECIPIENT | OBJECTIVE | Joe's reward
SUBJECTIVE | PRODUCER'S PRODUCT | Joe's announcement
SUBJECTIVE | OBJECTIVE | its change
SUBJECTIVE | CONTROLLER/... | Joe's employee
SUBJECTIVE | LOCATION | Libya's devolution
SUBJECTIVE | MENTAL EXPERIENCER | Joe's resentment
OBJECTIVE | MENTAL EXPERIENCER | Joe's concern
OBJECTIVE | LOCATION | the town's inhabitants
KINSHIP | OTHER RELATIONAL NOUN | his fiancee
", "text": "Final possessives annotation agreement figures after revisions.", "num": null }, "TABREF10": { "type_str": "table", "html": null, "content": "", "text": "Ambiguous/multiclass possessive examples.", "num": null } } } }