{ "paper_id": "S10-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:27:30.551050Z" }, "title": "SemEval-2010 Task 7: Argument Selection and Coercion", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University Waltham", "location": { "region": "MA", "country": "USA" } }, "email": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University Waltham", "location": { "region": "MA", "country": "USA" } }, "email": "" }, { "first": "Alex", "middle": [], "last": "Plotnick", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University Waltham", "location": { "region": "MA", "country": "USA" } }, "email": "" }, { "first": "Elisabetta", "middle": [], "last": "Jezek", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pavia Pavia", "location": { "country": "Italy" } }, "email": "" }, { "first": "Olga", "middle": [], "last": "Batiukova", "suffix": "", "affiliation": { "laboratory": "", "institution": "III University of Madrid Madrid", "location": { "country": "Spain" } }, "email": "" }, { "first": "Valeria", "middle": [], "last": "Quochi", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe the Argument Selection and Coercion task for the SemEval-2010 evaluation exercise. This task involves characterizing the type of compositional operation that exists between a predicate and the arguments it selects. Specifically, the goal is to identify whether the type that a verb selects is satisfied directly by the argument, or whether the argument must change type to satisfy the verb typing. 
We discuss the problem in detail, describe the data preparation for the task, and analyze the results of the submissions.", "pdf_parse": { "paper_id": "S10-1005", "_pdf_hash": "", "abstract": [ { "text": "We describe the Argument Selection and Coercion task for the SemEval-2010 evaluation exercise. This task involves characterizing the type of compositional operation that exists between a predicate and the arguments it selects. Specifically, the goal is to identify whether the type that a verb selects is satisfied directly by the argument, or whether the argument must change type to satisfy the verb typing. We discuss the problem in detail, describe the data preparation for the task, and analyze the results of the submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, a number of annotation schemes that encode semantic information have been developed and used to produce data sets for training machine learning algorithms. 
Semantic markup schemes that have focused on annotating entity types and, more generally, word senses, have been extended to include semantic relationships between sentence elements, such as the semantic role (or label) assigned to the argument by the predicate (Palmer et al., 2005; Ruppenhofer et al., 2006; Kipper, 2005; Burchardt et al., 2006; Subirats, 2004) .", "cite_spans": [ { "start": 435, "end": 456, "text": "(Palmer et al., 2005;", "ref_id": "BIBREF15" }, { "start": 457, "end": 482, "text": "Ruppenhofer et al., 2006;", "ref_id": "BIBREF23" }, { "start": 483, "end": 496, "text": "Kipper, 2005;", "ref_id": "BIBREF10" }, { "start": 497, "end": 520, "text": "Burchardt et al., 2006;", "ref_id": "BIBREF2" }, { "start": 521, "end": 536, "text": "Subirats, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this task, we take this one step further and attempt to capture the \"compositional history\" of the argument selection relative to the predicate. In particular, this task attempts to identify the operations of type adjustment induced by a predicate over its arguments when they do not match its selectional properties. The task is defined as follows: for each argument of a predicate, identify whether the entity in that argument position satisfies the type expected by the predicate. If not, then identify how the entity in that position satisfies the typing expected by the predicate; that is, identify the source and target types in a type-shifting or coercion operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider the example below, where the verb report normally selects for a human in subject position, as in (1a). 
Notice, however, that through a metonymic interpretation, this constraint can be violated, as demonstrated in (1b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) a. John reported in late from Washington. b. Washington reported in late.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neither the surface annotation of entity extents and types nor assigning semantic roles associated with the predicate would reflect in this case a crucial point: namely, that in order for the typing requirements of the predicate to be satisfied, a type coercion or a metonymy (Hobbs et al., 1993; Pustejovsky, 1991; Nunberg, 1979; Egg, 2005) has taken place.", "cite_spans": [ { "start": 276, "end": 296, "text": "(Hobbs et al., 1993;", "ref_id": "BIBREF8" }, { "start": 297, "end": 315, "text": "Pustejovsky, 1991;", "ref_id": "BIBREF20" }, { "start": 316, "end": 330, "text": "Nunberg, 1979;", "ref_id": "BIBREF14" }, { "start": 331, "end": 341, "text": "Egg, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The SemEval Metonymy task (Markert and Nissim, 2007 ) was a good attempt to annotate such metonymic relations over a larger data set. This task involved two types with their metonymic variants: categories-for-locations (e.g., placefor-people) and categories-for-organizations (e.g., organization-for-members). 
One of the limitations of this approach, however, is that while appropriate for these specialized metonymy relations, the annotation specification and resulting corpus are not an informative guide for extending the annotation of argument selection more broadly.", "cite_spans": [ { "start": 26, "end": 51, "text": "(Markert and Nissim, 2007", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, the metonymy example in (1) is an instance of a much more pervasive phenomenon of type shifting and coercion in argument selection. For example, in (2) below, the sense annotation for the verb enjoy should arguably assign similar values to both (2a) and (2b). The consequence of this is that under current sense and role annotation strategies, the mapping to a syntactic realization for a given sense is made more complex, and is in fact perplexing for a clustering or learning algorithm operating over subcategorization types for the verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before introducing the specifics of the argument selection and coercion task, we will briefly review our assumptions regarding the role of annotation in computational linguistic systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology of Annotation", "sec_num": "2" }, { "text": "We assume that the features we use for encoding a specific linguistic phenomenon are rich enough to capture the desired behavior. These linguistic descriptions are typically distilled from extensive theoretical modeling of the phenomenon. The descriptions in turn form the basis for the annotation values of the specification language, which are themselves the features used in a development cycle for training and testing a labeling algorithm over a text. 
Finally, based on an analysis and evaluation of the performance of a system, the model of the phenomenon may be revised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology of Annotation", "sec_num": "2" }, { "text": "We call this cycle of development the MATTER methodology ( Fig. 1 ", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 65, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methodology of Annotation", "sec_num": "2" }, { "text": "Model: Structural descriptions provide theoretically informed attributes derived from empirical observations over the data;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Annotate: Annotation scheme assumes a feature set that encodes specific structural descriptions and properties of the input data;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Train: Algorithm is trained over a corpus annotated with the target feature set;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Test: Algorithm is tested against held-out data;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Evaluate: Standardized evaluation of results;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Revise: Revisit the model, annotation specification, or algorithm, in order to make the annotation more robust and reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "Some of the current and completed annotation efforts that have undergone such a development cycle include PropBank (Palmer et al., 2005) , Nom-Bank (Meyers et al., 2004) , and TimeBank .", "cite_spans": [ { "start": 115, "end": 136, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF15" }, { "start": 148, "end": 169, "text": "(Meyers et al., 
2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "):", "sec_num": null }, { "text": "The argument selection and coercion (ASC) task involves identifying the selectional mechanism used by the predicate over a particular argument. 1 For the purposes of this task, the possible relations between the predicate and a given argument are restricted to selection and coercion. In selection, the argument NP satisfies the typing requirements of the predicate, as in (3):(3) a. The spokesman denied the statement (PROPOSI-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "TION).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "b. The child threw the stone (PHYSICAL OBJECT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "c. The audience didn't believe the rumor (PROPOSI-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "TION).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "Coercion occurs when a type-shifting operation must be performed on the complement NP in order to satisfy selectional requirements of the predicate, as in (4). Note that coercion operations may apply to any argument position in a sentence, including the subject, as seen in (4b). Coercion can also be seen as an object of a proposition, as in (4c). 
In order to determine whether type-shifting has taken place, the classification task must then involve (1) identifying the verb sense and the associated syntactic frame, (2) identifying selectional requirements imposed by that verb sense on the target argument, and (3) identifying the semantic type of the target argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "We prepared the data for this task in two phases: the data set construction phase and the annotation phase (see Fig. 2 ). The first phase consisted of (1) selecting the target verbs to be annotated and compiling a sense inventory for each target, and (2) data extraction and preprocessing. The prepared data was then loaded into the annotation interface. During the annotation phase, the annotation judgments were entered into the database, and an adjudicator resolved disagreements. The resulting database was then exported in an XML format. ", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 118, "text": "Fig. 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Resources and Corpus Development", "sec_num": "4" }, { "text": "For the English data set, the data construction phase was combined with the annotation phase. The data for the task was created using the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: English", "sec_num": "4.1" }, { "text": "1. The verbs were selected by examining the data from the BNC, using the Sketch Engine (Kilgarriff et al., 2004) as described in (Rumshisky and Batiukova, 2008) . 
Verbs that consistently impose semantic typing on one of their arguments in at least one of their senses (strongly coercive verbs) were included in the final data set: arrive (at), cancel, deny, finish, and hear.", "cite_spans": [ { "start": 129, "end": 160, "text": "(Rumshisky and Batiukova, 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: English", "sec_num": "4.1" }, { "text": "2. Sense inventories were compiled for each verb, with the senses mapped to OntoNotes (Pradhan et al., 2007) whenever possible. For each sense, a set of type templates was compiled using a modification of the CPA technique (Hanks and Pustejovsky, 2005; Pustejovsky et al., 2004) : every argument in the syntactic pattern associated with a given sense was assigned a type specification. Although a particular sense is often compatible with more than one semantic type for a given argument, this was never the case in our data set, where no disjoint types were tested. The coercive senses of the chosen verbs were associated with the following type templates: We used a subset of semantic types from the Brandeis Shallow Ontology (BSO), which is a shallow hierarchy of types developed as a part of the CPA effort (Hanks, 2009; Pustejovsky et al., 2004; Rumshisky et al., 2006) . Types were selected for their prevalence in manually identified selection context patterns developed for several hundred English verbs. That is, they capture common semantic distinctions associated with the selectional properties of many verbs. The types used for annotation were: This set of types is purposefully shallow and non-hierarchical. 
For example, HUMAN is a subtype of both ANIMATE and PHYSICAL OBJECT, but annotators and system developers were instructed to choose the most relevant type (e.g., HUMAN) and to ignore inheritance.", "cite_spans": [ { "start": 86, "end": 108, "text": "(Pradhan et al., 2007)", "ref_id": "BIBREF16" }, { "start": 223, "end": 252, "text": "(Hanks and Pustejovsky, 2005;", "ref_id": "BIBREF6" }, { "start": 253, "end": 278, "text": "Pustejovsky et al., 2004)", "ref_id": "BIBREF17" }, { "start": 811, "end": 824, "text": "(Hanks, 2009;", "ref_id": "BIBREF7" }, { "start": 825, "end": 850, "text": "Pustejovsky et al., 2004;", "ref_id": "BIBREF17" }, { "start": 851, "end": 874, "text": "Rumshisky et al., 2006)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: English", "sec_num": "4.1" }, { "text": "ABSTRACT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: English", "sec_num": "4.1" }, { "text": "each target verb from the BNC (Burnard, 1995) . The extracted sentences were parsed automatically, and the sentences organized according to the grammatical relation the target verb was involved in. Sentences were excluded from the set if the target argument was expressed as an anaphor, or was not present in the sentence. The semantic head for the target grammatical relation was identified in each case.", "cite_spans": [ { "start": 30, "end": 45, "text": "(Burnard, 1995)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A set of sentences was randomly extracted for", "sec_num": "3." }, { "text": "4. Word sense disambiguation of the target predicate was performed manually on each extracted sentence, matching the target against the sense inventory and the corresponding type templates as described above. 
The appropriate senses were then saved into the database along with the associated type template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A set of sentences was randomly extracted for", "sec_num": "3." }, { "text": "5. The sentences containing coercive senses of the target verbs were loaded into the Brandeis Annotation Tool (Verhagen, 2010) . Annotators were presented with a list of sentences and asked to determine whether the argument in the specified grammatical relation to the target belongs to the type associated with that sense in the corresponding template. Disagreements were resolved by adjudication. 6. To guarantee robustness of the data, two additional steps were taken. First, only the six most recurrent coercion types were selected; these are given in table 1. Preference was given to cross-domain coercions, where the source and the target types are not related ontologically. Second, the distribution of selection and coercion instances was skewed to increase the number of coercions. The final English data set contains about 30% coercions.", "cite_spans": [ { "start": 110, "end": 126, "text": "(Verhagen, 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "A set of sentences was randomly extracted for", "sec_num": "3." }, { "text": "7. Finally, the data set was randomly split in half into a training set and a test set. The training data has 1032 instances, 311 of which are coercions, and the test data has 1039 instances, 314 of which are coercions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A set of sentences was randomly extracted for", "sec_num": "3." }, { "text": "In constructing the Italian data set, we adopted the same methodology used for the English data set, with the following differences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "1. 
The list of coercive verbs was selected by examining data from the ItWaC (Baroni and Kilgarriff, 2006) using the Sketch Engine (Kilgarriff et al., 2004) :", "cite_spans": [ { "start": 76, "end": 105, "text": "(Baroni and Kilgarriff, 2006)", "ref_id": "BIBREF0" }, { "start": 130, "end": 155, "text": "(Kilgarriff et al., 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "accusare 'accuse', annunciare 'announce', arrivare 'arrive', ascoltare 'listen', avvisare 'inform', chiamare 'call', cominciare 'begin', completare 'complete', concludere 'conclude', contattare 'contact', divorare 'devour', echeggiare 'echo', finire 'finish', informare 'inform', interrompere 'interrupt', leggere 'read', raggiungere 'reach', recar(si) 'go to', rimbombare 'resound', sentire 'hear', udire 'hear', visitare 'visit'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "2. The coercive senses of the chosen verbs were associated with type templates, some of which are listed below. Whenever possible, senses and type templates were adapted from the Italian Pattern Dictionary (Hanks and Jezek, 2007) and mapped to their SIMPLE equivalents (Lenci et al., 2000) . The annotators were provided with a set of definitions and examples of each type.", "cite_spans": [ { "start": 206, "end": 229, "text": "(Hanks and Jezek, 2007)", "ref_id": "BIBREF5" }, { "start": 269, "end": 289, "text": "(Lenci et al., 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "3. A set of sentences for each target verb was extracted and parsed from the PAROLE sottoinsieme corpus (Bindi et al., 2000) . They were skimmed to ensure that the final data set contained a sufficient number of coercions, with proportionally more selections than coercions. 
Sentences were preselected to include instances representing one of the chosen senses.", "cite_spans": [ { "start": 104, "end": 124, "text": "(Bindi et al., 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "4. In order to exclude instances that may have been wrongly selected, a judge performed word sense disambiguation of the target predicate in the extracted sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "5. Annotators were presented with a list of sentences and asked to determine the usual semantic type associated with the argument in the specified grammatical relation. Every sentence was annotated by two annotators and one judge, who resolved disagreements. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set Construction Phase: Italian", "sec_num": "4.2" }, { "text": "The test and training data were provided in XML. The relation between the predicate (viewed as a function) and its argument was represented by composition link elements (CompLink), as shown below. The test data differed from the training data in the omission of CompLink elements. In case of coercion, there is a mismatch between the source and the target types, and both types need to be identified; e.g., The State Department repeatedly denied the attack:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Format", "sec_num": "5" }, { "text": "The State Department repeatedly denied the attack. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Format", "sec_num": "5" }, { "text": "When the compositional operation is selection, the source and target types must match; e.g., The State Department repeatedly denied the statement:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Format", "sec_num": "5" }, { "text": "The State Department repeatedly denied the statement. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Format", "sec_num": "5" }, { "text": "We received only a single submission for the ASC task. The UTDMet system was an SVM-based system with features derived from two main sources: a PageRank-style algorithm over WordNet hypernyms used to define semantic classes, and statistics from a PropBank-style parse of some 8 million documents from the English Gigaword corpus. The results, shown in Table 2 , were computed from confusion matrices constructed for each of four classification tasks for the 1039 link instances in the English test data: determination of argument selection or coercion, identification of the argument source type, identification of the argument target type, and the joint identification of the source/target type pair. Clearly, the UTDMet system did quite well at this task. The one immediately noticeable outlier is the macro-averaged precision for the joint type, which reflects a small number of miscategorizations of rare types. For example, eliminating the single miscategorized ARTIFACT-LOCATION link in the submitted test data bumps this score up to a respectable 94%. This large discrepancy can be explained by the lack of any coercions with those types in the gold-standard data. In the absence of any other submissions, it is difficult to provide a point of comparison for this performance. However, we can provide a baseline by taking each link to be a selection whose source and target types are the most common type (EVENT for the gold-standard English data). 
This yields micro-averaged precision scores of 69% for selection vs. coercion, 33% for source type identification, 37% for the target type identification, and 22% for the joint type.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 359, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results & Analysis", "sec_num": "6" }, { "text": "The performance of the UTDMet system suggests that most of the type coercions were identifiable based largely on examination of lexical clues associated with selection contexts. This is in fact to be expected for the type coercions that were the focus of the English data set. It will be interesting to see how systems perform on the Italian data set and an expanded corpus for English and Italian, where more subtle and complex type exploitations and manipulations are at play. These will hopefully be explored in future competitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Analysis", "sec_num": "6" }, { "text": "In this paper, we have described the Argument Selection and Coercion task for SemEval-2010. This task involves identifying the relation between a predicate and its argument as one that encodes the compositional history of the selection process. This allows us to distinguish surface forms that directly satisfy the selectional (type) requirements of a predicate from those that are coerced in context. We described some details of a specification language for selection, the annotation task using this specification to identify argument selection behavior, and the preparation of the data for the task. 
Finally, we analyzed the results of the task submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "This task is part of a larger effort to annotate text with compositional operations(Pustejovsky et al., 2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Large linguistically-processed web corpora for multiple languages", "authors": [ { "first": "M", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2006, "venue": "Proceedings of European ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Baroni and A. Kilgarriff. 2006. Large linguistically-processed web corpora for multiple languages. In Proceedings of European ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The salsa corpus: a german corpus resource for lexical semantics", "authors": [ { "first": "Aljoscha", "middle": [], "last": "Burchardt", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Kowalski", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pado", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aljoscha Burchardt, Katrin Erk, Anette Frank, An- drea Kowalski, Sebastian Pado, and Manfred Pinkal. 2006. The salsa corpus: a german corpus resource for lexical semantics. 
In Proceedings of LREC, Genoa, Italy.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Users' Reference Guide", "authors": [ { "first": "L", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 1995, "venue": "British National Corpus. British National Corpus Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Burnard, 1995. Users' Reference Guide, British National Corpus. British National Corpus Consortium, Oxford, England.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Flexible semantics for reinterpretation phenomena", "authors": [ { "first": "Marcus", "middle": [], "last": "Egg", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus Egg. 2005. Flexible semantics for reinterpretation phenomena. CSLI, Stanford.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Building Pattern Dictionaries with Corpus Analysis", "authors": [ { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "E", "middle": [], "last": "Jezek", "suffix": "" } ], "year": 2007, "venue": "International Colloquium on Possible Dictionaries", "volume": "", "issue": "", "pages": "6--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Hanks and E. Jezek. 2007. Building Pattern Dictionaries with Corpus Analysis. In International Colloquium on Possible Dictionaries, Rome, June, 6-7. Oral Presentation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A pattern dictionary for natural language processing", "authors": [ { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Hanks and J. Pustejovsky. 2005. 
A pattern dictionary for natural language processing. Revue Fran\u00e7aise de Linguistique Appliqu\u00e9e.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Corpus pattern analysis. CPA Project Page", "authors": [ { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Hanks. 2009. Corpus pattern analysis. CPA Project Page. Retrieved April 11, 2009, from http://nlp.fi.muni.cz/projekty/cpa/.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Interpretation as abduction", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "M", "middle": [], "last": "Stickel", "suffix": "" }, { "first": "P", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1993, "venue": "Artificial Intelligence", "volume": "63", "issue": "", "pages": "69--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Hobbs, M. Stickel, and P. Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63:69-142.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Sketch Engine", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "P", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "P", "middle": [], "last": "Smrz", "suffix": "" }, { "first": "D", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Euralex", "volume": "", "issue": "", "pages": "105--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Kilgarriff, P. Rychly, P. Smrz, and D. Tugwell. 2004. The Sketch Engine. 
Proceedings of Euralex, Lorient, France, pages 105-116.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "VerbNet: A broad-coverage, comprehensive verb lexicon", "authors": [ { "first": "Karin", "middle": [], "last": "Kipper", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. PhD dissertation, University of Pennsylvania, PA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SIMPLE: A general framework for the development of multilingual lexicons", "authors": [ { "first": "A", "middle": [], "last": "Lenci", "suffix": "" }, { "first": "N", "middle": [], "last": "Bel", "suffix": "" }, { "first": "F", "middle": [], "last": "Busa", "suffix": "" }, { "first": "N", "middle": [], "last": "Calzolari", "suffix": "" }, { "first": "E", "middle": [], "last": "Gola", "suffix": "" }, { "first": "M", "middle": [], "last": "Monachini", "suffix": "" }, { "first": "A", "middle": [], "last": "Ogonowski", "suffix": "" }, { "first": "I", "middle": [], "last": "Peters", "suffix": "" }, { "first": "W", "middle": [], "last": "Peters", "suffix": "" }, { "first": "N", "middle": [], "last": "Ruimy", "suffix": "" } ], "year": 2000, "venue": "International Journal of Lexicography", "volume": "13", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lenci, N. Bel, F. Busa, N. Calzolari, E. Gola, M. Monachini, A. Ogonowski, I. Peters, W. Peters, N. Ruimy, et al. 2000. SIMPLE: A general framework for the development of multilingual lexicons. 
International Journal of Lexicography, 13(4):249.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SemEval-2007 task 8: Metonymy resolution", "authors": [ { "first": "K", "middle": [], "last": "Markert", "suffix": "" }, { "first": "M", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Markert and M. Nissim. 2007. SemEval-2007 task 8: Metonymy resolution. In Eneko Agirre, Llu\u00eds M\u00e0rquez, and Richard Wicentowski, editors, Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The NomBank project: An interim report", "authors": [ { "first": "A", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "R", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "C", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "R", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "V", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "B", "middle": [], "last": "Young", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The NomBank project: An interim report. In HLT- NAACL 2004 Workshop: Frontiers in Corpus Anno- tation, pages 24-31.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The non-uniqueness of semantic solutions", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Nunberg", "suffix": "" } ], "year": 1979, "venue": "Polysemy. 
Linguistics and Philosophy", "volume": "3", "issue": "", "pages": "143--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Nunberg. 1979. The non-uniqueness of se- mantic solutions: Polysemy. Linguistics and Phi- losophy, 3:143-184.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "P", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ontonotes: A unified relational semantic representation", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "L", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "R", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2007, "venue": "International Conference on Semantic Computing", "volume": "", "issue": "", "pages": "517--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Pradhan, E. Hovy, MS Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2007. Ontonotes: A unified relational semantic representation. 
In International Conference on Semantic Computing, 2007, pages 517-526.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automated Induction of Sense in Context", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "A", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2004, "venue": "COL-ING 2004", "volume": "", "issue": "", "pages": "924--931", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, P. Hanks, and A. Rumshisky. 2004. Automated Induction of Sense in Context. In COL- ING 2004, Geneva, Switzerland, pages 924-931.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Temporal and event information in natural language text", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "R", "middle": [], "last": "Knippen", "suffix": "" }, { "first": "J", "middle": [], "last": "Littman", "suffix": "" }, { "first": "R", "middle": [], "last": "Sauri", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "39", "issue": "2", "pages": "123--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, R. Knippen, J. Littman, and R. Sauri. 2005. Temporal and event information in natural language text. Language Resources and Evaluation, 39(2):123-164.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "GLML: Annotating argument selection and coercion", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "A", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "J", "middle": [], "last": "Moszkowicz", "suffix": "" }, { "first": "O", "middle": [], "last": "Batiukova", "suffix": "" } ], "year": 2009, "venue": "IWCS-8: Eighth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. 
Pustejovsky, A. Rumshisky, J. Moszkowicz, and O. Batiukova. 2009. GLML: Annotating argument selection and coercion. IWCS-8: Eighth Interna- tional Conference on Computational Semantics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The generative lexicon", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky. 1991. The generative lexicon. Compu- tational Linguistics, 17(4).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Polysemy in verbs: systematic relations between senses and their effect on annotation", "authors": [ { "first": "A", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "O", "middle": [], "last": "Batiukova", "suffix": "" } ], "year": 2008, "venue": "COLING Workshop on Human Judgement in Computational Linguistics (HJCL-2008)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Rumshisky and O. Batiukova. 2008. Polysemy in verbs: systematic relations between senses and their effect on annotation. In COLING Workshop on Human Judgement in Computational Linguistics (HJCL-2008), Manchester, England.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Constructing a corpus-based ontology using model bias", "authors": [ { "first": "A", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "C", "middle": [], "last": "Havasi", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2006, "venue": "The 19th International FLAIRS Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Rumshisky, P. Hanks, C. Havasi, and J. Pustejovsky. 2006. Constructing a corpus-based ontology using model bias. 
In The 19th International FLAIRS Con- ference, FLAIRS 2006, Melbourne Beach, Florida, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "FrameNet II: Extended Theory and Practice", "authors": [ { "first": "J", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "M", "middle": [], "last": "Ellsworth", "suffix": "" }, { "first": "M", "middle": [], "last": "Petruck", "suffix": "" }, { "first": "C", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "J", "middle": [], "last": "Scheffczyk", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Ruppenhofer, M. Ellsworth, M. Petruck, C. Johnson, and J. Scheffczyk. 2006. FrameNet II: Extended Theory and Practice.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "FrameNet Espa\u00f1ol. Una red sem\u00e1ntica de marcos conceptuales", "authors": [ { "first": "Carlos", "middle": [], "last": "Subirats", "suffix": "" } ], "year": 2004, "venue": "VI International Congress of Hispanic Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Subirats. 2004. FrameNet Espa\u00f1ol. Una red sem\u00e1ntica de marcos conceptuales. In VI Interna- tional Congress of Hispanic Linguistics, Leipzig.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Brandeis Annotation Tool", "authors": [ { "first": "Marc", "middle": [], "last": "Verhagen", "suffix": "" } ], "year": 2010, "venue": "Language Resources and Evaluation Conference, LREC 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Verhagen. 2010. The Brandeis Annotation Tool. In Language Resources and Evaluation Conference, LREC 2010, Malta.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "The MATTER Methodology (2) a. Mary enjoyed drinking her beer. b. 
Mary enjoyed her beer.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "(4) a. The president denied the attack (EVENT \u2192 PROPO-SITION). b. The White House (LOCATION \u2192 HUMAN) denied this statement. c. The Boston office called with an update (EVENT \u2192 INFO).", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Corpus Development Architecture", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "ENTITY, ANIMATE, ARTIFACT, ATTITUDE, DOCUMENT, DRINK, EMOTION, ENTITY, EVENT, FOOD, HUMAN, HUMAN GROUP, IDEA, INFORMATION, LOCA-TION, OBLIGATION, ORGANIZATION, PATH, PHYSICAL OBJECT, PROPERTY, PROPOSITION, RULE, SENSATION, SOUND, SUBSTANCE, TIME PERIOD, VEHICLE", "uris": null }, "TABREF0": { "html": null, "text": "Hear, sense perceive physical sound : HUMAN hear SOUND", "type_str": "table", "content": "", "num": null }, "TABREF2": { "html": null, "text": "HUMAN arriva [prep] LOCATION b. cominciare, sense initiate an undertaking: HUMAN comincia EVENT c. completare, sense finish an activity: HUMAN completa EVENT d. udire, sense perceive a sound : HUMAN ode SOUND e. visitare, sense visit a place: HUMAN visita LOCA-", "type_str": "table", "content": "
TION
The following types were used to annotate
the Italian dataset:
ABSTRACT ENTITY, ANIMATE, ARTIFACT, ATTITUDE,
CONTAINER, DOCUMENT, DRINK, EMOTION, ENTITY,
EVENT, FOOD, HUMAN, HUMAN GROUP, IDEA, INFORMATION,
LIQUID, LOCATION, ORGANIZATION,
PHYSICAL OBJECT, PROPERTY, SENSATION, SOUND,
TIME PERIOD, VEHICLE
", "num": null }, "TABREF3": { "html": null, "text": "HUMAN (accusare, annunciare) b. ARTIFACT \u2192 HUMAN (annunciare, avvisare) c. EVENT \u2192 LOCATION (arrivare, raggiungere) d. ARTIFACT \u2192 EVENT (cominciare, completare) e. EVENT \u2192 DOCUMENT (leggere, divorare) f. HUMAN \u2192 DOCUMENT (leggere, divorare) g. EVENT \u2192 SOUND (ascoltare, echeggiare) h. ARTIFACT \u2192 SOUND (ascoltare, echeggiare)", "type_str": "table", "content": "
6. Some of the coercion types selected for Italian
were:
7. The Italian training data contained 1466 instances,
381 of which were coercions; the test data
had 1463 instances, with 384 coercions.
", "num": null }, "TABREF5": { "html": null, "text": "Results for the UTDMet submission.", "type_str": "table", "content": "", "num": null } } } }