{ "paper_id": "S10-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:28:01.778159Z" }, "title": "SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions", "authors": [ { "first": "Cristina", "middle": [], "last": "Butnariu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University College Dublin", "location": {} }, "email": "ioana.butnariu@ucd.ie" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "nkim@csse.unimelb.edu.au" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "nakov@comp.nus.edu.sg" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "", "affiliation": { "laboratory": "", "institution": "University College Dublin", "location": {} }, "email": "tony.veale@ucd.ie" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous research has shown that the meaning of many noun-noun compounds N 1 N 2 can be approximated reasonably well by paraphrasing clauses of the form 'N 2 that. .. N 1 ', where '.. . ' stands for a verb with or without a preposition. For example, malaria mosquito is a 'mosquito that carries malaria'. Evaluating the quality of such paraphrases is the theme of Task 9 at SemEval-2010. This paper describes some background, the task definition, the process of data collection and the task results. We also venture a few general conclusions before the participating teams present their systems at the SemEval-2010 workshop. There were 5 teams who submitted 7 systems.", "pdf_parse": { "paper_id": "S10-1007", "_pdf_hash": "", "abstract": [ { "text": "Previous research has shown that the meaning of many noun-noun compounds N 1 N 2 can be approximated reasonably well by paraphrasing clauses of the form 'N 2 that. .. N 1 ', where '.. . ' stands for a verb with or without a preposition. For example, malaria mosquito is a 'mosquito that carries malaria'. Evaluating the quality of such paraphrases is the theme of Task 9 at SemEval-2010. This paper describes some background, the task definition, the process of data collection and the task results. We also venture a few general conclusions before the participating teams present their systems at the SemEval-2010 workshop. There were 5 teams who submitted 7 systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Noun compounds (NCs) are sequences of two or more nouns that act as a single noun, 1 e.g., stem cell, stem cell research, stem cell research organization, etc. Lapata and Lascarides (2003) observe that NCs pose syntactic and semantic challenges for three basic reasons: (1) the compounding process is extremely productive in English; (2) the semantic relation between the head and the modifier is implicit; (3) the interpretation can be influenced by contextual and pragmatic factors. 
Corpus studies have shown that while NCs are very common in English, their frequency distribution follows a Zipfian power law, and the majority of NCs encountered will be rare types (Tanaka and Baldwin, 2003; Lapata and Lascarides, 2003; Baldwin and Tanaka, 2004; \u00d3 S\u00e9aghdha, 2008). As a consequence, Natural Language Processing (NLP) applications cannot afford either to ignore NCs or to assume that they can be handled by relying on a dictionary or other static resource.", "cite_spans": [ { "start": 160, "end": 188, "text": "Lapata and Lascarides (2003)", "ref_id": "BIBREF7" }, { "start": 682, "end": 708, "text": "(Tanaka and Baldwin, 2003;", "ref_id": "BIBREF20" }, { "start": 709, "end": 736, "text": "Lapata and Lascarides, 2003", "ref_id": "BIBREF7" }, { "start": 751, "end": 764, "text": "Tanaka, 2004;", "ref_id": "BIBREF0" }, { "start": 765, "end": 782, "text": "\u00d3 S\u00e9aghdha, 2008)", "ref_id": "BIBREF18" }, { "start": 867, "end": 882, "text": "(Downing, 1977)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Trouble with lexical resources for NCs notwithstanding, NC semantics plays a central role in complex knowledge discovery and applications, including but not limited to Question Answering (QA), Machine Translation (MT), and Information Retrieval (IR). For example, knowing the (implicit) semantic relation between the NC components can help rank and refine queries in QA and IR, or select promising translation pairs in MT (Nakov, 2008a). Thus, robust semantic interpretation of NCs should be of much help in broad-coverage semantic processing.", "cite_spans": [ { "start": 422, "end": 436, "text": "(Nakov, 2008a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Proposed approaches to modelling NC semantics have used semantic similarity (Nastase and Szpakowicz, 2003; Moldovan et al., 2004; Kim and Baldwin, 2005; Nastase and Szpakowicz, 2006; Girju, 2007; \u00d3 S\u00e9aghdha and Copestake, 2007) and paraphrasing (Vanderwende, 1994; Kim and Baldwin, 2006; Butnariu and Veale, 2008; Nakov and Hearst, 2008). The former body of work seeks to measure the similarity between known and unseen NCs by considering various features, usually context-related.
In contrast, the latter group uses verb semantics to interpret NCs directly, e.g., olive oil as 'oil that is extracted from olive(s)', drug death as 'death that is caused by drug(s)', flu shot as a 'shot that prevents flu'.", "cite_spans": [ { "start": 76, "end": 106, "text": "(Nastase and Szpakowicz, 2003;", "ref_id": "BIBREF15" }, { "start": 107, "end": 129, "text": "Moldovan et al., 2004;", "ref_id": "BIBREF11" }, { "start": 130, "end": 152, "text": "Kim and Baldwin, 2005;", "ref_id": "BIBREF5" }, { "start": 153, "end": 182, "text": "Nastase and Szpakowicz, 2006;", "ref_id": "BIBREF16" }, { "start": 183, "end": 195, "text": "Girju, 2007;", "ref_id": "BIBREF4" }, { "start": 196, "end": 227, "text": "\u00d3 S\u00e9aghdha and Copestake, 2007)", "ref_id": "BIBREF17" }, { "start": 245, "end": 264, "text": "(Vanderwende, 1994;", "ref_id": "BIBREF21" }, { "start": 265, "end": 287, "text": "Kim and Baldwin, 2006;", "ref_id": "BIBREF6" }, { "start": 288, "end": 313, "text": "Butnariu and Veale, 2008;", "ref_id": "BIBREF1" }, { "start": 314, "end": 337, "text": "Nakov and Hearst, 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The growing popularity -and expected direct utility -of paraphrase-based NC semantics has encouraged us to propose an evaluation exercise for the 2010 edition of SemEval. This paper gives a bird's-eye view of the task. Section 2 presents its objective, data, data collection, and evaluation method. Section 3 lists the participating teams. Section 4 shows the results and our analysis. In Section 5, we sum up our experience so far.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the purpose of the task, we focused on two-word NCs which are modifier-head pairs of nouns, such as apple pie or malaria mosquito. There are several ways to \"attack\" the paraphrase-based semantics of such NCs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Objective", "sec_num": "2.1" }, { "text": "We have proposed a rather simple problem: assume that many paraphrases can be found -perhaps via clever Web search -but their relevance is uncertain. Given sufficient training data, we seek to estimate the quality of candidate paraphrases in a test set. Each NC in the training set comes with a long list of verbs in the infinitive (often with a preposition) which may paraphrase the NC adequately. Examples of apt paraphrasing verbs: olive oil -be extracted from, drug death -be caused by, flu shot -prevent. These lists have been constructed from human-proposed paraphrases. For the training data, we also provide the participants with a quality score for each paraphrase, which is a simple count of the number of human subjects who proposed that paraphrase. At test time, given a noun compound and a list of paraphrasing verbs, a participating system needs to produce aptness scores that correlate well (in terms of relative ranking) with the held-out human judgments.
There may be a diverse range of paraphrases for a given compound, and some of them may in fact be inappropriate, but the distribution over paraphrases estimated from a large number of subjects can be expected to be representative of the compound's meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Objective", "sec_num": "2.1" }, { "text": "Following Nakov (2008b), we used Amazon Mechanical Turk (MTurk; www.mturk.com) to acquire paraphrasing verbs from human annotators. The service offers inexpensive access to subjects for tasks which require human intelligence. Its API allows a computer program to run tasks easily and collate the subjects' responses. MTurk is becoming a popular means of eliciting and collecting linguistic intuitions for NLP research; see Snow et al. (2008) for an overview and further discussion.", "cite_spans": [ { "start": 10, "end": 23, "text": "Nakov (2008b)", "ref_id": "BIBREF14" }, { "start": 428, "end": 446, "text": "Snow et al. (2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "Even though we recruited human subjects, whom we required to take a qualification test, data collection was time-consuming since many annotators did not follow the instructions. We had to monitor their progress and send them timely messages pointing out mistakes. Although the MTurk service allows task owners to accept or reject individual submissions, rejection was the last resort since it has the triply unpleasant effect of (1) denying the worker her fee, (2) negatively affecting her rating, and (3) lowering our rating as a requester. We thus chose to try to educate our workers \"on the fly\". Even so, we ended up with many examples which we had to correct manually through labor-intensive post-processing. The flaws were not different from those already described by Nakov (2008b). Post-editing was also necessary to lemmatize the paraphrasing verbs systematically.", "cite_spans": [ { "start": 777, "end": 790, "text": "Nakov (2008b)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "Trial Data. At the end of August 2009, we released as trial data the previously collected paraphrase sets (Nakov, 2008b) for the Levi-250 dataset (after further review and cleaning). This dataset consisted of 250 noun-noun compounds from (Levi, 1978), each paraphrased by 25-30 MTurk workers (without a qualification test).", "cite_spans": [ { "start": 106, "end": 120, "text": "(Nakov, 2008b)", "ref_id": "BIBREF14" }, { "start": 238, "end": 250, "text": "(Levi, 1978)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "Training Data. The training dataset was an extension of the trial dataset. It consisted of the same 250 noun-noun compounds, but the number of annotators per compound increased significantly. We aimed to recruit at least 30 additional MTurk workers per compound; for some compounds we managed to get many more. For example, when we added the paraphrasing verbs from the trial dataset to the newly collected verbs, we had 131 different workers for neighborhood bars, compared to just 50 for tear gas. On average, we had 72.7 workers per compound. Each worker was instructed to try to produce at least three paraphrasing verbs, so we ended up with an average of 191.8 paraphrasing verbs per compound, 84.6 of which were unique.
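To make concrete how these frequency-scored paraphrase lists arise, here is a minimal sketch in Python (our own illustration with invented worker submissions; none of the names below come from the released data or tools): each paraphrase's quality score is simply the number of workers who proposed it for that compound.

from collections import Counter, defaultdict

def aggregate(submissions):
    # submissions: (compound, worker_id, paraphrasing_verb) triples;
    # assuming each worker proposes a given verb for a compound at most
    # once, the count equals the number of proposing workers.
    scores = defaultdict(Counter)
    workers = defaultdict(set)
    for compound, worker, verb in submissions:
        scores[compound][verb] += 1
        workers[compound].add(worker)
    return scores, workers

subs = [('flu shot', 'w1', 'prevent'), ('flu shot', 'w2', 'prevent'),
        ('flu shot', 'w2', 'protect against'), ('flu shot', 'w3', 'immunize against')]
scores, workers = aggregate(subs)
print(scores['flu shot'].most_common())  # [('prevent', 2), ('protect against', 1), ('immunize against', 1)]
print(len(workers['flu shot']))          # 3 distinct workers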
See Table 1 for more details.", "cite_spans": [], "ref_spans": [ { "start": 720, "end": 727, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "Test Data. The test dataset consisted of 388 noun compounds collected from two data sources:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "(1) the Nastase and Szpakowicz (2003) dataset; and (2) the Lauer (1995) dataset. The former contains 328 noun-noun compounds (there are also a number of adjective-noun and adverb-noun pairs), while the latter contains 266 noun-noun compounds. Since these datasets overlap between themselves and with the training dataset, we had to exclude some examples. In the end, we had 388 unique noun-noun compounds for testing, distinct from those used for training. We aimed for 100 human workers per testing NC, but we got only 68.3 on average, with a minimum of 57 and a maximum of 96; there were 185.0 paraphrasing verbs per compound, 70.9 of them unique, which is close to what we had for the training data. Data format. We distribute the training data as a raw text file. Each line has the following tab-separated format:", "cite_spans": [ { "start": 8, "end": 37, "text": "Nastase and Szpakowicz (2003)", "ref_id": "BIBREF15" }, { "start": 59, "end": 71, "text": "Lauer (1995)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Datasets", "sec_num": "2.2" }, { "text": "where NC is a noun-noun compound (e.g., apple cake, flu virus), paraphrase is a human-proposed paraphrasing verb optionally followed by a preposition, and frequency is the number of annotators who proposed that paraphrase. The test file has a similar format, except that the frequency is not included and the paraphrases for each noun compound appear in random order, as in this illustrative extract from the test dataset:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NC paraphrase frequency", "sec_num": null }, { "text": "...
chest pain originate
chest pain start in
chest pain descend in
chest pain be in
...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NC paraphrase frequency", "sec_num": null }, { "text": "License. All datasets are released under the Creative Commons Attribution 3.0 Unported license (creativecommons.org/licenses/by/3.0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NC paraphrase frequency", "sec_num": null }, { "text": "All evaluation was performed by computing an appropriate measure of similarity/correlation between system predictions and the compiled judgements of the human annotators. This was done on a compound-by-compound basis and averaged over all compounds in the test dataset. Section 4 shows results for three measures: Spearman rank correlation, Pearson correlation, and cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Spearman Rank Correlation (\u03c1) was adopted as the official evaluation measure for the competition. As a rank correlation statistic, it does not use the numerical values of the predictions or human judgements, only their relative ordering encoded as integer ranks.
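As a quick illustration of this rank encoding (our own sketch with made-up scores, using NumPy/SciPy), Spearman's rho is just the Pearson correlation computed after replacing each score by its rank:

import numpy as np
from scipy.stats import rankdata, pearsonr, spearmanr

gold = np.array([5, 3, 1, 1, 0])            # e.g., annotator frequencies
pred = np.array([0.9, 0.4, 0.6, 0.1, 0.0])  # e.g., system aptness scores

# rankdata assigns average ranks to ties, matching spearmanr's treatment:
rho_by_hand = pearsonr(rankdata(gold), rankdata(pred))[0]
rho = spearmanr(gold, pred).correlation
assert abs(rho - rho_by_hand) < 1e-9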
For a sample of n items ranked by two methods x and y, the rank correlation \u03c1 is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "\\rho = \\frac{n \\sum_i x_i y_i - (\\sum_i x_i)(\\sum_i y_i)}{\\sqrt{n \\sum_i x_i^2 - (\\sum_i x_i)^2} \\sqrt{n \\sum_i y_i^2 - (\\sum_i y_i)^2}} \\quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "where x_i, y_i are the ranks given by x and y to the i-th item, respectively. The value of \u03c1 ranges between -1.0 (total negative correlation) and 1.0 (total positive correlation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Pearson Correlation (r) is a standard measure of correlation strength between real-valued variables. The formula is the same as (1), but with x_i, y_i taking real values rather than rank values; just like \u03c1, r's values fall between -1.0 and 1.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Cosine similarity is frequently used in NLP to compare numerical vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "\\cos = \\frac{\\sum_{i=1}^{n} x_i y_i}{\\sqrt{\\sum_{i=1}^{n} x_i^2} \\sqrt{\\sum_{i=1}^{n} y_i^2}} \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "For non-negative data, the cosine similarity takes values between 0.0 and 1.0. Pearson's r can be viewed as a version of the cosine similarity which performs centering on x and y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Baseline: To help interpret these evaluation measures, we implemented a simple baseline. A distribution over the paraphrases was estimated by summing the frequencies for all compounds in the training dataset, and the paraphrases for the test examples were scored according to this distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Note that this baseline entirely ignores the identity of the nouns in the compound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "The task attracted five teams, one of which (UCD-GOGGLE) submitted three runs. The participants are listed in Table 2 along with brief system descriptions; for more details please see the teams' own description papers.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Participants", "sec_num": "3" }, { "text": "The task results appear in Table 3. In an evaluation by Spearman's \u03c1 (the official ranking measure), the winning system was UVT-MEPHISTO, which scored 0.450. UVT also achieved the top Pearson's r score. UCD-PN is the top-scoring system according to the cosine measure. One participant submitted part of his results after the official deadline; these late results are marked with an asterisk. The participants used a variety of information sources and estimation methods. UVT-MEPHISTO is a supervised system that uses frequency information from the Google N-Gram Corpus and features from WordNet (Fellbaum, 1998) to rank candidate paraphrases. On the other hand, UCD-PN uses no external resources and no supervised training, yet came within 0.009 of UVT-MEPHISTO in the official evaluation.
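For reference, the compound-by-compound evaluation of Section 2.3 can be sketched as follows (our own illustration, not the official scorer; gold and pred are assumed to map each compound to lists of scores over the same ordered candidate paraphrases):

import numpy as np
from scipy.stats import spearmanr, pearsonr

def cosine(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def evaluate(gold, pred):
    # One (rho, r, cos) triple per compound, then the average of each measure.
    per_nc = [(spearmanr(gold[nc], pred[nc]).correlation,
               pearsonr(gold[nc], pred[nc])[0],
               cosine(gold[nc], pred[nc])) for nc in gold]
    rhos, rs, cosines = zip(*per_nc)
    return float(np.mean(rhos)), float(np.mean(rs)), float(np.mean(cosines))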
The basic idea of UCD-PN, that one can predict the plausibility of a paraphrase simply by knowing which other paraphrases have been given for that compound, regardless of their frequency, is clearly a powerful one. Unlike the other systems, UCD-PN used information about the test examples (not their ranks, of course) for model estimation; this has similarities to \"transductive\" methods for semi-supervised learning. However, post-hoc analysis shows that UCD-PN would have preserved its rank if it had estimated its model on the training data only. On the other hand, if the task had been designed differently -by asking systems to propose paraphrases from the set of all possible verb/preposition combinations -then we would not expect UCD-PN's approach to work as well as models that use corpus information.", "cite_spans": [ { "start": 580, "end": 596, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The other systems are comparable to UVT-MEPHISTO in that they use corpus frequencies to evaluate paraphrases and apply some kind of semantic smoothing to handle sparsity. However, UCD-GOGGLE-I, UCAM and NC-INTERP are unsupervised systems. UCAM uses the 100-million-word BNC corpus, while the other systems use Web-scale resources; this has presumably exacerbated sparsity issues and contributed to a relatively poor performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The hybrid approach exemplified by UCD-GOGGLE-III combines the predictions of a system that models paraphrase correlations and one that learns from corpus frequencies, and thus attains better performance. Given that the two top-scoring systems can also be characterized as using these two distinct information sources, it is natural to consider combining these systems. Simply normalizing (to unit sum) and averaging the two sets of prediction values for each compound does indeed give better scores: Spearman \u03c1 = 0.472, r = 0.431, Cosine = 0.685. The baseline from Section 2.3 turns out to be very strong. Evaluating with Spearman's \u03c1, only three systems outperform it. It is less competitive on the other evaluation measures, though. This suggests that global paraphrase frequencies may be useful for telling sensible paraphrases from bad ones, but will not do for quantifying the plausibility of a paraphrase for a given noun compound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Given that it is a newly proposed task, this initial experiment in paraphrasing noun compounds has been a moderate success. The participation rate has been sufficient for the purposes of comparing and contrasting different approaches to the role of paraphrases in the interpretation of noun-noun compounds. We have seen a variety of approaches applied to the same dataset, and we have been able to compare the performance of pure approaches to hybrid approaches, and of supervised approaches to unsupervised approaches. The results reported here are also encouraging, though clearly there is considerable room for improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "This task has established a high baseline for systems to beat.
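For concreteness, here is a minimal sketch of that baseline (ours, under the assumption that train maps each compound to its {paraphrase: frequency} counts, as in the data format of Section 2.2; the function names are hypothetical):

from collections import Counter

def global_frequency_baseline(train):
    # Sum paraphrase frequencies over all training compounds, entirely
    # ignoring the nouns of the compound being scored (Section 2.3).
    dist = Counter()
    for paraphrase_freqs in train.values():
        dist.update(paraphrase_freqs)
    def score(compound, candidate_paraphrases):
        return [dist[p] for p in candidate_paraphrases]  # unseen paraphrases score 0
    return score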
We can take heart from the fact that the best performance is apparently obtained from a combination of corpus-derived usage features and dictionary-derived linguistic knowledge. Although clever but simple approaches can do quite well on such a task, it is encouraging to note that the best results await those who employ the most robust and the most informed treatments of NCs and their paraphrases. Despite a good start, this is a challenge that remains resolutely open. We expect that the dataset created for the task will be a valuable resource for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We soon realized that we also had to offer a version of our assignments without a qualification test (at a lower pay rate) since very few people were willing to take a test. Overall, we found little difference in the quality of work of subjects recruited with and without the test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is partially supported by grants from Amazon and from the Bulgarian National Science Foundation (D002-111/15.12.2008 -SmartBook).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translation by Machine of Compound Nominals: Getting it Right", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACL-04 Workshop on Multiword Expressions: Integrating Processing", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Takaaki Tanaka. 2004. Translation by Machine of Compound Nominals: Getting it Right. In Proceedings of the ACL-04 Workshop on Multiword Expressions: Integrating Processing, pages 24-31, Barcelona, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Concept-Centered Approach to Noun-Compound Interpretation", "authors": [ { "first": "Cristina", "middle": [], "last": "Butnariu", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08)", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristina Butnariu and Tony Veale. 2008. A Concept-Centered Approach to Noun-Compound Interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08), pages 81-88, Manchester, UK.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the creation and use of English compound nouns", "authors": [ { "first": "Pamela", "middle": [], "last": "Downing", "suffix": "" } ], "year": 1977, "venue": "Language", "volume": "53", "issue": "4", "pages": "810--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pamela Downing. 1977. On the creation and use of English compound nouns. Language, 53(4):810-842.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "WordNet: an electronic lexical database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an electronic lexical database.
MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving the Interpretation of Noun Phrases with Cross-linguistic Information", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL-07)", "volume": "", "issue": "", "pages": "568--575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju. 2007. Improving the Interpretation of Noun Phrases with Cross-linguistic Information. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL-07), pages 568-575, Prague, Czech Republic.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic interpretation of noun compounds using WordNet similarity", "authors": [ { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05)", "volume": "", "issue": "", "pages": "945--956", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim and Timothy Baldwin. 2005. Automatic interpretation of noun compounds using WordNet similarity. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 945-956, Jeju Island, South Korea.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Interpreting Semantic Relations in Noun Compounds via Verb Semantics", "authors": [ { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING-ACL-06 Main Conference Poster Sessions", "volume": "", "issue": "", "pages": "491--498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim and Timothy Baldwin. 2006. Interpreting Semantic Relations in Noun Compounds via Verb Semantics. In Proceedings of the COLING-ACL-06 Main Conference Poster Sessions, pages 491-498, Sydney, Australia.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Detecting novel compounds: The role of distributional evidence", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL-03)", "volume": "", "issue": "", "pages": "235--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirella Lapata and Alex Lascarides. 2003. Detecting novel compounds: The role of distributional evidence. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL-03), pages 235-242, Budapest, Hungary.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Designing Statistical Language Learners: Experiments on Noun Compounds", "authors": [ { "first": "Mark", "middle": [], "last": "Lauer", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Lauer. 1995. Designing Statistical Language Learners: Experiments on Noun Compounds. Ph.D. thesis, Macquarie University.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Syntax and Semantics of Complex Nominals", "authors": [ { "first": "Judith", "middle": [], "last": "Levi", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith Levi. 1978. The Syntax and Semantics of Complex Nominals. Academic Press, New York, NY.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Models for the Semantic Classification of Noun Phrases", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Tatu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Antohe", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the HLT-NAACL-04 Workshop on Computational Lexical Semantics", "volume": "", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Moldovan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. 2004. Models for the Semantic Classification of Noun Phrases. In Proceedings of the HLT-NAACL-04 Workshop on Computational Lexical Semantics, pages 60-67, Boston, MA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Solving Relational Similarity Problems Using the Web as a Corpus", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics (ACL-08)", "volume": "", "issue": "", "pages": "452--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov and Marti A. Hearst. 2008. Solving Relational Similarity Problems Using the Web as a Corpus. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics (ACL-08), pages 452-460, Columbus, OH.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improved Statistical Machine Translation Using Monolingual Paraphrases", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 18th European Conference on Artificial Intelligence (ECAI-08)", "volume": "", "issue": "", "pages": "338--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov. 2008a. Improved Statistical Machine Translation Using Monolingual Paraphrases.
In Proceedings of the 18th European Conference on Artificial Intelligence (ECAI-08), pages 338-342, Patras, Greece.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 13th International Conference on Artificial Intelligence: Methodology, Systems and Applications (AIMSA-08)", "volume": "", "issue": "", "pages": "103--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov. 2008b. Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study. In Proceedings of the 13th International Conference on Artificial Intelligence: Methodology, Systems and Applications (AIMSA-08), pages 103-117, Varna, Bulgaria.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploring noun-modifier semantic relations", "authors": [ { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 5th International Workshop on Computational Semantics (IWCS-03)", "volume": "", "issue": "", "pages": "285--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivi Nastase and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Proceedings of the 5th International Workshop on Computational Semantics (IWCS-03), pages 285-301, Tilburg, The Netherlands.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Matching syntactic-semantic graphs for semantic relation assignment", "authors": [ { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 1st Workshop on Graph Based Methods for Natural Language Processing (TextGraphs-06)", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivi Nastase and Stan Szpakowicz. 2006. Matching syntactic-semantic graphs for semantic relation assignment. In Proceedings of the 1st Workshop on Graph Based Methods for Natural Language Processing (TextGraphs-06), pages 81-88, New York, NY.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Co-occurrence Contexts for Noun Compound Interpretation", "authors": [ { "first": "Diarmuid", "middle": [], "last": "\u00d3 S\u00e9aghdha", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-07 Workshop on A Broader Perspective on Multiword Expressions (MWE-07)", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diarmuid \u00d3 S\u00e9aghdha and Ann Copestake. 2007. Co-occurrence Contexts for Noun Compound Interpretation. In Proceedings of the ACL-07 Workshop on A Broader Perspective on Multiword Expressions (MWE-07), pages 57-64, Prague, Czech Republic.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning Compound Noun Semantics", "authors": [ { "first": "Diarmuid", "middle": [], "last": "\u00d3 S\u00e9aghdha", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diarmuid \u00d3 S\u00e9aghdha. 2008. Learning Compound Noun Semantics. Ph.D.
thesis, University of Cambridge.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Cheap and Fast -But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-08)", "volume": "", "issue": "", "pages": "254--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and Fast -But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-08), pages 254-263, Honolulu, HI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Noun-noun compound machine translation: A feasibility study on shallow processing", "authors": [ { "first": "Takaaki", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL-03 Workshop on Multiword Expressions (MWE-03)", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takaaki Tanaka and Timothy Baldwin. 2003. Noun-noun compound machine translation: A feasibility study on shallow processing. In Proceedings of the ACL-03 Workshop on Multiword Expressions (MWE-03), pages 17-24, Sapporo, Japan.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Algorithm for Automatic Interpretation of Noun Sequences", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING-94)", "volume": "", "issue": "", "pages": "782--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucy Vanderwende. 1994. Algorithm for Automatic Interpretation of Noun Sequences. In Proceedings of the 15th International Conference on Computational Linguistics (COLING-94), pages 782-788, Kyoto, Japan.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Table 1: Statistics about the training/test datasets. Shown are the total number of verbs proposed as well as the minimum, maximum and average number of paraphrasing verb types/tokens per compound.", "content": "
              | Training: 250 NCs      | Testing: 388 NCs        | All: 638 NCs
              | Total   Min/Max/Avg    | Total   Min/Max/Avg     | Total    Min/Max/Avg
MTurk workers | 28,199  50/131/72.7    | 17,067  57/96/68.3      | 45,266   50/131/71.0
Verb types    | 32,832  25/173/84.6    | 17,730  41/133/70.9     | 50,562   25/173/79.3
Verb tokens   | 74,407  92/462/191.8   | 46,247  129/291/185.0   | 120,654  92/462/189.1
", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "text": "Teams participating in SemEval-2010 Task 9", "content": "", "num": null, "html": null, "type_str": "table" }, "TABREF4": { "text": "Evaluation results for SemEval-2010 Task 9 (* denotes a late submission).", "content": "
", "num": null, "html": null, "type_str": "table" } } } }