{ "paper_id": "S14-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:32:15.654386Z" }, "title": "SemEval-2014 Task 3: Cross-Level Semantic Similarity", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sapienza University of Rome", "location": {} }, "email": "jurgens@di.uniroma1.it" }, { "first": "Mohammad", "middle": [ "Taher" ], "last": "Pilehvar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sapienza University of Rome", "location": {} }, "email": "pilehvar@di.uniroma1.it" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sapienza University of Rome", "location": {} }, "email": "navigli@di.uniroma1.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces a new SemEval task on Cross-Level Semantic Similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Highquality data sets were constructed for four comparison types using multi-stage annotation procedures with a graded scale of similarity. Nineteen teams submitted 38 systems. Most systems surpassed the baseline performance, with several attaining high performance for multiple comparison types. Further, our results show that comparisons of semantic representation increase performance beyond what is possible with text alone.", "pdf_parse": { "paper_id": "S14-2003", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces a new SemEval task on Cross-Level Semantic Similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Highquality data sets were constructed for four comparison types using multi-stage annotation procedures with a graded scale of similarity. Nineteen teams submitted 38 systems. Most systems surpassed the baseline performance, with several attaining high performance for multiple comparison types. Further, our results show that comparisons of semantic representation increase performance beyond what is possible with text alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Given two linguistic items, semantic similarity measures the degree to which the two items have the same meaning. Semantic similarity is an essential component of many applications in Natural Language Processing (NLP) , and similarity measurements between all types of text as well as between word senses lend themselves to a variety of NLP tasks such as information retrieval (Hliaoutakis et al., 2006) or paraphrasing (Glickman and Dagan, 2003) .", "cite_spans": [ { "start": 184, "end": 217, "text": "Natural Language Processing (NLP)", "ref_id": null }, { "start": 377, "end": 403, "text": "(Hliaoutakis et al., 2006)", "ref_id": "BIBREF8" }, { "start": 420, "end": 446, "text": "(Glickman and Dagan, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic similarity evaluations have largely focused on comparing similar types of lexical items. Most recently, tasks in SemEval (Agirre et al., 2012) and *SEM (Agirre et al., 2013) have introduced benchmarks for measuring Semantic Textual Similarity (STS) between similar-sized sentences and phrases. 
Other data sets such as that of Rubenstein and Goodenough (1965) measure similarity between word pairs, while the data sets of Navigli (2006) and Kilgarriff (2001) offer a binary similar-dissimilar distinction between senses. Notably, all of these evaluations have focused on comparisons between a single type, in contrast to application-based evaluations such as summarization and compositionality which incorporate textual items of different sizes, e.g., measuring the quality of a paragraph's sentence summarization.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Agirre et al., 2012)", "ref_id": "BIBREF0" }, { "start": 161, "end": 182, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF1" }, { "start": 548, "end": 580, "text": "Rubenstein and Goodenough (1965)", "ref_id": "BIBREF21" }, { "start": 643, "end": 657, "text": "Navigli (2006)", "ref_id": "BIBREF18" }, { "start": 662, "end": 679, "text": "Kilgarriff (2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Task 3 introduces a new evaluation where similarity is measured between items of different types: paragraphs, sentences, phrases, words and senses. Given an item of the lexically-larger type, a system measures the degree to which the meaning of the larger item is captured in the smaller type, e.g., comparing a paragraph to a sentence. We refer to this task as Cross-Level Semantic Similarity (CLSS). A major motivation of this task is to produce semantic similarity systems that are able to compare all types of text, thereby freeing downstream NLP applications from needing to consider the type of text being compared. Task 3 enables assessing the extent to which the meaning of the sentence "do u know where i can watch free older movies online without download?" is captured in the phrase "streaming vintage movies for free", or how similar is "circumscribe" to the phrase "beating around the bush." Furthermore, by incorporating comparisons of a variety of item sizes, Task 3 unifies in a single task multiple objectives from different areas of NLP such as paraphrasing, summarization, and compositionality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Because CLSS generalizes STS to items of different types, successful CLSS systems can directly be applied to all STS-based applications. 
Furthermore, CLSS systems can be used in other similarity-based applications such as text simplification (Specia et al., 2012) , keyphrase identification (Kim et al., 2010) , lexical substitution (McCarthy and Navigli, 2009) , summarization (Sp\u00e4rck Jones, 2007) , gloss-to-sense mapping (Pilehvar and Navigli, 2014b) , and modeling the semantics of multi-word expressions (Marelli et al., 2014) or polysemous words (Pilehvar and Navigli, 2014a) .", "cite_spans": [ { "start": 242, "end": 263, "text": "(Specia et al., 2012)", "ref_id": "BIBREF23" }, { "start": 291, "end": 309, "text": "(Kim et al., 2010)", "ref_id": "BIBREF12" }, { "start": 333, "end": 361, "text": "(McCarthy and Navigli, 2009)", "ref_id": "BIBREF17" }, { "start": 387, "end": 399, "text": "Jones, 2007)", "ref_id": "BIBREF22" }, { "start": 425, "end": 454, "text": "(Pilehvar and Navigli, 2014b)", "ref_id": "BIBREF20" }, { "start": 510, "end": 532, "text": "(Marelli et al., 2014)", "ref_id": "BIBREF15" }, { "start": 553, "end": 582, "text": "(Pilehvar and Navigli, 2014a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Task 3 was designed with three main objectives. First, the task should include multiple types of comparison in order to assess each type's difficulty and whether specialized resources are needed for each. Second, the task should incorporate text from multiple domains and writing styles to ensure that system performance is robust across text types. Third, the similarity methods should be able to operate at the sense level, thereby potentially uniting text- and sense-based similarity methods within a single framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Task 3 is intended to serve as an initial task for evaluating the capabilities of systems at measuring all types of semantic similarity, independently of the size of the text. To accomplish this objective, systems were presented with items from four comparison types: (1) paragraph to sentence, (2) sentence to phrase, (3) phrase to word, and (4) word to sense. Given a pair of items, a system must assess the degree to which the meaning of the larger item is captured in the smaller item. WordNet 3.0 was chosen as the sense inventory (Fellbaum, 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective", "sec_num": "2.1" }, { "text": "Following previous SemEval tasks (Agirre et al., 2012; Jurgens et al., 2012) , Task 3 recognizes that two items' similarity may fall within a range of similarity values, rather than having a binary notion of similar or dissimilar. Initially, a six-point (0-5) scale similar to that used in the STS tasks was considered (Agirre et al., 2012) ; however, annotators had difficulty deciding between the lower-similarity options. After multiple revisions and feedback from a group of initial annotators, we developed a five-point Likert scale for rating a pair's similarity, shown in Table 1 . (Annotation materials along with all training and test data are available on the task website http://alt.qcri.org/semeval2014/task3/.) The scale was designed to systematically order a broad range of semantic relations: synonymy, similarity, relatedness, topical association, and unrelatedness. Because items are of different sizes, the highest rating is defined as very similar rather than identical to allow for some small loss in the overall meaning. 
Furthermore, although the scale is designed as a Likert scale, annotators were given flexibility when rating items to use values between the defined points in the scale, indicating a blend of two relations. Table 2 provides examples of pairs for each scale rating for all four comparison type.", "cite_spans": [ { "start": 33, "end": 54, "text": "(Agirre et al., 2012;", "ref_id": "BIBREF0" }, { "start": 55, "end": 76, "text": "Jurgens et al., 2012)", "ref_id": "BIBREF10" }, { "start": 318, "end": 339, "text": "(Agirre et al., 2012)", "ref_id": "BIBREF0" }, { "start": 593, "end": 594, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 583, "end": 590, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1256, "end": 1263, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Rating Scale", "sec_num": "2.2" }, { "text": "Though several data sets exist for STS and comparing words and senses, no standard data set exists for CLSS. Therefore, we created a pilot data set designed to test the capabilities of systems in a variety of settings. The task data for all comparisons but word-to-sense was created using a threephase process. First, items of all sizes were selected from publicly-available data sets. Second, the selected items were used to produce a second item of the next-smaller level (e.g., a sentence inspires a phrase). Third, the pairs of items were annotated for their similarity. Because of the expertise required for working with word senses, the word-to-sense data set was constructed by the organizers using a separate but similar process. In the training and test data, each comparison type had 500 annotated examples, for a total of 2000 pairs each for training and test. We first describe the corpora used by Task 3 followed by the annotation process. We then describe the construction of the word-to-sense data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Data", "sec_num": "3" }, { "text": "Test and training data were constructed by drawing from multiple publicly-available corpora and then manually generating a paired item for comparison. To achieve our second objective for the task, the data sets used to create item pairs included texts from specific domains, social media, and text with idiomatic or slang language. Table 3 summarizes the corpora and their distribution across the test and training sets for each comparison type, with a high-level description of the genre of the data. We briefly describe the corpora next.", "cite_spans": [], "ref_spans": [ { "start": 332, "end": 340, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "The WikiNews, Reuters 21578, and Microsoft Research (MSR) Paraphrase corpora are all drawn from newswire text, with WikiNews being authored by volunteer writers and the latter two corpora written by professionals. Travel Guides was drawn from the Berlitz travel guides data in the Open American National Corpus (Ide and Suderman, 2004) and includes very verbose sentences 4 -Very Similar The two items have very similar meanings and the most important ideas, concepts, or actions in the larger text are represented in the smaller text. 
Some less important information may be missing, but the smaller text is a very good summary of the larger text.", "cite_spans": [ { "start": 311, "end": 335, "text": "(Ide and Suderman, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "The two items share many of the same important ideas, concepts, or actions, but include slightly different details. The smaller text may use similar but not identical concepts (e.g., car vs. vehicle), or may omit a few of the more important ideas present in the larger text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Somewhat Similar", "sec_num": "3" }, { "text": "The two items have dissimilar meaning, but share concepts, ideas, and actions that are related. The smaller text may use related but not necessarily similar concepts (window vs. house) but should still share some overlapping concepts, ideas, or actions with the larger text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Somewhat related but not similar", "sec_num": "2" }, { "text": "The two items describe dissimilar concepts, ideas and actions, but may share some small details or domain in common and might be likely to be found together in a longer document on the same topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Slightly related", "sec_num": "1" }, { "text": "The two items do not mean the same thing and are not on the same topic. Leskovec, 2013) and are customer-authored reviews for a variety of food items. Fables were taken from a collection of Aesop's Fables. The Yahoo! Answers corpus was derived from the Yahoo! Answers data set, which is a collection of questions and answers from the Community Question Answering (CQA) site; the data set is notable for having the highest degree of ungrammaticality in our test set. SMT Europarl is a collection of texts from the English-language proceedings of the European parliament (Koehn, 2005) ; Europarl data was also used in the PPDB corpus (Ganitkevitch et al., 2013) , from which phrases were extracted. Wikipedia was used to generate two phrase data sets from (1) extracting the definitional portion of an article's initial sentence, e.g., \"An [article name] is a [definition],\" and (2) captions for an article's images. Web queries were gathered from online sources of realworld queries. Last, the first and second authors generated slang and idiomatic phrases based on expressions contained in Wiktionary.", "cite_spans": [ { "start": 72, "end": 87, "text": "Leskovec, 2013)", "ref_id": "BIBREF16" }, { "start": 569, "end": 582, "text": "(Koehn, 2005)", "ref_id": "BIBREF13" }, { "start": 632, "end": 659, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "-Unrelated", "sec_num": "0" }, { "text": "For all comparison types, the test data included one genre that was not seen in the training data in order to test the generalizability of the systems on data from a novel domain. In addition, we included a new type of challenge genre with Fables; unlike other domains, the sentences paired with the fable paragraphs were potentially semantic interpretations of the intent of the fable, i.e., the moral of the story. 
These interpretations often have little textual overlap with the fable itself and require a deeper interpretation of the paragraph's meaning in order to make the correct similarity judgment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Unrelated", "sec_num": "0" }, { "text": "Prior to the annotation process, all content was filtered to ensure its size and format matched the desired text type. By average, a paragraph in our dataset consists of 3.8 sentences. Typos and grammatical mistakes in the community-produced content were left unchanged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Unrelated", "sec_num": "0" }, { "text": "A two-phase process was used to produce the test and training data sets for all but word-to-sense. Phase 1 generates the item pairs from source texts and Phase 2 rates the pairs' similarity. Phase 1 In this phase, annotators were shown the larger text of a comparison type and then asked to produce the smaller text of the pair at a specified similarity; for example an annotator may be shown a paragraph and asked to write a sentence that is a \"3\" rating. Annotators were instructed to leave the smaller text blank if they had difficulty understanding the larger text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.2" }, { "text": "The requested similarity ratings were balanced to create a uniform distribution of similarity values. Annotators were asked only to generate ratings of 1-4; pairs with a \"0\" rating were automatically created by pairing the larger item with random selections of text of the appropriate size from the same corpus. The intent of Phase 1 is to produce varied item pairs with an expected uniform distribution of similarity values along the rating scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.2" }, { "text": "Four annotators participated in Phase 1 and were paid a bulk rate of e110 for completing the work. In addition to the four annotators, the first two organizers also assisted in Phase 1: Both completed items from the SCIENTIFIC genre and the first organizer produced 994 pairs, including all PARAGRAPH TO SENTENCE Paragraph: Teenagers take aerial shots of their neighbourhood using digital cameras sitting in old bottles which are launched via kites -a common toy for children living in the favelas. They then use GPS-enabled smartphones to take pictures of specific danger points -such as rubbish heaps, which can become a breeding ground for mosquitoes carrying dengue fever.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.2" }, { "text": "4 Students use their GPS-enabled cellphones to take birdview photographs of a land in order to find specific danger points such as rubbish heaps. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating Sentence", "sec_num": null }, { "text": "Teenagers are enthusiastic about taking aerial photograph in order to study their neighbourhood. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating Sentence", "sec_num": null }, { "text": "Aerial photography is a great way to identify terrestrial features that aren't visible from the ground level, such as lake contours or river paths. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating Sentence", "sec_num": null }, { "text": "During the early days of digital SLRs, Canon was pretty much the undisputed leader in CMOS image sensor technology. 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating Sentence", "sec_num": null }, { "text": "Syrian President Bashar al-Assad tells the US it will \"pay the price\" if it strikes against Syria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating Sentence", "sec_num": null }, { "text": "Sentence: Schumacher was undoubtedly one of the very greatest racing drivers there has ever been, a man who was routinely, on every lap, able to dance on a limit accessible to almost no-one else. those for the METAPHORIC genre, and those that the other annotators left blank. Phase 2 Here, the item pairs produced in Phase 1 were rated for their similarity according to the scale described in Section 2.2. An initial pilot study showed that crowdsourcing was only moderately effective for producing these ratings with high agreement. Furthermore, the texts used in Task 3 came from a variety of genres, such as scientific domains, which some workers had difficulty understanding. While we note that crowdsourcing has been used in prior STS tasks for generating similarity scores (Agirre et al., 2012; Agirre et al., 2013) , both tasks' efforts encountered lower worker score correlations on some portions of the dataset (Diab, 2013) , suggesting that crowdsourcing may not be reliable for judging the similarity of certain types of text. See Section 3.5 for additional details.", "cite_spans": [ { "start": 779, "end": 800, "text": "(Agirre et al., 2012;", "ref_id": "BIBREF0" }, { "start": 801, "end": 821, "text": "Agirre et al., 2013)", "ref_id": "BIBREF1" }, { "start": 920, "end": 932, "text": "(Diab, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "SENTENCE TO PHRASE", "sec_num": null }, { "text": "Therefore, to ensure high quality, the first two organizers rated all items independently. Because the sentence-to-phrase and phrase-to-word comparisons contain slang and idiomatic language, a third American English mother tongue annotator was added for those data sets. The third annotator was compensated e250 for their assistance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating", "sec_num": null }, { "text": "Annotators were allowed to make finer-grained distinctions in similarity using multiples of 0.25. For all items, when any two annotators disagreed by one or more scale points, we performed an adjudication to determine the item's rating in the gold standard. The adjudication process revealed that nearly all disagreements were due to annotator mistakes, e.g., where one annotator had overlooked a part of the text or had misunderstood the text's meaning. The final similarity rating for an unadjudicated item was the average of its ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating", "sec_num": null }, { "text": "Word-to-sense comparison items were generated in three phases. 
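Before turning to the word-to-sense construction, the Phase 2 rating rules above can be made concrete. The following is a minimal sketch, not the organizers' actual tooling, of the aggregation policy just described: ratings are multiples of 0.25 on the 0-4 scale, items where any two annotators differ by a full scale point or more are flagged for manual adjudication, and the gold rating of an unadjudicated item is the mean of its ratings. The function name and threshold parameter are illustrative only.

```python
def gold_rating(ratings, adjudication_gap=1.0):
    """Phase 2 aggregation sketch.

    `ratings` holds one numeric score per annotator for a single item
    (multiples of 0.25 on the 0-4 scale).  If any two annotators differ
    by a full scale point or more, the item is flagged for manual
    adjudication; otherwise the gold rating is the average.
    """
    if max(ratings) - min(ratings) >= adjudication_gap:
        return None  # adjudicate: a human resolves the disagreement
    return sum(ratings) / len(ratings)


print(gold_rating([3.0, 3.25, 3.0]))  # averaged -> ~3.08
print(gold_rating([2.0, 3.25, 3.0]))  # flagged for adjudication -> None
```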
To increase the diversity and challenge of the data set, the word-to-sense was created for four types of words: (1) a word and its intended meaning are in WordNet, (2) a word was not in the WordNet vocabulary, e.g., the verb \"zombify,\" (3) the word is in WordNet, but has a novel meaning that is not in WordNet, e.g., the adjective \"red\" referring to Communist, and (4) a set of challenge words where one of the word's senses and a second sense are directly connected by an edge in the WordNet network, but the two senses are not always highly similar. In Phase 1, to select the first type of word, lemmas in WordNet were ranked by frequency in Wikipedia; the ranking was divided into ten equally-sized groups, with words sampled evenly from groups in order to control for word frequency in the task data. For the second type, words not present in WordNet were drawn from two sources: examining words in Wikipedia, which we refer to as out-of-vocabulary (OOV), and slang words. For the third type, to identify words with a novel sense, we examined Wiktionary entries and chose novel, salient senses that were distinct from those in WordNet. We refer to words with a novel meaning as out-of-sense (OOS). Words of the fourth type were chosen by hand. The part-of-speech distributions for all four types of items were balanced as 50% noun, 25% verb, 25% adjective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Sense", "sec_num": "3.3" }, { "text": "In Phase 2, each word was associated with a particular WordNet sense for its intended meaning, or the closest available sense in WordNet for OOV or OOS items. To select a comparison sense, we adopted a neighborhood search procedure: All synsets connected by at most three edges in the WordNet semantic network were shown. Given a word and its neighborhood, the corresponding sense for the item pair was selected by matching the sense with an intended similarity for the pair, much like how text items were generated in Phase 1. The reason behind using this neighborhood-based selection process was to minimize the potential bias of consistently selecting lower-similarity items from those further away in the WordNet semantic network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Sense", "sec_num": "3.3" }, { "text": "In Phase 3, given all word-sense pairs, annotators were shown the definitions associated with the intended meaning of the word and of the sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Sense", "sec_num": "3.3" }, { "text": "Definitions were drawn from WordNet or from Wiktionary, if the word was OOV or OOS. Annotators had access to the WordNet structure for the compared sense in order to take into account its parents and siblings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Sense", "sec_num": "3.3" }, { "text": "The trial data set was created using a separate process. Source text was drawn from WikiNews; we selected the text for the larger item of each level and then generated the text or sense of the smaller. A total of 156 items were produced. After, four fluent annotators independently rated all items. Inter-annotator agreement rates varied in 0.734-0.882, using Krippendorff's \u03b1 (Krippendorff, 2004) on the interval scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trial Data", "sec_num": "3.4" }, { "text": "The resulting annotation process produced a highquality data set. 
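The trial-data agreement figures above, and the IAA statistics reported next, use Krippendorff's α for interval data. Below is a minimal sketch of that coefficient in its pairwise formulation, an illustrative reimplementation rather than the script used by the organizers, assuming each item is represented by the list of numeric ratings it received; items with a single rating are unpairable and are ignored.

```python
from itertools import permutations


def krippendorff_alpha_interval(units):
    """Krippendorff's alpha for interval data (pairwise formulation).

    `units` is a list with one entry per annotated item, each entry being
    the list of numeric ratings that item received.
    """
    pairable = [u for u in units if len(u) >= 2]
    pooled = [v for u in pairable for v in u]
    n = len(pooled)
    if n <= 1:
        return float("nan")

    delta = lambda a, b: (a - b) ** 2  # interval-scale distance

    # Observed disagreement: within-item distances, weighted by 1/(m_u - 1).
    d_o = sum(
        sum(delta(a, b) for a, b in permutations(u, 2)) / (len(u) - 1)
        for u in pairable
    ) / n
    # Expected disagreement: distances over all ordered pairs of pooled values.
    d_e = sum(delta(a, b) for a, b in permutations(pooled, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e else 1.0


# e.g., three items rated by four annotators on the task's 0-4 scale
print(krippendorff_alpha_interval([[4, 4, 3.75, 4], [1, 1.25, 1, 2], [0, 0, 0.25, 0]]))
```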
First, Table 4 shows the inter-annotator agreement (IAA) statistics for each comparison type on both the full and unadjudicated portions of the data set. IAA was measured using Krippendorff's \u03b1 for interval data. Because the disagreements that led to lower \u03b1 in the full data were resolved via adjudication, the quality of the full data set is expected to be on par with that of the unadjudicated data. The annotation quality for Task 3 was further improved by manually adjudicating all significant disagreements.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Data Set Discussion", "sec_num": "3.5" }, { "text": "In contrast, the data sets of current STS tasks aggregated data from annotators with moderate correlation with each other (Diab, 2013) ; STS-2012 (Agirre et al., 2012) reported inter-annotator correlations of 0.377-0.832. However, we note that Pearson correlation and Krippendorff's \u03b1 are not directly comparable (Artstein and Poesio, 2008) , as annotators' scores may be correlated, but completely disagree. Second, the two-phase construction process produced values that were evenly distributed across the rating scale, shown in Figure 1 as the distribution of the values for all data sets. However, we note that this creation procedure was very resource intensive and, therefore, semi-automated or crowdsourcing-based approaches for producing high-quality data will be needed to expand the size of the data in future CLSS-based evaluations. Nevertheless, as a pilot task, the manual effort was essential for ensuring a rigorously constructed data set for the initial evaluation.", "cite_spans": [ { "start": 122, "end": 134, "text": "(Diab, 2013)", "ref_id": "BIBREF4" }, { "start": 146, "end": 167, "text": "(Agirre et al., 2012)", "ref_id": "BIBREF0" }, { "start": 343, "end": 370, "text": "(Artstein and Poesio, 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 4", "ref_id": null }, { "start": 561, "end": 569, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data Set Discussion", "sec_num": "3.5" }, { "text": "Participation The ultimate goal of Task 3 is to produce systems that can measure similarity for multiple types of items. Therefore, we strongly encouraged participating teams to submit systems that were capable of generating similarity judgments for multiple comparison types. However, to further the analysis, participants were also permitted to submit systems specialized to a single domain. Teams were allowed at most three system submissions, regardless of the number of comparison types supported.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Scoring Systems were required to provide similarity values for all items within a comparison type. Following prior STS evaluations, systems were scored for each comparison type using Pearson correlation. Additionally, we include a second score using Spearman's rank correlation, which is only affected by differences in the ranking of items by similarity, rather than differences in the similarity values. Pearson correlation was chosen as the official evaluation metric since the goal of the task is to produce scores similar to the human ratings. However, Spearman's rank correlation provides an important metric for assessing systems whose scores do not match human scores but whose rankings might, e.g., string-similarity measures. 
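Concretely, the scoring just described can be sketched as follows. The function name and the dict-of-lists input format are assumptions made for illustration; Pearson is the official metric, Spearman the secondary rank-only metric, and the global ranking criterion (the sum of the four Pearson values) is the one defined in the next sentence of the text.

```python
from scipy.stats import pearsonr, spearmanr


def score_run(gold, system):
    """`gold` and `system` map a comparison type to a parallel list of scores."""
    per_level = {}
    for level in gold:
        per_level[level] = {
            "pearson": pearsonr(gold[level], system[level])[0],    # official metric
            "spearman": spearmanr(gold[level], system[level])[0],  # rank-only metric
        }
    # Global ranking criterion: sum of the Pearson values over the four levels.
    return per_level, sum(v["pearson"] for v in per_level.values())
```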
Ultimately, a global ranking was produced by ordering systems by the sum of their Pearson correlation values for each of the four comparison levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Baselines The official baseline system was based on the Longest Common Substring (LCS), normalized by the length of items using the method of Clough and Stevenson (2011) . Given a pair, the similarity is reported as the normalized length of the LCS. In the case of word-to-sense, the LCS for a word-sense pair is measured between the sense's definition in WordNet and the definitions of each sense of the pair's word, reporting the maximal LCS. Because OOV and slang words are not in WordNet, the baseline reports the average similarity value of non-OOV items. Baseline scores were made public after the evaluation period ended.", "cite_spans": [ { "start": 142, "end": 169, "text": "Clough and Stevenson (2011)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Because LCS is a simple procedure, a second baseline based on Greedy String Tiling (GST) (Wise, 1996) was added after the evaluation period concluded. Unlike LCS, GST better handles the transpositions of tokens across the two texts and can still report high similarity when encountering reordered text. The minimum match length for GST was set to 6.", "cite_spans": [ { "start": 89, "end": 101, "text": "(Wise, 1996)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Nineteen teams submitted 38 systems. Of those systems, 34 produced values for paragraph-to-sentence and sentence-to-phrase comparisons, 22 for phrase-to-word, and 20 for word-to-sense. Two teams submitted revised scores for their systems after the deadline but before the test set had been released. These systems were scored and noted in the results but were not included in the official ranking. Table 5 shows the performance of the participating systems across all four comparison types in terms of Pearson correlation. The two right-most columns show system rankings by Pearson (Official Rank) and Spearman's rank correlation.", "cite_spans": [], "ref_spans": [ { "start": 397, "end": 404, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The SimCompass system attained first place, partially due to its superior performance on phrase-to-word comparisons, providing an improvement of 0.10 over the second-best system. The late-submitted version of the Meerkat Mafia pairingWords \u2020 system corrected a bug in the phrase-to-word comparison and ultimately would have attained first place due to large performance improvements over SimCompass on phrase-to-word and word-to-sense. The ECNU and UNAL-NLP systems rank second and third, respectively, with the former always in the top 4 and the latter among the top 7 systems across the four comparison types. Most systems were able to surpass the naive LCS baseline; however, the more sophisticated GST baseline (which accounts for text transposition) outperforms two-thirds of the systems. 
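As an illustration of the official LCS baseline described above, the sketch below computes a normalized Longest Common Substring similarity for a pair of texts. The exact normalization in Clough and Stevenson (2011) may differ; normalizing by the mean of the two lengths is an assumption made here, and the definition-lookup step used for word-to-sense pairs is omitted.

```python
def lcs_similarity(a, b):
    """Longest Common Substring length between two strings, normalized by
    item length (here: the mean of the two lengths, an assumed normalization)."""
    if not a or not b:
        return 0.0
    longest = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # extend the common substring ending at a[i-1] / b[j-1]
                cur[j] = prev[j - 1] + 1
                longest = max(longest, cur[j])
        prev = cur
    return longest / ((len(a) + len(b)) / 2.0)


print(lcs_similarity("streaming vintage movies for free",
                     "watch free older movies online"))
```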
Importantly, both baselines perform poorly on smaller text, highlighting the importance of performing a semantic comparison, as opposed to a string-based one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Within the individual comparison types, specialized systems performed well for the larger text sizes. In the paragraph-to-sentence type, the run1 system of UNAL-NLP provides the best official result, with the late RTM-DCU run1 \u2020 system surpassing its performance slightly. Meerkat Mafia provides the best performance in sentence-to-phrase with its SuperSaiyan system and the best performances in phrase-to-word and word-to-sense with its late pairingWords \u2020 system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Comparison-Type Analysis Performance across the comparison types varied considerably, with systems performing best on comparisons between longer textual items. As a general trend, both the baselines' and systems' performances tend to decrease with the size of lexical items in the comparison types. A main contributing factor to this is the reliance on textual similarity measures (such as the baselines), which perform well when the two items share content. However, as the items' content becomes smaller, e.g., a word or phrase, the textual similarity does not necessarily provide a meaningful indication of the semantic similarity between the two. This performance discrepancy suggests that, in order to perform well, CLSS systems must rely on comparisons between semantic representations rather than textual representations. The two top-performing systems on these smaller levels, Meerkat Mafia and SimCompass, used additional resources beyond WordNet to expand a word or sense to its definition or to represent words with distributional representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison-Type", "sec_num": null }, { "text": "Per-genre results and discussions Task 3 includes multiple genres within the data set for each comparison type. Figure 2 shows the correlation of each system for each of these genres, with systems ordered left to right according to their official ranking in Table 5 . An interesting observation is that a system's official rank does not always match the rank from aggregating its correlations for each genre individually. This difference suggests that some systems provided good similarity judgments on individual genres, but their range of similarity values was not consistent between genres, leading to lower overall Pearson correlation. For instance, in the phrase-to-word comparison type, the aggregated per-genre performances of Duluth-1 and Duluth-3 are among the best, whereas their overall Pearson performance puts these systems among the worst-performing ones in the comparison type.", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "Figure 2", "ref_id": null }, { "start": 258, "end": 265, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Comparison-Type", "sec_num": null }, { "text": "Among the genres, CQA, SLANG, and IDIOMATIC prove to be the most difficult for systems to interpret and judge. These genres included misspelled, colloquial, or slang language which required converting the text into semantic form in order to meaningfully compare it. 
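To make the representation point above concrete, the following sketch compares a word with a WordNet sense by expanding both to their definitions, mirroring how the LCS baseline handles word-to-sense pairs but using a bag-of-words cosine instead of string overlap. It is an illustration only, not any participant's system; it assumes NLTK with the WordNet corpus installed, and 'dog.n.01' is just an example sense identifier.

```python
from collections import Counter
from math import sqrt

from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is available


def bow(text):
    return Counter(text.lower().split())


def cosine(u, v):
    dot = sum(u[t] * v[t] for t in set(u) & set(v))
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0


def word_to_sense_similarity(word, sense_id):
    """Best definition overlap between any WordNet sense of `word`
    and the target sense `sense_id`."""
    target = bow(wn.synset(sense_id).definition())
    candidates = [bow(s.definition()) for s in wn.synsets(word)]
    return max((cosine(c, target) for c in candidates), default=0.0)


print(word_to_sense_similarity("puppy", "dog.n.01"))
```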
Furthermore, as expected, the METAPHORIC genre was the most difficult, with no system performing well; we view the METAPHORIC genre as an open challenge for future systems to address when interpreting larger text. On the other hand, SCIENTIFIC, TRAVEL, and NEWSWIRE tend to be the easiest genres for paragraph-to-sentence and sentence-to-phrase. All three genres tend to include many named entities or highly-specific language, which are more likely to be preserved in the more-similar paired items. Similarly, the DESCRIPTIVE and SEARCH genres were easiest in phrase-to-word, which also often featured specific words that were preserved in highly-similar pairs. In the case of word-to-sense, REGULAR proves to be the least difficult genre. Interestingly, in word-to-sense, most systems attained moderate performance for comparisons with words not in WordNet (i.e., OOV) but had poor performance for slang words, which were also OOV. This difference suggests that systems could be improved with additional semantic resources for slang.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison-Type", "sec_num": null }, { "text": "Spearman Rank Analysis Although the goal of Task 3 is to have systems produce similarity judgments, some applications may benefit from simply having a ranking of pairs, e.g., ranking summarizations by goodness. The Spearman rank correlation measures the ability of systems to perform such a ranking. Surprisingly, with the Spearman-based ranking, the Duluth-1 and Duluth-3 systems attain the third and fifth ranks, despite being among the lowest ranked with Pearson. Both systems were unsupervised and produced similarity values that did not correlate well with those of humans. However, their Spearman ranks demonstrate the systems' ability to correctly identify relative similarity and suggest that such unsupervised systems could improve their Pearson correlation by using the training data to tune the range of similarity values to match those of humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison-Type", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Tiziano Flati, Marc Franco Salvador, Maud Erhmann, and Andrea Moro for their help in preparing the trial data; Gaby Ford, Chelsea Smith, and Eve Atkinson for their help in generating the training and test data; and Amy Templin for her help in generating and rating the training and test data. The authors gratefully acknowledge the support of the ERC Starting Grant Multi-JEDI No. 259234.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Figure 2: A stacked histogram for each system, showing its Pearson correlations for genre-specific portions of the gold-standard data, which may also be negative.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "This paper introduces a new similarity task, Cross-Level Semantic Similarity, for measuring the semantic similarity of lexical items of different sizes. 
Using a multi-phase annotation procedure, we have produced a high-quality data set of 4000 items comprising of various genres, evenlysplit between training and test with four types of comparison: paragraph-to-sentence, sentence-tophrase, phrase-to-word, and word-to-sense. Nineteen teams submitted 38 systems, with most teams surpassing the baseline system and several systems achieving high performance in multiple types of comparison. However, a clear performance trend emerged where systems perform well only when the text itself is similar, rather than its underlying meaning. Nevertheless, the results of Task 3 are highly encouraging and point to clear future objectives for developing CLSS systems that operate on more semantic representations rather than text. In future work on CLSS evaluation, we first intend to develop scalable annotation methods to increase the data sets. Second, we plan to add new evaluations where systems are tested according to their performance in an application related to each comparison-type, such as measuring the quality of a paraphrase or summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SemEval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez- Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation (SemEval-2012), pages 385-393, Montr\u00e9al, Canada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "*SEM 2013 Shared Task: Semantic textual similarity, including a pilot on typedsimilarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *SEM 2013 Shared Task: Semantic textual similarity, including a pilot on typed- similarity. In Proceedings of the Second Joint Confer- ence on Lexical and Computational Semantics (*SEM), Atlanta, Georgia.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Inter-coder agreement for computational linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "555--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron Artstein and Massimo Poesio. 2008. 
Inter-coder agree- ment for computational linguistics. Computational Lin- guistics, 34(4):555-596.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Developing a corpus of plagiarised short answers. Language Resources and Evaluation", "authors": [ { "first": "Paul", "middle": [], "last": "Clough", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2011, "venue": "", "volume": "45", "issue": "", "pages": "5--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Clough and Mark Stevenson. 2011. Developing a cor- pus of plagiarised short answers. Language Resources and Evaluation, 45(1):5-24.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semantic textual similarity: past present and future", "authors": [ { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2013, "venue": "Joint Symposium on Semantic Processing. Keynote address", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mona Diab. 2013. Semantic textual similarity: past present and future. In Joint Symposium on Semantic Process- ing. Keynote address. http://jssp2013.fbk.eu/ sites/jssp2013.fbk.eu/files/Mona.pdf.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: An Electronic Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "PPDB: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison- Burch. 2013. PPDB: The paraphrase database. In Pro- ceedings of the 2013 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL-HLT), pages 758-764, Atlanta, Georgia.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Acquiring lexical paraphrases from a single corpus", "authors": [ { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "81--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Glickman and Ido Dagan. 2003. Acquiring lexical paraphrases from a single corpus. 
In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP), pages 81-90, Borovets, Bulgaria.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios", "authors": [ { "first": "Angelos", "middle": [], "last": "Hliaoutakis", "suffix": "" }, { "first": "Giannis", "middle": [], "last": "Varelas", "suffix": "" } ], "year": 2006, "venue": "International Journal on Semantic Web and Information Systems", "volume": "2", "issue": "3", "pages": "55--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelos Hliaoutakis, Giannis Varelas, Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios. 2006. Information retrieval by semantic similarity. Interna- tional Journal on Semantic Web and Information Systems, 2(3):55-73.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The American National Corpus First Release", "authors": [ { "first": "Nancy", "middle": [], "last": "Ide", "suffix": "" }, { "first": "K", "middle": [], "last": "Suderman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4 th Language Resources and Evaluation Conference (LREC)", "volume": "", "issue": "", "pages": "1681--1684", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Ide and K. Suderman. 2004. The American Na- tional Corpus First Release. In Proceedings of the 4 th Language Resources and Evaluation Conference (LREC), pages 1681-1684, Lisbon, Portugal.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SemEval-2012 Task 2: Measuring Degrees of Relational Similarity", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Holyoak", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "356--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. SemEval-2012 Task 2: Measuring Degrees of Relational Similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation (SemEval-2012), pages 356-364, Montr\u00e9al, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "English lexical sample task description", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2001, "venue": "The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2)", "volume": "", "issue": "", "pages": "17--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 2001. English lexical sample task de- scription. 
In The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Sys- tems (SENSEVAL-2), pages 17-20, Toulouse, France.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles", "authors": [ { "first": "Nam", "middle": [], "last": "Su", "suffix": "" }, { "first": "Olena", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Medelyan", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Kan", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010)", "volume": "", "issue": "", "pages": "21--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timo- thy Baldwin. 2010. SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles. In Pro- ceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), pages 21-26, Los Angeles, California.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Machine Translation Summit X", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for sta- tistical machine translation. In Proceedings of Machine Translation Summit X, pages 79-86, Phuket, Thailand.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Content Analysis: An Introduction to Its Methodology", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 2004. Content Analysis: An Introduc- tion to Its Methodology. Sage, Thousand Oaks, CA, sec- ond edition.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Ben- tivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of compositional dis- tributional semantic models on full sentences through se- mantic relatedness and textual entailment. 
In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval-2014), Dublin, Ireland.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews", "authors": [ { "first": "Julian John Mcauley", "middle": [], "last": "", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd International Conference on World Wide Web (WWW)", "volume": "", "issue": "", "pages": "897--908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian John McAuley and Jure Leskovec. 2013. From ama- teurs to connoisseurs: modeling the evolution of user ex- pertise through online reviews. In Proceedings of the 22nd International Conference on World Wide Web (WWW), pages 897-908, Rio de Janeiro, Brazil.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The English lexical substitution task. Language Resources and Evaluation", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "", "volume": "43", "issue": "", "pages": "139--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy and Roberto Navigli. 2009. The English lexical substitution task. Language Resources and Evalu- ation, 43(2):139-159.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Meaningful clustering of senses helps boost Word Sense Disambiguation performance", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL)", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli. 2006. Meaningful clustering of senses helps boost Word Sense Disambiguation performance. In Proceedings of the 21st International Conference on Com- putational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING- ACL), pages 105-112, Sydney, Australia.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A large-scale pseudoword-based evaluation framework for state-of-the-art Word Sense Disambiguation", "authors": [ { "first": "Mohammad", "middle": [], "last": "Taher", "suffix": "" }, { "first": "Pilehvar", "middle": [], "last": "", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2014a. A large-scale pseudoword-based evaluation framework for state-of-the-art Word Sense Disambiguation. 
Computa- tional Linguistics, 40(4).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A robust approach to aligning heterogeneous lexical resources", "authors": [ { "first": "Mohammad", "middle": [], "last": "Taher", "suffix": "" }, { "first": "Pilehvar", "middle": [], "last": "", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "468--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2014b. A robust approach to aligning heterogeneous lexical re- sources. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 468- 478, Baltimore, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Con- textual correlates of synonymy. Communications of the ACM, 8(10):627-633.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic summarising: The state of the art. Information Processing and Management", "authors": [ { "first": "Karen Sp\u00e4rck", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2007, "venue": "", "volume": "43", "issue": "", "pages": "1449--1481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sp\u00e4rck Jones. 2007. Automatic summarising: The state of the art. Information Processing and Management, 43(6):1449-1481.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SemEval-2012 Task 1: English Lexical Simplification", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "347--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Sujay Kumar Jauhar, and Rada Mihalcea. 2012. SemEval-2012 Task 1: English Lexical Simplifica- tion. In Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval-2012), pages 347-355.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "YAP3: Improved detection of similarities in computer program and other texts", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Wise", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the twenty-seventh SIGCSE technical symposium on Computer science education, SIGCSE '96", "volume": "", "issue": "", "pages": "130--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Wise. 1996. YAP3: Improved detection of simi- larities in computer program and other texts. In Proceed- ings of the twenty-seventh SIGCSE technical symposium on Computer science education, SIGCSE '96, pages 130- 134, Philadelphia, Pennsylvania, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Similarity ratings distributions." 
}, "TABREF0": { "num": null, "html": null, "text": "The five-point Likert scale used to rate the similarity of item pairs. See Table 2 for examples.", "content": "