{ "paper_id": "S17-2002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:28:49.273589Z" }, "title": "SemEval-2017 Task 2: Multilingual and Cross-lingual Semantic Word Similarity", "authors": [ { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sapienza University of Rome", "location": {} }, "email": "" }, { "first": "Mohammad", "middle": [ "Taher" ], "last": "Pilehvar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sapienza University of Rome", "location": {} }, "email": "navigli@di.uniroma1.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces a new task on Multilingual and Cross-lingual Semantic Word Similarity which measures the semantic similarity of word pairs within and across five languages: English, Farsi, German, Italian and Spanish. High quality datasets were manually curated for the five languages with high inter-annotator agreements (consistently in the 0.9 ballpark). These were used for semi-automatic construction of ten cross-lingual datasets. 17 teams participated in the task, submitting 24 systems in subtask 1 and 14 systems in subtask 2. Results show that systems that combine statistical knowledge from text corpora, in the form of word embeddings, and external knowledge from lexical resources are best performers in both subtasks. 
More information can be found on the task website:", "pdf_parse": { "paper_id": "S17-2002", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces a new task on Multilingual and Cross-lingual Semantic Word Similarity which measures the semantic similarity of word pairs within and across five languages: English, Farsi, German, Italian and Spanish. High quality datasets were manually curated for the five languages with high inter-annotator agreements (consistently in the 0.9 ballpark). These were used for semi-automatic construction of ten cross-lingual datasets. 17 teams participated in the task, submitting 24 systems in subtask 1 and 14 systems in subtask 2. Results show that systems that combine statistical knowledge from text corpora, in the form of word embeddings, and external knowledge from lexical resources are best performers in both subtasks. More information can be found on the task website:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Measuring the extent to which two words are semantically similar is one of the most popular research fields in lexical semantics, with a wide range of Natural Language Processing (NLP) applications. Examples include Word Sense Disambiguation (Miller et al., 2012) , Information Retrieval (Hliaoutakis et al., 2006) , Machine Translation (Lavie and Denkowski, 2009) , Lexical Substitution (McCarthy and Navigli, 2009) , Question Answering (Mohler et al., 2011) , Text Summarization (Mohammad and Hirst, 2012) , and Ontology Alignment (Pilehvar and . 
Moreover, word similarity is generally accepted as the most direct in-vitro evaluation framework for word representation. (Authors marked with * contributed equally.)", "cite_spans": [ { "start": 242, "end": 263, "text": "(Miller et al., 2012)", "ref_id": "BIBREF39" }, { "start": 288, "end": 314, "text": "(Hliaoutakis et al., 2006)", "ref_id": "BIBREF25" }, { "start": 337, "end": 364, "text": "(Lavie and Denkowski, 2009)", "ref_id": "BIBREF30" }, { "start": 388, "end": 416, "text": "(McCarthy and Navigli, 2009)", "ref_id": "BIBREF32" }, { "start": 438, "end": 459, "text": "(Mohler et al., 2011)", "ref_id": "BIBREF41" }, { "start": 481, "end": 507, "text": "(Mohammad and Hirst, 2012)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Word representation is a research field that has recently received massive attention, mainly as a result of advances in the use of neural networks for learning dense, low-dimensional semantic representations, often referred to as word embeddings (Mikolov et al., 2013; Pennington et al., 2014). Almost any application in NLP that deals with semantics can benefit from efficient semantic representation of words (Turney and Pantel, 2010).", "cite_spans": [ { "start": 259, "end": 281, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF36" }, { "start": 282, "end": 306, "text": "Pennington et al., 2014)", "ref_id": "BIBREF44" }, { "start": 425, "end": 450, "text": "(Turney and Pantel, 2010)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, research in semantic representation has mainly focused on the English language. This is partly due to the limited availability of word similarity benchmarks in languages other than English. 
Given the central role of similarity datasets in lexical semantics, and given the importance of moving beyond the barriers of the English language and developing language-independent and multilingual techniques, we felt that this was an appropriate time to conduct a task that provides a reliable framework for evaluating multilingual and cross-lingual semantic representation and similarity techniques. The task has two related subtasks: multilingual semantic similarity (Section 1.1), which focuses on representation learning for individual languages, and cross-lingual semantic similarity (Section 1.2), which provides a benchmark for multilingual research that learns unified representations for multiple languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the English community has been using standard word similarity datasets as a common evaluation benchmark, semantic representation for other languages has generally proved difficult to evaluate. A reliable multilingual word similarity benchmark can be hugely beneficial in evaluating the robustness and reliability of semantic representation techniques across languages. Despite this, very few word similarity datasets exist for languages other than English: the original English RG-65 (Rubenstein and Goodenough, 1965) and WordSim-353 (Finkelstein et al., 2002) datasets have been translated into other languages, either by experts (Gurevych, 2005; Joubarne and Inkpen, 2011; Granada et al., 2014; Camacho-Collados et al., 2015), or by means of crowdsourcing (Leviant and Reichart, 2015), thereby creating equivalent datasets in languages other than English. 
However, the existing English word similarity datasets suffer from various issues:", "cite_spans": [ { "start": 490, "end": 523, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF52" }, { "start": 528, "end": 566, "text": "WordSim-353 (Finkelstein et al., 2002)", "ref_id": null }, { "start": 637, "end": 653, "text": "(Gurevych, 2005;", "ref_id": "BIBREF21" }, { "start": 654, "end": 680, "text": "Joubarne and Inkpen, 2011;", "ref_id": "BIBREF28" }, { "start": 681, "end": 702, "text": "Granada et al., 2014;", "ref_id": "BIBREF20" }, { "start": 703, "end": 733, "text": "Camacho-Collados et al., 2015)", "ref_id": "BIBREF9" }, { "start": 765, "end": 793, "text": "(Leviant and Reichart, 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "1. The similarity scale used for the annotation of WordSim-353 and MEN (Bruni et al., 2014) does not distinguish between similarity and relatedness, and hence conflates the two. As a result, the datasets contain pairs that are judged to be highly similar even if they are not of similar type or nature. For instance, the WordSim-353 dataset contains the pairs weather-forecast or clothes-closet with assigned similarity scores of 8.34 and 8.00 (on the [0,10] scale), respectively. Clearly, the words in the two pairs are (highly) related, but they are not similar.", "cite_spans": [ { "start": 71, "end": 91, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "2. The performance of state-of-the-art systems has already surpassed the levels of human inter-annotator agreement (IAA) for many of the old datasets, e.g., for RG-65 and WordSim-353. 
This makes these datasets unreliable benchmarks for the evaluation of newly-developed systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "3. Conventional datasets such as RG-65, MC-30 (Miller and Charles, 1991), and WS-Sim (Agirre et al., 2009) (the similarity portion of WordSim-353) are relatively small, containing 65, 30, and 200 word pairs, respectively. Hence, these benchmarks do not allow reliable conclusions to be drawn, since performance improvements have to be large to be statistically significant (Batchkarov et al., 2016).", "cite_spans": [ { "start": 46, "end": 72, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF38" }, { "start": 86, "end": 106, "text": "(Agirre et al., 2009", "ref_id": "BIBREF0" }, { "start": 375, "end": 400, "text": "(Batchkarov et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "4. The recent SimLex-999 dataset (Hill et al., 2015) addresses both the size and consistency issues of the conventional datasets by providing word similarity scores for 999 word pairs on a consistent scale that focuses on similarity only (and not relatedness). However, the dataset suffers from other issues. First, given that SimLex-999 has been annotated by turkers, and not by human experts, the similarity scores assigned to individual word pairs have a high variance, resulting in relatively low IAA. In fact, the reported IAA for this dataset is 0.67 in terms of average pairwise correlation, which is considerably lower than conventional expert-based datasets whose IAA is generally above 0.80 (Rubenstein and Goodenough, 1965; Camacho-Collados et al., 2015). 
Second, similarly to many of the above-mentioned datasets, SimLex-999 does not contain named entities (e.g., Microsoft), or multiword expressions (e.g., black hole).", "cite_spans": [ { "start": 33, "end": 52, "text": "(Hill et al., 2015)", "ref_id": "BIBREF24" }, { "start": 702, "end": 735, "text": "(Rubenstein and Goodenough, 1965;", "ref_id": "BIBREF52" }, { "start": 736, "end": 766, "text": "Camacho-Collados et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "In fact, the dataset includes only words that are defined in WordNet's vocabulary (Miller et al., 1990) , and therefore lacks the ability to test the reliability of systems for WordNet out-of-vocabulary words. Third, the dataset contains a large number of antonymy pairs. Indeed, several recent works have shown how significant performance improvements can be obtained on this dataset by simply tweaking usual word embedding approaches to handle antonymy (Schwartz et al., 2015; Pham et al., 2015; Nguyen et al., 2016) .", "cite_spans": [ { "start": 82, "end": 103, "text": "(Miller et al., 1990)", "ref_id": "BIBREF37" }, { "start": 455, "end": 478, "text": "(Schwartz et al., 2015;", "ref_id": "BIBREF53" }, { "start": 479, "end": 497, "text": "Pham et al., 2015;", "ref_id": "BIBREF45" }, { "start": 498, "end": 518, "text": "Nguyen et al., 2016)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "Since most existing multilingual word similarity datasets are constructed on the basis of conventional English datasets, any issues associated with the latter tend simply to be transferred to the former. This is the reason why we proposed this task and constructed new challenging datasets for five different languages (i.e., English, Farsi, German, Italian, and Spanish) addressing all the above-mentioned issues. 
Given that multiple large and high-quality verb similarity datasets have been created in recent years (Yang and Powers, 2006; Baker et al., 2014; Gerz et al., 2016) , we decided to focus on nominal words.", "cite_spans": [ { "start": 517, "end": 540, "text": "(Yang and Powers, 2006;", "ref_id": "BIBREF61" }, { "start": 541, "end": 560, "text": "Baker et al., 2014;", "ref_id": "BIBREF3" }, { "start": 561, "end": 579, "text": "Gerz et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 1: Multilingual Semantic Similarity", "sec_num": "1.1" }, { "text": "Over the past few years multilingual embeddings that represent lexical items from multiple languages in a unified semantic space have garnered considerable research attention (Zou et al., 2013; de Melo, 2015; Vuli\u0107 and Moens, 2016; Ammar et al., 2016; Upadhyay et al., 2016) , while at the same time cross-lingual applications have also been increasingly studied (Xiao and Guo, 2014; Franco-Salvador et al., 2016) . However, there have been very few reliable datasets for evaluating cross-lingual systems. Similarly to the case of multilingual datasets, these cross-lingual datasets have been constructed on the basis of conventional English word similarity datasets: MC-30 and WordSim-353 (Hassan and Mihalcea, 2009) , and RG-65 (Camacho-Collados et al., 2015) . As a result, they inherit the issues affecting their parent datasets mentioned in the previous subsection: while MC-30 and RG-65 are composed of only 30 and 65 pairs, WordSim-353 conflates similarity and relatedness in different languages. Moreover, the datasets of Hassan and Mihalcea (2009) were not re-scored after having been translated to the other languages, thus ignoring possible semantic shifts across languages and producing unreliable scores for many translated word pairs. For this subtask we provided ten high quality cross-lingual datasets, constructed according to the procedure of Camacho-Collados et al. 
(2015) , in a semi-automatic manner exploiting the monolingual datasets of subtask 1. These datasets constitute a reliable evaluation framework across five languages.", "cite_spans": [ { "start": 175, "end": 193, "text": "(Zou et al., 2013;", "ref_id": "BIBREF62" }, { "start": 194, "end": 208, "text": "de Melo, 2015;", "ref_id": "BIBREF11" }, { "start": 209, "end": 231, "text": "Vuli\u0107 and Moens, 2016;", "ref_id": "BIBREF59" }, { "start": 232, "end": 251, "text": "Ammar et al., 2016;", "ref_id": "BIBREF2" }, { "start": 252, "end": 274, "text": "Upadhyay et al., 2016)", "ref_id": "BIBREF58" }, { "start": 363, "end": 383, "text": "(Xiao and Guo, 2014;", "ref_id": "BIBREF60" }, { "start": 384, "end": 413, "text": "Franco-Salvador et al., 2016)", "ref_id": "BIBREF16" }, { "start": 690, "end": 717, "text": "(Hassan and Mihalcea, 2009)", "ref_id": "BIBREF22" }, { "start": 730, "end": 761, "text": "(Camacho-Collados et al., 2015)", "ref_id": "BIBREF9" }, { "start": 1030, "end": 1056, "text": "Hassan and Mihalcea (2009)", "ref_id": "BIBREF22" }, { "start": 1361, "end": 1391, "text": "Camacho-Collados et al. (2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Subtask 2: Cross-lingual Semantic Similarity", "sec_num": "1.2" }, { "text": "Subtask 1, i.e., multilingual semantic similarity, has five datasets for the five languages of the task, i.e., English, Farsi, German, Italian, and Spanish. These datasets were manually created with the help of trained annotators (as opposed to Mechanical Turk) that were native or fluent speakers of the target language. Based on these five datasets, 10 cross-lingual datasets were automatically generated (described in Section 2.2) for subtask 2, i.e., cross-lingual semantic similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Data", "sec_num": "2" }, { "text": "In this section we focus on the creation of the evaluation test sets. 
We additionally created a set of small trial datasets by following a similar process. These datasets were used by some participants during system development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Data", "sec_num": "2" }, { "text": "As for monolingual datasets, we opted for a size of 500 word pairs in order to provide a large enough set to allow reliable evaluation and comparison of the systems. The following procedure was used for the construction of multilingual datasets: (1) we first collected 500 English word pairs from a wide range of domains (Section 2.1.1), (2) through translation of these pairs, we obtained word pairs for the other four languages (Section 2.1.2) and,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual datasets", "sec_num": "2.1" }, { "text": "(3) all word pairs of each dataset were manually scored by multiple annotators (Section 2.1.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual datasets", "sec_num": "2.1" }, { "text": "Seed set selection. The dataset creation started with the selection of 500 English words. One of the main objectives of the task was to provide an evaluation framework that contains named entities and multiword expressions and covers a wide range of domains. To achieve this, we considered the 34 different domains available in BabelDomains 1 (Camacho-Collados and Navigli, 2017) , which in the main correspond to the domains of the Wikipedia featured articles page 2 . Table 1 shows the list of all the 34 domains used for the creation of the datasets. From each domain, 12 words were sampled in such a way as to have at least one multiword expression and two named entities. In order to include words that may not belong to any of the pre-defined domains, we added 92 extra words whose domain was not decided beforehand. We also tried to sample these seed words in such a way as to have a balanced set across occurrence frequency. 
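The per-domain sampling constraint just described (12 words per domain, with at least one multiword expression and two named entities) can be sketched as a simple rejection loop; the candidate-word structure and helper name below are our own illustration, not the task's actual tooling:

```python
import random

def sample_domain_words(candidates, n=12, min_mwe=1, min_ne=2, seed=0):
    """Sample n words from one domain's candidate list so that the sample
    contains at least min_mwe multiword expressions and min_ne named entities.
    Each candidate is a dict like {"word": ..., "is_mwe": bool, "is_ne": bool}.
    Retries until the constraints hold (assumes the pool can satisfy them)."""
    rng = random.Random(seed)
    while True:
        sample = rng.sample(candidates, n)
        if (sum(c["is_mwe"] for c in sample) >= min_mwe
                and sum(c["is_ne"] for c in sample) >= min_ne):
            return sample
```

Rejection sampling keeps the draw close to uniform over valid samples; a frequency-stratified variant would additionally partition the candidate pool by corpus frequency band before drawing.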
3 Of the 500 English seed words, 84 (17%) and 83 were, respectively, named entities and multiwords.", "cite_spans": [ { "start": 343, "end": 379, "text": "(Camacho-Collados and Navigli, 2017)", "ref_id": "BIBREF8" }, { "start": 933, "end": 934, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 470, "end": 477, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "English dataset creation", "sec_num": "2.1.1" }, { "text": "Similarity scale. For the annotation of the datasets, we adopted the five-point Likert scale of the SemEval-2014 task on Cross-Level Semantic Similarity (Jurgens et al., 2014), which was designed to systematically order a broad range of semantic relations: synonymy, similarity, relatedness, topical association, and unrelatedness. Table 2 describes the five points in the similarity scale along with example word pairs.", "cite_spans": [ { "start": 11, "end": 33, "text": "(Jurgens et al., 2014)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 189, "end": 197, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English dataset creation", "sec_num": "2.1.1" }, { "text": "4 Very similar: The two words are synonyms (e.g., midday-noon or motherboard-mainboard).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Very similar", "sec_num": "4" }, { "text": "3 Similar: The two words share many of the important ideas of their meaning but include slightly different details. They refer to similar but not identical concepts (e.g., lion-zebra or firefighter-policeman).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar", "sec_num": "3" }, { "text": "2 Slightly similar: The two words do not have a very similar meaning, but share a common topic/domain/function and ideas or concepts that are related (e.g., house-window or airplane-pilot).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slightly similar", "sec_num": "2" }, { "text": "1 Dissimilar: The two words describe clearly dissimilar concepts, but may share some small details, a far relationship or a domain in common and might be likely to be found together in a longer document on the same topic (e.g., software-keyboard or driver-suspension).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dissimilar", "sec_num": "1" }, { "text": "0 Totally dissimilar and unrelated: The two words do not mean the same thing and are not on the same topic (e.g., pencil-frog or PlayStation-monarchy). See Table 4 for examples.", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 122, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Totally dissimilar and unrelated", "sec_num": "0" }, { "text": "Pairing word selection. Having the initial 500-word seed set at hand, we selected a pair for each word. The selection was carried out in such a way as to ensure a uniform distribution of pairs across the similarity scale. 
In order to do this, we first assigned a random intended similarity to each pair. The annotator then had to pick the second word so as to match the intended score. In order to allow the annotator to have a broader range of candidate words, the intended score was considered as a similarity interval, one of [0-1], [1-2], [2-3] and [3-4]. For instance, if the first word was helicopter and the presumed similarity was [3-4], the annotator had to pick a pairing word which was \"semantically similar\" (see Table 2) to helicopter, e.g., plane. Of the 500 pairing words, 45 (9%) and 71 (14%) were named entities and multiwords, respectively. This resulted in an English dataset comprising 500 word pairs, 105 (21%) and 112 (22%) of which have at least one named entity and multiword, respectively.", "cite_spans": [], "ref_spans": [ { "start": 726, "end": 733, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English dataset creation", "sec_num": "2.1.1" }, { "text": "The remaining four multilingual datasets (i.e., Farsi, German, Italian, and Spanish) were constructed by translating words in the English dataset to the target language. We had two goals in mind while selecting translation as the construction strategy of these datasets (as opposed to independent word samplings per language): (1) to have comparable datasets across languages in terms of domain coverage, multiword and named entity distribution, 4 and (2) to enable an automatic construction of cross-lingual datasets (see Section 2.2). Each English word pair was translated by two independent annotators. In the case of disagreement, a third annotator was asked to pick the preferred translation. 
While translating, the annotators were shown the word pair along with their initial similarity score, which was provided to help them in selecting the correct translation for the intended meanings of the words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset translation", "sec_num": "2.1.2" }, { "text": "The annotators were instructed to follow the guidelines, with special emphasis on distinguishing between similarity and relatedness. Furthermore, although the similarity scale was originally designed as a Likert scale, annotators were given flexibility to assign values between the defined points in the scale (with a step size of 0.25), indicating a blend of two relations. As a result of this procedure, we obtained 500 word pairs for each of the five languages. The pairs in each language were shuffled and their initial scores were discarded. Three annotators were then asked to assign a similarity score to each pair according to our similarity scale (see Section 2.1.1). Table 3 (first row) reports the average pairwise Pearson correlation among the three annotators for each of the five languages. Given the fact that our word pairs spanned a wide range of domains, and that there was a possibility for annotators to misunderstand some words, we devised a procedure to check the quality of the annotations and to improve the reliability of the similarity scores. To this end, for each dataset and for each annotator we picked the subset of pairs for which the difference between the assigned similarity score and the average of the other two annotations was more than 1.0, according to our similarity scale. The annotator was then asked to revise this subset performing a more careful investigation of the possible meanings of the word pairs contained therein, and change the score if necessary. This procedure resulted in considerable improvements in the consistency of the scores. 
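The agreement computation and the revision pass described above can be sketched as follows; a minimal illustration with invented toy scores, using plain Pearson correlation (the IAA measure reported for the datasets):

```python
from itertools import combinations
import math

def pearson(x, y):
    """Plain Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_pairwise_iaa(annotations):
    """Average pairwise Pearson correlation among all annotators."""
    pairs = list(combinations(annotations, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

def pairs_to_revise(annotations, threshold=1.0):
    """For each annotator, flag word-pair indices whose score differs from the
    mean of the other annotators' scores by more than the threshold."""
    flagged = {}
    for i, ann in enumerate(annotations):
        others = [a for j, a in enumerate(annotations) if j != i]
        flagged[i] = [k for k, s in enumerate(ann)
                      if abs(s - sum(o[k] for o in others) / len(others)) > threshold]
    return flagged
```

For example, with three annotators scoring four pairs, `pairs_to_revise` returns, per annotator, the pairs that annotator would be asked to re-examine.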
The second row in Table 3 (\"Revised scores\") shows the average pairwise Pearson correlation among the three revised sets of scores for each of the five languages. The inter-annotator agreement for all the datasets is consistently in the 0.9 ballpark, which demonstrates the high quality of our multilingual datasets thanks to careful annotation of word pairs by experts.", "cite_spans": [], "ref_spans": [ { "start": 677, "end": 684, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1608, "end": 1616, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Scoring", "sec_num": "2.1.3" }, { "text": "The cross-lingual datasets were automatically created on the basis of the translations obtained with the method described in Section 2.1.2, using the approach of Camacho-Collados et al. (2015): given a word pair whose translations are available in two languages (e.g., mind-brain in English and mente-cerebro in Spanish), the approach creates two cross-lingual pairs between the two languages (mind-cerebro and brain-mente in the example). The similarity scores for the constructed cross-lingual pairs are computed as the average of the corresponding language-specific scores in the monolingual datasets. In order to avoid semantic shifts between languages interfering in the process, these pairs are only created if the difference between the corresponding language-specific scores is lower than 1.0. The full details of the algorithm can be found in Camacho-Collados et al. (2015). The approach has been validated by human judges, achieving agreements of around 0.90, which is similar to the inter-annotator agreements reported in Section 2.1.3. See Table 4 for some sample pairs in all monolingual and cross-lingual datasets. Table 5 shows the final number of pairs for each language pair.", "cite_spans": [ { "start": 165, "end": 194, "text": "Camacho-Collados et al. (2015", "ref_id": "BIBREF9" }, { "start": 798, "end": 828, "text": "Camacho-Collados et al. 
(2015)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1023, "end": 1030, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1100, "end": 1107, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Cross-lingual datasets", "sec_num": "2.2" }, { "text": "We carried out the evaluation on the datasets described in the previous section. The experimental setting is described in Section 3.1 and the results are presented in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Participating systems were evaluated according to standard Pearson and Spearman correlation measures on all word similarity datasets, with the final official score being calculated as the harmonic mean of Pearson and Spearman correlations (Jurgens et al., 2014). Systems were allowed to participate in either multilingual word similarity, cross-lingual word similarity, or both. Each participating system was allowed to submit a maximum of two runs. For the multilingual word similarity subtask, some systems were multilingual (applicable to different languages), whereas others were monolingual (only applicable to a single language). While monolingual approaches were evaluated in their respective languages, multilingual and language-independent approaches were additionally given a global ranking provided that they tested their systems on at least four languages. 
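The official scoring just described (harmonic mean of Pearson and Spearman per dataset, averaged over a system's best languages) can be sketched as follows; a minimal stdlib-only illustration with our own helper names:

```python
import math

def _pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def _ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def official_score(gold, system):
    """Harmonic mean of Pearson and Spearman correlations on one dataset."""
    r = _pearson(gold, system)
    rho = _pearson(_ranks(gold), _ranks(system))  # Spearman = Pearson on ranks
    return 2 * r * rho / (r + rho) if r > 0 and rho > 0 else 0.0

def global_score(scores_by_language, k=4):
    """Mean official score over the k languages where a system performs best."""
    best = sorted(scores_by_language.values(), reverse=True)[:k]
    return sum(best) / len(best)
```

A system predicting the gold scores exactly obtains an official score of 1.0; `global_score` with `k=4` mirrors the subtask 1 ranking, and `k=6` the subtask 2 ranking over cross-lingual datasets.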
The final score of a system was calculated as the average harmonic mean of Pearson and Spearman correlations of the four languages on which it performed best.", "cite_spans": [ { "start": 240, "end": 262, "text": "(Jurgens et al., 2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation measures and official scores", "sec_num": "3.1.1" }, { "text": "Likewise, the participating systems of the cross-lingual semantic similarity subtask were allowed to provide a score for a single cross-lingual dataset, but must have provided results for at least six cross-lingual word similarity datasets in order to be considered for the final ranking. For each system, the global score was computed as the average harmonic mean of Pearson and Spearman correlations on the six cross-lingual datasets on which it provided the best performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation measures and official scores", "sec_num": "3.1.1" }, { "text": "We encouraged the participants to use a shared text corpus for the training of their systems. The use of the shared corpus was intended to mitigate the influence that the underlying training corpus might have upon the quality of the obtained representations, laying a common ground for a fair comparison of the systems. Farsi. For pairs involving Farsi, participants were allowed to use the OpenSubtitles2016 parallel corpora 8 . Additionally, we proposed a second type of multilingual corpus to allow the use of different techniques exploiting comparable corpora. To this end, some participants made use of Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared training corpus", "sec_num": "3.1.2" }, { "text": "This task was targeted at evaluating multilingual and cross-lingual word similarity measurement techniques. 
However, it was not only limited to this area of research, as other fields such as semantic representation consider word similarity as one of their most direct benchmarks for evaluation. All kinds of semantic representation techniques and semantic similarity systems were encouraged to participate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participating systems", "sec_num": "3.1.3" }, { "text": "In the end we received a wide variety of participating systems: some proposing distributional semantic models learnt directly from raw corpora, some using syntactic features, others exploiting knowledge from lexical resources, and hybrid approaches combining corpus-based and knowledge-based clues. Due to lack of space we cannot describe all the systems in detail, but we refer the reader to the system description papers for more information about the individual systems: HCCL (He et al., 2017), Citius (Gamallo, 2017), jmp8 (Melka and Bernard, 2017), l2f (Fialho et al., 2017), QLUT (Meng et al., 2017), RUFINO (Jimenez et al., 2017), MERALI (Mensa et al., 2017), Luminoso (Speer and Lowry-Duda, 2017), hhu (QasemiZadeh and Kallmeyer, 2017), Mahtab (Ranjbar et al., 2017), SEW (Delli Bovi and Raganato, 2017), Wild Devs (Rotari et al., 2017), and OoO.", "cite_spans": [ { "start": 466, "end": 483, "text": "(He et al., 2017)", "ref_id": "BIBREF23" }, { "start": 493, "end": 508, "text": "(Gamallo, 2017)", "ref_id": "BIBREF17" }, { "start": 516, "end": 541, "text": "(Melka and Bernard, 2017)", "ref_id": "BIBREF33" }, { "start": 548, "end": 569, "text": "(Fialho et al., 2017)", "ref_id": "BIBREF14" }, { "start": 577, "end": 596, "text": "(Meng et al., 2017)", "ref_id": "BIBREF34" }, { "start": 606, "end": 628, "text": "(Jimenez et al., 2017)", "ref_id": "BIBREF26" }, { "start": 638, "end": 658, "text": "(Mensa et al., 2017)", "ref_id": "BIBREF35" }, { "start": 670, "end": 698, "text": "(Speer and Lowry-Duda, 2017)", "ref_id": "BIBREF56" }, { "start": 748, "end": 
770, "text": "(Ranjbar et al., 2017)", "ref_id": "BIBREF49" }, { "start": 793, "end": 808, "text": "Raganato, 2017)", "ref_id": "BIBREF12" }, { "start": 823, "end": 844, "text": "(Rotari et al., 2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Participating systems", "sec_num": "3.1.3" }, { "text": "As the baseline system we included the results of the concept and entity embeddings of NASARI . These embeddings were obtained by exploiting knowledge from Wikipedia and WordNet coupled with general-domain corpus-based Word2Vec embeddings (Mikolov et al., 2013) . We performed the evaluation with the 300-dimensional English embedded vectors (version 3.0) 9 and used them for all languages. Table 6 : Pearson (r), Spearman (\u03c1) and official (Final) results of participating systems on the five monolingual word similarity datasets (subtask 1).", "cite_spans": [ { "start": 239, "end": 261, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 421, "end": 428, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Baseline", "sec_num": "3.1.4" }, { "text": "For the comparison within and across languages, NASARI relies on the lexicalizations provided by BabelNet (Navigli and Ponzetto, 2012) for the concepts and entities in each language. Then, the final score was computed through the conventional closest senses strategy (Resnik, 1995; Budanitsky and Hirst, 2006) , using cosine similarity as the comparison measure.", "cite_spans": [ { "start": 75, "end": 103, "text": "(Navigli and Ponzetto, 2012)", "ref_id": "BIBREF42" }, { "start": 236, "end": 250, "text": "(Resnik, 1995;", "ref_id": "BIBREF50" }, { "start": 251, "end": 278, "text": "Budanitsky and Hirst, 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1.4" }, { "text": "We present the results of subtask 1 in Section 3.2.1 and subtask 2 in Section 3.2.2.
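The baseline's closest-senses strategy can be sketched as follows: a word pair is scored by the maximum cosine similarity over all pairs of candidate sense vectors. The sense vectors below are toy values chosen for illustration, not actual NASARI embeddings.

```python
# Sketch of the closest-senses strategy used by the NASARI baseline:
# score a word pair with the maximum cosine similarity over all pairs
# of candidate sense vectors. Toy vectors, not actual NASARI embeddings.

from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def closest_senses_similarity(senses1, senses2):
    # Similarity of the closest (most similar) sense pair.
    return max(cosine(u, v) for u in senses1 for v in senses2)

senses = {
    "bank":  [[0.9, 0.1, 0.0],   # financial-institution sense
              [0.1, 0.0, 0.9]],  # river-bank sense
    "money": [[0.8, 0.2, 0.1]],
}
score = closest_senses_similarity(senses["bank"], senses["money"])
```

Taking the closest sense pair means the unrelated river-bank sense does not drag the "bank"/"money" score down.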
Table 6 lists the results on all monolingual datasets. 10 The systems that made use of the shared Wikipedia corpus are marked with * in Table 6 . Luminoso achieved the best results in all languages except Farsi. Luminoso couples word embeddings with knowledge from ConceptNet using an extension of Retrofitting (Faruqui et al., 2015) , which proved highly effective. This system additionally proposed two fallback strategies to handle out-of-vocabulary (OOV) instances based on loanwords and cognates. ( 10 Systems followed by (a.d.) submitted their results after the official deadline.)", "cite_spans": [ { "start": 140, "end": 142, "text": "10", "ref_id": null }, { "start": 397, "end": 419, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF13" }, { "start": 521, "end": 523, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 6", "ref_id": null }, { "start": 222, "end": 229, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Table 7 : Global results of participating systems on subtask 1 (multilingual word similarity).", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "These two fallback strategies proved essential given the number of rare or domain-specific words present in the datasets. In fact, most systems failed to provide scores for all pairs in the datasets, with OOV rates close to 10% in some cases. Table 8 : Pearson (r), Spearman (\u03c1) and the official (Final) results of participating systems on the ten cross-lingual word similarity datasets (subtask 2).", "cite_spans": [], "ref_spans": [ { "start": 325, "end": 332, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Luminoso. In fact, most top performing systems combined these two sources of information.
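As a rough illustration of the retrofitting idea that Luminoso extends with ConceptNet, the sketch below nudges word vectors toward their neighbours in a lexical graph while keeping them anchored to the original distributional vectors (Faruqui et al., 2015). The toy embeddings, graph, and parameter values are ours; this is not Luminoso's actual implementation.

```python
# Minimal sketch of retrofitting (Faruqui et al., 2015): iteratively move
# each vector toward the average of its graph neighbours, weighted by beta,
# while alpha pulls it back toward its original distributional vector.

def retrofit(emb, graph, alpha=1.0, beta=1.0, iters=10):
    new = {w: list(v) for w, v in emb.items()}
    for _ in range(iters):
        for w, neigh in graph.items():
            if not neigh:
                continue  # no graph evidence: keep the original vector
            denom = alpha + beta * len(neigh)
            new[w] = [(alpha * emb[w][d] + beta * sum(new[n][d] for n in neigh))
                      / denom
                      for d in range(len(emb[w]))]
    return new

emb = {"car": [1.0, 0.0], "automobile": [0.0, 1.0], "banana": [0.5, 0.5]}
graph = {"car": ["automobile"], "automobile": ["car"], "banana": []}
fitted = retrofit(emb, graph)
```

After retrofitting, the vectors for "car" and "automobile" (linked in the toy graph) end up closer together, while "banana", with no neighbours, is untouched.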
For Farsi, the best performing system was Mahtab, which couples information from Word2Vec word embeddings (Mikolov et al., 2013) and knowledge resources, in this case FarsNet (Shamsfard et al., 2010) and BabelNet. For English, the only system that came close to Luminoso was QLUT, which was the best-performing system that made use of the shared Wikipedia corpus for training. The best configuration of this system exploited the Skip-Gram model of Word2Vec with an additive compositional function for computing the similarity of multiwords. However, Mahtab and QLUT only performed their experiments in a single language (Farsi and English, respectively).", "cite_spans": [ { "start": 196, "end": 218, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF36" }, { "start": 265, "end": 289, "text": "(Shamsfard et al., 2010)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "For the systems that performed experiments in at least four of the five languages we computed a global score (see Section 3.1.1). Global rankings and results are displayed in Table 7 . Luminoso clearly achieved the best overall results. The second-best performing system was HCCL, which also managed to outperform the baseline. HCCL exploited the Skip-Gram model of Word2Vec and performed hyperparameter tuning on existing word similarity datasets. This system did not make use of external resources apart from the shared Wikipedia corpus for training. RUFINO, which also made use only of the Wikipedia corpus, attained the third overall position.
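An additive compositional function of the kind used by QLUT for multiwords can be illustrated as follows: a multiword's vector is the component-wise sum of its token vectors, and pairs are then compared with cosine similarity. The tiny embedding table stands in for Skip-Gram vectors trained on the shared Wikipedia corpus; the values are invented for illustration.

```python
# Sketch of additive composition for multiword expressions: sum the
# token vectors, then compare with cosine. The toy `emb` lookup stands
# in for Skip-Gram vectors; the values are invented for illustration.

from math import sqrt

emb = {
    "black": [0.5, 0.1],
    "hole": [0.2, 0.7],
    "star": [0.4, 0.6],
}

def compose(phrase):
    vecs = [emb[t] for t in phrase.lower().split()]
    return [sum(dims) for dims in zip(*vecs)]  # component-wise sum

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

sim = cosine(compose("black hole"), compose("star"))
```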
The system exploited PMI and an association measure based on the Jaccard distance to capture second-order relations between words (Jimenez et al., 2016) .", "cite_spans": [ { "start": 778, "end": 800, "text": "(Jimenez et al., 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "The results for all ten cross-lingual datasets are shown in Table 8 . Systems that made use of the shared Europarl parallel corpus are marked with * , while systems that made use of Table 9 : Global results of participating systems in subtask 2 (cross-lingual word similarity).", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 8", "ref_id": null }, { "start": 164, "end": 171, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Subtask 2", "sec_num": "3.2.2" }, { "text": "Wikipedia are marked with \u2020. Luminoso, the best-performing system in Subtask 1, also achieved the best overall results on the ten cross-lingual datasets. This shows that the combination of knowledge from word embeddings and the ConceptNet graph is equally effective in the cross-lingual setting. The global ranking for this subtask was computed by averaging the results of the six datasets on which each system performed best. The global rankings are displayed in Table 9 . Luminoso was the only system outperforming the baseline, achieving the best overall results. OoO achieved the second-best overall performance using an extension of the Bilingual Bag-of-Words without Alignments (BilBOWA) approach of Gouws et al. 
(2015)", "ref_id": "BIBREF19" }, { "start": 838, "end": 861, "text": "(Raganato et al., 2016)", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 463, "end": 470, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Subtask 2", "sec_num": "3.2.2" }, { "text": "In this paper we have presented the SemEval 2017 task on Multilingual and Cross-lingual Semantic Word Similarity. We provided a reliable framework to measure the similarity between nominal instances within and across five different languages (English, Farsi, German, Italian, and Spanish). We hope this framework will contribute to the development of distributional semantics in general and for languages other than English in particular, with a special emphasis on multilingual and cross-lingual approaches. All evaluation datasets are available for download at http://alt.qcri.org/semeval2017/task2/. The best overall system in both subtasks was Luminoso, a hybrid system that effectively integrates word embeddings and information from knowledge resources.
In general, this combination proved effective in this task, as most other top systems also combined knowledge from text corpora and lexical resources in one way or another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "http://lcl.uniroma1.it/babeldomains/ 2 https://en.wikipedia.org/wiki/Wikipedia:Featured_articles 3 We used the Wikipedia corpus for word frequency calculation during the dataset construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Apart from the German dataset, in which the proportion of multiwords is significantly reduced (from the 22% of English to around 11%) due to the compounding nature of the German language, the other datasets maintain proportions of multiwords similar to those of the English dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://lcl.uniroma1.it/similarity-datasets/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://opus.lingfil.uu.se/OpenSubtitles2016.php 9 http://lcl.uniroma1.it/nasari/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors gratefully acknowledge the support of the MRC grant No. MR/M025160/1 for PheneBank and ERC Starting Grant MultiJEDI No. 259234. Jose Camacho-Collados is supported by a Google Doctoral Fellowship in Natural Language Processing. We would also like to thank \u00c1ngela Collados Ais, Claudio Delli Bovi, Afsaneh Hojjat, Ignacio Iacobacci, Tommaso Pasini, Valentina Pyatkin, Alessandro Raganato, Zahra Pilehvar, Milan Gritta and Sabine Ullrich for their help in the construction of the datasets.
Finally, we also thank Jim McManus for his suggestions on the manuscript and the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A study on similarity and relatedness using distributional and WordNet-based approaches", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Kravalova", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and WordNet-based approaches. In Proceed- ings of NAACL. pages 19-27.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Polyglot: Distributed word representations for multilingual nlp", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning. 
Sofia, Bulgaria, pages 183-192.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Massively multilingual word embeddings", "authors": [ { "first": "Waleed", "middle": [], "last": "Ammar", "suffix": "" }, { "first": "George", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.01925" ] }, "num": null, "urls": [], "raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925 .", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An unsupervised model for instance level subcategorization acquisition", "authors": [ { "first": "Simon", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "278--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Baker, Roi Reichart, and Anna Korhonen. 2014. An unsupervised model for instance level subcate- gorization acquisition. In Proceedings of EMNLP. 
pages 278-289.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A critique of word similarity as a method for evaluating distributional semantic models", "authors": [ { "first": "Miroslav", "middle": [], "last": "Batchkarov", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kober", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Reffin", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "7--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distribu- tional semantic models. In Proceedings of the ACL Workshop on Evaluating Vector Space Representa- tions for NLP. Berlin, Germany, pages 7-12.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "J. Artif. Intell. Res.(JAIR)", "volume": "49", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. 
Res.(JAIR) 49(1-47).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evaluating WordNet-based measures of Lexical Semantic Relatedness", "authors": [ { "first": "Alexander", "middle": [], "last": "Budanitsky", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "1", "pages": "13--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. Evalu- ating WordNet-based measures of Lexical Semantic Relatedness. Computational Linguistics 32(1):13- 47.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Find the word that does not belong: A framework for an intrinsic evaluation of word vector representations", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Camacho", "suffix": "" }, { "first": "-Collados", "middle": [], "last": "", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Camacho-Collados and Roberto Navigli. 2016. Find the word that does not belong: A framework for an intrinsic evaluation of word vector represen- tations. In Proceedings of the ACL Workshop on Evaluating Vector Space Representations for NLP. Berlin, Germany, pages 43-50.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BabelDomains: Large-Scale Domain Labeling of Lexical Resources", "authors": [ { "first": "Jose", "middle": [], "last": "Camacho", "suffix": "" }, { "first": "-Collados", "middle": [], "last": "", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EACL (2). 
Valencia", "volume": "", "issue": "", "pages": "223--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jose Camacho-Collados and Roberto Navigli. 2017. BabelDomains: Large-Scale Domain Labeling of Lexical Resources. In Proceedings of EACL (2). Va- lencia, Spain, pages 223-228.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets. In Proceedings of ACL (2). Beijing, China, pages 1-7.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Artificial Intelligence", "volume": "240", "issue": "", "pages": "36--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating ex- plicit knowledge and corpus statistics for a multilin- gual representation of concepts and entities. 
Artifi- cial Intelligence 240:36-64.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Wiktionary-based word embeddings", "authors": [ { "first": "Melo", "middle": [], "last": "Gerard De", "suffix": "" } ], "year": 2015, "venue": "Proceedings of MT Summit XV pages", "volume": "", "issue": "", "pages": "346--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard de Melo. 2015. Wiktionary-based word embed- dings. Proceedings of MT Summit XV pages 346- 359.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sew-embed at semeval-2017 task 2: Languageindependent concept representations from a semantically enriched wikipedia", "authors": [ { "first": "Claudio", "middle": [], "last": "Delli Bovi", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "261--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Delli Bovi and Alessandro Raganato. 2017. Sew-embed at semeval-2017 task 2: Language- independent concept representations from a seman- tically enriched wikipedia. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 261- 266. 
http://www.aclweb.org/anthology/S17-2041.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "K", "middle": [], "last": "Sujay", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Hovy", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL. pages 1606-1615.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "L2f/inesc-id at semeval-2017 tasks 1 and 2: Lexical and semantic features in word and textual similarity", "authors": [ { "first": "Pedro", "middle": [], "last": "Fialho", "suffix": "" }, { "first": "Hugo", "middle": [ "Patinho" ], "last": "Rodrigues", "suffix": "" }, { "first": "Lu\u00edsa", "middle": [], "last": "Coheur", "suffix": "" }, { "first": "Paulo", "middle": [], "last": "Quaresma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "213--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Fialho, Hugo Patinho Rodrigues, Lu\u00edsa Coheur, and Paulo Quaresma. 2017. L2f/inesc-id at semeval- 2017 tasks 1 and 2: Lexical and semantic features in word and textual similarity. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). 
Association for Computa- tional Linguistics, Vancouver, Canada, pages 213- 219. http://www.aclweb.org/anthology/S17-2032.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Gabrilovich", "middle": [], "last": "Evgenly", "suffix": "" }, { "first": "Matias", "middle": [], "last": "Yossi", "suffix": "" }, { "first": "Rivlin", "middle": [], "last": "Ehud", "suffix": "" }, { "first": "Solan", "middle": [], "last": "Zach", "suffix": "" }, { "first": "Wolfman", "middle": [], "last": "Gadi", "suffix": "" }, { "first": "Ruppin", "middle": [], "last": "Eytan", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Gabrilovich Evgenly, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Rup- pin Eytan. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems 20(1):116-131.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A systematic study of knowledge graph analysis for cross-language plagiarism detection", "authors": [ { "first": "Marc", "middle": [], "last": "Franco-Salvador", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Montes-Y G\u00f3mez", "suffix": "" } ], "year": 2016, "venue": "Information Processing & Management", "volume": "52", "issue": "4", "pages": "550--570", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Franco-Salvador, Paolo Rosso, and Manuel Montes-y G\u00f3mez. 2016. A systematic study of knowledge graph analysis for cross-language plagia- rism detection. 
Information Processing & Manage- ment 52(4):550-570.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Citius at semeval-2017 task 2: Cross-lingual similarity from comparable corpora and dependency-based contexts", "authors": [ { "first": "Pablo", "middle": [], "last": "Gamallo", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "226--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Gamallo. 2017. Citius at semeval-2017 task 2: Cross-lingual similarity from comparable corpora and dependency-based contexts. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 226- 229. http://www.aclweb.org/anthology/S17-2034.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Simverb-3500: A largescale evaluation set of verb similarity", "authors": [ { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A large- scale evaluation set of verb similarity. In Proceed- ings of EMNLP. 
Austin, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bilbowa: Fast bilingual distributed representations without word alignments", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)", "volume": "", "issue": "", "pages": "748--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed represen- tations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pages 748-756.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Comparing semantic relatedness between word pairs in Portuguese using Wikipedia", "authors": [ { "first": "Roger", "middle": [], "last": "Granada", "suffix": "" }, { "first": "Cassia", "middle": [], "last": "Trojahn", "suffix": "" }, { "first": "Renata", "middle": [], "last": "Vieira", "suffix": "" } ], "year": 2014, "venue": "Computational Processing of the Portuguese Language", "volume": "", "issue": "", "pages": "170--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Granada, Cassia Trojahn, and Renata Vieira. 2014. Comparing semantic relatedness between word pairs in Portuguese using Wikipedia. In Com- putational Processing of the Portuguese Language, Springer, pages 170-175.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using the structure of a conceptual network in computing semantic relatedness", "authors": [ { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2005, "venue": "Natural Language Processing-IJCNLP 2005", "volume": "", "issue": "", "pages": "767--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iryna Gurevych. 2005. 
Using the structure of a conceptual network in computing semantic relat- edness. In Natural Language Processing-IJCNLP 2005, Springer, pages 767-778.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Cross-lingual semantic relatedness using encyclopedic knowledge", "authors": [ { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1192--1201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samer Hassan and Rada Mihalcea. 2009. Cross-lingual semantic relatedness using encyclopedic knowledge. In Proceedings of EMNLP. pages 1192-1201.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hccl at semeval-2017 task 2: Combining multilingual word embeddings and transliteration model for semantic similarity", "authors": [ { "first": "Junqing", "middle": [], "last": "He", "suffix": "" }, { "first": "Long", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuemin", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yonghong", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "220--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junqing He, Long Wu, Xuemin Zhao, and Yonghong Yan. 2017. Hccl at semeval-2017 task 2: Combining multilingual word embeddings and transliteration model for semantic similarity. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 220- 225. 
http://www.aclweb.org/anthology/S17-2033.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics .", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios", "authors": [ { "first": "Angelos", "middle": [], "last": "Hliaoutakis", "suffix": "" }, { "first": "Giannis", "middle": [], "last": "Varelas", "suffix": "" } ], "year": 2006, "venue": "International Journal on Semantic Web and Information Systems", "volume": "2", "issue": "3", "pages": "55--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelos Hliaoutakis, Giannis Varelas, Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios. 2006. Information retrieval by semantic similarity. International Journal on Semantic Web and Information Systems 2(3):55-73.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Rufino at semeval-2017 task 2: Cross-lingual lexical similarity by extending pmi and word embeddings systems with a swadesh's-like list", "authors": [ { "first": "Sergio", "middle": [], "last": "Jimenez", "suffix": "" }, { "first": "George", "middle": [], "last": "Due\u00f1as", "suffix": "" }, { "first": "Lorena", "middle": [], "last": "Gaitan", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Segura", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "239--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergio Jimenez, George Due\u00f1as, Lorena Gaitan, and Jorge Segura. 2017. Rufino at semeval-2017 task 2: Cross-lingual lexical similarity by ex- tending pmi and word embeddings systems with a swadesh's-like list. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 239- 244. http://www.aclweb.org/anthology/S17-2037.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Mathematical properties of soft cardinality: Enhancing jaccard, dice and cosine similarity measures with element-wise distance", "authors": [ { "first": "Sergio", "middle": [], "last": "Jimenez", "suffix": "" }, { "first": "Fabio", "middle": [ "A" ], "last": "Gonzalez", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": 2016, "venue": "Information Sciences", "volume": "367", "issue": "", "pages": "373--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergio Jimenez, Fabio A. Gonzalez, and Alexander Gelbukh. 2016. Mathematical properties of soft car- dinality: Enhancing jaccard, dice and cosine simi- larity measures with element-wise distance. Infor- mation Sciences 367:373-389.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Comparison of semantic similarity for different languages using the Google n-gram corpus and second-order co-occurrence measures", "authors": [ { "first": "Colette", "middle": [], "last": "Joubarne", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2011, "venue": "Advances in Artificial Intelligence", "volume": "", "issue": "", "pages": "216--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colette Joubarne and Diana Inkpen. 2011. 
Compar- ison of semantic similarity for different languages using the Google n-gram corpus and second-order co-occurrence measures. In Advances in Artificial Intelligence, Springer, pages 216-221.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Semeval-2014 task 3: Cross-level semantic similarity", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2014, "venue": "SemEval", "volume": "", "issue": "", "pages": "17--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens, Mohammad Taher Pilehvar, and Roberto Navigli. 2014. Semeval-2014 task 3: Cross-level semantic similarity. SemEval 2014 pages 17-26.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Meteor metric for automatic evaluation of Machine Translation", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Denkowski", "suffix": "" } ], "year": 2009, "venue": "Machine Translation", "volume": "23", "issue": "2-3", "pages": "105--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Michael J. Denkowski. 2009. The Meteor metric for automatic evaluation of Machine Translation. Machine Translation 23(2-3):105-115.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics", "authors": [ { "first": "Ira", "middle": [], "last": "Leviant", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ira Leviant and Roi Reichart. 2015. 
Judgment lan- guage matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR, abs/1508.00106 .", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The English lexical substitution task", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "2", "pages": "139--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy and Roberto Navigli. 2009. The En- glish lexical substitution task. Language Resources and Evaluation 43(2):139-159.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Jmp8 at semeval-2017 task 2: A simple and general distributional approach to estimate word similarity", "authors": [ { "first": "Josu\u00e9", "middle": [], "last": "Melka", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Bernard", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "230--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josu\u00e9 Melka and Gilles Bernard. 2017. Jmp8 at semeval-2017 task 2: A simple and gen- eral distributional approach to estimate word similarity. In Proceedings of the 11th In- ternational Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 230-234. 
http://www.aclweb.org/anthology/S17-2035.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Qlut at semeval-2017 task 2: Word similarity based on word embedding and knowledge base", "authors": [ { "first": "Fanqing", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yuteng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Jian", "suffix": "" }, { "first": "Shumin", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Heyan", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "235--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fanqing Meng, Wenpeng Lu, Yuteng Zhang, Ping Jian, Shumin Shi, and Heyan Huang. 2017. Qlut at semeval-2017 task 2: Word similarity based on word embedding and knowledge base. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 235- 238. http://www.aclweb.org/anthology/S17-2036.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Merali at semeval-2017 task 2 subtask 1: a cognitively inspired approach", "authors": [ { "first": "Enrico", "middle": [], "last": "Mensa", "suffix": "" }, { "first": "Daniele", "middle": [ "P" ], "last": "Radicioni", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Lieto", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "245--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrico Mensa, Daniele P. Radicioni, and Antonio Li- eto. 2017. 
Merali at semeval-2017 task 2 subtask 1: a cognitively inspired approach. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 245- 249. http://www.aclweb.org/anthology/S17-2038.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word rep- resentations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "WordNet: an online lexical database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [ "T" ], "last": "Beckwith", "suffix": "" }, { "first": "Christiane", "middle": [ "D" ], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller, R.T. Beckwith, Christiane D. Fell- baum, D. Gross, and K. Miller. 1990. WordNet: an online lexical database. 
International Journal of Lexicography 3(4):235-244.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Walter", "middle": [ "G" ], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and Cognitive Processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller and Walter G. Charles. 1991. Con- textual correlates of semantic similarity. Language and Cognitive Processes 6(1):1-28.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Using distributional similarity for lexical expansion in knowledge-based word sense disambiguation", "authors": [ { "first": "Tristan", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "1781--1796", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tristan Miller, Chris Biemann, Torsten Zesch, and Iryna Gurevych. 2012. Using distributional similar- ity for lexical expansion in knowledge-based word sense disambiguation. In Proceedings of COLING. pages 1781-1796.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Distributional measures of semantic distance: A survey", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Graeme Hirst. 2012. Distributional measures of semantic dis- tance: A survey. 
CoRR abs/1203.1858.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Learning to grade short answer questions using semantic similarity measures and dependency graph alignments", "authors": [ { "first": "Michael", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "752--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Mohler, Razvan Bunescu, and Rada Mihal- cea. 2011. Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies -Vol- ume 1. Portland, Oregon, HLT'11, pages 752-762.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "193", "issue": "", "pages": "217--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual se- mantic network. 
Artificial Intelligence 193:217- 250.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction", "authors": [ { "first": "Sabine", "middle": [], "last": "Kim Anh Nguyen", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Schulte Im Walde", "suffix": "" }, { "first": "", "middle": [], "last": "Vu", "suffix": "" } ], "year": 2016, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "454--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym- synonym distinction. In Proc. of ACL. pages 454- 459.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP. pages 1532-1543.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A multitask objective to inject lexical contrast into distributional semantics", "authors": [ { "first": "Angeliki", "middle": [], "last": "Nghia The Pham", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "21--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nghia The Pham, Angeliki Lazaridou, and Marco Ba- roni. 
2015. A multitask objective to inject lexical contrast into distributional semantics. In Proceed- ings of ACL. pages 21-26.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A robust approach to aligning heterogeneous lexical resources", "authors": [ { "first": "Mohammad", "middle": [], "last": "Taher", "suffix": "" }, { "first": "Pilehvar", "middle": [], "last": "", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "468--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2014. A robust approach to aligning heterogeneous lexical resources. In Proceedings of ACL. pages 468-478.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Hhu at semeval-2017 task 2: Fast hash-based embeddings for semantic word similarity assessment", "authors": [ { "first": "Behrang", "middle": [], "last": "Qasemizadeh", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "250--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behrang QasemiZadeh and Laura Kallmeyer. 2017. Hhu at semeval-2017 task 2: Fast hash-based embeddings for semantic word similarity as- sessment. In Proceedings of the 11th In- ternational Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 250-255. 
http://www.aclweb.org/anthology/S17-2039.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Claudio", "middle": [ "Delli" ], "last": "Bovi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "2894--2900", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2016. Automatic Construction and Evalua- tion of a Large Semantically Enriched Wikipedia. In Proceedings of IJCAI. New York City, USA, pages 2894-2900.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Mahtab at semeval-2017 task 2: Combination of corpus-based and knowledge-based methods to measure semantic word similarity", "authors": [ { "first": "Niloofar", "middle": [], "last": "Ranjbar", "suffix": "" }, { "first": "Fatemeh", "middle": [], "last": "Mashhadirajab", "suffix": "" }, { "first": "Mehrnoush", "middle": [], "last": "Shamsfard", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "256--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niloofar Ranjbar, Fatemeh Mashhadirajab, Mehrnoush Shamsfard, Rayeheh Hosseini pour, and Aryan Vahid pour. 2017. Mahtab at semeval-2017 task 2: Combination of corpus-based and knowledge-based methods to measure seman- tic word similarity. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 256-260. 
http://www.aclweb.org/anthology/S17-2040.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Using information content to evaluate semantic similarity in a taxonomy", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Pro- ceedings of IJCAI. pages 448-453.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Wild devs' at semeval-2017 task 2: Using neural networks to discover word similarity", "authors": [ { "first": "R\u01cezvan-Gabriel", "middle": [], "last": "Rotari", "suffix": "" }, { "first": "Ionut", "middle": [], "last": "Hulub", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Oprea", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Plamada-Onofrei", "suffix": "" }, { "first": "Alina", "middle": [ "Beatrice" ], "last": "Lorent", "suffix": "" }, { "first": "Raluca", "middle": [], "last": "Preisler", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Iftene", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Trandabat", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "267--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u01cezvan-Gabriel Rotari, Ionut Hulub, Stefan Oprea, Mihaela Plamada-Onofrei, Alina Beatrice Lorent, Raluca Preisler, Adrian Iftene, and Diana Trand- abat. 2017. Wild devs' at semeval-2017 task 2: Using neural networks to discover word similarity. In Proceedings of the 11th In- ternational Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 267-270. 
http://www.aclweb.org/anthology/S17-2042.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM 8(10):627-633.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Symmetric pattern based word embeddings for improved word similarity prediction", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2015, "venue": "CoNLL", "volume": "", "issue": "", "pages": "258--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for im- proved word similarity prediction. 
CoNLL 2015 pages 258-267.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Semi automatic development of farsnet; the persian wordnet", "authors": [ { "first": "Mehrnoush", "middle": [], "last": "Shamsfard", "suffix": "" }, { "first": "Akbar", "middle": [], "last": "Hesabi", "suffix": "" }, { "first": "Hakimeh", "middle": [], "last": "Fadaei", "suffix": "" }, { "first": "Niloofar", "middle": [], "last": "Mansoory", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Famian", "suffix": "" }, { "first": "Somayeh", "middle": [], "last": "Bagherbeigi", "suffix": "" }, { "first": "Elham", "middle": [], "last": "Fekri", "suffix": "" }, { "first": "Maliheh", "middle": [], "last": "Monshizadeh", "suffix": "" }, { "first": "S Mostafa", "middle": [], "last": "Assi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of 5th Global WordNet Conference", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mehrnoush Shamsfard, Akbar Hesabi, Hakimeh Fadaei, Niloofar Mansoory, Ali Famian, Somayeh Bagherbeigi, Elham Fekri, Maliheh Monshizadeh, and S Mostafa Assi. 2010. Semi automatic develop- ment of farsnet; the persian wordnet. In Proceedings of 5th Global WordNet Conference, Mumbai, India. volume 29.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI. 
San Francisco, USA.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Conceptnet at semeval-2017 task 2: Extending word embeddings with multilingual relational knowledge", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joanna", "middle": [], "last": "Lowry-Duda", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "85--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer and Joanna Lowry-Duda. 2017. Con- ceptnet at semeval-2017 task 2: Extending word embeddings with multilingual relational knowledge. In Proceedings of the 11th In- ternational Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 85-89. http://www.aclweb.org/anthology/S17-2008.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "From frequency to meaning: Vector space models of semantics", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of Artificial Intelligence Research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. 
Journal of Artificial Intelligence Research 37:141-188.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Cross-lingual models of word embeddings: An empirical comparison", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1661--1670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. In Proceed- ings of ACL. Berlin, Germany, pages 1661-1670. http://www.aclweb.org/anthology/P16-1157.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Bilingual distributed word representations from documentaligned comparable data", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2016, "venue": "Journal of Artificial Intelligence Research", "volume": "55", "issue": "", "pages": "953--994", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document- aligned comparable data. Journal of Artificial In- telligence Research 55:953-994.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Semi-supervised matrix completion for cross-lingual text classification", "authors": [ { "first": "Min", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2014, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "1607--1614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Xiao and Yuhong Guo. 2014. 
Semi-supervised matrix completion for cross-lingual text classifica- tion. In Proceedings of AAAI. pages 1607-1614.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Verb similarity on the taxonomy of wordnet", "authors": [ { "first": "Dongqiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "M", "middle": [ "W" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Powers", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Third International WordNet Conference", "volume": "", "issue": "", "pages": "121--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongqiang Yang and David MW Powers. 2006. Verb similarity on the taxonomy of wordnet. In Proceed- ings of the Third International WordNet Conference. Jeju Island, Korea, pages 121-128.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Bilingual word embeddings for phrase-based machine translation", "authors": [ { "first": "Will", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Cer", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1393--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Y. Zou, Richard Socher, Daniel M. Cer, and Christopher D Manning. 2013. Bilingual word em- beddings for phrase-based machine translation. In Proceedings of EMNLP. 
pages 1393-1398.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "0.40 0.39 0.25 0.26 0.26 0.38 0.36 0.37 0.30 0.31 0.31 0.40 0.41 0.41 SEW run1 0.37 0.41 0.39 0.38 0.40 0.39 0.45 0.45 0.45 0.57 0.57 0.57 0.61 0.62 0.62 hjpwhuer run1 -0.04 -0.03 0.00 0.00 0.00 0.00 0.02 0.02 0.02 0.05 0.05 0.05 -0.06 -0.06 0", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "0.59 0.60 0.77 0.79 0.78 0.62 0.63 0.63 0.74 0.77 0.75 0.60 0.61 0.60 Luminoso run1 0.60 0.59 0.60 0.76 0.78 0.77 0.62 0.63 0.63 0.74 0.76 0.75 0.60 0.60 0) 0.52 0.49 0.51 0.65 0.65 0.65 0.49 0.47 0.48 0.60 0.59 0.60 0.50 0.48 0a.d.) 0.46 0.49 0.48 0.58 0.60 0.59 0.50 0.53 0.52 0.59 0.60 0.60 0.48 0.50 0.49 HCCL run2 * (a.d.) 0.44 0.42 0.43 0.50 0.49 0.49 0.37 0.33 0.35 0.43 0.41 0.42 0.33 0.28 0", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "text": "The set of thirty-four domains.", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF2": { "text": "The five-point Likert scale used to rate the similarity of item pairs. See", "num": null, "content": "
", "type_str": "table", "html": null }, "TABREF4": { "text": "Average pairwise Pearson correlation among annotators for the five monolingual datasets.", "num": null, "content": "
MONOLINGUAL
DE  Tuberkulose  LED  0.25
ES  zumo  batido  3.00
EN  Multiple Sclerosis  MS  4.00
IT  Nazioni Unite  Ban Ki-moon  2.25
FA  2.08
CROSS-LINGUAL
DE-ES  Sessel  taburete  3.08
DE-FA  Lawine  2.25
DE-IT  Taifun  ciclone  3.46
EN-DE  pancreatic cancer  Chemotherapie  1.75
EN-ES  Jupiter  Mercurio  3.25
EN-FA  film  0.25
EN-IT  island  penisola  3.08
ES-FA  duna  2.25
ES-IT  estrella  pianeta  2.83
IT-FA  avvocato  0.08
", "type_str": "table", "html": null }, "TABREF5": { "text": "", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF7": { "text": "Number of word pairs in each dataset.", "num": null, "content": "
The cells in the main diagonal of the table (e.g.,
EN-EN) correspond to the monolingual datasets of
subtask 1.
", "type_str": "table", "html": null }, "TABREF10": { "text": ".74 0.73 0.59 0.59 0.59 0.74 0.75 0.74 0.76 0.77 0.76 0.75 0.77 0.76 Luminoso run1 0.72 0.73 0.72 0.59 0.59 0.59 0.73 0.74 0.73 0.75 0.77 0.76 0.75 0.77 0.76 NASARI (baseline) 0.55 0.55 0.55 0.46 0.45 0.46 0.56 0.56 0.56 0.60 0.59 0.60 0.64 0.63 0.63", "num": null, "content": "
System  German-Spanish  German-Farsi  German-Italian  English-German  English-Spanish
r \u03c1 Final  r \u03c1 Final  r \u03c1 Final  r \u03c1 Final  r \u03c1 Final
Luminoso run2 0.72 0.74 0.73 0.59 0.59 0.59 0.74 0.75 0.74 0.76 0.77 0.76 0.75 0.77 0.76
OoO run1 0.54 0.56 0.55 - - - 0.54 0.55 0.55 0.56 0.58 0.57 0.58 0.59 0.58
SEW run2 (a.d.) 0.52 0.54 0.53 0.42 0.44 0.43 0.52 0.52 0.52 0.50 0.53 0.51 0.59 0.60 0.59
SEW run1 0.52 0.54 0.53 0.42 0.44 0.43 0.52 0.52 0.52 0.46 0.47 0.46 0.50 0.51 0.50
HCCL run2 * (a.d.) 0.42 0.39 0.41 0.33 0.28 0.30 0.38 0.34 0.36 0.49 0.48 0.48 0.55 0.56 0.55
RUFINO run1 \u2020 0.31 0.32 0.32 0.23 0.25 0.24 0.32 0.33 0.33 0.33 0.34 0.33 0.34 0.34 0.34
RUFINO run2 \u2020 0.30 0.30 0.30 0.26 0.27 0.27 0.22 0.24 0.23 0.30 0.30 0.30 0.34 0.33 0.34
hjpwhu run2 0.05 0.05 0.05 0.01 0.01 0.01 0.06 0.05 0.05 0.04 0.04 0.04 0.04 0.04 0.04
hjpwhu run1 0.05 0.05 0.05 0.01 0.01 0.01 0.06 0.05 0.05 -0.01 -0.01 0.00 0.04 0.04 0.04
HCCL run1 * 0.03 0.02 0.02 0.03 0.02 0.02 0.03 -0.01 0.00 0.34 0.28 0.31 0.10 0.08 0.09
UniBuc-Sem run1 * - - - - - - - - - 0.05 0.06 0.06 0.08 0.10 0.09
Citius run1 \u2020 - - - - - - - - - - - - 0.57 0.59 0.58
Citius run2 \u2020 - - - - - - - - - - - - 0.56 0.58 0.57
System  English-Farsi  English-Italian  Spanish-Farsi  Spanish-Italian  Italian-Farsi
", "type_str": "table", "html": null }, "TABREF11": { "text": "table, while systems making use of", "num": null, "content": "
System  Score  Official Rank
Luminoso run2  0.754  1
Luminoso run1  0.750  2
NASARI (baseline)  0.598  -
OoO run1 *  0.567  3
SEW run2 (a.d.)  0.558  -
SEW run1  0.532  4
HCCL run2 * (a.d.)  0.464  -
RUFINO run1 \u2020  0.336  5
RUFINO run2 \u2020  0.317  6
HCCL run1 *  0.103  7
hjpwhu run2  0.039  8
hjpwhu run1
", "type_str": "table", "html": null } } } }