|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:36.003011Z" |
|
}, |
|
"title": "SuperSim: a test set for word similarity and relatedness in Swedish", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Hengchen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nina", |
|
"middle": [], |
|
"last": "Tahmasebi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "nina.tahmasebi@gu.se" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Language models are notoriously difficult to evaluate. We release SuperSim, a large-scale similarity and relatedness test set for Swedish built with expert human judgments. The test set is composed of 1,360 word-pairs independently judged for both relatedness and similarity by five annotators. We evaluate three different models (Word2Vec, fastText, and GloVe) trained on two separate Swedish datasets, namely the Swedish Gigaword corpus and a Swedish Wikipedia dump, to provide a baseline for future comparison. We release the fully annotated test set, code, baseline models, and data. 1", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Language models are notoriously difficult to evaluate. We release SuperSim, a large-scale similarity and relatedness test set for Swedish built with expert human judgments. The test set is composed of 1,360 word-pairs independently judged for both relatedness and similarity by five annotators. We evaluate three different models (Word2Vec, fastText, and GloVe) trained on two separate Swedish datasets, namely the Swedish Gigaword corpus and a Swedish Wikipedia dump, to provide a baseline for future comparison. We release the fully annotated test set, code, baseline models, and data. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "It is said that a cup and coffee are not very similar while car and train are much more so given that they share multiple similar features. Instead, cup and coffee are highly related, as we typically enjoy the one in the other. Of course, an immediate question that arises is whether we have words that are similar but not related? Existing similarity datasets have tended to rate words for their similarity, relatedness, or a mixture of both, but not either or. However, without both kind of information, we cannot know if words are related but not similar, or similar but not related.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The most common motivation for using word similarity datasets, such as SimLex-999 (Hill et al., 2015) and WordSim353 (Finkelstein et al., 2001) , is for use as a quality check for word embedding models. The aim of most embedding models is to capture a word's semantic relationships, such that words that are similar in meaning are placed close in the semantic space; foods with other foods, technical terms together and separated from the musical instruments, to give an example. However, the optimal performance of such a semantic space is judged by whether or not one wishes to capture similarity of words, or relatedness. It seems obvious that presenting cup as a query reformulation for coffee in information retrieval seems off, while presenting lamborghini when searching for ferrari can be completely acceptable. Inversely, in places where relatedness is needed, offering a cup when one asks for a coffee is correct.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Hill et al., 2015)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 143, |
|
"text": "(Finkelstein et al., 2001)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the first word similarity datasets appeared for English, in the past few years we have seen datasets for a range of different languages (see Section 2). For Swedish, there exists one automatically-created resource based on an association lexicon by Fallgren et al. (2016) . However, there are to date no test sets that are (1) expertly-annotated, (2) comparable to other international test sets, and (3) annotated for both relatedness and similarity. And because we cannot know which motivation lies behind creating a vector space, and because both relatedness and similarity seem equally valid, we have opted to create SuperSim. The SuperSim test set is a largerscale similarity and relatedness set for Swedish, consisting of 1,301 words and 1,360 pairs rated by 5 expert annotators. The pairs are based on SimLex-999 and WordSim353, and can be used to assess the performance of word embedding models, but also answer questions as to whether words are likely to be similar but not related.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 277, |
|
"text": "Fallgren et al. (2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several works aim to provide test sets to assess the quality of word embedding models. Most of them tackle English (Rubenstein and Goodenough, 1965; Miller and Charles, 1991; Agirre et al., 2009; Bruni et al., 2012; Hill et al., 2015) . Russian, Italian and German are cov-ered by Leviant and Reichart (2015) who translated the pairs in WordSim353 and SimLex-999, and asked crowdworkers to judge them on a 0-10 scale. The SemEval-2017 Task 2 on Multilingual and Cross-lingual Semantic Word Similarity (Camacho-Collados et al., 2017) provides pairs in 5 languages: English, Farsi, German, Italian and Spanish. Ercan and Y\u0131ld\u0131z (2018) provide 500 word pairs in Turkish annotated by 12 humans for both similarity and relatedness on a scale ranging from 0 to 10, while Finnish is covered in Venekoski and Vankka (2017) . More recently, Multi-SimLex (Vuli\u0107 et al., 2020) provides annotations in Mandarin Chinese, Yue Chinese, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, and Arabic, with open guidelines and encouragement to join in with more languages. 2 For Swedish, Fallgren et al. 2016harness the Swedish Association Lexicon SALDO (Borin et al., 2013) , a large lexical-semantic resource that differs much from Wordnet (Fellbaum, 1998) insofar as it organises words mainly with the 'association' relation. The authors use SALDO's 'supersenses' to adapt 's QVEC-CCA intrinsic evaluation measure to Swedish. Still on evaluating Swedish language models, Adewumi et al. (2020b) propose an analogy test set built on the one proposed by Mikolov et al. (2013) , and evaluate common architectures on downstream tasks. The same authors further compare these architectures on models trained on different datasets (namely the Swedish Gigaword corpus (R\u00f8dven-Eide et al., 2016) and the Swedish Wikipedia) by focusing on Swedish and utilising their analogy test set (Adewumi et al., 2020a). Finally, for Swedish, SwedishGLUE/SuperLim 3 (Adesam et al., 2020) is currently being developed as a benchmark suite for language models in Swedish, somewhat mirroring English counterparts (Wang et al., 2018 (Wang et al., , 2019 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 148, |
|
"text": "(Rubenstein and Goodenough, 1965;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 149, |
|
"end": 174, |
|
"text": "Miller and Charles, 1991;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "Agirre et al., 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 215, |
|
"text": "Bruni et al., 2012;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 234, |
|
"text": "Hill et al., 2015)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 308, |
|
"text": "Leviant and Reichart (2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 532, |
|
"text": "(Camacho-Collados et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 632, |
|
"text": "Ercan and Y\u0131ld\u0131z (2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 814, |
|
"text": "Venekoski and Vankka (2017)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 865, |
|
"text": "(Vuli\u0107 et al., 2020)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 921, |
|
"end": 1025, |
|
"text": "Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, and Arabic, with", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1092, |
|
"end": 1093, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1173, |
|
"end": 1193, |
|
"text": "(Borin et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1261, |
|
"end": 1277, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1573, |
|
"end": 1594, |
|
"text": "Mikolov et al. (2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1781, |
|
"end": 1807, |
|
"text": "(R\u00f8dven-Eide et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1965, |
|
"end": 1986, |
|
"text": "(Adesam et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 2109, |
|
"end": 2127, |
|
"text": "(Wang et al., 2018", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 2128, |
|
"end": 2148, |
|
"text": "(Wang et al., , 2019", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Whether similarity test sets actually allow to capture and evaluate lexical semantics is debatable Schnabel et al., 2015) . Nonetheless, they have the advantage of providing a straightforward way of optimising word embeddings (through hyper-parameter search, at the risk of overfitting), or to be used more creatively in other tasks (Dubossarsky et al., 2019) where \"quantifiable synonymy\" is required. Finally, task-specific evaluation (as recommended by ) is, for languages other than English, more than often nonexistent -making test sets such as the one presented in this work a good alternative.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "Schnabel et al., 2015)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 359, |
|
"text": "(Dubossarsky et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our dataset differs from previous work in the sense that it provides expert judgments for Swedish for both relatedness and similarity, and hence comprises two separate sets of judgments, as done by skilled annotators. 4 A description of the procedure is available in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our work heavily draws from Hill et al. (2015) , who made a large distinction between relatedness and similarity. Indeed, the authors report that previous work such as Agirre et al. (2009) or Bruni et al. (2012) do not consider relatedness and similarity to be different. Words like coffee and cup, to reuse the example by Hill et al. (2015) , are obviously related (one is used to drink the other, they can both be found in a kitchen, etc.) but at the same time dissimilar (one is (...usually) a liquid and the other is a solid, one is ingested and not the other, etc.).", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 46, |
|
"text": "Hill et al. (2015)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 188, |
|
"text": "Agirre et al. (2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 211, |
|
"text": "Bruni et al. (2012)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 341, |
|
"text": "Hill et al. (2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relatedness and Similarity", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "All pairs in SuperSim are independently judged for similarity and relatedness. To explain the concept of similarity to annotators, we have reused the approach of Hill et al. (2015) who introduced it via the idea of synonymy, and in contrast to association: \"In contrast, although the following word pairs are related, they are not very similar. The words represent entirely different types of things.\" They further give the example of \"car / tyre.\" We use this definition embedded in the SimLex-999 guidelines to define relatedness according to the following: \"In Task 2, we also ask that you rate the same word pairs for their relatedness. For this task, consider the inverse of similarity: car and tyre are related even if they are not synonyms. However, synonyms are also related.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 180, |
|
"text": "Hill et al. (2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relatedness and Similarity", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "While the WordSim353 pairs were chosen for use in information retrieval and to some extent mix similarity and relatedness, the original SimLex-999 pairs were chosen with more care. They were meant to measure the ability of different models to capture similarity as opposed to association, contain words from different part-of-speech (nouns, verbs, and adjectives), and represent different concreteness levels. Despite the risks of losing some intended effect in translation, we opted to base Su-perSim on both of these resources rather than start from scratch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relatedness and Similarity", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We machine-translated all words in WordSim353 and SimLex-999 to Swedish. The translations were manually checked by a semanticist who is a native speaker of Swedish, holds an MA in linguistics, and is currently working towards obtaining a PhD in linguistics. The semanticist was presented a list of words, out of context, decoupled from the pairs they were parts of. Where needed, translations were corrected. Pairs were reconstructed according to the original datasets, except for the few cases where the translation process would create duplicates. In a few cases where one single translation was not obvious -i.e. cases where either Google Translate or the semanticist would output two (equally likely) possible Swedish translations for the same English word -, two pairs were constructed: one with each possible translation. For example, the presence of 'drug' led to pairs with both the l\u00e4kemedel (a medical drug aimed at treating pathologies) and drog (a narcotic or stimulant substance, usually illicit) translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We selected 5 annotators (4F/1M) who are native speakers of Swedish and all have experience working with annotation tasks. One of the annotators was the same person who manually checked the correctness of the translations. The other 4 annotators can be described as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 holds an MA in linguistics and has experience in lexicography,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 holds an MA in linguistics,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 holds BAs in linguistics and Spanish and is studying for an MSc in language technology,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 holds a BA in linguistics and has extensive work experience with different language-related tasks such as translation and NLP (on top of annotation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Annotators were each given (i) the original SimLex-999 annotation instructions containing examples illustrating the difference between relatedness and similarity; (ii) one file for the relatedness scores; and (iii) one file for the similarity scores. They were instructed to complete the annotation for similarity before moving on to relatedness, and complied. The annotation took place, and was monitored, on Google Sheets. Annotators did not have access to each others' sheets, nor were they aware of who the other annotators were.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To allow for a finer granularity as well as to echo previous work, annotators were tasked with assigning scores on a 0-10 scale, rather than 1-6 as in SimLex-999. Unlike the procedure for Simlex, where sliders were given (and hence the annotators could choose real values), our annotators assigned discrete values between 0-10. This procedure resulted in pairs with the same score, and thus many rank ties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The entire SuperSim consists of 1,360 pairs. Out of these, 351 pairs stem from WordSim353 and 997 pairs from SimLex-999. Pairs where both words translate into one in Swedish are removed from the SimLex-999 and WordSim353 subsets, thus resulting in fewer pairs than the original datasets: for example, 'engine' and 'motor' are both translated as motor and therefore the 'motor' -'engine' pair is removed. The SuperSim set consists of both sets, as well as of a set of additional pairs where multiple translations were used (see the l\u00e4kemedel and drog example above). The full set of 1,360 pairs is annotated for both similarity and relatedness separately, resulting in a total of 2 * 1,360 gold scores, and thus 13,600 individual judgments. An example of relatedness judgments for two pairs is available in table form in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 820, |
|
"end": 827, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "SuperSim stats", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We release two tab-separated files (one for relatedness, one for similarity) containing judgments from all annotators as well as the mean gold score. We additionally release all baseline models, code, and pre-processed data where permissible. The data is freely available for download at https: //zenodo.org/record/4660084. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SuperSim stats", |
|
"sec_num": "3.2" |
|
}, |
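A minimal sketch of how the released files could be consumed in Python. The file names and column names (word1, word2, gold) are assumptions for illustration; the actual headers ship with the download above.

```python
# Minimal sketch: load the released tab-separated judgments with pandas and
# compare the two gold rankings. File names and column names (word1, word2,
# gold) are assumptions, not the documented schema.
import pandas as pd

rel = pd.read_csv("supersim_relatedness.tsv", sep="\t")
sim = pd.read_csv("supersim_similarity.tsv", sep="\t")

# Join the two gold scores on the word pair to inspect, e.g., highly related
# but dissimilar pairs.
merged = rel.merge(sim, on=["word1", "word2"], suffixes=("_rel", "_sim"))
print(merged.sort_values("gold_rel", ascending=False).head())
```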
|
{ |
|
"text": "For quality control, annotation files contained a total of 69 randomly sampled duplicate pairs, in addition to the 1,360 true pairs. 5 These duplicates allowed us to calculate every annotator's consistency, and to judge how difficult each task was in practice. Table 2 illustrates the consistency of every annotator in the similarity and relatedness tasks for our 69 control pairs. 'Disagreement' indicates two different values for any given pair and 'hard disagreement' two values with an absolute difference higher than 2 (on the scale of 0-10). On average, the hard disagreements differed by 4.3 points for relatedness, and by 3.0 for similarity, and there were more disagreements (both kinds) for relatedness, indicating that for humans, relatedness is the harder task. In addition, we indicate the computed self-agreement score (Krippendorff's alpha, Krippendorff 2018) for every annotator for both tasks. Despite annotators disagreeing somewhat with themselves, Krippendorff's alpha indicates they annotated word pairs consistently. Out of the 69 control pairs, 4 were inconsistently annotated by four annotators for similarity, while 12 pairs were inconsistently annotated by four or more annotators for relatedness: 3 by all five annotators, and 9 by four. The three \"hardest\" pairs to annotate for relatedness are lycklig-arg 'happy-angry,' sommarnatur 'summer-nature,' tillk\u00e4nnagivande-varning 'announcement-warning.'", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 268, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intra-rater agreement", |
|
"sec_num": "3.3" |
|
}, |
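A minimal sketch of the self-agreement computation described above, assuming the third-party krippendorff Python package and treating the two annotations of each control pair as two "coders"; the example scores and the interval measurement level are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: intra-rater consistency over duplicated control pairs as
# Krippendorff's alpha, assuming the third-party `krippendorff` package.
# first_pass / second_pass are invented example scores on the 0-10 scale.
import krippendorff

first_pass = [3, 0, 7, 10, 5]    # first annotation of each control pair
second_pass = [3, 1, 6, 10, 2]   # second annotation of the same pairs

alpha = krippendorff.alpha(
    reliability_data=[first_pass, second_pass],
    level_of_measurement="interval",  # assumption; the paper does not state the level
)
print(f"self-agreement alpha: {alpha:.3f}")
```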
|
{ |
|
"text": "Following Hill et al. (2015), we use the average Spearman's \u03c1 for measuring inter-rater agreement by taking the average of pairwise Spearman's \u03c1 correlations between the ratings of all respondents. 6 For the original SimLex-999, over-all agreement was \u03c1 = 0.67 as compared to Word-Sim353 where \u03c1 = 0.61 using the same method. Spearman's \u03c1 for our similarity rankings is 0.67. In addition, we have a Spearman's \u03c1 for our relatedness rankings of 0.73. 7 It is unclear how the background of our annotators affects the quality of their annotation. In another semantic annotation study, although on historical data, Schlechtweg et al. (2018) show a larger agreement between annotators sharing a background in historical linguistics than between a historical linguist and a 'non-expert' native speaker. It is, however, fully possible that the linguistic expertise of the annotators affects the similarity and relatedness judgments in a negative way. We leave this investigation for further work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 199, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 636, |
|
"text": "Schlechtweg et al. (2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inter-rater agreement", |
|
"sec_num": "3.4" |
|
}, |
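A minimal sketch of the agreement computation described above (not the authors' released script): average pairwise Spearman's rho across annotators, using the tie-aware scipy implementation mentioned in footnote 6. The rating matrix below is invented for illustration.

```python
# Minimal sketch: inter-rater agreement as the average of pairwise Spearman's
# rho between annotators, using scipy's tie-aware mstats implementation.
from itertools import combinations
from scipy.stats.mstats import spearmanr

# ratings[i] = scores by annotator i over the same word pairs
# (invented toy values on the 0-10 scale).
ratings = [
    [0, 3, 8, 10, 5, 5],
    [1, 2, 9, 10, 4, 5],
    [0, 4, 7, 9, 6, 5],
]

rhos = [spearmanr(a, b).correlation for a, b in combinations(ratings, 2)]
print(f"average pairwise rho: {sum(rhos) / len(rhos):.3f}")
```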
|
{ |
|
"text": "To provide a baseline for evaluation of embedding models on SuperSim, we trained three different models on two separate datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We chose three standard models, Word2Vec (Mikolov et al., 2013) , fastText (Bojanowski et al., 2017) , and GloVe (Pennington et al., 2014) . Word2Vec and fastText models are trained with gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) while the GloVe embeddings are trained using the official C implementation provided by Pennington et al. (2014 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 63, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 75, |
|
"end": 100, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 138, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 330, |
|
"text": "Pennington et al. (2014", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "4.1" |
|
}, |
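A minimal sketch (not the authors' released code) of how the gensim baselines could be trained; it assumes a pre-tokenised, lowercased corpus file and uses the non-default hyperparameters stated in footnote 13 (sg=1, min_count=100, seed=1830). The GloVe baseline uses the official C implementation and is not shown here.

```python
# Minimal sketch: train the Word2Vec and fastText baselines with gensim,
# keeping default hyperparameters except those stated in footnote 13.
# The corpus file name is an illustrative assumption.
from gensim.models import Word2Vec, FastText
from gensim.models.word2vec import LineSentence

# one pre-tokenised, lowercased sentence per line
corpus = LineSentence("gigaword.lowercased.txt")

w2v = Word2Vec(corpus, sg=1, min_count=100, seed=1830)
ft = FastText(corpus, sg=1, min_count=100, seed=1830)

w2v.save("w2v_gigaword.model")
ft.save("ft_gigaword.model")
```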
|
{ |
|
"text": "We use two datasets. The largest of the two comprises the Swedish Culturomics Gigaword corpus (R\u00f8dven-Eide et al., 2016) , which con- 7 These results are opposing those of the disagreements which indicate that similarity is easier than relatedness for our annotators. We postulate that this can be due to the many rank ties we have in the similarity testset (where many pairs have 0 similarity). If we use the Pearson's \u03c1, we get values of \u03c1 = 0.722 for relatedness, and \u03c1 = 0.715 for similarity bringing the two tasks much closer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 120, |
|
"text": "(R\u00f8dven-Eide et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 135, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "8 Tests were also made using the Python implementation available at https://github.com/maciejkula/ glove-python, with similar performance. Table 2 : Number of control word-pairs with annotator self-disagreements. 'Disagreem.' = different values between two annotations for a given pair (0-10 scale), 'hard disagreem.' = difference > 2 between values between two annotations for a given pair (0-10 scale), \u03b1 = Krippendorff's alpha. Total number of control pairs is 69, percentages follow absolute counts in parentheses. tains a billion words 9 in Swedish from different sources including fiction, government, news, science, and social media. The second dataset is a recent Swedish Wikipedia dump with a total of 696,500,782 tokens. 10 While the Swedish Gigaword corpus contains text from the Swedish Wikipedia, R\u00f8dven-Eide et al. (2016) precise that about 150M tokens out of the 1G in Gigaword (14.9%) stem from the Swedish Wikipedia. In that respect, there is an overlap in terms of content in our baseline corpora. However, as the Swedish Wikipedia has grown extensively over the years and only a sub-part of it was used in in R\u00f8dven-Eide et al. (2016), the overlap is small and we thus have opted to also use the Gigaword corpus as it is substantially larger and contains other genres of text.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 146, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The Wikipedia dump was processed with a version of the Perl script released by Matt Mahoney 11 9 1,015,635,151 tokens in 59,736,642 sentences, to be precise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of judgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "10 Available at https://dumps.wikimedia.org/ svwiki/20201020/svwiki-20201020-pagesarticles.xml.bz2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of judgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "11 The script is available at http:// mattmahoney.net/dc/textdata.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of judgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It effectively only keeps what should be displayed in a web browser modified to account for specific non-ASCII characters (\u00e4\u00e5\u00f6\u00e9) and to transform digits to their Swedish written form (eg: 2 \u2192 tv\u00e5). 12 All baseline models are trained on lowercased tokens with default hyperparameters. 13", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 200, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of judgments", |
|
"sec_num": null |
|
}, |
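A minimal sketch of the digit rewriting described above; the mapping uses standard Swedish number words, with '1' always rendered as ett per footnote 12. This is an illustration of the per-digit behaviour, not the modified Perl script itself.

```python
# Minimal sketch: replace each digit character with its Swedish written form,
# mirroring the per-digit behaviour of the (modified) Mahoney script.
# Per footnote 12, '1' is always rendered as 'ett'.
SV_DIGITS = {
    "0": "noll", "1": "ett", "2": "två", "3": "tre", "4": "fyra",
    "5": "fem", "6": "sex", "7": "sju", "8": "åtta", "9": "nio",
}

def spell_out_digits(text: str) -> str:
    # each digit becomes its word form, padded with spaces like the original script
    return "".join(f" {SV_DIGITS[ch]} " if ch in SV_DIGITS else ch for ch in text)

print(spell_out_digits("2"))  # -> " två "
```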
|
{ |
|
"text": "An overview of the performance of the three baseline models is available in Table 3 and Table 4 . In both tables we show model performance on similarity and relatedness judgments. We split the results into three sets, one for the entire Super-Sim, and two for its subsets: WordSim353 and SimLex-999. For each model and dataset, we present Spearman's rank correlation \u03c1 between the ranking produced by the model compared to the gold ranking in each testset (relatedness and similarity). As fastText uses subword information to build vectors, it deals better with out-ofvocabulary words, hence the higher number of and removes tables but keeps image captions, while links are converted to normal text. Characters are lowercased.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 95, |
|
"text": "Table 3 and Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
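A minimal sketch of the evaluation described above (an illustration, not the released evaluation code): score each pair by the cosine similarity of its word vectors and correlate the resulting ranking with the gold ranking using Spearman's rho. The model file, data file, and column names are assumptions.

```python
# Minimal sketch: correlate model similarities with SuperSim gold scores.
# Pairs whose words are out of vocabulary are skipped (fastText suffers
# least from this thanks to subword information).
import pandas as pd
from gensim.models import Word2Vec
from scipy.stats.mstats import spearmanr

model = Word2Vec.load("w2v_gigaword.model")                 # assumed file name
gold = pd.read_csv("supersim_relatedness.tsv", sep="\t")    # assumed file name/schema

model_scores, gold_scores = [], []
for w1, w2, score in zip(gold["word1"], gold["word2"], gold["gold"]):
    if w1 in model.wv and w2 in model.wv:
        model_scores.append(model.wv.similarity(w1, w2))
        gold_scores.append(score)

print(f"Spearman rho: {spearmanr(model_scores, gold_scores).correlation:.3f}")
```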
|
{ |
|
"text": "12 '1', which can be either en or ett in Swedish, was replaced by 'ett' every time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "13 Except for sg = 1, min count = 100 and seed = 1830. From the results in Table 3 and 4, it appears that fastText is the most impacted by the size of the training data, as its performance when trained on the smaller Wikipedia corpus is 'much' lower than on the larger Gigaword: 0.349 vs 0.550 for SuperSim relatedness and 0.365 vs 0.528 for Supersim similarity -both tasks where fastText actually performs best on Gigawords out of the three models tested. We find that all models perform better when trained on Gigaword as compared to Wikipedia. Contrary to results on the analogy task reported by Adewumi et al. (2020a), our experiments on SuperSim seem to confirm the usual trope that training on more data indeed leads to overall better embeddings, as the higher scores, in terms of absolute numbers, are all from models trained on the larger Gigaword corpus. Nonetheless, the discrepancy between our results and theirs might be due to a range of factors, including pre-processing and hyperparameter tuning (which we did not do). 14 Note that for similarity, Word2Vec trained on Gigaword performs slightly better on the translated SimLex-999 pairs (0.436) than Word2Vec does on English SimLex-999 (0.414) but substantially lower for WordSim (0.436 vs 0.655) (Hill et al., 2015) . We make the comparison for Gigaword, rather than Wikipedia because of the com- 14 The effect of the benefits of more training data is confounded with the broader genre definitions in Gigaword that could be an indication of the advantage of including e.g., fiction and social media text in defining for example emotions. We leave a detailed investigation into this for future work. parable size, rather than the genre. This effect could be due to different pre-processing and model parameters used, but it could also be an effect of the multiple ties present in our test set. We do, however, consistently confirm the original conclusion: SimLex-999 seems harder for the models than WordSim353.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1034, |
|
"end": 1036, |
|
"text": "14", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1262, |
|
"end": 1281, |
|
"text": "(Hill et al., 2015)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1363, |
|
"end": 1365, |
|
"text": "14", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "GloVe is the clear winner on the smaller Wikipedia dataset, where it outperforms the other two models for all test sets, and is on par with Word2Vec for Gigaword.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Overall, our results indicate that for the tested models relatedness is an easier task than similarity: every model -aside from fastText on Su-perSim -performs better (or equally well) on relatedness on the whole test set, as well as on its subparts, compared to similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we presented SuperSim, a Swedish similarity and relatedness test set made of new judgments of the translated pairs of both SimLex-999 and WordSim353. All pairs have been rated by five expert annotators, independently for both similarity and relatedness. Our inter-annotator agreements mimic those of the original test sets, but also indicate that similarity is an easier task to rate than relatedness, while our intra-rater agreements on 69 control pairs indicate that the annotation is reasonably consistent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To provide a baseline for model performance, we trained three different models, namely Word2Vec, fastText and GloVe, on two separate Swedish datasets. The first comprises a general purpose dataset, namely the The Swedish Culturomics Gigaword Corpus with different genres of text spanning 1950-2015. The second comprises a recent Swedish Wikipedia dump. On the Gigaword corpus, we find that fastText is best at capturing both relatedness and similarity while for Wikipedia, GloVe performs the best.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally, to answer the question posed in the introduction: it is common to have words that are highly related, but not similar. To give a few examples, these are pairs with relatedness 10 and similarity 0: bil-motorv\u00e4g 'carhighway,' datum-kalender 'date-calendar,' ordordbok 'word-dictionary,' skola-betyg 'schoolgrade,' and tennis-racket 'tennis-racket.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The opposite however, does not hold. Only four pairs have a similarity score higher than the relatedness score, and in all cases the difference is smaller than 0.6: bli-verka 'become-seem,' r\u00f6rcigarr 'pipe-cigarr,' st\u00e5ltr\u00e5d-sladd 'wire-cord,' till\u00e4gna sig-skaffa sig 'get-acquire.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For future work, the SuperSim testset can be improved both in terms of added annotations (more annotators), and with respect to more fine-grained judgements (real values in contrast to discrete ones currently used) to reduce the number of rank ties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://zenodo.org/record/4660084.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The website is updated with new annotations: https: //multisimlex.com/.3 https://spraakbanken.gu.se/projekt/ superlim-en-svensk-testmangd-forsprakmodeller", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have opted not to follow Multi-SimLex because (1) we want to have annotations for both relatedness and similarity, and (2) we have limited possibility to use platforms such as Amazon Mechanical Turk, and have thus resorted to using skilled annotators: to illustrate, we are bound to the hourly rate of 326 SEK (32.08 EUR). As a result the cost of annotating with 10 annotators is significantly higher, in particular if we want two separate sets of annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SuperSim includes the values for the first seen annotation of a duplicate pair. To illustrate: if a control pair was annotated first to have a score of 3 and then to have a score of 6, the first score of 3 is kept.6 We use the scipy.stats.mstats spearmanr(Virtanen et al., 2020) implementation with rank ties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Tosin P. Adewumi, Lidia Pivovarova, Elaine Zosa, Sasha (Aleksandrs) Berdicevskis, Lars Borin, Erika Wauthia, Haim Dubossarsky, Stian R\u00f8dven-Eide as well as the anonymous reviewers for their insightful comments. This work has been funded in part by the project Towards Computational Lexical Semantic Change Detection supported by the Swedish Research Council (2019-2022; dnr 2018-01184), and Nationella Spr\u00e5kbanken (the Swedish National Language Bank), jointly funded by the Swedish Research Council (2018-2024; dnr 2017-00626) and its ten partner institutions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Swedishglue -towards a swedish test set for evaluating natural language understanding models", |
|
"authors": [ |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Adesam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandrs", |
|
"middle": [], |
|
"last": "Berdicevskis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Morger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yvonne Adesam, Aleksandrs Berdicevskis, and Felix Morger. 2020. Swedishglue -towards a swedish test set for evaluating natural language understand- ing models. Technical report, University of Gothen- burg.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Corpora compared: The case of the swedish gigaword & wikipedia corpora", |
|
"authors": [ |
|
{ |
|
"first": "Foteini", |
|
"middle": [], |
|
"last": "Tosin P Adewumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Liwicki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Liwicki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2011.03281" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tosin P Adewumi, Foteini Liwicki, and Marcus Li- wicki. 2020a. Corpora compared: The case of the swedish gigaword & wikipedia corpora. arXiv preprint arXiv:2011.03281.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Exploring Swedish & English fasttext embeddings with the transformer", |
|
"authors": [ |
|
{ |
|
"first": "Foteini", |
|
"middle": [], |
|
"last": "Tosin P Adewumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Liwicki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Liwicki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.16007" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tosin P Adewumi, Foteini Liwicki, and Marcus Li- wicki. 2020b. Exploring Swedish & English fasttext embeddings with the transformer. arXiv preprint arXiv:2007.16007.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Alfonseca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Kravalov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalov\u00e1, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In Proceed- ings of NAACL-HLT, pages 19-27.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Saldo: a touch of yin to wordnet's yang. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Borin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Forsberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lennart", |
|
"middle": [], |
|
"last": "L\u00f6nngren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "1191--1211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Borin, Markus Forsberg, and Lennart L\u00f6nngren. 2013. Saldo: a touch of yin to wordnet's yang. Lan- guage resources and evaluation, 47(4):1191-1211.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Distributional semantics in technicolor", |
|
"authors": [ |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gemma", |
|
"middle": [], |
|
"last": "Boleda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam-Khanh", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "136--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 136-145, Jeju Island, Korea. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "SemEval-2017 task 2: Multilingual and cross-lingual semantic word similarity", |
|
"authors": [ |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [ |
|
"Taher" |
|
], |
|
"last": "Pilehvar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nigel", |
|
"middle": [], |
|
"last": "Collier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--26", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S17-2002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. SemEval- 2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 15-26, Vancouver, Canada. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Time-out: Temporal referencing for robust modeling of lexical semantic change", |
|
"authors": [ |
|
{ |
|
"first": "Haim", |
|
"middle": [], |
|
"last": "Dubossarsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Hengchen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nina", |
|
"middle": [], |
|
"last": "Tahmasebi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominik", |
|
"middle": [], |
|
"last": "Schlechtweg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "457--470", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1044" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haim Dubossarsky, Simon Hengchen, Nina Tah- masebi, and Dominik Schlechtweg. 2019. Time-out: Temporal referencing for robust modeling of lexical semantic change. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 457-470, Florence, Italy. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An-lamVer: Semantic model evaluation dataset for Turkish -word similarity and relatedness", |
|
"authors": [ |
|
{ |
|
"first": "G\u00f6khan", |
|
"middle": [], |
|
"last": "Ercan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olcay", |
|
"middle": [], |
|
"last": "Taner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y\u0131ld\u0131z", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3819--3836", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G\u00f6khan Ercan and Olcay Taner Y\u0131ld\u0131z. 2018. An- lamVer: Semantic model evaluation dataset for Turkish -word similarity and relatedness. In Pro- ceedings of the 27th International Conference on Computational Linguistics, pages 3819-3836, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Towards a standard dataset of Swedish word vectors", |
|
"authors": [ |
|
{ |
|
"first": "Jesper", |
|
"middle": [], |
|
"last": "Per Fallgren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Segeblad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Sixth Swedish Language Technology Conference (SLTC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Per Fallgren, Jesper Segeblad, and Marco Kuhlmann. 2016. Towards a standard dataset of Swedish word vectors. In Sixth Swedish Language Technology Conference (SLTC), Ume\u00e5 17-18 nov 2016.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Problems with evaluation of word embeddings using word similarity tasks", |
|
"authors": [ |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2506" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30- 35, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "WordNet: An electronic lexical database", |
|
"authors": [ |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Applied Psycholinguistics", |
|
"volume": "22", |
|
"issue": "01", |
|
"pages": "131--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum. 1998. WordNet: An electronic lexical database. Christiane Fellbaum (Ed.). Cam- bridge, MA: MIT Press, 1998. Pp. 423. Applied Psycholinguistics, 22(01):131-134.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Placing search in context: The concept revisited", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Finkelstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Matias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Rivlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zach", |
|
"middle": [], |
|
"last": "Solan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gadi", |
|
"middle": [], |
|
"last": "Wolfman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eytan", |
|
"middle": [], |
|
"last": "Ruppin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 10th international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "406--414", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "41", |
|
"issue": "4", |
|
"pages": "665--695", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00237" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Lin- guistics, 41(4):665-695.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Content analysis: An introduction to its methodology", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Krippendorff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Krippendorff. 2018. Content analysis: An intro- duction to its methodology. Sage publications.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Separated by an un-common language: Towards judgment language informed vector space modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ira", |
|
"middle": [], |
|
"last": "Leviant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.00106" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. arXiv preprint arXiv:1508.00106.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Contextual correlates of semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Walter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Charles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language and cognitive processes", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "1--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes, 6(1):1-28.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Software Framework for Topic Modelling with Large Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Radim\u0159eh\u016f\u0159ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sojka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The Swedish culturomics gigaword corpus: A one billion word Swedish reference dataset for NLP", |
|
"authors": [ |
|
{ |
|
"first": "Nina", |
|
"middle": [], |
|
"last": "Stian R\u00f8dven-Eide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Tahmasebi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Borin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Digital Humanities", |
|
"volume": "126", |
|
"issue": "", |
|
"pages": "8--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stian R\u00f8dven-Eide, Nina Tahmasebi, and Lars Borin. 2016. The Swedish culturomics gigaword corpus: A one billion word Swedish reference dataset for NLP. In Digital Humanities 2016., 126, pages 8- 12. Link\u00f6ping University Electronic Press.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Contextual correlates of synonymy", |
|
"authors": [ |
|
{ |
|
"first": "Herbert", |
|
"middle": [], |
|
"last": "Rubenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Goodenough", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "Communications of the ACM", |
|
"volume": "8", |
|
"issue": "10", |
|
"pages": "627--633", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Diachronic usage relatedness (DURel): A framework for the annotation of lexical semantic change", |
|
"authors": [ |
|
{ |
|
"first": "Dominik", |
|
"middle": [], |
|
"last": "Schlechtweg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Schulte Im Walde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefanie", |
|
"middle": [], |
|
"last": "Eckmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "169--174", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2027" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic usage related- ness (DURel): A framework for the annotation of lexical semantic change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 169-174, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Evaluation methods for unsupervised word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Schnabel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Igor", |
|
"middle": [], |
|
"last": "Labutov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "298--307", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1036" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 298-307, Lisbon, Portugal. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Correlation-based intrinsic evaluation of word vector representations", |
|
"authors": [ |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--115", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2520" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. 2016. Correlation-based intrinsic evaluation of word vec- tor representations. In Proceedings of the 1st Work- shop on Evaluating Vector-Space Representations for NLP, pages 111-115, Berlin, Germany. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Finnish resources for evaluating language model semantics", |
|
"authors": [ |
|
{ |
|
"first": "Viljami", |
|
"middle": [], |
|
"last": "Venekoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jouko", |
|
"middle": [], |
|
"last": "Vankka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "231--236", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viljami Venekoski and Jouko Vankka. 2017. Finnish resources for evaluating language model semantics. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 231-236, Gothen- burg, Sweden. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Multi-simlex: A largescale evaluation of multilingual and cross-lingual lexical semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edoardo", |
|
"middle": [ |
|
"Maria" |
|
], |
|
"last": "Ponti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulla", |
|
"middle": [], |
|
"last": "Petti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ira", |
|
"middle": [], |
|
"last": "Leviant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Wing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Majewska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eden", |
|
"middle": [], |
|
"last": "Bar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Malone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Poibeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--51", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/coli_a_00391" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen. 2020. Multi-simlex: A large- scale evaluation of multilingual and cross-lingual lexical semantic similarity. Computational Linguis- tics, 0(0):1-51.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yada", |
|
"middle": [], |
|
"last": "Pruksachatkun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Nangia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel R", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.00537" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Super- glue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--355", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353-355.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"8\">Word 1 Word 2 Anno 1 Anno 2 Anno 3 Anno 4 Anno 5 Average</td></tr><tr><td>flicka</td><td>barn</td><td>10</td><td>10</td><td>10</td><td>8</td><td>10</td><td>9.6</td></tr><tr><td>skola</td><td>mitten</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.2</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Example of relatedness judgments on pairs flicka-barn 'girl-child' and skola-mitten 'schoolcentre.'" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>Test set</td><td colspan=\"3\">Spearman's \u03c1 Spearman's \u03c1 Included pairs relatedness similarity</td></tr><tr><td/><td>SuperSim</td><td>0.539</td><td>0.496</td><td>1,255</td></tr><tr><td>Word2Vec</td><td>WordSim353 pairs</td><td>0.560</td><td>0.453</td><td>325</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.499</td><td>0.436</td><td>923</td></tr><tr><td/><td>SuperSim</td><td>0.550</td><td>0.528</td><td>1,297</td></tr><tr><td>fastText</td><td>WordSim353 pairs</td><td>0.547</td><td>0.477</td><td>347</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.520</td><td>0.471</td><td>942</td></tr><tr><td/><td>SuperSim</td><td>0.548</td><td>0.499</td><td>1,255</td></tr><tr><td>GloVe</td><td>WordSim353 pairs</td><td>0.546</td><td>0.435</td><td>325</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.516</td><td>0.448</td><td>923</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Evaluation of models trained on the Swedish Gigaword corpus. WordSim353 and SimLex-999 are subsets of the SuperSim. Best results for each \"test set -task\" combination are bolded." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>Test set</td><td colspan=\"3\">Spearman's \u03c1 Spearman's \u03c1 Included pairs relatedness similarity</td></tr><tr><td/><td>SuperSim</td><td>0.410</td><td>0.410</td><td>1,197</td></tr><tr><td>Word2Vec</td><td>WordSim353 pairs</td><td>0.469</td><td>0.415</td><td>315</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.352</td><td>0.337</td><td>876</td></tr><tr><td/><td>SuperSim</td><td>0.349</td><td>0.365</td><td>1,297</td></tr><tr><td>fastText</td><td>WordSim353 pairs</td><td>0.339</td><td>0.334</td><td>347</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.322</td><td>0.311</td><td>942</td></tr><tr><td/><td>SuperSim</td><td>0.467</td><td>0.440</td><td>1,197</td></tr><tr><td>GloVe</td><td>WordSim353 pairs</td><td>0.524</td><td>0.429</td><td>315</td></tr><tr><td/><td>SimLex-999 pairs</td><td>0.418</td><td>0.375</td><td>876</td></tr><tr><td colspan=\"2\">pairs included in the evaluation.</td><td/><td/><td/></tr><tr><td colspan=\"3\">To provide a partial reference point, Hill et al.</td><td/><td/></tr><tr><td colspan=\"3\">(2015) report, for Word2Vec trained on English</td><td/><td/></tr><tr><td colspan=\"3\">Wikipedia, \u03c1 scores of 0.655 on WordSim353, and</td><td/><td/></tr><tr><td>0.414 on SimLex-999.</td><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Evaluation of models trained on the Swedish Wikipedia. WordSim353 and SimLex-999 are subsets of the SuperSim. Best results for each \"test set -task\" combination are bolded." |
|
} |
|
} |
|
} |
|
} |