ACL-OCL / Base_JSON /prefixE /json /eval4nlp /2020.eval4nlp-1.12.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:32.635826Z"
},
"title": "One of these words is not like the other: a reproduction of outlier identification using non-contextual word representations",
"authors": [
{
"first": "Jesper",
"middle": [],
"last": "Brink",
"suffix": "",
"affiliation": {},
"email": "jesperbrink@post.au.dk"
},
{
"first": "Mikkel",
"middle": [],
"last": "Bak Bertelsen",
"suffix": "",
"affiliation": {},
"email": "mikkelbak@post.au.dk"
},
{
"first": "Mikkel",
"middle": [
"H\u00f8rby"
],
"last": "Schou",
"suffix": "",
"affiliation": {},
"email": "mikkelschou@post.au.dk"
},
{
"first": "Manuel",
"middle": [
"R"
],
"last": "Ciosici",
"suffix": "",
"affiliation": {},
"email": "manuelc@isi.edu"
},
{
"first": "Ira",
"middle": [],
"last": "Assent",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings are an active topic in the NLP research community. State-of-the-art neural models achieve high performance on downstream tasks, albeit at the cost of computationally expensive training. Cost aware solutions require cheaper models that still achieve good performance. We present several reproduction studies of intrinsic evaluation tasks that evaluate non-contextual word representations in multiple languages. Furthermore, we present 50-8-8, a new data set for the outlier identification task, which avoids limitations of the original data set, such as ambiguous words, infrequent words, and multi-word tokens, while increasing the number of test cases. The data set is expanded to contain semantic and syntactic tests and is multilingual (English, German, and Italian). We provide an in-depth analysis of word embedding models with a range of hyperparameters. Our analysis shows the suitability of different models and hyper-parameters for different tasks and the greater difficulty of representing German and Italian languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings are an active topic in the NLP research community. State-of-the-art neural models achieve high performance on downstream tasks, albeit at the cost of computationally expensive training. Cost aware solutions require cheaper models that still achieve good performance. We present several reproduction studies of intrinsic evaluation tasks that evaluate non-contextual word representations in multiple languages. Furthermore, we present 50-8-8, a new data set for the outlier identification task, which avoids limitations of the original data set, such as ambiguous words, infrequent words, and multi-word tokens, while increasing the number of test cases. The data set is expanded to contain semantic and syntactic tests and is multilingual (English, German, and Italian). We provide an in-depth analysis of word embedding models with a range of hyperparameters. Our analysis shows the suitability of different models and hyper-parameters for different tasks and the greater difficulty of representing German and Italian languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised word embeddings have largely replaced language-specific hand-designed representations of syntax and semantics (Mikolov et al., 2013a; Levy and Goldberg, 2014a; Devlin et al., 2019) . Models based on deep neural networks such as the BERT family (Devlin et al., 2019; Liu et al., 2019; Sanh et al., 2019) construct contextualized word vector representations. Showing state-of-the-art results in benchmarks such as GLUE (Wang et al., 2018) , they are computationally expensive for both training and inference (Devlin et al., 2019; You et al., 2020) with significant cost for the environment (Strubell et al., 2019) . In this paper, we turn our attention back to the non-contextual, less resource-hungry word representations of the word2vec family (Mikolov et al., 2013a; Levy and Goldberg, 2014a) .",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF29"
},
{
"start": 147,
"end": 172,
"text": "Levy and Goldberg, 2014a;",
"ref_id": "BIBREF22"
},
{
"start": 173,
"end": 193,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 257,
"end": 278,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 296,
"text": "Liu et al., 2019;",
"ref_id": null
},
{
"start": 297,
"end": 315,
"text": "Sanh et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 430,
"end": 449,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 519,
"end": 540,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 541,
"end": 558,
"text": "You et al., 2020)",
"ref_id": "BIBREF43"
},
{
"start": 601,
"end": 624,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 757,
"end": 780,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF29"
},
{
"start": 781,
"end": 806,
"text": "Levy and Goldberg, 2014a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We contribute reproduction studies on the quality of the non-contextual word representations using outlier identification (Camacho-Collados and Navigli, 2016) and the classic word analogy task (Mikolov et al., 2013a) . Replicability and reproducibility have gained increasing importance in the NLP community: focus on the publication of code and data with papers, special sections in leading journals (Branco et al., 2017) , and dedicated shared tasks (Branco et al., 2020) . Unfortunately, there exist opposing definitions of the terms reproduction and replication (e.g., Branco et al. (2017) and Chris (2009) ), while others propose a spectrum of reproducibility (Peng, 2011). While we aim to reproduce the experiments in our target papers closely, we go beyond a straight-forward reproduction and address further questions such as effect of hyper-parameters, linear contexts (CBOW vs. skipgram) , and non-linear dependency-based contexts (word2vecf).",
"cite_spans": [
{
"start": 122,
"end": 158,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 193,
"end": 216,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 401,
"end": 422,
"text": "(Branco et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 452,
"end": 473,
"text": "(Branco et al., 2020)",
"ref_id": null
},
{
"start": 573,
"end": 593,
"text": "Branco et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 598,
"end": 610,
"text": "Chris (2009)",
"ref_id": "BIBREF13"
},
{
"start": 878,
"end": 897,
"text": "(CBOW vs. skipgram)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also propose 50-8-8, an alternative to the 8-8-8 outlier identification data set (Camacho-Collados and Navigli, 2016) that is several times larger, includes both semantic and syntactic evaluations, and addresses result variance issues that affect the original 8-8-8 data set. Finally, our 50-8-8 data set is multilingual, covering English (EN), German (DE), and Italian (IT). The three languages are challenging for word representations due to their large vocabulary, heavy reliance on word compounding (DE), and complex grammar and sentence structure (DE and IT) .",
"cite_spans": [
{
"start": 84,
"end": 120,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 555,
"end": 566,
"text": "(DE and IT)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our paper, we contribute:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Reproduction studies of outlier identification and word analogy (Camacho-Collados and Navigli, 2016; K\u00f6per et al., 2015; Berardi et al., 2015; Mikolov et al., 2013a ) through which we find that most evaluations are reproducible, although some, notably outlier identification, only after taking variance into account.",
"cite_spans": [
{
"start": 66,
"end": 102,
"text": "(Camacho-Collados and Navigli, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 103,
"end": 122,
"text": "K\u00f6per et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 123,
"end": 144,
"text": "Berardi et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 166,
"text": "Mikolov et al., 2013a",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 50-8-8, an improved outlier identification data set that addresses issues with the 8-8-8 data set used in the original outlier identification paper. 50-8-8 is multiple times larger than 8-8-8, multilingual (English, German, and Italian), excludes polysemous and rare words, and contains both semantic and syntactic tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Comparative study and analysis of CBOW, skip-gram, word2vecf, and word2vecf without relation-suffixes, on multiple corpora and languages (English, German, and Italian) , for multiple hyper-parameters, on outlier identification and analogy reasoning tasks (both semantic and syntactic). All results are based upon multiple instances of the models and quantify variation in results.",
"cite_spans": [
{
"start": 139,
"end": 169,
"text": "(English, German, and Italian)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contextualized neural word embeddings (Devlin et al., 2019; Liu et al., 2019) show impressive performance in downstream NLP tasks, at the cost of training time; pre-training of the base version of BERT took four days using 16 TPU chips (Devlin et al., 2019) . Efforts to reduce the training time still require significant computing power on dedicated hardware (You et al., 2020) , with high environmental cost (Strubell et al., 2019) . Some reduction of memory usage (Sanh et al., 2019) or of training time and memory usage (Lan et al., 2020) still does not eliminate the high resource consumption. As such, less computationally expensive models, such as word2vec (Mikolov et al., 2013a) , word2vecf (Levy and Goldberg, 2014a) , FastText (Bojanowski et al., 2017) , and GloVe (Pennington et al., 2014) , are attractive when showing good performance on NLP tasks. Computationally cheaper models, like word2vec, have some of the same evaluation drawbacks as their more complicated and expensive counterparts: there is no generally agreed upon evaluation. Ghannay et al. (2016) compare word2vec and word2vecf on attributional similarity, extended by Li et al. (2017) for combinations of context representations and context types for CBOW, skipgram, and GloVe. But, Faruqui et al. (2016) and Batchkarov et al. (2016) note that attributional similarity is subjective, lacks statistical significance, and has a low correlation with extrinsic evaluation, making it inconsistent and not necessarily indicative of model properties. However, Schnabel et al. (2015) argue that different extrinsic evaluation tasks prefer different embeddings, suggesting that extrinsic tasks might not be indicators of general embedding quality either.",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 60,
"end": 77,
"text": "Liu et al., 2019)",
"ref_id": null
},
{
"start": 236,
"end": 257,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 360,
"end": 378,
"text": "(You et al., 2020)",
"ref_id": "BIBREF43"
},
{
"start": 410,
"end": 433,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 467,
"end": 486,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 524,
"end": 542,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 664,
"end": 687,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 700,
"end": 726,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF22"
},
{
"start": 738,
"end": 763,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 776,
"end": 801,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1053,
"end": 1074,
"text": "Ghannay et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1147,
"end": 1163,
"text": "Li et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 1262,
"end": 1283,
"text": "Faruqui et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 1288,
"end": 1312,
"text": "Batchkarov et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 1532,
"end": 1554,
"text": "Schnabel et al. (2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The outlier identification task (Camacho-Collados and Navigli, 2016) avoids subjective similarity measurements. Instead, it employs relative word vector similarity to identify an outlier from a group of otherwise semantically related words. Blair et al. (2017) expanded the outlier identification data set algorithmically based on Wikidata. However, the automatic approach has several limitations, including ambiguous, infrequent, or duplicate words in the same category, and word variants in the same category, likely due to hierarchy inconsistencies in Wikidata (Brasileiro et al., 2016) . In this paper, we return to manually curated data sets with controlled quality and difficulty.",
"cite_spans": [
{
"start": 32,
"end": 68,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 241,
"end": 260,
"text": "Blair et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 564,
"end": 589,
"text": "(Brasileiro et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In light of recent revelations into the instability of word2vec (Antoniak and Mimno, 2018) , we reproduce several word vector evaluations. We find that the original 8-8-8 data set used in the outlier identification evaluation leads to high results variance. We address this issue by proposing an expanded evaluation data set we call 50-8-8. Both the original outlier identification (Camacho-Collados and Navigli, 2016) and word similarity publications (Ghannay et al., 2016; Li et al., 2017) do not fully explore the effects of hyper-parameters and randomness. We systematically evaluate models and hyper-parameters on ten training runs and measure average performance and variance.",
"cite_spans": [
{
"start": 64,
"end": 90,
"text": "(Antoniak and Mimno, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 382,
"end": 418,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 452,
"end": 474,
"text": "(Ghannay et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 475,
"end": 491,
"text": "Li et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, most evaluations of word2vec embeddings focus on English, with notable exceptions (K\u00f6per et al., 2015; Berardi et al., 2015; Svoboda and Brychc\u00edn, 2018; Venekoski and Vankka, 2017; Rodrigues et al., 2016; Chen et al., 2015; Grave et al., 2018) . However, these are translations of word similarity tasks and share the weaknesses of their English language counterparts. We reproduce the evaluation of core word analogy evaluations of K\u00f6per et al. (2015) and Berardi et al. (2015) and expand them by comparing word2vec to its dependency-based counterpart, word2vecf. We use the word analogy task from Mikolov et al. (2013a) to give a reference point for model performance and ease comparison with other research, even though the pitfalls from the similarity tasks also apply to this task (Faruqui et al., 2016; Batchkarov et al., 2016) . To supplement the evaluations on non-English languages, we manually translate our new 50-8-8 data set into German and Italian and thus provide a multilingual outlier identification data set and evaluation.",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(K\u00f6per et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 112,
"end": 133,
"text": "Berardi et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 134,
"end": 161,
"text": "Svoboda and Brychc\u00edn, 2018;",
"ref_id": "BIBREF39"
},
{
"start": 162,
"end": 189,
"text": "Venekoski and Vankka, 2017;",
"ref_id": "BIBREF41"
},
{
"start": 190,
"end": 213,
"text": "Rodrigues et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 214,
"end": 232,
"text": "Chen et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 233,
"end": 252,
"text": "Grave et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 441,
"end": 460,
"text": "K\u00f6per et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 465,
"end": 486,
"text": "Berardi et al. (2015)",
"ref_id": "BIBREF4"
},
{
"start": 607,
"end": 629,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
},
{
"start": 794,
"end": 816,
"text": "(Faruqui et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 817,
"end": 841,
"text": "Batchkarov et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we introduce the intrinsic tasks and data sets we use for evaluation. Furthermore, we summarize previous data sets' limitations and introduce a new data set for the outlier identification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "3"
},
{
"text": "Evaluations of word similarity rely on a similarity score of words. Therefore it is difficult (if not impossible) to obtain a gold standard as people cannot agree on similarity scores between words (e.g., Which is more like a cat? a tiger or a lion?). On the other hand, outlier identification aims to identify an outlier in a set of similar words. The outlier is the word with the lowest average cosine similarity to the rest of the set. This formulation makes constructing a gold standard more straightforward as the attribution of specific similarity scores is avoided (Camacho-Collados and Navigli, 2016) . Even though word embeddings cannot answer questions involving subtle similarity, they can represent outliers as sufficiently distinctive from a group of words that share some similarities (the inliers).",
"cite_spans": [
{
"start": 572,
"end": 608,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outlier identification",
"sec_num": "3.1"
},
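The selection rule just described (the outlier is the word with the lowest average cosine similarity to the rest of the set) can be sketched minimally as follows; the toy 2-D vectors are illustrative assumptions, not trained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def find_outlier(word_set, vectors):
    """Predict the outlier: the word with the lowest average cosine
    similarity to the other words in the set."""
    def avg_sim(w):
        others = [x for x in word_set if x != w]
        return sum(cosine(vectors[w], vectors[o]) for o in others) / len(others)
    return min(word_set, key=avg_sim)

# Toy vectors: three clustered "animal" words plus one distant "vehicle" word.
vecs = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "wolf": [0.85, 0.15], "car": [0.1, 0.9]}
print(find_outlier(list(vecs), vecs))  # -> car
```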
{
"text": "We use two performance measures for evaluation: Accuracy (Acc) and Outlier Position Percentage (OPP). Accuracy is the ratio of correctly identified outliers to the total number of test cases and provides a strict, narrow-focused measure of performance. OPP indicates how close the outliers are to being correctly classified. OPP is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures for outlier identification",
"sec_num": "3.1.1"
},
{
"text": "OPP = (1/|D|) \u00b7 \u03a3_{W \u2208 D} [OP(W) / (|W| - 1)] \u00b7 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures for outlier identification",
"sec_num": "3.1.1"
},
{
"text": "W is a word set (8 inliers and one outlier), and D is a data set consisting of |D| such sets of words. Outlier Position (OP) is the position of the outlier in the list of words ordered by the average cosine similarity to the other words in the set. The positions range from 0 to |W| - 1, where an OP equal to |W| - 1 indicates a correct classification of the outlier, and a lower OP indicates the computed position of the outlier in the sorted list. The lower the OP, the worse the system does at identifying the outlier. While accuracy takes a black-and-white approach to measuring performance, OPP accounts for differences in the words' rankings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures for outlier identification",
"sec_num": "3.1.1"
},
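The OP/OPP computation under these definitions can be sketched as follows; the cosine helper and the toy 2-D vectors are illustrative assumptions, not the paper's embeddings or evaluation script:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def outlier_position(word_set, true_outlier, vectors):
    """OP: rank of the true outlier when the |W| words are sorted by
    descending average cosine similarity to the rest of the set.
    OP == |W| - 1 means the outlier was correctly identified (ranked last)."""
    def avg_sim(w):
        others = [x for x in word_set if x != w]
        return sum(cosine(vectors[w], vectors[o]) for o in others) / len(others)
    ranked = sorted(word_set, key=avg_sim, reverse=True)
    return ranked.index(true_outlier)

def opp(dataset, vectors):
    """OPP: average of OP(W) / (|W| - 1) over all test cases, times 100."""
    total = sum(outlier_position(ws, out, vectors) / (len(ws) - 1)
                for ws, out in dataset)
    return total / len(dataset) * 100

# Toy test case: three clustered "animal" vectors and one distant vector.
vecs = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "wolf": [0.85, 0.15], "car": [0.1, 0.9]}
dataset = [(["cat", "dog", "wolf", "car"], "car")]
print(opp(dataset, vecs))  # car is ranked last (OP = 3), so OPP = 100.0
```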
{
"text": "For our experiments, we modify the original evaluation script of Camacho-Collados and Navigli to address a bug. In the script, vectors are set to the zero vector for Out-Of-Vocabulary (OOV) words, resulting in an undeserved successful outlier identification. In our experiments, we instead mark such test cases as unsuccessful. Accordingly, OOV words decrease performance scores instead of increasing them. We describe the error and our fix in Appendix A and share our fixed script with our 50-8-8 data set 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures for outlier identification",
"sec_num": "3.1.1"
},
{
"text": "Camacho-Collados and Navigli (2016) provide a manually curated 8-8-8 data set with their task; namely, 8 test groups of 8 semantically related inliers and 8 alternatives for non-related outliers, resulting in 64 test cases. The data set, however, has some limitations. First of all, its low number of test cases results in a significant change in accuracy for each misclassification. The low number also results in limited coverage of concepts in a vector space, which may not represent the semantic information encoded. Secondly, it contains ambiguous words. For example, Smart (used in the German car manufacturers test group) can denote both the car manufacturer and an unrelated adjective. Because the adjective might be more common in a corpus, it will have a higher influence on the resulting vector and might lead to its corresponding word being classified as an outlier. We claim that selecting the word \"Smart\" as an outlier when the adjective is prevalent is, in fact, the correct behavior. However, since this goes against the intention of the data set design (and the ground-truth labels), we consider such ambiguous words a drawback. Thirdly, multi-token words are handled by taking the average vector of all constituting tokens, which is problematic. The concept denoted by a multi-token word does not necessarily have connections to the meaning (i.e., vector) of the tokens that comprise it. 1 Finally, some words in the data set have a very low frequency in the corpora used for training in the original paper. 2 Low-frequency terms tend to have unstable word vectors, which can lead to high variance in evaluation using the 8-8-8 data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "WikiSem500 (Blair et al., 2017) is an automatically generated extension of 8-8-8. By treating Wikidata as a graph such that semantic word similarities are distances in the graph, the authors of WikiSem500 automatically construct 500 test groups and 2 816 test cases. However, WikiSem500 has severe limitations. First of all, many inlier sets have a vague semantic connection that makes outliers difficult to identify (even for humans), which may be caused by Wikidata not always following structural rules from multilevel model theory (Brasileiro et al., 2016) . Wikidata's crowd-sourced nature causes many hierarchies spanning more than one classification level to follow known anti-patterns such as items that are simultaneously instances and subclasses of other items; items that are subclasses of several items, with one of the superclasses an instance of the other, and lastly, items representing instances of several items, with one of those also an instance of the other (Brasileiro et al., 2016) . Such inconsistencies in the graph are reflected in some of the test groups in WikiSem500. Take, for example, test group Q197, which consists of instances of airplanes. The inliers include various specific combat aircraft models (e.g., B-29_Superfortress and F/A-18_Hornet) and also the terms glider and fighter_aircraft, which should be subclasses rather than instances of airplanes and should therefore not be inliers. At the same time, Mitsubishi F-1 (a Japanese combat aircraft) is an outlier, although it should be an instance of an airplane, and therefore an inlier.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Blair et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 535,
"end": 560,
"text": "(Brasileiro et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 978,
"end": 1003,
"text": "(Brasileiro et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "Other problems include ambiguous words; the same outlier appearing several times in the same test group (thus overly impacting evaluation results); the same words with different spellings in the same test group; infrequent words; and inconsistency between reusing the same words or introducing new ones in the same test group across different languages 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "1 E.g., Mercedes is a popular female name in latin-language countries, not related to cars like Mercedes Benz. 2 E.g., Nestl\u00e9, Thaddaeus, and Alpina have a frequency of 17, 24, and 27, respectively, in UMBC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "3 E.g., Q9143, Q341, Q16970, Q23691, and Q349, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "To overcome the above issues with 8-8-8 and WikiSem500, we propose 50-8-8 4 , a manually curated data set comprising two sections: 25-8-8-Sem and 25-8-8-Syn. We select unambiguous single-token 5 words with a minimum frequency of 350 in each training corpus (details in Section 4.2). We determine word ambiguity using dictionaries and native speakers. Our outliers have different degrees of connectedness to the inliers, for different levels of test complexity: the further down the list of outliers, the weaker the connection to the inliers, and the more evident the outlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "For example, in the test group Greek Gods, the first two outliers are Cupid (Roman god of love) and Odysseus (Greek legendary king), which could be misclassified by someone with little domain knowledge. The following are Jesus, Sparta, Delphi, and Rome, all of which have only a weak connection to the inliers. The last two outliers are wrath and Atlanta, with no connection to the inliers. 25-8-8-Sem contains 25 test groups, each comprising eight inliers and eight alternatives for outliers, resulting in 200 unique test cases, a more than 3-fold increase in size over the original 8-8-8 data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "Please note that in preliminary experiments, we found that random selection of outliers produces trivial test cases, with all models scoring above 97.05 in accuracy and 99.15 in OPP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "The second part of our 50-8-8 data set, 25-8-8-Syn, consists of 25 syntactic test groups, as defined by part-of-speech (PoS) tags. We choose words with a unique PoS tag in dictionaries to avoid syntactic ambiguity 6 . Furthermore, we ensure that the words in each test case share no semantic connection, so that evaluation can focus exclusively on distinction by syntactic role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "The two distinct subsets of 50-8-8 improve the outlier identification task by allowing for evaluations that target semantics and syntax, the two core aspects that word vectors encode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "In addition to English, we also look at German (another West Germanic language) and Italian (a Romance language), which both employ a more complex grammatical structure than English, and use declension to mark gender and plurality. German also relies heavily on compound words and grammatical cases. We manually translate our 50-8-8 data set using dictionaries and native speakers. We address translation and language-specific challenges as follows. First of all, words that are unambiguous in one language can be ambiguous in another. We address semantic ambiguity by replacing ambiguous words in any language with words that are unambiguous in all languages, and syntactic ambiguity by replacing the ambiguous word with one belonging to the same PoS tag. Syntactic ambiguity is language-specific, e.g., when translating adverbs to German, as the suffixes -ly and -mente often distinguish adjectives from adverbs in English and Italian, respectively, but German can use the same lexical form for both 6 . In Italian, many adjectives are also nouns, and many nouns are also conjugations of verbs, which are not as prevalent in German and English. Secondly, when a word translates to two synonymous words, we use the most common, as determined by native speakers.",
"cite_spans": [
{
"start": 1002,
"end": 1003,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "Furthermore, for nouns in German, we use the nominative case of the nouns to avoid the effects of different grammatical cases. For adjectives in Italian, we use the masculine gender where applicable to avoid the effects of gender. Removing syntactic variation allows the semantic tests to stay focused on semantics. Thus, all the versions of 25-8-8-Sem are identical, all versions of 25-8-8-Syn have an identical distribution of PoS tags within a given test group, and we use consistent and frequent variants of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets for outlier identification",
"sec_num": "3.1.2"
},
{
"text": "Our study's second task is the word analogy task, which measures how well a model captures the relational similarity between pairs of words. A high degree of relational similarity between the pairs means that the words are analogous (Mikolov et al., 2013c; Turney, 2006) . It includes questions like Berlin is to Germany as what is to France? where the model should return Paris. Word analogy also has separation into semantic and syntactic tests. As we note in Section 2, there is heavy criticism of this task (Faruqui et al., 2016) . We include it for easy comparison with existing work and to contextualize the outlier identification results.",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Mikolov et al., 2013c;",
"ref_id": "BIBREF31"
},
{
"start": 257,
"end": 270,
"text": "Turney, 2006)",
"ref_id": "BIBREF40"
},
{
"start": 511,
"end": 533,
"text": "(Faruqui et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word analogy task",
"sec_num": "3.2"
},
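The vector-offset method behind this task (Berlin is to Germany as Paris is to France) can be sketched as follows; the hand-built toy vectors, in which each capital is its country plus a fixed offset, are illustrative assumptions rather than trained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def analogy(a, b, a2, vectors):
    """Answer 'a is to b as a2 is to ?' by ranking all words (except the
    three query words) by cosine similarity to vec(b) - vec(a) + vec(a2)."""
    target = [vb - va + va2 for va, vb, va2 in zip(vectors[a], vectors[b], vectors[a2])]
    candidates = [w for w in vectors if w not in (a, b, a2)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

# Toy vectors: each capital = its country + the offset [0, 1].
vecs = {
    "germany": [1.0, 0.0], "berlin": [1.0, 1.0],
    "france":  [2.0, 0.0], "paris":  [2.0, 1.0],
    "italy":   [3.0, 0.0], "rome":   [3.0, 1.0],
}
print(analogy("germany", "berlin", "france", vecs))  # -> paris
```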
{
"text": "We use the analogy data set of Mikolov et al. (2013a) and its German translation (K\u00f6per et al., 2015) . We use an Italian translation of the analogy data set (Berardi et al., 2015) , with 19 791 test cases, making small changes to the data set to keep all words as single token words. Please note that the word analogy data set is not balanced. Size varies by category, causing some relations to be over-represented, e.g., two of the semantic categories evaluate knowledge about countries and corresponding capitals and represent more than half of the total semantic tests (Gladkova et al., 2016) .",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
},
{
"start": 81,
"end": 101,
"text": "(K\u00f6per et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 158,
"end": 180,
"text": "(Berardi et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 573,
"end": 596,
"text": "(Gladkova et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data set for word analogy task",
"sec_num": "3.2.1"
},
{
"text": "This section introduces the word embedding models and the training corpora we use for the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and corpora",
"sec_num": "4"
},
{
"text": "Word2vec consists of two types of models: CBOW (continuous-bag-of-words) and skip-gram (Mikolov et al., 2013a,b) . Both models use a linear context, consisting of the n words before and n words after the current word.",
"cite_spans": [
{
"start": 87,
"end": 112,
"text": "(Mikolov et al., 2013a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "Word2vecf (Levy and Goldberg, 2014a) replaces the linear context with one based on words directly connected via the dependency graph of the sentence. Thus, word2vecf eliminates the window size hyper-parameter of word2vec, increases the pool of available context tokens up to the sentence boundaries, and focuses context words selection by eliminating irrelevant words. The example Australian scientist discovers star with a telescope from the original paper can help understand the difference in context. For the word discovers and a window size of 2, word2vec would consider the words Australian, scientist, star, and with to be part of the context. There is nothing inherently Australian about discovering; hence, this word and with provide noise to the context of discovers. Word2vecf, instead, includes scientist_nsubj, star_obj, and telescope_prepwith into the context. Thus, word2vecf both removes noisy words (australian, with) and includes relevant terms (telescope) into the context.",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "On the downside, word2vecf requires the corpus to be dependency parsed using a dependency parser, introducing some noise (Chen and Manning, 2014) . Word2vecf suffixes the dependency relation to each word in the context, which massively increases vocabulary size up to |V | \u2022 |D| where |V | denotes the vocabulary size and |D| denotes the number of relation types supported by the dependency parser. The massive vocabulary increase leads to lower frequency counts and can result in instability in vectors' values. Furthermore, the word vectors are trained on the auxiliary words with the relation as suffix instead of training word vectors directly on each other, and as such, words with dependency relations suffixed act as barriers to information flow between context words and target words.",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "Word2vecf+ addresses the limitations of word2vecf, more specifically the inclusion of dependency relations as word suffixes; thus, the vocabulary size does not increase. Word2vec+ maintains the vocabulary size fixed by removing the suffix from the word before training, thereby training words directly on each other and discarding the auxiliary words (Li et al., 2017) . For example, the word scientist_nsubj from above becomes scientist. While the original paper calls this method generalized skip-gram with unbound dependencybased context, for readability, we refer to it as word2vecf+.",
"cite_spans": [
{
"start": 351,
"end": 368,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
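{
"text": "The difference between the two context types can be sketched with a hypothetical parse of the example sentence. The (dependent, relation, head) triples and the relation names below are illustrative assumptions, not the actual output of the parser used in the experiments.

```python
# Hypothetical dependency triples for
# 'Australian scientist discovers star with telescope'.
PARSE = [
    ('australian', 'amod', 'scientist'),
    ('scientist', 'nsubj', 'discovers'),
    ('star', 'dobj', 'discovers'),
    ('telescope', 'prep_with', 'discovers'),
]

# Contexts of `word`: word2vecf keeps the relation suffix, while
# word2vecf+ (keep_suffix=False) strips it, so the vocabulary stays fixed.
def dependency_contexts(word, parse, keep_suffix=True):
    contexts = []
    for dep, rel, head in parse:
        if head == word:  # `word` governs `dep` via relation `rel`
            contexts.append(f'{dep}_{rel}' if keep_suffix else dep)
        elif dep == word:  # inverse relation, marked with a trailing 'I'
            contexts.append(f'{head}_{rel}I' if keep_suffix else head)
    return contexts

print(dependency_contexts('discovers', PARSE))
# ['scientist_nsubj', 'star_dobj', 'telescope_prep_with']
print(dependency_contexts('discovers', PARSE, keep_suffix=False))
# ['scientist', 'star', 'telescope']
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},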
{
"text": "We use multiple corpora to derive word vectors for our evaluations (see summary in Table 1 ). For English, we use the UMBC web-based corpus (Han et al., 2013) and the September 2019 dump of English Wikipedia. The choice of corpora aims to reproduce the experiments in the original outlier identification work (Camacho-Collados and Navigli, 2016) . The newer version of Wikipedia is a super-set of the one used in the original experiments. As in the original paper, the use of two English corpora should eliminate questions of corpus-specific results.",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Han et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 309,
"end": 345,
"text": "(Camacho-Collados and Navigli, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training Corpora",
"sec_num": "4.2"
},
{
"text": "For German, we derive vectors from the January 2020 version of Wikipedia; for Italian from the April 2020 version of Wikipedia. The three ver-sions of Wikipedia have widely different sizes. The largest (Wiki EN) is almost five times bigger than the smallest (Wiki IT). However, even the smallest has over 500 million tokens for a vocabulary of less than one million word types (average word type frequency of 670). The smallest corpus (Wiki IT) has a larger average word type frequency (670) than the second smallest, Wiki DE (average word type frequency 416). Such large corpora, combined with repetitions of training and evaluation cycles, provide a good overview of model performance and avoid the of word2vec (Antoniak and Mimno, 2018) .",
"cite_spans": [
{
"start": 713,
"end": 739,
"text": "(Antoniak and Mimno, 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpora",
"sec_num": "4.2"
},
{
"text": "We use WikiExtractor (Attardi, 2018) to extract plain text from the Wikipedia corpora, and tokenize all corpora using Stanford CoreNLP v3.9.2 . We remove words that appear less than five times using the original word2vec code (Mikolov et al., 2013a) or word2vecf (Levy and Goldberg, 2014a) , as appropriate. We dependency parse using the Stanford neural-network dependency parser for models that require dependency relations (word2vecf, word2vecf+) (Chen and Manning, 2014) . For dependency parsing Italian, we use the model trained by Palmero Aprosio and Moretti (2016) .",
"cite_spans": [
{
"start": 21,
"end": 36,
"text": "(Attardi, 2018)",
"ref_id": null
},
{
"start": 226,
"end": 249,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 263,
"end": 289,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF22"
},
{
"start": 449,
"end": 473,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 544,
"end": 570,
"text": "Aprosio and Moretti (2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpora",
"sec_num": "4.2"
},
{
"text": "This section presents experimental results, focusing on the reproduction, new data set, window size, different corpora, and languages. In our tables and figures, we denote the different approaches as follows: CBOW, SG (skip-gram), W2VF (word2vecf), W2VF+ (word2vecf+); each followed by the size of the window used. We include a detailed description of the experimental setup in Appendix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In Table 2 , we compare our reproduction results with those of Camacho-Collados and Navigli (2016) . We observe a high variance in accuracy, which illustrates the small 8-8-8 data set's weakness and further underlines the importance of evaluating multiple training runs. We conclude that the original outlier identification results can be reproduced, but with the caveat that accuracy can suffer from large variance. In Section 5.2, we propose 50-8-8, a data set that alleviates this issue. word analogy task 7, 8 . For English, we see the same pattern as (Mikolov et al., 2013a) , where skip-gram outperforms CBOW, even though our corpora and hyper-parameters differ. Comparing the German Wikipedia results to those of (K\u00f6per et al., 2015) , we see a similar pattern in the semantic part, where skip-gram outperforms CBOW. However, in the syntactic part, our results differ. K\u00f6per et al. observe that CBOW outperforms skip-gram, whereas we observe the opposite, which could be due to the difference in corpora and hyper-parameters such as vector dimensionality. Due to the different focus of this paper and that of Berardi et al. (2015) , we can only compare skipgram results with window size 10. We observe a similar semantic performance, but a significant difference in syntactic performance where Berardi et al. observe a score of 32.62 compared to our result of 44.63, which could be the result of the difference in the number of negative samples (we use 15, they use 10) and the different Wikipedia version. However, as they do not cover the CBOW model, it is difficult to get an overview of model performance.",
"cite_spans": [
{
"start": 63,
"end": 98,
"text": "Camacho-Collados and Navigli (2016)",
"ref_id": "BIBREF10"
},
{
"start": 509,
"end": 511,
"text": "7,",
"ref_id": null
},
{
"start": 512,
"end": 513,
"text": "8",
"ref_id": null
},
{
"start": 556,
"end": 579,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 720,
"end": 740,
"text": "(K\u00f6per et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1116,
"end": 1137,
"text": "Berardi et al. (2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Reproduction results",
"sec_num": "5.1"
},
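{
"text": "The outlier identification scoring referenced throughout (accuracy and OPP) can be sketched as follows. This is a simplified reformulation under our own assumptions, not the original implementation: each candidate is scored by how compact (mutually similar) the remaining words are when it is removed, the predicted outlier is the candidate whose removal leaves the most compact remainder, and the outlier position percentage (OPP) is rescaled here so that 100 means the true outlier is ranked first. The vectors are made-up toy data.

```python
import numpy as np
from itertools import combinations

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank candidates, most likely outlier first: removing the true outlier
# should leave the most compact (mutually similar) remainder.
def rank_outliers(words, vectors):
    def remainder_compactness(w):
        rest = [x for x in words if x != w]
        sims = [cosine(vectors[a], vectors[b]) for a, b in combinations(rest, 2)]
        return sum(sims) / len(sims)
    return sorted(words, key=remainder_compactness, reverse=True)

# Simplified OPP: 100 if the outlier is ranked first, 0 if ranked last.
def opp(ranking, outlier):
    pos = ranking.index(outlier)
    return 100.0 * (len(ranking) - 1 - pos) / (len(ranking) - 1)

# Toy cluster of three animals plus one outlier (made-up 2-D vectors).
vecs = {
    'cat': np.array([1.0, 0.9]),
    'dog': np.array([0.9, 1.0]),
    'mouse': np.array([1.0, 1.0]),
    'car': np.array([-1.0, 0.1]),
}
ranking = rank_outliers(list(vecs), vecs)
print(ranking[0], opp(ranking, 'car'))  # car 100.0
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reproduction results",
"sec_num": "5.1"
},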
{
"text": "The results of outlier identification using our proposed 50-8-8 are in Table 4 . As expected, given the more comprehensive tests, on both UMBC and English Wikipedia, we see significantly lower accuracy variance for 25-8-8-Sem than 8-8-8. The only exception is word2vecf, where the accuracy variance grows slightly from 0 on 8-8-8 up to 0.15 on 25-8-8-Sem. Although word2vecf accuracy variance on 8-8-8 is 0, the ten instances do differ in their answers, as can be observed in the OPP variance in Table 2 . Except for a few individual cases, the variance on 25-8-8-Syn is also low. The performance of the best models on 25-8-8-Syn usually matches that on 25-8-8-Sem, suggesting that the two subsets of 50-8-8 are balanced in terms of difficulty. The best performing models on 25-8-8-Syn is CBOW 2 (except for Italian). Table 4 shows that window size has a limited impact on OPP for semantic tests (25-8-8-Sem), but affects the results on syntactic tests (25-8-8-Syn), where skip-gram performs best with low window size across all corpora. For the word analogy task (Table 3) , the opposite is true for the semantic evaluation, where larger window sizes have improved performance. These results align with Bansal et al. (2014) , who observe that larger window sizes result in more semantic information, while smaller lead to more syntactic.",
"cite_spans": [
{
"start": 1204,
"end": 1224,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 4",
"ref_id": null
},
{
"start": 496,
"end": 503,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 818,
"end": 825,
"text": "Table 4",
"ref_id": null
},
{
"start": 1064,
"end": 1073,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The effect of the new 50-8-8 data set",
"sec_num": "5.2"
},
{
"text": "The same pattern can be observed on syntactic German Wikipedia and syntactic UMBC when taking variance into account. Bansal et al. observe that CBOW and skip-gram with lower window size perform better on syntactic tests, and larger window size performs better on semantic tests. However, our results show that window size performance varies with the task. These two tasks' preferred window sizes indicate that lower window sizes better capture clusters with semantically and syntactically similar words. Larger window sizes are better suited for capturing word relations. These observations also indicate that hyper-parameters can have a big influence on the performance of the models. Table 4 casts a shadow on the superiority of the word2vecf context construction strategy. Word2vecf matches or trails the best word2vec Table 4 : Outlier identification on 50-8-8 (25-8-8-Sem, 25-8-8-Syn); model name followed by window size. model on semantic tests on all corpora. However, word2vecf seems better suited to syntactic tests, where it matches or outperforms the best word2vec model on all four corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 686,
"end": 693,
"text": "Table 4",
"ref_id": null
},
{
"start": 822,
"end": 829,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of window size",
"sec_num": "5.3"
},
{
"text": "We observe the same results in the word analogy task (Table 3) . Despite the expected improvements in the contexts of word2vecf and word2vecf+, they consistently underperform the word2vec models, sometimes underperforming even the weakest of the word2vec models. This observation is consistent across all data sets on all languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 62,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of context type",
"sec_num": "5.4"
},
{
"text": "The results in Table 4 show that word2vecf+ outperforms word2vecf on semantic outlier identification across all corpora. On the syntactic subset, 25-8-8-Syn, word2vecf consistently outperforms word2vecf+ on all corpora. The consistent difference in performance between word2vecf and word2vecf+ on both the semantic and syntactic tests suggests that word2vecf might be better suited for encoding syntactic information and word2vecf+ might be better suited for encoding semantic information.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of relation-suffix",
"sec_num": "5.5"
},
{
"text": "We observe a large drop in syntactic OPP and accuracy for both word2vecf and word2vecf+ from UMBC to Wiki EN. The drop may be due to the quality of dependency relations from the Stanford CoreNLP dependency parser, which learned from the Penn Treebank, a corpus of scientific abstracts, news stories, and bulletins (Chen and Manning, 2014; Marcus et al., 1993) . Thus, Penn Treebank resembles UMBC more than English Wikipedia, which could explain the performance drop. On the word analogy task (Table 3) , word2vecf+ performs better than word2vecf. On the syntactic tests, word2vecf is comparable to CBOW, but removing the relation suffix (word2vecf+) results in scores closer to skip-gram, which is the best performing model; on the semantic tests, removing the relation suffix results in a 3-fold increase in word2vecf+ performance over word2vecf.",
"cite_spans": [
{
"start": 314,
"end": 338,
"text": "(Chen and Manning, 2014;",
"ref_id": "BIBREF11"
},
{
"start": 339,
"end": 359,
"text": "Marcus et al., 1993)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 493,
"end": 502,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of relation-suffix",
"sec_num": "5.5"
},
{
"text": "Based on these observations, we conclude that word2vecf+ is better able to capture semantic information as it avoids word2vecf's dramatic, artificial, increase in vocabulary. It allows word vectors to directly influence each other during training resulting in better semantically positioned related words in the embedding space and better capturing both syntactic and semantic similarities in word pairs. In contrast, the relational suffixes improve the clustering of syntactically related words. Table 4 shows that the models trained on German and Italian are generally less capable than those trained on the English corpora. The difference between German and English is noticeable in syntactic analogy (Table 3) . The German performance is almost half that of English across all models while Italian is better, but is still significantly lower than English. Furthermore, in the semantic part of word analogy, the performance of models trained on UMBC is closer to models trained on Wiki DE than models trained on Wiki EN. In general, Table 4 shows a drop in performance for languages other than English, in line with our expectation that German and Italian are more difficult to model.",
"cite_spans": [],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 4",
"ref_id": null
},
{
"start": 704,
"end": 713,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of relation-suffix",
"sec_num": "5.5"
},
{
"text": "We contribute several reproduction studies of the outlier identification task and the classic word analogy task, both intrinsic evaluations of noncontextual word representations. We provide an in-depth analysis of word2vec, word2vecf, and word2vecf+ on the two tasks analyzing the effects of window size, context type, and context representation on English, German, and Italian. We find that the context construction strategy of word2vecf and word2vecf+ is not always effective. Sometimes the two models underperform even the weakest of the word2vec models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our reproduction of outlier identification shows high variance, which we attribute to the original data set's limitations. To address these limitations, we propose 50-8-8, a new data set that is multiple times larger, manually curated, multilingual, and contains syntactic and semantic tests. Besides eliminating the variance issues, 50-8-8 quantifies the drop in performance in representations of languages with more complicated grammar and mor-phology than English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Mercedes Benz comprises two proper names. Mercedes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The 50-8-8 data set is available for download at https: //github.com/JesperBrink/50-8-85 Except in special cases as explained in Appendix B 6 There are minor differences in the definition of syntactic ambiguity, as explained in Appendix B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that 5% of the questions were skipped by the German models and 10% of the questions were skipped by the Italian models due to OOV words. This was also observed byBerardi et al. (2015).8 We use the 3CosAdd method for solving the task, just like(Mikolov et al., 2013a). The alternative 3CosMul improves the analogy results and is discussed in Appendix D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Davide Mottin for helping with the translation of 50-8-8 to Italian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating the stability of embedding-based word similarities",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Antoniak",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "107--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities. Transactions of the Association for Computational Linguistics, 6:107-119.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for de- pendency parsing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 809- 815, Baltimore, Maryland. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A critique of word similarity as a method for evaluating distributional semantic models",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Batchkarov",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kober",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distribu- tional semantic models. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representa- tions for NLP, pages 7-12, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word embeddings go to italy: A comparison of models and training datasets",
"authors": [
{
"first": "Giacomo",
"middle": [],
"last": "Berardi",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 6th Italian Information Retrieval Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giacomo Berardi, Andrea Esuli, and Diego Marcheg- giani. 2015. Word embeddings go to italy: A com- parison of models and training datasets. In Pro- ceedings of the 6th Italian Information Retrieval Workshop, Cagliari, Italy, May 25-26, 2015, vol- ume 1404 of CEUR Workshop Proceedings. CEUR- WS.org.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated generation of multilingual clusters for the evaluation of distributed representations",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Blair",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Merhav",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Barry",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Blair, Yuval Merhav, and Joel Barry. 2017. Au- tomated generation of multilingual clusters for the evaluation of distributed representations. In 5th International Conference on Learning Representa- tions, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Andr\u00e9 Moreira, and Willem Elbers. 2020. A Shared Task of a New, Collaborative Type to Foster Reproducibility: A First Exercise in the Area of Language Science and Technology with REPROLANG2020",
"authors": [
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Nicoletta",
"middle": [],
"last": "Calzolari",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Dieter Van Uytvanck",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gomes",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "5541--5547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ant\u00f3nio Branco, Nicoletta Calzolari, Piek Vossen, Gertjan Van Noord, Dieter van Uytvanck, Jo\u00e3o Silva, Lu\u00eds Gomes, Andr\u00e9 Moreira, and Willem El- bers. 2020. A Shared Task of a New, Collaborative Type to Foster Reproducibility: A First Exercise in the Area of Language Science and Technology with REPROLANG2020. pages 5541-5547, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Replicability and reproducibility of research results for human language technology: introducing an LRE special section",
"authors": [
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "Nicoletta",
"middle": [],
"last": "Calzolari",
"suffix": ""
}
],
"year": 2017,
"venue": "Language Resources and Evaluation",
"volume": "51",
"issue": "1",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ant\u00f3nio Branco, Kevin Bretonnel Cohen, Piek Vossen, Nancy Ide, and Nicoletta Calzolari. 2017. Replica- bility and reproducibility of research results for hu- man language technology: introducing an LRE spe- cial section. Language Resources and Evaluation, 51(1):1-5.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Applying a multi-level modeling theory to assess taxonomic hierarchies in wikidata",
"authors": [
{
"first": "Freddy",
"middle": [],
"last": "Brasileiro",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [
"Paulo",
"A."
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Victorio",
"middle": [
"A."
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Giancarlo",
"middle": [],
"last": "Guizzardi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion",
"volume": "",
"issue": "",
"pages": "975--980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Brasileiro, Jo\u00e3o Paulo A. Almeida, Victorio A. Carvalho, and Giancarlo Guizzardi. 2016. Apply- ing a multi-level modeling theory to assess taxo- nomic hierarchies in wikidata. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion, page 975-980, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Find the word that does not belong: A framework for an intrinsic evaluation of word vector representations",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho",
"suffix": ""
},
{
"first": "-Collados",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados and Roberto Navigli. 2016. Find the word that does not belong: A framework for an intrinsic evaluation of word vector representa- tions. In Proceedings of the 1st Workshop on Evalu- ating Vector-Space Representations for NLP, pages 43-50, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Joint learning of character and word embeddings",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Huan-Bo",
"middle": [],
"last": "Luan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "2015",
"issue": "",
"pages": "1236--1242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of char- acter and word embeddings. In Proceedings of the Twenty-Fourth International Joint Conference on Ar- tificial Intelligence, IJCAI 2015, Buenos Aires, Ar- gentina, July 25-31, 2015, pages 1236-1242. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Replicability is not Reproducibility: Nor is it Good Science",
"authors": [
{
"first": "Drummond",
"middle": [],
"last": "Chris",
"suffix": ""
}
],
"year": 2009,
"venue": "The 4th workshop on Evaluation Methods for Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drummond Chris. 2009. Replicability is not Repro- ducibility: Nor is it Good Science. In The 4th work- shop on Evaluation Methods for Machine Learning held at ICML 2009, Montreal, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Problems with evaluation of word embeddings using word similarity tasks",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30- 35, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word embedding evaluation and combination",
"authors": [
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Camelin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "300--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahar Ghannay, Benoit Favre, Yannick Est\u00e8ve, and Nathalie Camelin. 2016. Word embedding evalua- tion and combination. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 300-305, Por- toro\u017e, Slovenia. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Matsuoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "8--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Gladkova, Aleksandr Drozd, and Satoshi Mat- suoka. 2016. Analogy-based detection of morpho- logical and semantic relations with word embed- dings: what works and what doesn't. In Pro- ceedings of the NAACL Student Research Workshop, pages 8-15, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning Word Vectors for 157 Languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning Word Vectors for 157 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "UMBC_EBIQUITY-CORE: Semantic textual similarity systems",
"authors": [
{
"first": "Lushan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Abhay",
"middle": [
"L"
],
"last": "Kashyap",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity",
"volume": "1",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lushan Han, Abhay L. Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC_EBIQUITY-CORE: Semantic textual similarity systems. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 44-52, Atlanta, Georgia, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multilingual reliability and \"semantic\" structure of continuous word spaces",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 11th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "40--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per, Christian Scheible, and Sabine Schulte im Walde. 2015. Multilingual reliability and \"semantic\" structure of continuous word spaces. In Proceedings of the 11th International Conference on Computational Semantics, pages 40-45, London, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learn- ing of language representations. In International Conference on Learning Representations.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representa- tions. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning, pages 171-180, Ann Arbor, Michigan. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211- 225.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Investigating different syntactic context types and context representations for learning word embeddings",
"authors": [
{
"first": "Bofang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Xiaoyong",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2421--2431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bofang Li, Tao Liu, Zhe Zhao, Buzhou Tang, Alek- sandr Drozd, Anna Rogers, and Xiaoyong Du. 2017. Investigating different syntactic context types and context representations for learning word embed- dings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2421-2431, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Comput. Linguist",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Lin- guist., 19(2):313-330.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Italy goes to Stanford: a collection of CoreNLP modules for Italian",
"authors": [
{
"first": "A",
"middle": [],
"last": "Aprosio",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Moretti",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Palmero Aprosio and G. Moretti. 2016. Italy goes to Stanford: a collection of CoreNLP modules for Italian. ArXiv e-prints.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Reproducible Research in Computational Science",
"authors": [
{
"first": "Roger",
"middle": [
"D"
],
"last": "Peng",
"suffix": ""
}
],
"year": 2011,
"venue": "Science",
"volume": "334",
"issue": "6060",
"pages": "1226--1227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger D Peng. 2011. Reproducible Research in Com- putational Science. Science, 334(6060):1226-1227.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Lx-dsemvectors: Distributional semantics models for portuguese",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Neale",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Silva",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Processing of the Portuguese Language",
"volume": "",
"issue": "",
"pages": "259--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Rodrigues, Ant\u00f3nio Branco, Steven Neale, and Jo\u00e3o Silva. 2016. Lx-dsemvectors: Distributional semantics models for portuguese. In Computational Processing of the Portuguese Language, pages 259- 270, Cham. Springer International Publishing.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "5th Workshop on Energy Efficient Machine Learning and Cognitive Computing (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learn- ing and Cognitive Computing (NeurIPS).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Evaluation methods for unsupervised word embeddings",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "298--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "New word analogy corpus for exploring embeddings of czech words",
"authors": [
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Svoboda",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Brychc\u00edn",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "103--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luk\u00e1\u0161 Svoboda and Tom\u00e1\u0161 Brychc\u00edn. 2018. New word analogy corpus for exploring embeddings of czech words. In Computational Linguistics and Intelligent Text Processing, pages 103-114, Cham. Springer In- ternational Publishing.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Similarity of semantic relations",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "3",
"pages": "379--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379-416.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Finnish resources for evaluating language model semantics",
"authors": [
{
"first": "Viljami",
"middle": [],
"last": "Venekoski",
"suffix": ""
},
{
"first": "Jouko",
"middle": [],
"last": "Vankka",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "231--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viljami Venekoski and Jouko Vankka. 2017. Finnish resources for evaluating language model semantics. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 231-236, Gothen- burg, Sweden. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Large batch optimization for deep learning: Training bert in 76 minutes",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Reddi",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Hseu",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Srinadh",
"middle": [],
"last": "Bhojanapalli",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Demmel",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Keutzer",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2020. Large batch optimization for deep learning: Training bert in 76 minutes. In International Con- ference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Summary of corpora</td></tr><tr><td>tactic and 8 869 semantic test cases. For German,</td></tr><tr><td>we use a version of the analogy data set, which has</td></tr><tr><td>a total of 18 552 test cases (the adjective-adverb</td></tr><tr><td>category is missing as it does not exist in German)</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>Model</td><td>Work</td><td>UMBC OPP</td><td>UMBC Acc</td><td>Wiki OPP</td><td>Wiki Acc</td></tr><tr><td>CBOW 5 SG 10 CBOW 2 CBOW 10 SG 2 SG 5 W2VF W2VF+</td><td>Original Our Original Our Our Our Our Our Our Our</td><td>93.80 93.69 \u00b1 0.11 92.60 92.75 \u00b1 0.20 93.38 \u00b1 0.06 94.10 \u00b1 0.10 94.61 \u00b1 0.04 94.16 \u00b1 0.02 89.43 \u00b1 0.06 92.46 \u00b1 0.05</td><td>73.40 71.88 \u00b1 4.39 64.10 62.81 \u00b1 5.27 67.97 \u00b1 2.08 72.34 \u00b1 2.47 69.53 \u00b1 1.59 69.84 \u00b1 0.51 62.50 \u00b1 0.00 66.41 \u00b1 0.61</td><td>95.30 94.51 \u00b1 0.09 93.80 94.16 \u00b1 0.05 94.94 \u00b1 0.04 94.41 \u00b1 0.02 95.41 \u00b1 0.07 94.59 \u00b1 0.08 92.83 \u00b1 0.20 94.47 \u00b1 0.04</td><td>73.40 67.66 \u00b1 1.00 70.30 69.53 \u00b1 1.10 68.13 \u00b1 1.07 68.13 \u00b1 1.07 71.72 \u00b1 1.68 69.69 \u00b1 2.54 68.75 \u00b1 0.00 75.78 \u00b1 1.10</td></tr></table>",
"type_str": "table",
"html": null,
"text": "shows the results of reproducing the"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>: Outlier identification reproduction of Camacho-Collados and Navigli (2016) (10 runs, 8-8-8 data set);</td></tr><tr><td>word2vec with different window sizes, word2vecf and word2vecf+ added for easier comparison with other results.</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
}
}
}
}