{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:40.941332Z" }, "title": "Multilingual ELMo and the Effects of Corpus Sampling", "authors": [ { "first": "Vinit", "middle": [], "last": "Ravishankar", "suffix": "", "affiliation": { "laboratory": "Language Technology Group", "institution": "University of Oslo", "location": {} }, "email": "vinitr@ifi.uio.no" }, { "first": "Andrey", "middle": [], "last": "Kutuzov", "suffix": "", "affiliation": { "laboratory": "Language Technology Group", "institution": "University of Oslo", "location": {} }, "email": "" }, { "first": "Lilja", "middle": [], "last": "\u00d8vrelid", "suffix": "", "affiliation": { "laboratory": "Language Technology Group", "institution": "University of Oslo", "location": {} }, "email": "liljao@ifi.uio.no" }, { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "", "affiliation": { "laboratory": "Language Technology Group", "institution": "University of Oslo", "location": {} }, "email": "erikve@ifi.uio.no" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multilingual pretrained language models are rapidly gaining popularity in NLP systems for non-English languages. Most of these models feature an important corpus sampling step in the process of accumulating training data in different languages, to ensure that the signal from better resourced languages does not drown out poorly resourced ones. In this study, we train multiple multilingual recurrent language models, based on the ELMo architecture, and analyse both the effect of varying corpus size ratios on downstream performance, as well as the performance difference between monolingual models for each language, and broader multilingual language models. As part of this effort, we also make these trained models available for public use.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Multilingual pretrained language models are rapidly gaining popularity in NLP systems for non-English languages. Most of these models feature an important corpus sampling step in the process of accumulating training data in different languages, to ensure that the signal from better resourced languages does not drown out poorly resourced ones. In this study, we train multiple multilingual recurrent language models, based on the ELMo architecture, and analyse both the effect of varying corpus size ratios on downstream performance, as well as the performance difference between monolingual models for each language, and broader multilingual language models. As part of this effort, we also make these trained models available for public use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As part of the recent emphasis on language model pretraining, there also has been considerable focus on multilingual language model pretraining; this is distinguished from merely training language models in multiple languages by the creation of a multilingual space. These have proved to be very useful in 'zero-shot learning'; i.e., training on a wellresourced language (typically English), and relying on the encoder's multilingual space to create reasonable priors across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main motivation of this paper is to study the effect of corpus sampling strategy on downstream performance. 
Further, we also examine the utility of multilingual models (when constrained to monolingual tasks), over individual monolingual models, one per language. This paper therefore has two main contributions: the first of these is a multilingual ELMo model that we hope would see further use in probing studies as well as evaluative studies, downstream; we train these models over 13 languages, namely Arabic, Basque, Chinese, English, Finnish, Hebrew, Hindi, Italian, Japanese, Korean, Russian, Swedish and Turkish. The second contribution is an analysis of sampling mechanism on downstream performance; we elaborate on this later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2 of this paper, we contextualise our work in the present literature. Section 3 describes our experimental setup and Section 4 our results. Finally, we conclude with a discussion of our results in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multilingual embedding architectures (static or contextualised) are different from cross-lingual ones (Ruder et al., 2019; Liu et al., 2019) in that they are not products of aligning several monolingual models. Instead, a deep neural model is trained end to end on texts in multiple languages, thus making the whole process more straightforward and yielding truly multilingual representations (Pires et al., 2019) . Following Artetxe et al. (2020), we will use the term 'deep multilingual pretraining' for such approaches.", "cite_spans": [ { "start": 102, "end": 122, "text": "(Ruder et al., 2019;", "ref_id": null }, { "start": 123, "end": 140, "text": "Liu et al., 2019)", "ref_id": "BIBREF12" }, { "start": 393, "end": 413, "text": "(Pires et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "One of the early examples of deep multilingual pretraining was BERT, which featured a multilingual variant trained on the 104 largest languagespecific Wikipedias (Devlin et al., 2019) . To counter the effects of some languages having overwhelmingly larger Wikipedias than others, Devlin et al. (2019) used exponentially smoothed data weighting; i.e., they exponentiated the probability of a token being in a certain language by a certain \u03b1, and re-normalised. This has the effect of 'squashing' the distribution of languages in their training data; larger languages become smaller, to avoid drowning out the signal from smaller languages. One can also look at this technique as a sort of sampling. Other multilingual models, such as XLM (Lample and Conneau, 2019) and its larger variant, XLM-R (Conneau et al., 2020) , use different values of \u03b1 for this sampling (0.5 and 0.3 respectively). The current paper is aimed at analysing the effects of different \u03b1 choices; in spirit, this work is very similar to Arivazhagan et al. (2019) ; where it differs is our analysis on downstream tasks, as opposed to machine translation, where models are trained and evaluated on a very specific task. We also position our work as a resource, and we make our multilingual ELMo models available for public use.", "cite_spans": [ { "start": 162, "end": 183, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 280, "end": 300, "text": "Devlin et al. 
(2019)", "ref_id": "BIBREF7" }, { "start": 737, "end": 763, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF11" }, { "start": 794, "end": 816, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF5" }, { "start": 1007, "end": 1032, "text": "Arivazhagan et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "3 Experimental setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "When taken to its logical extreme, sampling essentially reduces to truncation, where all languages have the same amount of data; thus, in theory, in a truncated model, no language ought to dominate any other. Of course, for much larger models, like the 104-language BERT, this is unfeasible, as the smallest languages are too small to create meaningful models. By selecting a set of languages such that the smallest language is still reasonably sized for the language model being trained, however, we hope to experimentally determine whether truncation leads to truly neutral, equally capable multilingual spaces; if not, we attempt to answer the question of whether compression helps at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "Our encoder of choice for this analysis is an LSTM-based ELMo architecture introduced by Peters et al. (2018) . This might strike some as a curious choice of model, given the (now) much wider use of transformer-based architectures. There are several factors that make ELMo more suitable for our analysis. Our main motivation was, of course, resources -ELMo is far cheaper to train, computationally. Next, while pre-trained ELMo models already exist for several languages (Che et al., 2018 ; Ul\u010dar and Robnik-Sikonja, 2020), there is, to the best of our knowledge, no multilingual ELMo. The release of our multilingual model may therefore also prove to be useful in the domain of probing, encouraging research on multilingual encoders, constrained to recurrent encoders.", "cite_spans": [ { "start": 89, "end": 109, "text": "Peters et al. (2018)", "ref_id": "BIBREF14" }, { "start": 471, "end": 488, "text": "(Che et al., 2018", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "Our initial starting point for collecting the language model training corpora were the CoNLL 2017 Wikipedia/Common Crawl dumps released as part of the shared task on Universal Dependencies parsing (Ginter et al., 2017) ; we extracted the Wikipedia portions of these corpora for our set of 13 languages. This gives us a set of fairly typologically distinct languages, that still are not entirely poorly resourced. The smallest language in this collection, Hindi, has \u223c 91M tokens, which we deemed sufficient to train a reasonable ELMo model.", "cite_spans": [ { "start": 197, "end": 218, "text": "(Ginter et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "3.2" }, { "text": "Despite eliminating Common Crawl data, this gave us, for our set of languages, a total corpus size of approximately 35B tokens, which would be an unfeasible amount of data given computational constraints. We therefore selected a baseline model to be somewhat synthetic -note that this is a perfectly valid choice given our goals, which were to compare various sampling exponents. 
Our 'default' model, therefore, was trained on data that we obtained by weighting this 'real-world' Wikipedia data. The largest \u03b1 we could use that would still allow for feasible training was \u03b1 = 0.4 (further on, we refer to this model as M0.4); this gave us a total corpus size of \u223c4B tokens. Our second, relatively more compressed model used \u03b1 = 0.2 (further on, M0.2), giving us a total corpus size of \u223c2B tokens; for our final, most compressed model (further on, TRUNC), we merely truncated each corpus to the size of our smallest corpus (Hindi; 91M), giving us a corpus of \u223c1.2B tokens. Sampling was carried out as follows: if the probability of a token being sampled from a certain language i is p_i, the adjusted probability is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "3.2" }, { "text": "q_i = p_i^\u03b1 / \u2211_{j=1}^{N} p_j^\u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "3.2" }, { "text": ". Note that this is a similar sampling strategy to the one followed by more popular models, like mBERT. We trained an out-of-the-box ELMo encoder for approximately the same number of steps on each corpus; this was equivalent to 2 epochs for M0.4 and 3 for M0.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "3.2" }, { "text": "Detailed training hyperparameters and precise corpus sizes are presented in Appendices A and B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "3.2" }, { "text": "While there is a dizzying array of downstream evaluation tasks for monolingual models, evaluating multilingual models is a bit harder. We settled on a range of tasks in two different groups:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "1. Monolingual tasks: these tasks directly test the monolingual capabilities of the model, per language. We include PoS tagging and dependency parsing in this category. In addition to our multilingual models, we also evaluate our monolingual ELMo variants on these tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "2. Transfer tasks: these tasks involve leveraging the model's multilingual space to transfer knowledge from the language it was trained on to the language it is being evaluated on. These tasks include natural language inference and text retrieval; we also convert PoS tagging into a transfer task by training our model on English and asking it to tag text in other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "In an attempt to illuminate precisely what the contribution of the different ELMo models is, we ensure that our decoder architectures - that translate from ELMo's representations to the task's label space - are kept relatively simple, particularly for lower-level tasks. We freeze ELMo's parameters: this is not a study on fine-tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "The tasks that we select are a subset of the tasks mentioned in XTREME (Hu et al., 2020) ; i.e., the subset most suitable to the languages we trained our encoder on. 
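To make this shared setup concrete, a minimal PyTorch-style sketch of the frozen-encoder-plus-light-head pattern for PoS tagging is given below; the encoder module, dimensions and names are illustrative stand-ins, not our released code or the actual ELMo weights:

import torch
from torch import nn

class TaggingHead(nn.Module):
    # Simple MLP mapping contextual token vectors onto the PoS label space.
    def __init__(self, encoder_dim, n_labels, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(encoder_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_labels))

    def forward(self, token_reprs):
        return self.mlp(token_reprs)

# 'encoder' stands in for the pretrained (m)ELMo; its parameters are frozen.
encoder = nn.LSTM(input_size=300, hidden_size=512, batch_first=True, bidirectional=True)
for p in encoder.parameters():
    p.requires_grad_(False)

head = TaggingHead(encoder_dim=1024, n_labels=17)  # 17 UPOS tags
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head is trained

embeddings = torch.randn(8, 20, 300)  # (batch, tokens, input dim) placeholder input
token_reprs, _ = encoder(embeddings)  # (8, 20, 1024)
logits = head(token_reprs)            # (8, 20, 17)

Only the head's parameters are handed to the optimiser; the encoder itself is never updated.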
A brief description follows:", "cite_spans": [ { "start": 71, "end": 88, "text": "(Hu et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "PoS tagging: For part-of-speech tagging, we use Universal Dependencies part-of-speech tagged corpora (Nivre et al., 2020) . Built on top of our ELMo-encoder is a simple MLP, that maps representations onto the PoS label space.", "cite_spans": [ { "start": 101, "end": 121, "text": "(Nivre et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "3.3" }, { "text": "We use the same architecture as for regular PoS tagging, but train on English and evaluate on our target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PoS tagging (transfer):", "sec_num": null }, { "text": "We use dependencyannotated Universal Dependencies corpora; our metrics are both unlabelled and labelled attachment scores (UAS/LAS). Our parsing architecture is a biaffine graph-based parser (Dozat and Manning, 2018) .", "cite_spans": [ { "start": 191, "end": 216, "text": "(Dozat and Manning, 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency parsing:", "sec_num": null }, { "text": "XNLI: A transfer-based language inference task; we use Chen et al.'s 2017 ESIM architecture, train a tagging head on English, and evaluate on the translated dev portions of other languages (Conneau et al., 2018 ).", "cite_spans": [ { "start": 189, "end": 210, "text": "(Conneau et al., 2018", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency parsing:", "sec_num": null }, { "text": "The task here is to pick out, for each sentence in our source corpus (English), the appropriate translation of the sentence in our target language corpus. This, in a sense, is the most 'raw' tasks; target language sentences are ranked based on similarity. We follow Hu et al. (2020) and use the Tatoeba dataset.", "cite_spans": [ { "start": 266, "end": 282, "text": "Hu et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Tatoeba:", "sec_num": null }, { "text": "We tokenize all our text using the relevant UD-Pipe (Straka et al., 2019) model, and train/evaluate on each task three times; the scores we report are mean scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tatoeba:", "sec_num": null }, { "text": "First, we examine the costs of multilingualism, as far as monolingual tasks are concerned. We present our results on our monolingual tasks in Figure 1 . Monolingual models appear to perform consistently better, particularly PoS tagging; this appears to be especially true for our underresourced languages, strengthening the claim that compression is necessary to avoid drowning out signal. For PoS tagging, the correlation between performance difference (monolingual vs. M0.4) and corpus size is highly significant (\u03c1 = 0.74; p = 0.006). Table 1 : Average scores for each task and encoder; non-monolingual best scores in bold.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 538, "end": 545, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We find that compression appears to result in visible improvements, when moving from \u03b1 = 0.4 to \u03b1 = 0.2. 
These improvements, while not dramatic, apply across the board (see Table 1 ), over virtually all task/language combinations; this is visible in Figure 2a . Note the drop in performance on certain tasks for English, Swedish and Italian - we hypothesise that this is due to Swedish and Italian being closer to English (our most-sampled language), and therefore suffering from the combination of the drop in their corpus sizes, as well as the more significant drop in English corpus size. The Pearson correlation between the trend in performance for PoS tagging and the size of a language's corpus is statistically significant (\u03c1 = 0.65; p = 0.02); note that while this is over multiple points, it is single runs per data point. Figure 2b also shows the difference in performance between the truncated model, TRUNC, and M0.4; this is a lot less convincing than the difference to M0.2, indicating that no additional advantage is to be gained by downsampling data for better-resourced languages.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 180, "text": "Table 1", "ref_id": null }, { "start": 250, "end": 259, "text": "Figure 2a", "ref_id": "FIGREF1" }, { "start": 832, "end": 841, "text": "Figure 2b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "PoS", "sec_num": null }, { "text": "We include full, detailed results in Appendix C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PoS", "sec_num": null }, { "text": "Cross-lingual differences Finally, in an attempt to study the differences in model performance across languages, we examine the results of all models on Tatoeba. This task has numerous advantages for a more detailed analysis; i) it covers all our languages, bar Hindi, ii) the results have significant variance across languages, and iii) the task does not involve any additional training. We present these results in Figure 3 . We observe that M0.2 consistently appears to perform better, as illustrated earlier. Performance does not appear to have much correlation with corpus size; however, the languages for which M0.4 performs better are Swedish and Italian, coincidentally, the only other Latin-scripted Indo-European languages. Given the specific nature of Tatoeba, which involves picking out appropriate translations, these results make more sense: these languages receive not only the advantage of having more data for themselves, but also from the additional data available to English, which in turn optimises their biases solely by virtue of language similarity.", "cite_spans": [], "ref_spans": [ { "start": 417, "end": 425, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "PoS", "sec_num": null }, { "text": "Our results allow us to draw conclusions that come across as very 'safe': some compression helps, too much hurts; when compression does help, however, the margin appears rather moderate yet significant for most tasks, even given fewer training cycles. Immediately visible differences along linguistic lines do not emerge when ratios differ, despite the relative linguistic diversity of our language choices; we defer analysis of this to future work that is less focused on downstream analysis, and more on carefully designed probes that might illuminate the difference between our models' internal spaces. 
Note that a possible confounding factor in our results is also the complexity of the architectures we build on top of mELMO: they also have significant learning capacity, and it is not implausible that whatever differences there are between our models, are drowned out by highly parameterised downstream decoders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "To reiterate, this study is not (nor does it aim to be) a replication of models with far larger parameter spaces and more training data. This is something of a middle-of-the-road approach; future work could involve this sort of evaluation on downscaled transformer models, which we shy away from in order to provide a usable model release. We hope that the differences between these models provide some insight, and pave the way for further research, not only specifically addressing the question of sampling from a perspective of performance, but also analytically. There has already been considerable work in this direction on multilingual variants of BERT (Pires et al., 2019; Chi et al., 2020) , and we hope that this work motivates papers applying the same to recurrent mELMo, as well as comparing and contrasting the two. ", "cite_spans": [ { "start": 659, "end": 679, "text": "(Pires et al., 2019;", "ref_id": "BIBREF15" }, { "start": 680, "end": 697, "text": "Chi et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" } ], "back_matter": [ { "text": "Our experiments were run on resources provided by UNINETT Sigma2 -the National Infrastructure for High Performance Computing and Data Storage in Norway, under the NeIC-NLPL umbrella.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges", "authors": [ { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Lepikhin", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Mia", "middle": [ "Xu" ], "last": "Chen", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.05019[cs].ArXiv:1907.05019" ] }, "num": null, "urls": [], "raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively Multilingual Neu- ral Machine Translation in the Wild: Findings and Challenges. arXiv:1907.05019 [cs]. 
ArXiv: 1907.05019.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A call for more rigor in unsupervised cross-lingual learning", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7375--7388", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.658" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, Dani Yogatama, Gorka Labaka, and Eneko Agirre. 2020. A call for more rigor in unsupervised cross-lingual learn- ing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7375-7388, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation", "authors": [ { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "55--64", "other_ids": { "DOI": [ "10.18653/v1/K18-2005" ] }, "num": null, "urls": [], "raw_text": "Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55-64, Brussels, Belgium. Association for Compu- tational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "repository/ for Natural Language Inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhenhua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1657--1668", "other_ids": { "DOI": [ "10.18653/v1/P17-1152" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM 1 http://vectors.nlpl.eu/repository/ for Natural Language Inference. Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1657-1668. 
ArXiv: 1609.06038.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Finding universal grammatical relations in multilingual BERT", "authors": [ { "first": "Ethan", "middle": [ "A" ], "last": "Chi", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5564--5577", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.493" ] }, "num": null, "urls": [], "raw_text": "Ethan A. Chi, John Hewitt, and Christopher D. Man- ning. 2020. Finding universal grammatical relations in multilingual BERT. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 5564-5577, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. 
In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Simpler but more accurate semantic dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "484--490", "other_ids": { "DOI": [ "10.18653/v1/P18-2077" ] }, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 484-490, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", "authors": [ { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Ginter, Jan Haji\u010d, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. 
LINDAT/CLARIAH-CZ digi- tal library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.11080" ] }, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generaliza- tion. arXiv preprint arXiv:2003.11080.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.07291" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation", "authors": [ { "first": "Qianchu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "33--43", "other_ids": { "DOI": [ "10.18653/v1/K19-1004" ] }, "num": null, "urls": [], "raw_text": "Qianchu Liu, Diana McCarthy, Ivan Vuli\u0107, and Anna Korhonen. 2019. Investigating cross-lingual align- ment methods for contextualized embeddings with token-level evaluation. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 33-43, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4034--4043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Mar- seille, France. European Language Resources Asso- ciation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Performance difference between monolingual and multilingual models, on our monolingual tasks. Absent bars indicate that the language was missing.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Performance differences between our models on our selected tasks.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Accuracy on Tatoeba per model.", "uris": null, "type_str": "figure" }, "TABREF2": { "content": "
Language | AR | EN | EU | FI | HE | HI | IT | JA | KO | RU | SV | TR | ZH | Total
M0.4 | 242.29 | 585.52 | 113.42 | 239.57 | 208.46 | 91.74 | 468.45 | 460.53 | 184.63 | 379.9 | 366.86 | 396.01 | 282.76 | 4020.14
M0.2 | 149.09 | 231.76 | 102.01 | 148.25 | 138.29 | 91.74 | 207.3 | 205.54 | 130.15 | 186.68 | 183.45 | 190.6 | 161.06 | 2125.92
TRUNC | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 91.74 | 1192.62
", "html": null, "text": "Models were bidirectional LSTMs. Monolingual models were trained on individual sizes given at \u03b1 = 0.4.", "type_str": "table", "num": null }, "TABREF3": { "content": "
Task | Model | AR | EN | EU | FI | HE | HI | IT | JA | KO | RU | SV | TR | ZH
POS | MONO | 0.89 | 0.89 | 0.88 | 0.82 | 0.84 | 0.9 | 0.91 | 0.94 | 0.67 | 0.88 | - | 0.83 | 0.86
POS | M0.4 | 0.81 | 0.89 | 0.81 | 0.78 | 0.82 | 0.87 | 0.89 | 0.94 | 0.64 | 0.87 | - | 0.81 | 0.84
POS | M0.2 | 0.86 | 0.89 | 0.85 | 0.79 | 0.83 | 0.9 | 0.89 | 0.94 | 0.64 | 0.87 | - | 0.82 | 0.85
POS | TRUNC | 0.82 | 0.89 | 0.84 | 0.8 | 0.82 | 0.9 | 0.88 | 0.93 | 0.63 | 0.86 | - | 0.81 | 0.85
UAS | MONO | 0.86 | 0.89 | 0.84 | 0.88 | 0.89 | 0.94 | 0.93 | 0.95 | 0.8 | - | 0.85 | 0.69 | 0.8
UAS | M0.4 | 0.85 | 0.89 | 0.83 | 0.85 | 0.89 | 0.94 | 0.93 | 0.95 | 0.79 | - | 0.84 | 0.68 | 0.78
UAS | M0.2 | 0.85 | 0.89 | 0.84 | 0.87 | 0.88 | 0.94 | 0.93 | 0.95 | 0.79 | - | 0.84 | 0.67 | 0.79
UAS | TRUNC | 0.85 | 0.89 | 0.83 | 0.86 | 0.89 | 0.94 | 0.93 | 0.95 | 0.78 | - | 0.84 | 0.68 | 0.79
LAS | MONO | 0.79 | 0.86 | 0.79 | 0.84 | 0.84 | 0.9 | 0.9 | 0.94 | 0.74 | - | 0.81 | 0.59 | 0.74
LAS | M0.4 | 0.78 | 0.85 | 0.78 | 0.81 | 0.84 | 0.9 | 0.9 | 0.94 | 0.72 | - | 0.79 | 0.57 | 0.72
LAS | M0.2 | 0.79 | 0.85 | 0.78 | 0.82 | 0.84 | 0.9 | 0.9 | 0.94 | 0.73 | - | 0.8 | 0.57 | 0.72
LAS | TRUNC | 0.79 | 0.85 | 0.78 | 0.82 | 0.84 | 0.9 | 0.9 | 0.93 | 0.72 | - | 0.79 | 0.57 | 0.72
POS (trf.) | M0.4 | 0.23 | 0.89 | 0.25 | 0.43 | 0.36 | 0.31 | 0.52 | 0.22 | 0.18 | 0.49 | - | 0.23 | 0.22
POS (trf.) | M0.2 | 0.26 | 0.89 | 0.29 | 0.47 | 0.37 | 0.33 | 0.54 | 0.24 | 0.18 | 0.55 | - | 0.29 | 0.28
POS (trf.) | TRUNC | 0.23 | 0.89 | 0.3 | 0.48 | 0.32 | 0.26 | 0.48 | 0.2 | 0.17 | 0.49 | - | 0.27 | 0.28
XNLI | M0.4 | 0.41 | 0.67 | - | - | - | 0.44 | - | - | - | 0.48 | - | 0.35 | 0.35
XNLI | M0.2 | 0.46 | 0.56 | - | - | - | 0.45 | - | - | - | 0.49 | - | 0.45 | 0.34
XNLI | TRUNC | 0.43 | 0.66 | - | - | - | 0.43 | - | - | - | 0.43 | - | 0.43 | 0.35
Tatoeba | M0.4 | 0.05 | - | 0.05 | 0.19 | 0.16 | - | 0.36 | 0.11 | 0.04 | 0.26 | 0.55 | 0.12 | 0.11
Tatoeba | M0.2 | 0.12 | - | 0.12 | 0.26 | 0.21 | - | 0.34 | 0.11 | 0.05 | 0.33 | 0.4 | 0.17 | 0.19
Tatoeba | TRUNC | 0.05 | - | 0.1 | 0.2 | 0.09 | - | 0.22 | 0.05 | 0.03 | 0.15 | 0.29 | 0.1 | 0.13
", "html": null, "text": "Corpus sizes, in million tokens", "type_str": "table", "num": null }, "TABREF4": { "content": "", "html": null, "text": "Full score table across all languages, tasks and models", "type_str": "table", "num": null } } } }