{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:30.723960Z" }, "title": "Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer", "authors": [ { "first": "Ziqing", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "Joint Laboratory of HIT and iFLYTEK (HFL)", "institution": "", "location": { "country": "China" } }, "email": "zqyang5@iflytek.com" }, { "first": "Wentao", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "Joint Laboratory of HIT and iFLYTEK (HFL)", "institution": "", "location": { "country": "China" } }, "email": "wtma@iflytek.com" }, { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "", "affiliation": { "laboratory": "Joint Laboratory of HIT and iFLYTEK (HFL)", "institution": "", "location": { "country": "China" } }, "email": "ymcui@iflytek.com" }, { "first": "Jiani", "middle": [], "last": "Ye", "suffix": "", "affiliation": { "laboratory": "Joint Laboratory of HIT and iFLYTEK (HFL)", "institution": "", "location": { "country": "China" } }, "email": "jnye@iflytek.com" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "sjwang3@iflytek.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning. Some multilingual models such as mBERT, have been pre-trained on unlabeled corpora, therefore the embeddings of different languages in the models may not be aligned very well. In this paper, we aim to improve the zero-shot cross-lingual transfer performance by proposing a pre-training task named Word-Exchange Aligning Model (WEAM), which uses the statistical alignment information as the prior knowledge to guide cross-lingual word prediction. We evaluate our model on multilingual machine reading comprehension task MLQA and natural language interface task XNLI. The results show that WEAM can significantly improve the zero-shot performance.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning. Some multilingual models such as mBERT, have been pre-trained on unlabeled corpora, therefore the embeddings of different languages in the models may not be aligned very well. In this paper, we aim to improve the zero-shot cross-lingual transfer performance by proposing a pre-training task named Word-Exchange Aligning Model (WEAM), which uses the statistical alignment information as the prior knowledge to guide cross-lingual word prediction. We evaluate our model on multilingual machine reading comprehension task MLQA and natural language interface task XNLI. The results show that WEAM can significantly improve the zero-shot performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Large-scale multilingual pre-trained language models such as mBERT (Devlin et al., 2019) , XLM (Conneau and Lample, 2019) and XLM-R (Conneau et al., 2020) have shown significant effectiveness in transfer learning on various cross-lingual tasks. 
The pre-training methods of the multilingual language models can be divided into two groups: unsupervised pre-training like Multilingual Masked Language Model (MMLM) (Devlin et al., 2019; Conneau et al., 2020) , and supervised pretraining like Translation Language Model (TLM) (Conneau and Lample, 2019) . In the MMLM, the model predicts the masked tokens with the monolingual context; in the TLM, the model can attend to both the contexts in the source language and target language. Variations of TLM model can be found in Huang et al. (2019) ; Chi et al. (2021) ; Ouyang et al. (2020) .", "cite_spans": [ { "start": 67, "end": 88, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 95, "end": 121, "text": "(Conneau and Lample, 2019)", "ref_id": "BIBREF3" }, { "start": 126, "end": 154, "text": "XLM-R (Conneau et al., 2020)", "ref_id": null }, { "start": 411, "end": 432, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 433, "end": 454, "text": "Conneau et al., 2020)", "ref_id": "BIBREF2" }, { "start": 522, "end": 548, "text": "(Conneau and Lample, 2019)", "ref_id": "BIBREF3" }, { "start": 769, "end": 788, "text": "Huang et al. (2019)", "ref_id": "BIBREF8" }, { "start": 791, "end": 808, "text": "Chi et al. (2021)", "ref_id": "BIBREF1" }, { "start": 811, "end": 831, "text": "Ouyang et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While it is possible for the model to learn the alignment knowledge by itself, some works have * Equal contribution. investigated injecting prior knowledge to the model to help it to align better. Cao et al. (2020) proposed a bilingual pre-training model for mBERT, where it identifies matched word pairs in parallel bilingual corpora using unsupervised standard techniques such as FastAlign (Dyer et al., 2013) , and aligns the contextual representations between the matched words with a similarity loss function.", "cite_spans": [ { "start": 197, "end": 214, "text": "Cao et al. (2020)", "ref_id": "BIBREF0" }, { "start": 392, "end": 411, "text": "(Dyer et al., 2013)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The previous works focus on aligning the contextual representations of the pre-trained models. In this paper, we propose a new cross-lingual pre-trained model called Word-Exchange Aligning Model (WEAM). Different from previous works, we align the static embeddings and the contextual representations of different languages in the multilingual pre-trained models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, in the pre-training stage, we first use FastAlign to identify bilingual word pairs in parallel bilingual sentence pairs as our prior knowledge. Then we randomly mask some tokens in the bilingual sentence pairs. For each masked token, WEAM performs two kinds of predictions: a multilingual prediction and a cross-lingual prediction. The multilingual prediction task predicts the original masked word in the standard way. while the cross-lingual task predicts the corresponding word from the representations in the other language. 
For example, if the words apple and Apfel (German for apple) appear in the English-German parallel sentence and apple is masked in the sentence, WEAM takes the representation of the masked apple and Apfel for multilingual prediction and cross-lingual prediction, respectively, to recover the original word apple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Through the two ways of prediction, both the contextual representations from the last transformer layer and the static embeddings from the embedding layer can be aligned. We evaluated our method on the word-level machine reading comprehension task MLQA (Lewis et al., 2019) ", "cite_spans": [ { "start": 253, "end": 273, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1 : An overview of the Word-Exchange Aligning Model (WEAM). For each language pair, there are two tasks. The multilingual prediction task predicts the masked tokens. The cross-lingual prediction task utilizes a word alignment matrix to swap the representations of aligned words in parallel sentences, then predicts the masked tokens in the swapped sentences.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "and the natural language inference task XNLI (Conneau et al., 2018). The results show that WEAM significantly improves the cross-lingual transfer performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first briefly describe the Translation Language Model (TLM) (Conneau and Lample, 2019) . Like MMLM in (Devlin et al., 2019) , TLM performs the masked word prediction task, where it randomly masks some words and predicts the original ones within a parallel sentence pair. For each masked word, the model can either attend to the surrounding words or the translated context in the other language, encouraging the model to align the words in different languages.", "cite_spans": [ { "start": 63, "end": 89, "text": "(Conneau and Lample, 2019)", "ref_id": "BIBREF3" }, { "start": 105, "end": 126, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Language Model", "sec_num": "2.1" }, { "text": "Our proposed method WEAM is based on the multilingual pre-trained model and consists of two tasks: the multilingual prediction task and the crosslingual prediction task, as shown in Figure 1 . Multilingual Prediction. In the multilingual prediction, we randomly mask tokens in the bilingual parallel sentences and predict the original tokens with the outputs from the last transformer layer. Unlike TLM, we did not reset the position embeddings or add the language embeddings, so the distinction between languages will be purely learned from the token embeddings. 
We construct the inputs and obtain the representations for a source-target sentence pair S, T as", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 190, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "X = [CLS]S[SEP]T [SEP] (1) H = Encoder(X) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "where X is the token sequence and H \u2208 R m\u00d7h is the output from the last transformer layer of the pretrained model Encoder; m is the max sequence length and h is the hidden size. For a masked token X i , we predict the original token w i with the corresponding representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "H i = \u03b4(W 1 \u2022 H i + b 1 ) (3) p(X i = w i |H i ) = exp(linear(H i ) \u2022 e i ) |V| k=1 exp(linear(H i ) \u2022 e k ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "where \u03b4 is the GELU activation (Hendrycks and Gimpel, 2016) , linear(\u2022) is a linear layer, H i is the token representation for X i , as given by Eq (2). |V| is the vocabulary size. e i is the emebdding vector of token w i . Cross-lingual prediction. In the cross-lingual prediction, we predict the masked tokens with the representations from the other language. Specifically, we first use FastAlign to construct an alignment words set from parallel sentences S, T . We denote the words set as", "cite_spans": [ { "start": 31, "end": 59, "text": "(Hendrycks and Gimpel, 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "d(s, t) = {(i 1 , j 1 ), ..., (i n , j n )},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "where i is the word index of source language in the input sequence, j is the word index of the target language. n is the number of word pairs in the sentence pair. Then we generate effectively code-mixed representations by exchanging the positions of each word pair in parallel sentences. We denote the exchange operation with an off-diagonal matrix A \u2208 {0, 1} m\u00d7m :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "A(i, j) = 1, if {(i, j) or (j, i)} \u2208 d 0, otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "We take A as the transformation matrix to construct the word-exchange representations H , which is calculated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H = A T \u2022 H (5) H = W 2 \u2022 H + b 2", "eq_num": "(6)" } ], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "We have applied another linear transformation on H and obtainedH. 
Lastly, we conduct the masked word predictions onH similar to the multilingual prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H i = \u03b4(W 3 \u2022H i + b 3 ) (7) p(X i = w i |H i ) = exp(linear(H i ) \u2022 e i ) |V| k=1 exp(linear(H i ) \u2022 e k )", "eq_num": "(8)" } ], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "If the word w i is paired with word w j , what the cross-lingual prediction does is predicting w i with the contextual representation of w j . In this way we are align the embedding of w i (e i ) with the contextual representation of w j (H j ). Pre-training Objective. Given a bilingual parallel corpus D, we train the multilingual model with the cross-entropy loss. Based on the discussion above, the objective function of pre-training consists of multilingual part L mp and cross-lingual prediction part L cp . Let \u0398 denote the parameters of the model, then the objective function L(D, \u0398) can be formulated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L mp = \u2212 M i=1 log(p(w i ))", "eq_num": "(9)" } ], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "L cp = \u2212 M i=1 log(p(w i )) (10) L(D, \u0398) = L mp + \u03bbL cp (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "where M is the number of masked tokens in the instance, p(w i ) andp(w i ), given by Eq. 6and Eq.(8), are the predicted probability of the masked token w i over the vocabulary size, \u03bb is a hyperparameter to balance L mp and L cp .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Exchange Aligning Model", "sec_num": "2.2" }, { "text": "We use three parallel corpora with the source language English and the target languages Chinese 1 , German and Spanish 2 respectively. We initialize the mBERT model with the weights released by Google 3 . We pre-train three models for the three target languages separately to avoid alignment interference among different language pairs. During the pre-training steps, we empirically set the masking probability as 0.3. Experimentally we find that 0.3 gives better performance. The other settings for masking are the same as the MLM (Devlin et al., 2019) . The hyper-parameters of the three models are the same: we set the learning rate as 5e-5, the batch size as 32, the max sequence length as 128, and the number of pre-training epochs as 2. We set \u03bb to 1. Figure 2 : A visualization of the word embeddings from mBERT before and after WEAM pre-training. We select 20 English-German word alignment pairs that appear most frequently in the pre-training corpus. Each word alignment pair is connected by a blue dotted line. 
All the word pairs are identified by FastAlign (Dyer et al., 2013) .", "cite_spans": [ { "start": 532, "end": 553, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 1068, "end": 1087, "text": "(Dyer et al., 2013)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 758, "end": 766, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.1" }, { "text": "For the downstream evaluation, we fine-tune and test our pre-trained model along with several baselines on the MLQA and XNLI tasks respectively. The specific settings of baselines are described in the following section. Since in this work we mainly focus on evaluating the zero-shot performance, we fine-tune all the models in the zero-shot setting where only the English training set is available. We also fine-tune mBERT in the translate-train setting for comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.1" }, { "text": "We use mBERT (Devlin et al., 2019) as our main baseline, which consists of 12 transformer layers, with a hidden size of 768 and 12 attention heads. For a fair comparison, we also include a baseline mBERT+TLM with the same pre-training settings but uses TLM as the pre-training task. An additional baseline word-aligned mBERT from Cao et al. (2020) is included for the XNLI dataset. Table 1 shows our results on MLQA. Note that the results on the target languages of the TLM and WEAM are from models of different language pairs as introduced in the experiment setup section. The results of TLM and WEAM on English are the average of the three models.", "cite_spans": [ { "start": 13, "end": 34, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 330, "end": 347, "text": "Cao et al. (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 382, "end": 389, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "The mBERT+TLM model outperforms mBERT by a large margin in the zero-shot setting, but is not as good as the mBERT in the translate-train setting. Our model mBERT+WEAM improves the scores in the zero-shot setting and also outperforms mBERT in the translate-train setting. This result is promising, as it indicates that a properly aligned pre-training model can exceed the performance of translate-train even with zero-shot training. Table 2 shows our results on XNLI. The mBERT+TLM and word-aligned mBERT achieved similar improvements on this task compared to mBERT, whereas mBERT+WEAM has significantly outperformed both of them. Because all of these models are pre-trained with the same parallel corpus, the differences in performance indicate the importance of considering both the word-level and contextual-level alignment. Compared with the translate-train result, the mBERT+WEAM result is slightly lower but is close. This is different from MLQA. This observation may indicate that the examples in XNLI have shorter input sequences and thus have fewer translation noises.", "cite_spans": [], "ref_spans": [ { "start": 432, "end": 439, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results on MLQA", "sec_num": "3.3" }, { "text": "The effect of contextual alignment has been well studied in Cao et al. (2020) , where the authors demonstrate that the contextual alignment is powerful in improving the transferability of mBERT. but the effect of the word-level information alignment is still unclear. 
To further explore this problem, we use t-SNE (Maaten and Hinton, 2008) to visualize the distances between embeddings of word alignment pairs with the highest frequencies (excluding stop words). The result is illustrated in Figure 2 .", "cite_spans": [ { "start": 60, "end": 77, "text": "Cao et al. (2020)", "ref_id": "BIBREF0" }, { "start": 314, "end": 339, "text": "(Maaten and Hinton, 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 492, "end": 500, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Visualization", "sec_num": "4" }, { "text": "The left panel shows word pairs in the embedding layer of mBERT without WEAM pre-training, we can see that these word pairs are partly aligned. For example, the pairs today-heute, Council-Rat are aligned well, but Beriche-report, Mr-Herr are distant away. As a comparison, we show the word pairs from the embedding layer of mBERT with WEAM pre-training in the right panel, where most of the word pairs are aligned much better. There are also words that remained poorly aligned even with WEAM. For example, our-uns, which may be due to that they are not the exact translation pair (us-uns are more exact pairs in this case). In general, the embeddings are aligned much better after the WEAM pre-training procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualization", "sec_num": "4" }, { "text": "In this paper, we propose a new pre-training task named WEAM to align the contextual representations and static word embeddings from multilingual pre-trained models. WEAM consists of a multilingual prediction task and a cross-lingual prediction task. As a supplement to previous works MMLM or TLM, WEAM introduces the statistic alignment information as prior knowledge to guide the cross-lingual prediction. Through the experiments on MLQA and XNLI, we show that WEAM can improve the transfer performance significantly and align the word embeddings within the models much better. In the future, we plan to extend our method to other multilingual models like XLM-R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We use the corpus from Xu (2019). 2 http://www.statmt.org/europarl 3 https://github.com/google-research/bert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multilingual alignment of contextual word representations", "authors": [ { "first": "Steven", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.03518" ] }, "num": null, "urls": [], "raw_text": "Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multi- lingual alignment of contextual word representations. 
arXiv preprint arXiv:2002.03518.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "InfoXLM: An information-theoretic framework for cross-lingual language model pre-training", "authors": [ { "first": "Zewen", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Saksham", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xian-Ling", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Heyan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3576--3588", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.280" ] }, "num": null, "urls": [], "raw_text": "Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 3576-3588, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "7059--7069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7059- 7069.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Xnli: Evaluating crosslingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2475-2485.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A simple, fast, and effective reparameterization of ibm model 2", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. 
A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Gaussian error linear units (gelus)", "authors": [ { "first": "Dan", "middle": [], "last": "Hendrycks", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.08415" ] }, "num": null, "urls": [], "raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaus- sian error linear units (gelus). arXiv preprint arXiv:1606.08415.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks", "authors": [ { "first": "Haoyang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yaobo", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Linjun", "middle": [], "last": "Shou", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2485--2494", "other_ids": { "DOI": [ "10.18653/v1/D19-1252" ] }, "num": null, "urls": [], "raw_text": "Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pre- training with multiple cross-lingual tasks. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2485-2494, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Mlqa: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.07475" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Eval- uating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Visualizing data using t-sne", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of machine learning research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. 
Journal of machine learning research, 9(Nov):2579-2605.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ERNIE-M: enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora", "authors": [ { "first": "Xuan", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Hao Tian", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE-M: enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. CoRR, abs/2012.15674.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Nlp chinese corpus: Large scale chinese corpus for nlp", "authors": [ { "first": "Bright", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.3402023" ] }, "num": null, "urls": [], "raw_text": "Bright Xu. 2019. Nlp chinese corpus: Large scale chi- nese corpus for nlp.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "content": "
Model en es de zh AVG(all) AVG(zero-shot)
Translate-Train
mBERT \u2020 65.2/77.7 37.4/53.9 47.5/62.0 39.5/61.4 47.4/63.8 43.0/60.3
mBERT (ours) 67.3/80.3 48.4/67.1 49.1/63.5 42.8/63.6 51.9/68.6 48.1/65.7
Zero-Shot
mBERT \u2020 65.2/77.7 46.6/64.3 44.3/57.9 37.3/57.5 48.4/64.4 44.2/61.0
mBERT+TLM 66.8/80.0 47.7/65.7 48.4/63.1 40.1/62.0 50.7/67.7 46.7/64.6
mBERT+WEAM 66.7/79.7 49.6/67.8 49.7/64.3 41.7/63.7 51.7/68.9 48.2/66.2
", "html": null, "text": "" }, "TABREF2": { "num": null, "type_str": "table", "content": "", "html": null, "text": "EM/F1 scores on the test set of MLQA dataset. The results with \u2020 are taken from Lewis et al. (2019). AVG(all) is the average scores on all languages. AVG(zero-shot) is the average scores on the languages excluding English." }, "TABREF4": { "num": null, "type_str": "table", "content": "
", "html": null, "text": "Accuracy scores on XNLI dataset. The results with \u2020 are taken fromConneau et al. (2020)." } } } }