{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:05.577721Z" }, "title": "Regularising Fisher Information Improves Cross-lingual Generalisation", "authors": [ { "first": "Asa", "middle": [ "Cooper" ], "last": "Stickland", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "a.cooper.stickland@ed.ac.uk" }, { "first": "Iain", "middle": [], "last": "Murray", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "i.murray@ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Multilingual pre-trained representations (Devlin et al., 2019; Huang et al., 2019; Conneau et al., 2020) are ubiquitous in state-of-the-art methods for cross-lingual transfer (Wu and Dredze, 2019; Pires et al., 2019) . These methods learn from raw textual data in up to hundreds of languages. A typical pipeline transfers to another language by fine-tuning a downstream task in a high-resource language, often English.", "cite_spans": [ { "start": 41, "end": 62, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF4" }, { "start": 63, "end": 82, "text": "Huang et al., 2019;", "ref_id": "BIBREF6" }, { "start": 83, "end": 104, "text": "Conneau et al., 2020)", "ref_id": "BIBREF1" }, { "start": 175, "end": 196, "text": "(Wu and Dredze, 2019;", "ref_id": "BIBREF14" }, { "start": 197, "end": 216, "text": "Pires et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many recent works use 'consistency regularisation' to improve the generalisation of fine-tuned pre-trained models, both multilingual and Englishonly (Jiang et al., 2020; Aghajanyan et al., 2020; Zheng et al., 2021; Park et al., 2021; Liang et al., 2021) . These works encourage model outputs to be similar between a perturbed and normal version of the input, usually via penalising the Kullback-Leibler (KL) divergence between the probability distribution of the perturbed and normal model. 'Generic' perturbations can be adversarial inputs (Jiang et al., 2020) or inputs with Gaussian or uniform noise (Aghajanyan et al., 2020) . For cross-lingual generalisation in particular, probabilistic subword segmentations (Kudo, 2018) of the input or translations of the input generated by machine translation can be used Zheng et al., 2021) . 
Other work has found improvement by enforcing consistency under perturbations within the model, in addition to at the input (Hua et al., 2021; Liang et al., 2021).", "cite_spans": [ { "start": 149, "end": 169, "text": "(Jiang et al., 2020;", "ref_id": "BIBREF8" }, { "start": 170, "end": 194, "text": "Aghajanyan et al., 2020;", "ref_id": "BIBREF0" }, { "start": 195, "end": 214, "text": "Zheng et al., 2021;", "ref_id": "BIBREF16" }, { "start": 215, "end": 233, "text": "Park et al., 2021;", "ref_id": "BIBREF11" }, { "start": 234, "end": 253, "text": "Liang et al., 2021)", "ref_id": null }, { "start": 541, "end": 561, "text": "(Jiang et al., 2020)", "ref_id": "BIBREF8" }, { "start": 603, "end": 628, "text": "(Aghajanyan et al., 2020)", "ref_id": "BIBREF0" }, { "start": 715, "end": 727, "text": "(Kudo, 2018)", "ref_id": "BIBREF9" }, { "start": 815, "end": 834, "text": "Zheng et al., 2021)", "ref_id": "BIBREF16" }, { "start": 955, "end": 973, "text": "(Hua et al., 2021;", "ref_id": "BIBREF5" }, { "start": 974, "end": 993, "text": "Liang et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While these works show improved generalisation compared to 'vanilla' fine-tuning, they present multiple explanations of the effectiveness of consistency regularisation. They also rarely compare to traditional regularisation methods like dropout or L2 regularisation. Finally, such methods either require a complex adversarial training step (Jiang et al., 2020; Park et al., 2021), or tuning many hyper-parameters like the type of noise, the level of noise, and the weight given to the consistency loss term.", "cite_spans": [ { "start": 339, "end": 359, "text": "(Jiang et al., 2020;", "ref_id": "BIBREF8" }, { "start": 360, "end": 378, "text": "Park et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We believe that consistency losses may be implicitly regularizing the loss landscape. In particular, we build on the work of Jastrzebski et al. (2021), who hypothesize that implicitly or explicitly regularizing the trace of the Fisher Information Matrix (FIM), Tr(F), amplifies the implicit bias of SGD to avoid memorization. Briefly, the FIM is defined as $F(\\theta) = \\mathbb{E}_{x \\sim \\mathcal{X},\\, y \\sim p_\\theta(y|x)}[g(x, y) g(x, y)^T]$, where g(x, y) is the gradient w.r.t. \u03b8 of the loss for label y and input x. Jastrzebski et al. (2021) propose directly penalising a proxy of Tr(F), the Fisher penalty, defined as $\\frac{1}{B} \\sum_{i=1}^{B} \\|g(x_i, y_i)\\|^2$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the multilingual setting we may wish to first fine-tune on a high-resource language like English, then further fine-tune on a smaller amount of data in a lower-resource language, a 'two-stage' fine-tuning procedure. The FIM is a measure of the local curvature of the loss, and a small Tr(F) at the end of training implies a flatter minimum.
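For concreteness, the Fisher penalty above could be added to a standard fine-tuning loss roughly as in the following sketch (PyTorch-style and deliberately unoptimised; it follows the per-example formula literally, with one backward pass per example, rather than the cheaper mini-batch proxies often used in practice, and the function and argument names are illustrative rather than the exact code used for our experiments):

```python
import torch
import torch.nn.functional as F

def loss_with_fisher_penalty(model, input_ids, attention_mask, labels, fpen_weight=0.1):
    # Per-example cross-entropy losses, so we can form (1/B) * sum_i ||g(x_i, y_i)||^2.
    logits = model(input_ids, attention_mask=attention_mask).logits
    per_example_loss = F.cross_entropy(logits, labels, reduction='none')

    params = [p for p in model.parameters() if p.requires_grad]
    penalty = torch.zeros((), device=logits.device)
    for loss_i in per_example_loss:
        # create_graph=True so the gradient-norm penalty can itself be backpropagated.
        grads = torch.autograd.grad(loss_i, params, create_graph=True,
                                    retain_graph=True, allow_unused=True)
        penalty = penalty + sum(g.pow(2).sum() for g in grads if g is not None)
    penalty = penalty / per_example_loss.shape[0]

    # Using the observed labels y_i gives the 'empirical' variant written in the text;
    # the FIM proper samples labels from the model's own predictive distribution.
    return per_example_loss.mean() + fpen_weight * penalty
```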
Intuitively, such flat minima imply we can 'travel further' in parameter space before reaching a region of high loss, allowing for better performance in the two-stage fine-tuning setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our (preliminary) key contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that the trace of the FIM is correlated with generalisation, confirming that the results of Jastrzebski et al. (2021) apply to cross-lingual transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Adding a direct Fisher penalty can achieve similar results to subword consistency regularization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that, for models fine-tuned on an English downstream task, improvements from further fine-tuning on data from another language are correlated with low curvature (i.e. a small trace of the FIM) in the English fine-tuned model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: English validation set accuracy on a subset of the XNLI dataset vs. trace of the FIM at the end of training, for models with different weights given to the consistency regularisation (crosses) and Fisher penalty (circles) losses. For clarity we leave various models with different L2 penalties (squares) off the legend. kl=k means the consistency loss was given weight k during training; fpen=k means the Fisher penalty was given weight k.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first present a theoretical argument that in some simple situations, consistency losses penalize the trace of the FIM. We perturb model parameters \u03b8 with small, zero-mean, i.i.d. noise \u03b5. For small \u03b5, we can Taylor expand the KL divergence between models with these parameters:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Results", "sec_num": "2" }, { "text": "$KL[p_\\theta(y|x) \\,\\|\\, p_{\\theta+\\epsilon}(y|x)] \\approx \\frac{1}{2} \\epsilon^T F \\epsilon$ (see e.g., Dabak and Johnson, 2003). Taking expectations w.r.t. \u03b5 and writing \u03b5^T F \u03b5 as a sum, we have", "cite_spans": [ { "start": 106, "end": 130, "text": "Dabak and Johnson, 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Initial Results", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathbb{E}_{\\epsilon \\sim p(\\epsilon)}[\\epsilon^T F \\epsilon] = \\sum_{i,j} \\mathbb{E}[\\epsilon_i \\epsilon_j F_{i,j}] = \\sum_{i \\neq j} \\mathbb{E}[\\epsilon_i] \\mathbb{E}[\\epsilon_j] F_{i,j} + \\sum_i \\mathbb{E}[\\epsilon_i^2] F_{i,i} = 0 + C \\sum_i F_{i,i} = C \\, \\mathrm{Tr}(F),", "eq_num": "(1)" } ], "section": "Initial Results", "sec_num": "2" }, { "text": "where C is the variance of the i.i.d. noise. Consistency losses that use larger or more structured perturbations could potentially have a useful effect not captured by the Fisher Information Matrix alone. We empirically investigate the relationship between a subword segmentation consistency loss and penalizing the FIM. We use the same loss and hyper-parameters as Wang et al. (2021). All experiments use multilingual BERT. Figure 1 presents results from an experiment on a subset (20k examples, with the small size chosen due to compute constraints) of the XNLI dataset (Conneau et al., 2018).
It shows 1) a correlation between generalisation and small Tr(F), and 2) decreasing Tr(F) with increasing weight given to the consistency loss term.", "cite_spans": [ { "start": 554, "end": 576, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 407, "end": 415, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Initial Results", "sec_num": "2" }, { "text": "Figure 2: Increase in target language (either de, es or zh) validation set accuracy when fine-tuning on data in the target language vs. trace of the FIM after training on English, for the PAWS-X dataset (Yang et al., 2019).", "cite_spans": [ { "start": 204, "end": 223, "text": "(Yang et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Initial Results", "sec_num": "2" }, { "text": "Additionally, we see that directly penalising the FIM (models marked 'fpen=') has a similar effect to these consistency losses. Figure 2 shows the effect of small Tr(F), i.e. flat minima, when a model trained on English data is further fine-tuned on another language. To obtain non-English training data, we split the 2000 dev set examples in two, leaving 1000 training examples in each language and 1000 new dev examples. We see that improvements on the non-English language are correlated with flat minima.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Initial Results", "sec_num": "2" }, { "text": "We aim to confirm these initial results on more datasets, and use our insights to develop better multilingual fine-tuning techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Results", "sec_num": "2" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Better fine-tuning by reducing representational collapse", "authors": [ { "first": "Armen", "middle": [], "last": "Aghajanyan", "suffix": "" }, { "first": "Akshat", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "Anchit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03156" ] }, "num": null, "urls": [], "raw_text": "Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2020. Better fine-tuning by reducing representational collapse.
arXiv preprint arXiv:2008.03156.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Relations between Kullback-Leibler distance and Fisher information", "authors": [ { "first": "Anand", "middle": [], "last": "Dabak", "suffix": "" }, { "first": "Don", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anand Dabak and Don Johnson. 2003. 
Relations be- tween Kullback-Leibler distance and Fisher informa- tion.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Noise stability regularization for improving BERT fine-tuning", "authors": [ { "first": "Hang", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Xingjian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Chengzhong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jiebo", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3229--3241", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.258" ] }, "num": null, "urls": [], "raw_text": "Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, and Jiebo Luo. 2021. Noise stability regularization for improving BERT fine-tuning. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 3229-3241, Online. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks", "authors": [ { "first": "Haoyang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yaobo", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Linjun", "middle": [], "last": "Shou", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2485--2494", "other_ids": { "DOI": [ "10.18653/v1/D19-1252" ] }, "num": null, "urls": [], "raw_text": "Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pre- training with multiple cross-lingual tasks. 
In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2485-2494, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Catastrophic Fisher explosion: Early phase Fisher matrix impacts generalization", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Devansh", "middle": [], "last": "Arpit", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "\u00c5strand", "suffix": "" }, { "first": "Giancarlo", "middle": [], "last": "Kerg", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Krzysztof", "middle": [ "J" ], "last": "Geras", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 38th International Conference on Machine Learning, ICML 2021", "volume": "139", "issue": "", "pages": "4772--4784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Jastrzebski, Devansh Arpit, Oliver \u00c5s- trand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof J. Geras. 2021. Catastrophic Fisher explosion: Early phase Fisher matrix impacts generalization. In Pro- ceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Ma- chine Learning Research, pages 4772-4784. PMLR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization", "authors": [ { "first": "Haoming", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tuo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2177--2190", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.197" ] }, "num": null, "urls": [], "raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pre- trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 2177-2190, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 66-75, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Consistency training with virtual adversarial discrete perturbation", "authors": [ { "first": "Jungsoo", "middle": [], "last": "Park", "suffix": "" }, { "first": "Gyuwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jungsoo Park, Gyuwan Kim, and Jaewoo Kang. 2021. Consistency training with virtual adversarial discrete perturbation. CoRR, abs/2104.07284.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multi-view subword regularization", "authors": [ { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "473--482", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.40" ] }, "num": null, "urls": [], "raw_text": "Xinyi Wang, Sebastian Ruder, and Graham Neubig. 2021. Multi-view subword regularization. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 473-482, Online. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PAWS-X: A cross-lingual adversarial dataset for paraphrase identification", "authors": [ { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3687--3692", "other_ids": { "DOI": [ "10.18653/v1/D19-1382" ] }, "num": null, "urls": [], "raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual ad- versarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3687- 3692, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Consistency regularization for cross-lingual fine-tuning", "authors": [ { "first": "Bo", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Shaohan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zewen", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Saksham", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Consistency regularization for cross-lingual fine-tuning. CoRR, abs/2106.08226.", "links": null } }, "ref_entries": {} } }