{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:31.928509Z" }, "title": "Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer", "authors": [ { "first": "Weijia", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "" }, { "first": "Batool", "middle": [], "last": "Haider", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "" }, { "first": "Jason", "middle": [], "last": "Krone", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "" }, { "first": "Saab", "middle": [], "last": "Mansour", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "saabm@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks. Finding the most effective strategy to fine-tune these models on high-resource languages so that it transfers well to the zero-shot languages is a nontrivial task. In this paper, we propose a novel meta-optimizer to soft-select which layers of the pre-trained model to freeze during fine-tuning. We train the meta-optimizer by simulating the zero-shot transfer scenario. Results on cross-lingual natural language inference show that our approach improves over the simple fine-tuning baseline and X-MAML (Nooralahzadeh et al., 2020).", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks. Finding the most effective strategy to fine-tune these models on high-resource languages so that it transfers well to the zero-shot languages is a nontrivial task. In this paper, we propose a novel meta-optimizer to soft-select which layers of the pre-trained model to freeze during fine-tuning. We train the meta-optimizer by simulating the zero-shot transfer scenario. Results on cross-lingual natural language inference show that our approach improves over the simple fine-tuning baseline and X-MAML (Nooralahzadeh et al., 2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Despite the impressive performance of neural models on a wide variety of NLP tasks, these models are extremely data hungry -training them requires a large amount of annotated data. As collecting such amounts of data for every language of interest is extremely expensive, cross-lingual transfer that aims to transfer the task knowledge from high-resource (source) languages for which annotated data are more readily available to lowresource (target) languages becomes a promising direction. Cross-lingual transfer approaches using cross-lingual resources such as machine translation (MT) systems (Wan, 2009; Conneau et al., 2018) or bilingual dictionaries (Prettenhofer and Stein, 2010) have effectively reduced the amount of annotated data required to obtain reasonable performance on the target language. 
However, such cross-lingual resources are often limited for low-resource languages.", "cite_spans": [ { "start": 595, "end": 606, "text": "(Wan, 2009;", "ref_id": "BIBREF34" }, { "start": 607, "end": 628, "text": "Conneau et al., 2018)", "ref_id": "BIBREF4" }, { "start": 655, "end": 685, "text": "(Prettenhofer and Stein, 2010)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent advances in cross-lingual contextual embedding models have reduced the need for cross- * Work done while interning at Amazon AI. lingual supervision (Devlin et al., 2019; Lample and Conneau, 2019) . Wu and Dredze (2019) show that multilingual BERT (mBERT) (Devlin et al., 2019) , a contextual embedding model pre-trained on the concatenated Wikipedia data from 104 languages without cross-lingual alignment, does surprisingly well on zero-shot cross-lingual transfer tasks, where they fine-tune the model on the annotated data from the source languages and evaluate on the target language. Wu and Dredze (2019) propose to freeze the bottom layers of mBERT during fine-tuning to improve the cross-lingual performance over the simple fine-tune-all-parameters strategy, as different layers of mBERT captures different linguistic information (Jawahar et al., 2019) .", "cite_spans": [ { "start": 156, "end": 177, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 178, "end": 203, "text": "Lample and Conneau, 2019)", "ref_id": "BIBREF14" }, { "start": 206, "end": 226, "text": "Wu and Dredze (2019)", "ref_id": "BIBREF40" }, { "start": 263, "end": 284, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 597, "end": 617, "text": "Wu and Dredze (2019)", "ref_id": "BIBREF40" }, { "start": 845, "end": 867, "text": "(Jawahar et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Selecting which layers to freeze for a downstream task is a non-trivial problem. In this paper, we propose a novel meta-learning algorithm for soft layer selection. Our meta-learning algorithm learns layer-wise update rate by simulating the zero-shot transfer scenario -at each round, we randomly split the source languages into a heldout language and the rest as training languages, fine-tune the model on the training languages, and update the meta-parameters based on the model performance on the held-out language. We build the meta-optimizer on top of a standard optimizer and learnable update rates, so that it generalizes well to large numbers of updates. Our method uses much less meta-parameters than the X-MAML approach (Nooralahzadeh et al., 2020) adapted from model-agnostic meta-learning (MAML) (Finn et al., 2017) to zero-shot cross-lingual transfer.", "cite_spans": [ { "start": 730, "end": 758, "text": "(Nooralahzadeh et al., 2020)", "ref_id": "BIBREF21" }, { "start": 808, "end": 827, "text": "(Finn et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments on zero-shot cross-lingual natural language inference show that our approach outperforms both the simple fine-tuning baseline and the X-MAML algorithm and that our approach brings larger gains when transferring from multiple source languages. 
Ablation study shows that both the layer-wise update rate and cross-lingual metatraining are key to the success of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea of transfer learning is to improve the performance on the target task T 0 by learning from a set of related source tasks {T 1 , T 2 , ..., T K }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "In the context of cross-lingual transfer, we treat different languages as separate tasks, and our goal is to transfer the task knowledge from the source languages to the target language. In contrast to the transfer learning case where the inputs of the source and target tasks are from the same language, in cross-lingual transfer learning we need to handle inputs from different languages with different vocabularies and syntactic structures. To handle the issue, we use the pre-trained multilingual BERT (Devlin et al., 2019 ), a language model encoder trained on the concatenation of monolingual corpora from 104 languages. The most widely used approach to zero-shot cross-lingual transfer using multilingual BERT is to fine-tune the BERT model \u03b8 on the source language tasks T 1...K with training objective L", "cite_spans": [ { "start": 506, "end": 526, "text": "(Devlin et al., 2019", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "\u03b8 * = Learn(L, T 1 , ..., T K ; \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "and then evaluate the fine-tuned model \u03b8 * on the target language task T 0 . The gap between training and testing can lead to sub-optimal performance on the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "To address the issue, we propose to train a metaoptimizer f \u03d5 for fine-tuning so that the fine-tuned model generalizes better to unseen languages. We train the meta-optimizer by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "\u03d5 * = Learn(L, T k ; MetaLearn(L, T 1...K \\T k ; \u03d5))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "where T k is a \"surprise\" language randomly selected from the source language tasks T 1...K .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning for Zero-Shot Cross-lingual Transfer", "sec_num": "2" }, { "text": "Our meta-optimizer consists of a standard optimizer as the base optimizer and a set of metaparameters to control the layer-wise update rates. 
An update step is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\theta_t = \\theta_{t-1} - \\lambda \\odot \\Delta\\theta_t, \\quad \\Delta\\theta_t = f_{\\mathrm{opt}}(g_1, \\ldots, g_t)", "eq_num": "(1)" } ], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "where \u03b8 t represents the parameters of the learner model at time step t, and \u2206\u03b8 t is the update vector produced by the base optimizer f opt given the gradients g 1 , ..., g t (with g i = \u2207 \u03b8 i\u22121 L i ) from the current and previous steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "Algorithm 1 (meta-training; steps as recovered from the extracted listing): D test = D k ; 7: D train \u2190 {D 1 , ..., D K } \\ D test ; 8: repeat L times; 9: X t , Y t \u2190 random batch from D train ; 10: L t \u2190 L(M (X t ; \u03b8 t\u22121 ), Y t ); 11: g 1...t \u2190 [g 1...t\u22121 , \u2207 \u03b8 t\u22121 L t ]; 12: \u2206\u03b8 t \u2190 f opt (g 1 , ..., g t ); 13: \u03b8 t \u2190 \u03b8 t\u22121 \u2212 \u03c3(\u03d5 s\u22121 ) \u2299 \u2206\u03b8 t ; 14: t \u2190 t + 1; 15: end; 16: X, Y \u2190 D test ; 17: L test \u2190 L(M (X; \u03b8 t ), Y ); 18: \u03d5 s \u2190 Update(\u03d5 s\u22121 , \u2207 \u03d5 s\u22121 L test ); 19: s \u2190 s + 1; 20: end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "The function f opt is defined by the optimization algorithm and its hyper-parameters. For example, a typical gradient descent algorithm uses f opt = \u03b1g t , where \u03b1 is the learning rate. A standard optimization algorithm updates the model parameters by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\theta_t = \\theta_{t-1} - f_{\\mathrm{opt}}(g_1, \\ldots, g_t)", "eq_num": "(2)" } ], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "Our meta-optimizer is different in that we perform a gated update using parametric update rates \u03bb, computed as \u03bb = \u03c3(\u03d5), where \u03d5 denotes the meta-parameters of the meta-optimizer f \u03d5 . The sigmoid function ensures that the update rates lie within the range [0, 1]. Different from Andrychowicz et al. (2016) in which the optimizer parameters are shared across all coordi-", "cite_spans": [ { "start": 280, "end": 306, "text": "Andrychowicz et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "Figure 1: Computational graph for the forward pass of the meta-optimizer. Each batch (X t , Y t ) is from the training data D train , and (X test , Y test ) denotes the entire test set. 
The meta-learner is comprised of a base optimizer that takes the history and current step gradients as inputs and suggests an update \u2206\u03b8 t , and the meta parameters that control the layer-wise update rates \u03bb for the learner model \u03b8. The dashed arrows indicate that we do not back-propagate the gradients through that step when updating the meta-parameters. nates of the model, our meta-optimizer learns different update rates for different model layers. This is based on the findings that different layers of the BERT encoder capture different linguistic information, with syntactic features in middle layers and semantic information in higher layers (Jawahar et al., 2019) . And thus, different layers may generalize differently across languages. Figure 1 illustrates the computational graph for the forward pass when training the meta-optimizer. Note that as the losses L t and gradients \u2207 \u03b8 t\u22121 L t are dependent on the parameters of the metaoptimizer, computing the gradients along the dashed edges would normally require taking second derivatives, which is computationally expensive. Following Andrychowicz et al. 2016, we drop the gradients along the dashed edges and only compute gradients along the solid edges.", "cite_spans": [ { "start": 836, "end": 858, "text": "(Jawahar et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 933, "end": 941, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Meta-Optimizer", "sec_num": "2.1" }, { "text": "A good meta-optimizer will, given the training data in the source languages and the training objective, suggest an update rule for the learner model so that it performs well on the target language. Thus, we would like the training condition to match that of the test time. However, in zero-shot transfer we assume no access to the target language data, so we need to simulate the test scenario using only the training data on the source languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Training", "sec_num": "2.2" }, { "text": "As shown in Algorithm 1, at each episode in the outer loop, we randomly choose a test language k to construct the test data D test = D k and use the remaining data as the training data D train .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Training", "sec_num": "2.2" }, { "text": "Then, we re-initialize the parameters of the learner model and start the training simulation. At each training step, we first use the base optimizer f opt to compute the update vector \u2206\u03b8 t based on the current and history gradients g 1...t . We then perform the gated update using the meta-optimizer \u03d5 s\u22121 with Eq. (1). The resulting model \u03b8 t can be viewed as the output of a forward pass of the meta-optimizer. After every L iterations of model update, we compute the gradient of the loss on the test data D test with respect to the old meta parameters \u03d5 s\u22121 and make an update to the meta parameters. 
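To make the procedure concrete, here is a minimal PyTorch-style sketch of the inner and outer loops on a toy two-layer learner with synthetic stand-ins for the source languages; it is a simplified illustration rather than the actual mBERT/GluonNLP implementation. Plain SGD stands in for the Adam base optimizer, the helper names f_opt, make_language, and init_params are illustrative, and the update vectors are detached so that, as described above, only the multiplications by sigmoid(phi) carry gradient back to the meta-parameters (the dropped dashed edges).

```python
import torch

torch.manual_seed(0)

# Toy learner: two "layers" kept in a plain dict so that theta_t can remain a
# differentiable function of the meta-parameters phi.
def init_params():
    return {"layer1": torch.randn(4, 8) * 0.1, "layer2": torch.randn(8, 3) * 0.1}

def forward(params, x):
    return torch.tanh(x @ params["layer1"]) @ params["layer2"]

def f_opt(g, lr=0.1):            # base optimizer: plain SGD update vector
    return lr * g                # (the paper uses Adam statistics here)

def make_language(n=64):         # synthetic stand-in for one source language
    return torch.randn(n, 4), torch.randint(0, 3, (n,))

loss_fn = torch.nn.functional.cross_entropy
languages = [make_language() for _ in range(3)]          # K source "languages"

# Meta-parameters: one update-rate logit per layer; lambda = sigmoid(phi).
phi = {name: torch.zeros(1, requires_grad=True) for name in init_params()}
meta_opt = torch.optim.Adam(phi.values(), lr=0.05)

for episode in range(20):                                # outer loop
    k = episode % len(languages)                         # held-out "surprise" language
    train_langs = [d for i, d in enumerate(languages) if i != k]
    params = init_params()                               # re-initialise the learner

    for step in range(15):                               # inner loop (L steps)
        x, y = train_langs[step % len(train_langs)]
        detached = {n: p.detach().requires_grad_(True) for n, p in params.items()}
        loss = loss_fn(forward(detached, x), y)
        grads = torch.autograd.grad(loss, list(detached.values()))
        # Gated update of Eq. (1): the update vector is detached, so only the
        # per-layer factors sigmoid(phi) carry gradient to the meta-parameters.
        params = {n: params[n] - torch.sigmoid(phi[n]) * f_opt(g).detach()
                  for (n, _), g in zip(detached.items(), grads)}

    x_test, y_test = languages[k]                        # simulated zero-shot test
    meta_loss = loss_fn(forward(params, x_test), y_test)
    meta_opt.zero_grad()
    meta_loss.backward()                                 # d meta_loss / d phi
    meta_opt.step()
```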
Our meta-learning algorithm is different from X-MAML (Nooralahzadeh et al., 2020) in that 1) X-MAML is designed mainly for few-shot transfer while our algorithm is designated for zero-shot transfer, and 2) our algorithm uses much less meta-parameters than X-MAML as it only requires training the update rate for each layer while in X-MAML we meta-learn the initial parameters of the entire model.", "cite_spans": [ { "start": 657, "end": 685, "text": "(Nooralahzadeh et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Meta-Training", "sec_num": "2.2" }, { "text": "We evaluate our meta-learning approach on natural language inference. Natural Language Inference (NLI) can be cast into a sequence pair classification problem where, given a premise and a hypothesis sentence, the model needs to predict whether the premise entails the hypothesis, contradicts it, or neither (neutral). We use the Multi-Genre Natural Language Inference Corpus , which consists of 433k English sentence pairs labeled with textual entailment information, and the XNLI dataset (Conneau Table 1 : Accuracy of our approach compared with baselines on the XNLI dataset (averaged over five runs). We compare our approach (Meta-Optimizer) with our fine-tuning baseline with one or two auxiliary languages, the fine-tuning results in Devlin et al. (2019) , the highest scores (with a selected subset of layers fixed during finetuning) in Wu and Dredze (2019) , the best zero-shot results using X-MAML (Nooralahzadeh et al., 2020) with one auxiliary language. We boldface the highest scores within each auxiliary language setting. et al., 2018), which has 2.5k development and 5k test sentence pairs in 15 languages including English (en), French (fr), Spanish (es), German (de), Greek (el), Bulgarian (bg), Russian (ru), Turkish (tr), Arabic (ar), Vietnamese (vi), Thai (th), Chinese (zh), Hindi (hi), Swahili (sw), and Urdu (ur). We use this dataset to evaluate the effectiveness of our meta-learning algorithm when transferring from English and one or more low-resource auxiliary languages to the target language.", "cite_spans": [ { "start": 739, "end": 759, "text": "Devlin et al. (2019)", "ref_id": "BIBREF5" }, { "start": 843, "end": 863, "text": "Wu and Dredze (2019)", "ref_id": "BIBREF40" }, { "start": 906, "end": 934, "text": "(Nooralahzadeh et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 498, "end": 505, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Our model is based on the multilingual BERT (mBERT) (Devlin et al., 2019) implemented in GluonNLP (Guo et al., 2020) . As in previous work (Devlin et al., 2019; Wu and Dredze, 2019) , we tokenize the input sentences using WordPiece, concatenate them, feed the sequence to BERT, and use the hidden representation of the first token ([CLS]) for classification. The final output is computed by applying a linear projection and a softmax layer to the hidden representation. We use a dropout rate of 0.1 on the final encoder layer and fix the embedding layer during fine-tuning. Following Nooralahzadeh et al. (2020), we fine-tune mBERT by 1) fine-tune mBERT on the English data for one epoch to get initial model parameters, and 2) continue fine-tuning the model on the other source languages for two epochs. We compare using the standard optimizer (fine-tuning baseline) and our meta-optimizer for Step 2. 
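For reference, the sequence-pair classification setup described above can be sketched as follows, using the Hugging Face Transformers API as a stand-in for the GluonNLP implementation; the premise/hypothesis pair is a made-up example, the classification head is untrained, and only the embedding layer is frozen here, mirroring the fixed embeddings during fine-tuning.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # entailment / neutral / contradiction

# Keep the embedding layer fixed during fine-tuning, as in the setup above.
for p in model.bert.embeddings.parameters():
    p.requires_grad = False

# Premise and hypothesis are WordPiece-tokenized and concatenated; the pooled
# [CLS]-based representation feeds a linear classifier over the three classes.
batch = tokenizer(["A man is playing a guitar."],       # premise (made-up example)
                  ["Someone is making music."],         # hypothesis
                  return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)
```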
We use Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2 \u00d7 10 \u22125 , \u03b2 1 = 0.9, and \u03b2 2 = 0.999 as the standard optimizer and base optimizer in our meta-optimizer. To train our meta-optimizer, we use Adam with a learning rate of 0.05 for N = 10 epochs with L = 15 training batches per iteration (Algorithm 1). Different from Nooralahzadeh et al. (2020) who select the auxiliary languages for each target language that lead to the best transfer results, we simulate a more realistic scenario where only a limited set of auxiliary languages is available. We choose two distant auxiliary languages -Greek (Hellenic branch of the Indo-European language family) and Urdu (Indo-Aryan branch of the Indo-European language family)and evaluate the transfer performance on the other languages.", "cite_spans": [ { "start": 52, "end": 73, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 98, "end": 116, "text": "(Guo et al., 2020)", "ref_id": "BIBREF9" }, { "start": 139, "end": 160, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 161, "end": 181, "text": "Wu and Dredze, 2019)", "ref_id": "BIBREF40" }, { "start": 1239, "end": 1266, "text": "Nooralahzadeh et al. (2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model and Training Configurations", "sec_num": "3.1" }, { "text": "As shown in Table 1 , we compare our metalearning approach with the fine-tuning baseline and the zero-shot transfer results reported in prior work that uses mBERT. Our approach outperforms the fine-tuning methods in Devlin et al. (2019) by 1.6-8.5%. Compared with the best fine-tuning method in Wu and Dredze (2019) which freezes a selected subset of mBERT layers during fine-tuning, our approach achieves +0.4% higher accuracy on average. We compare our approach with a strong fine-tuning baseline which achieves competitive accuracy scores to the best X-MAML results (Nooralahzadeh et al., 2020) using a single auxiliary language, even though we limit our choice of the auxiliary language to Greek and Urdu, while Nooralahzadeh et al. Table 2 : Ablation results on the XNLI dataset using Greek and Urdu as the auxiliary languages (averaged over five runs). Results show that ablating the layer-wise update rate or cross-lingual meta-training degrades accuracy on all target languages. baseline on 10 out of 14 languages and by +0.2% accuracy on average.", "cite_spans": [ { "start": 295, "end": 315, "text": "Wu and Dredze (2019)", "ref_id": "BIBREF40" }, { "start": 569, "end": 597, "text": "(Nooralahzadeh et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": null }, { "start": 737, "end": 744, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Main Results", "sec_num": "3.2" }, { "text": "Our approach brings larger gains when using two auxiliary languages -it outperforms the finetuning baseline on all languages and improves the average accuracy by +0.6%. This suggests that our meta-learning approach is more effective when transferring from multiple source languages. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "3.2" }, { "text": "Our approach is different from Andrychowicz et al. (2016) in that 1) it adopts layer-wise update rates while the meta-parameters are shared across all model parameters in Andrychowicz et al. (2016) , and 2) it trains the meta-parameters in a cross-lingual setting while Andrychowicz et al. 
(2016) is designated to few-shot learning. We conduct ablation experiments on XNLI using Greek and Urdu as the auxiliary languages to understand how they contribute to the model performance.", "cite_spans": [ { "start": 31, "end": 57, "text": "Andrychowicz et al. (2016)", "ref_id": "BIBREF0" }, { "start": 171, "end": 197, "text": "Andrychowicz et al. (2016)", "ref_id": "BIBREF0" }, { "start": 270, "end": 296, "text": "Andrychowicz et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.3" }, { "text": "We compare our approach with its variant that replaces the layer-wise update rate with one update rate for all layers. Table 2 shows that our approach significantly outperforms this variant on all target languages with an average margin of 2.0%. This suggests that layer-wise update rate contributes greatly to the effectiveness of our approach.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Impact of Layer-Wise Update Rate", "sec_num": null }, { "text": "We measure the impact of cross-lingual meta-training by replacing the cross-lingual meta-training in our approach with a joint training of the layer-wise update rate and model parameters. As shown in Table 2 , ablating the cross-lingual meta-training 1 Using two auxiliary languages improves over one auxiliary language the most on lower-resource languages in mBERT pre-training (such as Turkish and Hindi), but does not bring gains or even hurts on high-resource languages (such as French and German). This is consistent with the findings in prior work that the choice of the auxiliary languages is crucial in cross-lingual transfer . We leave further investigation on its impact on our meta-learning approach for future work. degrades accuracy significantly on all target languages by 1.4% on average, which shows that our cross-lingual meta-training strategy is beneficial.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 207, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Impact of Cross-Lingual Meta-Training", "sec_num": null }, { "text": "The idea of cross-lingual transfer is to use the annotated data in the source languages to improve the task performance on the target language with minimal or even zero target labeled data (aka zeroshot). There is a large body of work on using external cross-lingual resources such as bilingual word dictionaries (Prettenhofer and Stein, 2010; Schuster et al., 2019b; Liu et al., 2020a) , MT systems (Wan, 2009) , or parallel corpora (Eriguchi et al., 2018; Singla et al., 2018; Conneau et al., 2018) to bridge the gap between the source and target languages. Recent advances in unsupervised cross-lingual representations have paved the road for transfer learning without crosslingual resources (Yang et al., 2017; Schuster et al., 2019a) . Our work builds on Mulcaire et al. (2019) ; Lample and Conneau (2019) ; Pires et al. (2019) who show that language models trained on monolingual text from multiple languages provide powerful multilingual representations that generalize across languages. Recent work has shown that more advanced techniques such as freezing the model's bottom layers (Wu and Dredze, 2019) or continual learning (Liu et al., 2020b) can further boost the cross-lingual performance on downstream tasks. 
In this paper, we explore meta-learning to softly select the layers to freeze during fine-tuning.", "cite_spans": [ { "start": 313, "end": 343, "text": "(Prettenhofer and Stein, 2010;", "ref_id": "BIBREF23" }, { "start": 344, "end": 367, "text": "Schuster et al., 2019b;", "ref_id": "BIBREF30" }, { "start": 368, "end": 386, "text": "Liu et al., 2020a)", "ref_id": "BIBREF17" }, { "start": 400, "end": 411, "text": "(Wan, 2009)", "ref_id": "BIBREF34" }, { "start": 434, "end": 457, "text": "(Eriguchi et al., 2018;", "ref_id": "BIBREF6" }, { "start": 458, "end": 478, "text": "Singla et al., 2018;", "ref_id": "BIBREF31" }, { "start": 479, "end": 500, "text": "Conneau et al., 2018)", "ref_id": "BIBREF4" }, { "start": 695, "end": 714, "text": "(Yang et al., 2017;", "ref_id": "BIBREF41" }, { "start": 715, "end": 738, "text": "Schuster et al., 2019a)", "ref_id": "BIBREF29" }, { "start": 760, "end": 782, "text": "Mulcaire et al. (2019)", "ref_id": "BIBREF20" }, { "start": 785, "end": 810, "text": "Lample and Conneau (2019)", "ref_id": "BIBREF14" }, { "start": 813, "end": 832, "text": "Pires et al. (2019)", "ref_id": "BIBREF22" }, { "start": 1090, "end": 1111, "text": "(Wu and Dredze, 2019)", "ref_id": "BIBREF40" }, { "start": 1134, "end": 1153, "text": "(Liu et al., 2020b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Transfer Learning", "sec_num": "4.1" }, { "text": "A typical meta-learning algorithm consists of two loops of training: 1) an inner loop where the learner model is trained, and 2) an outer loop where, given a meta-objective, we optimize a set of meta-parameters which controls aspects of the learning process in the inner loop. The goal is to find the optimal meta-parameters such that the inner loop performs well on the metaobjective. Existing meta-learning approaches differ in the choice of meta-parameters to be optimized and the meta-objective. 
Depending on the choice of meta-parameters, existing work can be divided into four categories: (a) neural architecture search (Stanley and Miikkulainen, 2002; Zoph and Le, 2016; Baker et al., 2016; Real et al., 2017; Zoph et al., 2018) ; (b) metric-based (Koch et al., 2015; Vinyals et al., 2016) ; (c) modelagnostic (MAML) (Finn et al., 2017; Ravi and Larochelle, 2016) ; (d) model-based (learning update rules) (Schmidhuber, 1987; Hochreiter et al., 2001; Maclaurin et al., 2015; Li and Malik, 2017) .", "cite_spans": [ { "start": 626, "end": 658, "text": "(Stanley and Miikkulainen, 2002;", "ref_id": "BIBREF32" }, { "start": 659, "end": 677, "text": "Zoph and Le, 2016;", "ref_id": "BIBREF43" }, { "start": 678, "end": 697, "text": "Baker et al., 2016;", "ref_id": "BIBREF1" }, { "start": 698, "end": 716, "text": "Real et al., 2017;", "ref_id": "BIBREF25" }, { "start": 717, "end": 735, "text": "Zoph et al., 2018)", "ref_id": "BIBREF44" }, { "start": 755, "end": 774, "text": "(Koch et al., 2015;", "ref_id": "BIBREF13" }, { "start": 775, "end": 796, "text": "Vinyals et al., 2016)", "ref_id": "BIBREF33" }, { "start": 824, "end": 843, "text": "(Finn et al., 2017;", "ref_id": "BIBREF7" }, { "start": 844, "end": 870, "text": "Ravi and Larochelle, 2016)", "ref_id": "BIBREF24" }, { "start": 913, "end": 932, "text": "(Schmidhuber, 1987;", "ref_id": "BIBREF26" }, { "start": 933, "end": 957, "text": "Hochreiter et al., 2001;", "ref_id": "BIBREF10" }, { "start": 958, "end": 981, "text": "Maclaurin et al., 2015;", "ref_id": "BIBREF19" }, { "start": 982, "end": 1001, "text": "Li and Malik, 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Meta Learning", "sec_num": "4.2" }, { "text": "In this paper, we focus on model-based metalearning for zero-shot cross-lingual transfer. Early work introduces a type of networks that can update their own weights (Schmidhuber, 1987 (Schmidhuber, , 1992 (Schmidhuber, , 1993 . More recently, Andrychowicz et al. (2016) propose to model gradient-based update rules using an RNN and optimize it with gradient descent. However, as Wichrowska et al. (2017) point out, the RNN-based meta-optimizers fail to make progress when run for large numbers of steps. They address the issue by incorporating features motivated by the standard optimizers into the metaoptimizer. We instead base our meta-optimizer on a standard optmizer like Adam so that it generalizes better to large-scale training.", "cite_spans": [ { "start": 165, "end": 183, "text": "(Schmidhuber, 1987", "ref_id": "BIBREF26" }, { "start": 184, "end": 204, "text": "(Schmidhuber, , 1992", "ref_id": "BIBREF27" }, { "start": 205, "end": 225, "text": "(Schmidhuber, , 1993", "ref_id": "BIBREF28" }, { "start": 243, "end": 269, "text": "Andrychowicz et al. (2016)", "ref_id": "BIBREF0" }, { "start": 379, "end": 403, "text": "Wichrowska et al. 
(2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Meta Learning", "sec_num": "4.2" }, { "text": "Meta-learning has been previously applied to few-shot cross-lingual named entity recognition , low-resource machine translation (Gu et al., 2018) , and improving cross-domain generalization for semantic parsing (Wang et al., 2021) .", "cite_spans": [ { "start": 128, "end": 145, "text": "(Gu et al., 2018)", "ref_id": "BIBREF8" }, { "start": 211, "end": 230, "text": "(Wang et al., 2021)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Meta Learning", "sec_num": "4.2" }, { "text": "For zero-shot cross-lingual transfer, Nooralahzadeh et al. (2020) introduce an optimization-based meta-learning algorithm called X-MAML which meta-learns the initial model parameters on supervised data from low-resource languages. By contrast, our meta-learning algorithm requires much less metaparameters and is thus simpler than X-MAML. Bansal et al. (2020) show that MAML combined with meta-learning for learning rates improves few-shot learning. Different from their approach which learns layer-wise learning rates only for task-specific layers specified as a hyper-parameter as part of the MAML algorithm, our approach learns layer-wise learning rates for all layers, and we show the effectiveness of our approach without being used with MAML on zero-shot cross-lingual transfer.", "cite_spans": [ { "start": 38, "end": 65, "text": "Nooralahzadeh et al. (2020)", "ref_id": "BIBREF21" }, { "start": 339, "end": 359, "text": "Bansal et al. (2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Meta Learning", "sec_num": "4.2" }, { "text": "We propose a novel meta-optimizer that learns to soft-select which layers to freeze when fine-tuning a pretrained language model (mBERT) for zeroshot cross-lingual transfer. Our meta-optimizer learns the update rate for each layer by simulating the zero-shot transfer scenario where the model fine-tuned on the source languages is tested on an unseen language. Experiments show that our approach outperforms the simple fine-tuning baseline and the X-MAML algorithm on cross-lingual natural language inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning to learn by gradient descent by gradient descent", "authors": [ { "first": "Marcin", "middle": [], "last": "Andrychowicz", "suffix": "" }, { "first": "Misha", "middle": [], "last": "Denil", "suffix": "" }, { "first": "Sergio", "middle": [], "last": "G\u00f3mez", "suffix": "" }, { "first": "W", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "David", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Pfau", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Schaul", "suffix": "" }, { "first": "Nando", "middle": [], "last": "Shillingford", "suffix": "" }, { "first": "", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3981--3989", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcin Andrychowicz, Misha Denil, Sergio G\u00f3mez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. 2016. Learning to learn by gradient descent by gradient de- scent. In Advances in Neural Information Process- ing Systems, pages 3981-3989. 
Curran Associates, Inc.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Designing neural network architectures using reinforcement learning", "authors": [ { "first": "Bowen", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Otkrist", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Raskar", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.02167" ] }, "num": null, "urls": [], "raw_text": "Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. 2016. Designing neural network architec- tures using reinforcement learning. arXiv preprint arXiv:1611.02167.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to few-shot learn across diverse natural language classification tasks", "authors": [ { "first": "Trapit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Rishikesh", "middle": [], "last": "Jha", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5108--5123", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.448" ] }, "num": null, "urls": [], "raw_text": "Trapit Bansal, Rishikesh Jha, and Andrew McCallum. 2020. Learning to few-shot learn across diverse natural language classification tasks. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 5108-5123, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adversarial deep averaging networks for cross-lingual sentiment classification", "authors": [ { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "557--570", "other_ids": { "DOI": [ "10.1162/tacl_a_00039" ] }, "num": null, "urls": [], "raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. 
Transactions of the Association for Compu- tational Linguistics, 6:557-570.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Zero-shot cross-lingual classification using multilingual neural machine translation", "authors": [ { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zero-shot cross-lingual classification using multilingual neural machine translation. 
CoRR, abs/1809.04686.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "authors": [ { "first": "Chelsea", "middle": [], "last": "Finn", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1126--1135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th Interna- tional Conference on Machine Learning-Volume 70, pages 1126-1135. JMLR. org.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Meta-learning for lowresource neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3622--3631", "other_ids": { "DOI": [ "10.18653/v1/D18-1398" ] }, "num": null, "urls": [], "raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for low- resource neural machine translation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622-3631, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing", "authors": [ { "first": "Jian", "middle": [], "last": "Guo", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Tong", "middle": [], "last": "He", "suffix": "" }, { "first": "Leonard", "middle": [], "last": "Lausen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haibin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xingjian", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Junyuan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Aston", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongyue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "23", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Jun- yuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, and Yi Zhu. 2020. Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing. 
Journal of Machine Learning Research, 21(23):1-7.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to learn using gradient descent", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Younger", "suffix": "" }, { "first": "", "middle": [], "last": "Peter R Conwell", "suffix": "" } ], "year": 2001, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "87--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter, A Steven Younger, and Peter R Con- well. 2001. Learning to learn using gradient descent. In International Conference on Artificial Neural Net- works, pages 87-94. Springer.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3651-3657. Association for Compu- tational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederick", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Siamese neural networks for one-shot image recognition", "authors": [ { "first": "Gregory", "middle": [], "last": "Koch", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2015, "venue": "ICML deep learning workshop", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhutdi- nov. 2015. Siamese neural networks for one-shot im- age recognition. In ICML deep learning workshop, volume 2. Lille.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. 
Advances in Neural Information Processing Systems (NeurIPS).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to optimize", "authors": [ { "first": "Ke", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Li and Jitendra Malik. 2017. Learning to optimize. In International Conference on Learning Represen- tations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Choosing transfer languages for cross-lingual learning", "authors": [ { "first": "Yu-Hsiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chian-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zirui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Rijhwani", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhisong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3125--3135", "other_ids": { "DOI": [ "10.18653/v1/P19-1301" ] }, "num": null, "urls": [], "raw_text": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems", "authors": [ { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Xu", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8433--8440", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6362" ] }, "num": null, "urls": [], "raw_text": "Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2020a. Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems. 
Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8433-8440.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Exploring fine-tuning techniques for pre-trained cross-lingual models via continual learning", "authors": [ { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zihan Liu, Genta Indra Winata, Andrea Madotto, and Pascale Fung. 2020b. Exploring fine-tuning tech- niques for pre-trained cross-lingual models via con- tinual learning.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Gradient-based hyperparameter optimization through reversible learning", "authors": [ { "first": "Dougal", "middle": [], "last": "Maclaurin", "suffix": "" }, { "first": "David", "middle": [], "last": "Duvenaud", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Adams", "suffix": "" } ], "year": 2015, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2113--2122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dougal Maclaurin, David Duvenaud, and Ryan Adams. 2015. Gradient-based hyperparameter optimization through reversible learning. In International Confer- ence on Machine Learning, pages 2113-2122.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Polyglot contextual representations improve crosslingual transfer", "authors": [ { "first": "Phoebe", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3912--3918", "other_ids": { "DOI": [ "10.18653/v1/N19-1392" ] }, "num": null, "urls": [], "raw_text": "Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912-3918. Association for Compu- tational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Zero-shot cross-lingual transfer with meta learning", "authors": [ { "first": "Farhad", "middle": [], "last": "Nooralahzadeh", "suffix": "" }, { "first": "Giannis", "middle": [], "last": "Bekoulis", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Bjerva", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4547--4562", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.368" ] }, "num": null, "urls": [], "raw_text": "Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 4547-4562, Online. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "How multilingual is multilingual bert? CoRR", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? CoRR, abs/1906.01502.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Crosslanguage text classification using structural correspondence learning", "authors": [ { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1118--1127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross- language text classification using structural corre- spondence learning. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 1118-1127. Association for Com- putational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Optimization as a model for few-shot learning", "authors": [ { "first": "Sachin", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning. In International Conference on Learning Representations.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Large-scale evolution of image classifiers", "authors": [ { "first": "Esteban", "middle": [], "last": "Real", "suffix": "" }, { "first": "Sherry", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Selle", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Saxena", "suffix": "" }, { "first": "Yutaka", "middle": [ "Leon" ], "last": "Suematsu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tan", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Kurakin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "2902--2911", "other_ids": {}, "num": null, "urls": [], "raw_text": "Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V Le, and Alexey Kurakin. 2017. Large-scale evolution of image classifiers. In Proceedings of the 34th Inter- national Conference on Machine Learning-Volume 70, pages 2902-2911. JMLR. org.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... 
hook", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Schmidhuber. 1987. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. Ph.D. thesis, Technis- che Universit\u00e4t M\u00fcnchen.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning to control fastweight memories: An alternative to dynamic recurrent networks", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1992, "venue": "Neural Computation", "volume": "4", "issue": "1", "pages": "131--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Schmidhuber. 1992. Learning to control fast- weight memories: An alternative to dynamic recur- rent networks. Neural Computation, 4(1):131-139.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A neural network that embeds its own meta-levels", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1993, "venue": "IEEE International Conference on Neural Networks", "volume": "", "issue": "", "pages": "407--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Schmidhuber. 1993. A neural network that em- beds its own meta-levels. In IEEE International Conference on Neural Networks, pages 407-412. IEEE.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Cross-lingual transfer learning for multilingual task oriented dialog", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3795--3805", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019a. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795-3805. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing", "authors": [ { "first": "Tal", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Ori", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1599--1613", "other_ids": { "DOI": [ "10.18653/v1/N19-1162" ] }, "num": null, "urls": [], "raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019b. Cross-lingual alignment of con- textual word embeddings, with applications to zero- shot dependency parsing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 1599-1613. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A multi-task approach to learning multilingual representations", "authors": [ { "first": "Karan", "middle": [], "last": "Singla", "suffix": "" }, { "first": "Dogan", "middle": [], "last": "Can", "suffix": "" }, { "first": "Shrikanth", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "214--220", "other_ids": { "DOI": [ "10.18653/v1/P18-2035" ] }, "num": null, "urls": [], "raw_text": "Karan Singla, Dogan Can, and Shrikanth Narayanan. 2018. A multi-task approach to learning multilin- gual representations. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 214- 220. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Evolving neural networks through augmenting topologies", "authors": [ { "first": "O", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Risto", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "", "middle": [], "last": "Miikkulainen", "suffix": "" } ], "year": 2002, "venue": "Evolutionary computation", "volume": "10", "issue": "2", "pages": "99--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth O Stanley and Risto Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99- 127.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Matching networks for one shot learning", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Blundell", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lillicrap", "suffix": "" }, { "first": "Daan", "middle": [], "last": "Wierstra", "suffix": "" } ], "year": 2016, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3630--3638", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural informa- tion processing systems, pages 3630-3638.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Co-training for cross-lingual sentiment classification", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "235--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual senti- ment classification. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235-243. 
Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Meta-learning for domain generalization in semantic parsing", "authors": [ { "first": "Bailin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "366--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bailin Wang, Mirella Lapata, and Ivan Titov. 2021. Meta-learning for domain generalization in seman- tic parsing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 366-379, Online. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Learned optimizers that scale and generalize", "authors": [ { "first": "Sergio", "middle": [ "G\u00f3mez" ], "last": "Matthew W Hoffman", "suffix": "" }, { "first": "Misha", "middle": [], "last": "Colmenarejo", "suffix": "" }, { "first": "Nando", "middle": [], "last": "Denil", "suffix": "" }, { "first": "Jascha", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "", "middle": [], "last": "Sohl-Dickstein", "suffix": "" } ], "year": 2017, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "3751--3760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew W Hoffman, Sergio G\u00f3mez Colmenarejo, Misha Denil, Nando Freitas, and Jascha Sohl- Dickstein. 2017. Learned optimizers that scale and generalize. In International Conference on Machine Learning, pages 3751-3760.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Enhanced meta-learning for cross-lingual named entity recognition with minimal resources", "authors": [ { "first": "Qianhui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zijia", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Guoxin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Chen", "suffix": "" }, { "first": "F", "middle": [], "last": "B\u00f6rje", "suffix": "" }, { "first": "Biqing", "middle": [], "last": "Karlsson", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.06161" ] }, "num": null, "urls": [], "raw_text": "Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, B\u00f6rje F Karlsson, Biqing Huang, and Chin-Yew Lin. 2019. Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. arXiv preprint arXiv:1911.06161.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transfer learning for sequence tagging with hierarchical recurrent networks", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks. In Inter- national Conference on Learning Representations.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Multilingual seq2seq training with similarity loss for cross-lingual document classification", "authors": [ { "first": "Katherine", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The Third Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "175--179", "other_ids": { "DOI": [ "10.18653/v1/W18-3023" ] }, "num": null, "urls": [], "raw_text": "Katherine Yu, Haoran Li, and Barlas Oguz. 2018. 
Multilingual seq2seq training with similarity loss for cross-lingual document classification. In Pro- ceedings of The Third Workshop on Representation Learning for NLP, pages 175-179. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Neural architecture search with reinforcement learning", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01578" ] }, "num": null, "urls": [], "raw_text": "Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning transferable architectures for scalable image recognition", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Vasudevan", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "8697--8710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. 2018. Learning transferable architec- tures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pat- tern recognition, pages 8697-8710.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Dredze (2019) 74.60 74.90 72.00 66.10 58.60 69.80 49.40 55.70 62.00 71.90 70.40 69.80 67.90 61.20 66.02 Nooralahzadeh et al. (2020) 74.42 75.07 71.83 66.05 61.51 69.45 49.76 55.39 61.20 71.82 71.11 70.19 67.95 62.20 66.28 75.77 72.57 67.22 61.08 70.23 51.70 51.03 64.26 71.61 72.52 69.97 69.16 55.40 66.28 Meta-Optimizer 75.78 75.87 73.15 67.34 62.00 70.47 51.22 50.54 63.96 72.06 72.32 70.20 69.34 55.88 66.44 Aux. language: el + ur Fine-tuning baseline 74.87 75.78 72.27 66.96 62.73 70.16 50.21 48.20 63.86 71.61 71.97 70.24 69.64 56.04 66.04 Meta-Optimizer 75.53 75.93 72.68 67.04 63.33 70.88 51.51 49.89 64.33 72.06 72.36 70.32 70.38 56.29 66.61", "content": "
fr es de ar ur bg sw th tr vi zh ru el hi avg
Devlin et al. (2019): - 74.30 70.50 62.10 58.35 - - - - - 63.80 - - - -
Wu and Dredze (2019): 74.60 74.90 72.00 66.10 58.60 69.80 49.40 55.70 62.00 71.90 70.40 69.80 67.90 61.20 66.02
Nooralahzadeh et al. (2020): 74.42 75.07 71.83 66.05 61.51 69.45 49.76 55.39 61.20 71.82 71.11 70.19 67.95 62.20 66.28
Aux. language: el el el el el el el el el el ur ur ur ur
Fine-tuning baseline: 75.42 75.77 72.57 67.22 61.08 70.23 51.70 51.03 64.26 71.61 72.52 69.97 69.16 55.40 66.28
Meta-Optimizer: 75.78 75.87 73.15 67.34 62.00 70.47 51.22 50.54 63.96 72.06 72.32 70.20 69.34 55.88 66.44
Aux. language: el + ur
Fine-tuning baseline: 74.87 75.78 72.27 66.96 62.73 70.16 50.21 48.20 63.86 71.61 71.97 70.24 69.64 56.04 66.04
Meta-Optimizer: 75.53 75.93 72.68 67.04 63.33 70.88 51.51 49.89 64.33 72.06 72.36 70.32 70.38 56.29 66.61
", "type_str": "table", "html": null, "num": null }, "TABREF2": { "text": "Optim 75.53 75.93 72.68 67.04 63.33 70.88 51.51 49.89 64.33 72.06 72.36 70.32 70.38 56.29 66.61 No layer-wise update 73.45 73.90 70.73 65.19 60.31 69.10 50.87 46.47 62.74 70.42 70.24 68.85 68.17 53.50 64.57 No cross-lingual meta-train 73.66 74.84 71.54 66.15 61.16 69.33 50.89 48.43 63.16 71.57 70.53 69.14 67.93 55.07 65.24", "content": "
fr es de ar ur bg sw th tr vi zh ru el hi avg
Meta-Optimizer: 75.53 75.93 72.68 67.04 63.33 70.88 51.51 49.89 64.33 72.06 72.36 70.32 70.38 56.29 66.61
No layer-wise update: 73.45 73.90 70.73 65.19 60.31 69.10 50.87 46.47 62.74 70.42 70.24 68.85 68.17 53.50 64.57
No cross-lingual meta-train: 73.66 74.84 71.54 66.15 61.16 69.33 50.89 48.43 63.16 71.57 70.53 69.14 67.93 55.07 65.24
", "type_str": "table", "html": null, "num": null } } } }