|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:48.202514Z" |
|
}, |
|
"title": "Meta-Learning for Few-Shot NMT Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Amr", |
|
"middle": [], |
|
"last": "Sharaf", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "hanyh@microsoft.com" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present META-MT, a meta-learning approach to adapt Neural Machine Translation (NMT) systems in a few-shot setting. META-MT provides a new approach to make NMT models easily adaptable to many target domains with the minimal amount of in-domain data. We frame the adaptation of NMT systems as a meta-learning problem, where we learn to adapt to new unseen domains based on simulated offline meta-training domain adaptation tasks. We evaluate the proposed metalearning strategy on ten domains with general large scale NMT systems. We show that META-MT significantly outperforms classical domain adaptation when very few indomain examples are available. Our experiments shows that META-MT can outperform classical fine-tuning by up to 2.5 BLEU points after seeing only 4, 000 translated words (300 parallel sentences).", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present META-MT, a meta-learning approach to adapt Neural Machine Translation (NMT) systems in a few-shot setting. META-MT provides a new approach to make NMT models easily adaptable to many target domains with the minimal amount of in-domain data. We frame the adaptation of NMT systems as a meta-learning problem, where we learn to adapt to new unseen domains based on simulated offline meta-training domain adaptation tasks. We evaluate the proposed metalearning strategy on ten domains with general large scale NMT systems. We show that META-MT significantly outperforms classical domain adaptation when very few indomain examples are available. Our experiments shows that META-MT can outperform classical fine-tuning by up to 2.5 BLEU points after seeing only 4, 000 translated words (300 parallel sentences).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural Machine Translation (NMT) systems (Bahdanau et al., 2016; are usually trained on large general-domain parallel corpora to achieve state-of-the-art results (Barrault et al., 2019) . Unfortunately, these generic corpora are often qualitatively different from the target domain of the translation system. Moreover, NMT models trained on one domain tend to perform poorly when translating sentences in a significantly different domain (Koehn and Knowles, 2017; Chu and Wang, 2018) . A widely used approach for adapting NMT is domain adaptation by fine-tuning (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016; Sennrich et al., 2016) , where a model is first trained on general-domain data and then adapted by continuing the training on a smaller amount of in-domain data. This approach often leads to empirical improvements in the targeted domain; however, it falls short when the amount of in-domain training data is insufficient, leading to model over-fitting and catastrophic forgetting, where adapting to a new domain leads to degradation on the general-domain (Thompson et al., 2019) . Ideally, we would like to have a model that is easily adaptable to many target domains with minimal amount of in-domain data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 64, |
|
"text": "(Bahdanau et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 185, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 463, |
|
"text": "(Koehn and Knowles, 2017;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 483, |
|
"text": "Chu and Wang, 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 587, |
|
"text": "(Luong and Manning, 2015;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 617, |
|
"text": "Freitag and Al-Onaizan, 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 640, |
|
"text": "Sennrich et al., 2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1073, |
|
"end": 1096, |
|
"text": "(Thompson et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a meta-learning approach that learns to adapt neural machine translation systems to new domains given only a small amount of training data in that domain. To achieve this, we simulate many domain adaptation tasks, on which we use a metalearning strategy to learn how to adapt. Specifically, based on these simulations, our proposed approach, META-MT (Meta-learning for Machine Translation), learns model parameters that should generalize to future (real) adaptation tasks ( \u00a73.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "At training time ( \u00a7 3.2), META-MT simulates many small-data domain adaptation tasks from a large pool of data. Using these tasks, META-MT simulates what would happen after fine-tuning the model parameters to each such task. It then uses this information to compute parameter updates that will lead to efficient adaptation during deployment. We optimize this using the Model Agnostic Meta-Learning algorithm (MAML) (Finn et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 434, |
|
"text": "(Finn et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contribution of this paper is as follows: first, we propose a new approach that enables NMT systems to effectively adapt to a new domain using few-shots learning. Second, we show what models and conditions enable meta-learning to be useful for NMT adaptation. Finally, We evaluate META-MT on ten different domains, showing the efficacy of our approach. To the best of our knowledge, this is the first work on adapting large scale NMT systems in a few-shot learning setup 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our goal for few-shot NMT adaptation is to adapt a pre-trained NMT model (e.g. trained on general domain data) to new domains (e.g. medical domain) with a small amount of training examples. surveyed several recent approaches that address the shortcomings of traditional fine-tuning when applied to domain adaptation. Our work distinguishes itself from prior work by learning to fine-tune with tiny amounts of training examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most recently, Bapna et al. (2019) proposed a simple approach for adaptation in NMT. The approach consists of injecting task specific adapter layers into a pre-trained model. These adapters enable the model to adapt to new tasks as it introduces a bottleneck in the architecture that makes it easier to adapt. Our approach uses a similar model architecture, however, instead of injecting a new adapter for each task separately, META-MT uses a single adapter layer, and meta-learns a better initialization for this layer that can easily be fine-tuned to new domains with very few training examples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 34, |
|
"text": "Bapna et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similar to our goal, Michel and Neubig (2018) proposed a space efficient approach to adaptation that learns domain specific biases to the output vocabulary. This enables large-scale personalization for NMT models when small amounts of data are available for a lot of different domains. However, this approach assumes that these domains are static and known at training time, while META-MT can dynamically generalize to totally new domains, previously unseen at meta-training time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Several approaches have been proposed for lightweight adaptation of NMT systems. Vilar (2018) introduced domain specific gates to control the contribution of hidden units feeding into the next layer. However, Bapna et al. (2019) showed that this introduced a limited amount of per-domain capacity; in addition, the learned gates are not guaranteed to be easily adaptable to unseen domains. Khayrallah et al. (2017) proposed a lattice search algorithm for NMT adaptation, however, this algorithm assumes access to lattices generated from a phrase based machine translation system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 93, |
|
"text": "Vilar (2018)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "Bapna et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 414, |
|
"text": "Khayrallah et al. (2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our meta-learning strategy mirrors that of Gu et al. (2018) in the low resource translation setting, as well as Wu et al. (2019) for cross-lingual named entity recognition with minimal resources, Mi et al. (2019) for low-resource natural language generation in task-oriented dialogue systems, and Dou et al. (2019) for low-resource natural language un-derstanding tasks. To the best of our knowledge, this is the first work using meta-learning for fewshot NMT adaptation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 59, |
|
"text": "Gu et al. (2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 128, |
|
"text": "Wu et al. (2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 212, |
|
"text": "Mi et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 314, |
|
"text": "Dou et al. (2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Approach: Meta-Learning for Few-Shot NMT Adaptation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Neural Machine Translation systems are not robust to domain shifts (Chu and Wang, 2018) . It is a highly desirable characteristic of the system to be adaptive to any domain shift using weak supervision without degrading the performance on the general domain. This dynamic adaptation task can be viewed naturally as a learning-to-learn (metalearning) problem: how can we train a global model that is capable of using its previous experience in adaptation to learn to adapt faster to unseen domains? A particularly simple and effective strategy for adaptation is fine-tuning: the global model is adapted by training on in-domain data. One would hope to improve on such a strategy by decreasing the amount of required in-domain data. META-MT takes into account information from previous adaptation tasks, and aims at learning how to update the global model parameters, so that the resulting learned parameters after meta-learning can be adapted faster and better to previously unseen domains via a weakly supervised fine-tuning approach on a tiny amount of data. Our goal in this paper is to learn how to adapt a neural machine translation system from experience. The training procedure for META-MT uses offline simulated adaptation problems to learn model parameters \u03b8 which can adapt faster to previously unseen domains. In this section, we describe META-MT, first by describing how it operates at test time when applied to a new domain adaptation task ( \u00a73.1), and then by describing how to train it using offline simulated adaptation tasks ( \u00a73.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 87, |
|
"text": "(Chu and Wang, 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "At test time, META-MT adapts a pre-trained NMT model to a new given domain. The adaptation is done using a small in-domain data that we call the support set and then tested on the new domain using a query set. More formally, the model parametrized by \u03b8 takes as input a new adaptation task T. This is illustrated in Figure 1 : the adaptation task T consists of a standard domain adaptation problem: T includes a support set T support used for training the fine-tuned model, and a query set T query used for evaluation. We're particularly Figure 1 : Example meta-learning set-up for few-shot NMT adaptation. The top represents the meta-training set D meta-train , where inside each box is a separate dataset T that consists of the support set T support (left side of dashed line) and the query set T query (right side of dashed line). In this illustration, we are considering the books and TED talks domains for meta-training. The meta-test set D meta-test is defined in the same way, but with a different set of domains not present in any of the datasets in D meta-train : Medical and News.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 324, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 546, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 1: Sample a domain (e.g. Books)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 2: Fine-tune on Support Set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 3: Compute Meta-Loss on Query Set \u2713 < l a t e x i t s h a 1 _ b a s e 6 4 = \" J J [Bottom-B] Differences between meta-learning and Traditional fine-tuning. Wide lines represent high resource domains (Medical, News), while thin lines represent low-resource domains (TED, Books). Traditional fine-tuning may favor high-resource domains over low-resource ones while meta-learning aims at learning a good initialization that can be adapted to any domain with minimal training samples. 2 interested in the distribution of tasks P(T ) where the support and query sets are very small. In our experiments, we restrict the size of these sets to only few hundred parallel training sentences. We consider support sets of sizes: 4k to 64k source words (i.e. \u223c 200 to 3200 sentences). At test time, the meta-learned model \u03b8 interacts with the world as follows (Figure 2 ):", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 487, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 852, |
|
"end": 861, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
|
|
{ |
|
"text": "Step 1: The world draws an adaptation task T from a distribution P, T \u223c P(T ); 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 2: The model adapts from \u03b8 to \u03b8 by fine-tuning on the task's support set T support ; 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 3: The fine-tuned model \u03b8 is evaluated on the query set T query . Intuitively, meta-training should optimize for a representation \u03b8 that can quickly adapt to new tasks, rather than a single individual task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Time Behavior of META-MT", |
|
"sec_num": "3.1" |
|
}, |
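The test-time behavior above maps directly onto a short adaptation loop. The following is a minimal, illustrative sketch only (assuming a generic PyTorch model, a loss function, and iterables of (input, target) batches; none of these names come from the authors' fairseq code): a copy of the meta-learned parameters \u03b8 is fine-tuned on the tiny support set (Step 2) and the resulting \u03b8' is evaluated on the query set (Step 3).

```python
# Minimal sketch of the test-time adaptation loop (Section 3.1). Assumptions:
# `model` is a PyTorch nn.Module, `loss_fn(output, target)` returns a scalar loss,
# and `support` / `query` are iterables of (x, y) batches. Placeholder names only.
import copy

import torch


def adapt_and_evaluate(model, loss_fn, support, query, lr=1e-5, epochs=1):
    adapted = copy.deepcopy(model)                 # keep the meta-learned theta intact
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    adapted.train()
    for _ in range(epochs):                        # Step 2: adapt theta -> theta' on T_support
        for x, y in support:
            optimizer.zero_grad()
            loss_fn(adapted(x), y).backward()
            optimizer.step()
    adapted.eval()                                 # Step 3: evaluate theta' on T_query
    with torch.no_grad():
        losses = [loss_fn(adapted(x), y).item() for x, y in query]
    return adapted, sum(losses) / max(len(losses), 1)
```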
|
{ |
|
"text": "The meta-learning challenge is: how do we learn a good representation \u03b8? We initialize \u03b8 by training an NMT model on global-domain data. In addition, we assume access to meta-training tasks on which we can train \u03b8; these tasks must include support/query pairs, where we can simulate a domain adaptation setting by fine-tuning on the support set and then evaluating on the query. This is a weak assumption: in practice, we use purely simulated data as this meta-training data. We construct this data as follows: given a parallel corpus for the desired language pair, we randomly sample training example to form a few-shot adaptation task. We build tasks of 4k, 8k, 16k, 32k, and 64k training words. Under this formulation, it's natural to think of \u03b8's learning process as a process to learn a good parameter initialization for fast adaptation, for which a class of learning algorithms to consider are Model-agnostic Meta-Learning (MAML) and it's first order approximations like First-order MAML (FoMAML) (Finn et al., 2017) and Reptile (Nichol et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1003, |
|
"end": 1022, |
|
"text": "(Finn et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1035, |
|
"end": 1056, |
|
"text": "(Nichol et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training META-MT via Meta-learning", |
|
"sec_num": "3.2" |
|
}, |
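As a concrete illustration of the simulated meta-training data described above, the sketch below assembles one adaptation task by sampling sentence pairs from a parallel corpus until a budget of source-side words is reached, once for the support set and once for the query set. The function name, the data layout, and the 4k/32k defaults (taken from \u00a74.3) are assumptions for illustration, not the authors' actual task-construction code.

```python
# Rough sketch of building one simulated adaptation task (Section 3.2): sample
# sentence pairs until the source-word budget is reached. Illustrative only.
import random


def sample_task(sentence_pairs, support_words=4000, query_words=32000, seed=0):
    rng = random.Random(seed)
    pool = list(sentence_pairs)
    rng.shuffle(pool)

    def take(word_budget):
        taken, count = [], 0
        while pool and count < word_budget:
            src, tgt = pool.pop()
            taken.append((src, tgt))
            count += len(src.split())   # budget counted in source words, not sentences
        return taken

    return {"support": take(support_words), "query": take(query_words)}
```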
|
{ |
|
"text": "Informally, at training time, META-MT will treat one of these simulated domains T as if it were a domain adaptation dataset. At each time step, it will update the current model representation from \u03b8 to \u03b8 by fine-tuning on T support and then ask: what is the meta-learning loss estimate given \u03b8, \u03b8 , and T query ? The model representation \u03b8 is then updated to minimize this meta-learning loss. More formally, in meta-learning, we assume access to a Algorithm 1 META-MT (trained model f \u03b8 , metatraining dataset D meta-train , learning rates \u03b1, \u03b2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training META-MT via Meta-learning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1: while not done do", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training META-MT via Meta-learning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sample a batch of domain adaptation tasks T \u223c D meta-train 3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for all T i \u2208 T do 4: Evaluate \u2207 \u03b8 L T i (f \u03b8 ) on the support set T i,support 5:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Compute adapted parameters with gradient descent:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03b8 i = \u03b8 \u2212 \u03b1\u2207 \u03b8 L T i (f \u03b8 ) 6: end for 7: Update \u03b8 \u2190 \u03b8 \u2212 \u03b2\u2207 \u03b8 \u03a3 T i \u2208T L T i (f \u03b8 i ) on the query set T i,query \u2200T i \u2208 T 8: end while", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "distribution P over different tasks T . From this, we can sample a meta-training dataset D meta-train . The meta-learning problem is then to estimate \u03b8 to minimize the meta-learning loss on D meta-train .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The meta-learning algorithm we use is MAML by Finn et al. (2017) , and is instantiated for the meta-learning to adapt NMT systems in Alg 1. MAML considers a model represented by a parametrized function f \u03b8 with parameters \u03b8. When adapting to a new task T, the model's parameters \u03b8 become \u03b8 . The updated vector \u03b8 is computed using one or more gradient descent updates on the task T. For example, when using one gradient update:", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 64, |
|
"text": "Finn et al. (2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 = \u03b8 \u2212 \u03b1\u2207 \u03b8 L T (f \u03b8 )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b1 is the learning rate and L is the task loss function. The model parameters are trained by optimizing for the performance of f \u03b8 with respect to \u03b8 across tasks sampled from P(T ). More concretely, the meta-learning objective is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min \u03b8 \u03a3 T\u223cP(T ) L T (f \u03b8 ), L T (f \u03b8 ) = L T (f \u03b8\u2212\u03b1\u2207 \u03b8 L T (f \u03b8 ) )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
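For readability, the two extracted equations above can be restated with the adapted parameters written explicitly as \u03b8' (the prime mark is degraded in the extracted text); the LaTeX below is only a cleaner rendering of the same standard MAML update and objective, not a new formulation.

```latex
% Inner update (Eq. 1) and meta-objective (Eq. 2), with theta' made explicit.
\begin{align*}
\theta' &= \theta - \alpha \nabla_{\theta} L_{T}(f_{\theta})
  && \text{(Eq. 1)} \\
\min_{\theta} \; \sum_{T \sim P(\mathcal{T})} L_{T}(f_{\theta'})
  &= \min_{\theta} \; \sum_{T \sim P(\mathcal{T})}
     L_{T}\bigl(f_{\theta - \alpha \nabla_{\theta} L_{T}(f_{\theta})}\bigr)
  && \text{(Eq. 2)}
\end{align*}
```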
|
{ |
|
"text": "Following the MAML template, META-MT operates in an iterative fashion, starting with a trained NMT model f \u03b8 and improving it through optimizing the meta-learning loss from Eq 2 on the metatraining dataset D meta-train . Over learning rounds, META-MT selects a random batch of training tasks from the meta-training dataset and simulates the test-time behavior on these tasks (Line 2). The core functionality is to observe how the current model representation \u03b8 is adapted for each task in the batch, and to use this information to improve \u03b8 by optimizing the meta-learning loss (Line 7). META-MT achieves this by simulating a domain adaptation setting by fine-tuning on the task specific support set (Line 4). This yields, for each task T i , a new adapted set of parameters \u03b8 i (Line 5). These parameters are evaluated on the query sets for each task T i,query , and a meta-gradient w.r.t the original model representation \u03b8 is used to improve \u03b8 (Line 7).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
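Putting Algorithm 1 and the walkthrough above together, here is a minimal first-order sketch of one meta-training step (the paper ignores second-order meta-gradients, i.e. it follows FoMAML). The names `model`, `loss_fn`, and the task layout are placeholder assumptions; the actual system meta-trains a fairseq Transformer with adapter layers.

```python
# First-order sketch of Algorithm 1 (FoMAML-style). Assumptions: `model` is a
# PyTorch nn.Module, `loss_fn(output, target)` returns a scalar loss, and each
# task is a dict with "support" and "query" iterables of (x, y) batches.
import copy

import torch


def meta_train_step(model, loss_fn, task_batch, alpha=1e-5, beta=1e-5):
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for task in task_batch:                              # line 3: for all T_i in the batch
        adapted = copy.deepcopy(model)                   # start the inner loop from theta
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
        adapted.train()
        for x, y in task["support"]:                     # lines 4-5: theta_i = theta - alpha * grad
            inner_opt.zero_grad()
            loss_fn(adapted(x), y).backward()
            inner_opt.step()
        adapted.eval()                                   # query loss in evaluation mode (cf. Section 4.2)
        query_loss = sum(loss_fn(adapted(x), y) for x, y in task["query"])
        grads = torch.autograd.grad(query_loss, list(adapted.parameters()))
        meta_grads = [m + g for m, g in zip(meta_grads, grads)]  # first-order: reuse grads w.r.t. theta_i
    with torch.no_grad():                                # line 7: theta <- theta - beta * meta-gradient
        for p, g in zip(model.parameters(), meta_grads):
            p -= beta * g
    return model
```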
|
{ |
|
"text": "Our pre-trained baseline NMT model f \u03b8 is a sequence to sequence model that parametrizes the conditional probability of the source and target sequences as an encoder-decoder architecture using self-attention Transformer models (Vaswani et al., 2017) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 249, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We seek to answer the following questions experimentally:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. How does META-MT compare empirically to alternative adaptation strategies? ( \u00a74.4) 2. What is the impact of the support and the query sizes used for meta-learning? ( \u00a74.5) 3. What is the effect of the NMT model architecture on performance? ( \u00a74.6) In our experiments, we train META-MT only on simulated data, where we simulate a few-shot domain adaptation setting as described in \u00a73.2. This is possible because META-MT learns model parameters \u03b8 that can generalize to future adaptation tasks by optimizing the meta-objective function in Eq 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We train and evaluate META-MT on a collection of ten different datasets. All of these datasets are collected from the Open Parallel Corpus (OPUS) (Tiedemann, 2012) , and are publicly available online. The datasets cover a variety of diverse domains that should enable us to evaluate our proposed approach. The datasets we consider are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 163, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. Bible: a parallel corpus created from translations of the Bible (Christodouloupoulos and Steedman, 2015 Duh (2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 106, |
|
"text": "(Christodouloupoulos and Steedman, 2015", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 117, |
|
"text": "Duh (2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We simulate the few-shot NMT adaptation scenarios by randomly sub-sampling these datasets with different sizes. We sample different data sets with sizes ranging from 4k to 64k training words (i.e. \u223c 200 to 3200 sentences). This data is the only data used for any given domain across all adaptation setups. It is worth noting that different datasets have a wide range of sentence lengths. We opted to sample using number of words instead of number of sentences to avoid introducing any advantages for domains with longer sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our experiments aim to determine how META-MT compares to standard domain adaptation strategies. In particular, we compare to: (A) No fine-tuning: The non-adaptive baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation Approaches", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here, the pre-trained model is evaluated on the meta-test and meta-validation datasets (see Figure 1 ) without any kind of adaptation. (B) Fine-tuning on a single task: The domain adaptation by fine-tuning baseline. For a single adaptation task T, this approach performs domain adaptation by fine-tuning only on the support set T support . For instance, if |T support | = K words, we fine tune the pretrained model f \u03b8 only on K training words to show how classical fine-tuning behaves in few-shot settings. (C) Fine-tuning on meta-train: Similar to (B), however, this approach fine-tunes on much more data. This approach fine-tunes on all the support sets used for meta-training: {T support , \u2200T \u2208 D meta-train }. The goal of this baseline is to ensure that META-MT doesn't get an additional advantage by training on more data during the meta-training phase. For instance, if we are using N adaptation tasks each with a support set of size K, this will be using N * K words for classical fine-tuning. This establishes a fair baseline to evaluate how classical fine-tuning would perform using the same data albeit in a different configuration. (D) META-MT: Our proposed approach from Alg 1. In this setup, we use N adaptation tasks T in D meta-train , each with a support set of size K words to perform Meta-Learning. Second order meta-gradients are ignored to decrease the computational complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 100, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation Approaches", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We use the Transformer Model (Vaswani et al., 2017) implemented in fairseq . In this work, we use a transformer model with a modified architecture that can facilitate better adaptation. We use \"Adapter Modules\" (Houlsby et al., 2019; Bapna et al., 2019) which introduce an extra layer after each transformer block that can enable more efficient tuning of the models. Following Bapna et al. 2019, we augment the Transformer model with feed-forward adapters: simple single hiddenlayer feed-forward networks, with a nonlinear activation function between the two projection layers. These adapter modules are introduced after the Layer Norm and before the residual connection layers. It is composed of a down projection layer, followed by a ReLU, followed by an up projection layer. This bottle-necked module with fewer parameters is very attractive for domain adaptation as we will discuss in \u00a74.6. These modules are introduced after every layer in both the encoder and the decoder. All experiments are based on the \"base\" transformer model with six blocks in the encoder and decoder networks. Each encoder block contains a self-attention layer, followed by two fully connected feed-forward layers with a ReLU nonlinearity between them. Each decoder block contains self-attention, followed by encoder-decoder attention, followed by two fully connected feedforward layers with a ReLU non-linearity between them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 51, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 233, |
|
"text": "(Houlsby et al., 2019;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 253, |
|
"text": "Bapna et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture and Implementation Details", |
|
"sec_num": "4.2" |
|
}, |
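A minimal sketch of the adapter module described above (layer norm, down projection, ReLU, up projection, and a residual connection), using the dimensions reported in this section (model size 512, adapter hidden size 32). This is an illustrative PyTorch module under those assumptions, not the authors' fairseq implementation.

```python
# Illustrative feed-forward adapter block: down projection -> ReLU -> up projection,
# applied after a layer norm, with a residual connection around the adapter.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, d_model=512, bottleneck=32):
        super().__init__()
        self.layer_norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)   # down projection
        self.up = nn.Linear(bottleneck, d_model)     # up projection

    def forward(self, x):
        z = self.layer_norm(x)
        z = self.up(torch.relu(self.down(z)))
        return x + z                                 # residual connection


# During adaptation only adapter parameters would be tuned, e.g.:
# adapter_params = [p for n, p in model.named_parameters() if "adapter" in n]
```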
|
{ |
|
"text": "We use word representations of size 512, feedforward layers with inner dimensions 2, 048, multi-head attention with 8 attention heads, and adapter modules with 32 hidden units. We apply dropout (Srivastava et al., 2014) with probability 0.1. The model is optimized with Adam (Kingma and Ba, 2014) using \u03b2 1 = 0.9, \u03b2 2 = 0.98, and a learning rate \u03b1 = 7e \u2212 4. We use the same learning rate schedule as Vaswani et al. (2017) where the learning rate increases linearly for 4, 000 steps to 7e \u2212 4, after which it is decayed proportionally to the inverse square root of the number of steps. For meta-learning, we used a meta-batch size of 1. We optimized the meta-learning loss function using Adam with a learning rate of 1e \u2212 5 and default parameters for \u03b2 1 , \u03b2 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 219, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 421, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture and Implementation Details", |
|
"sec_num": "4.2" |
|
}, |
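The warm-up plus inverse-square-root schedule described above can be written as a small function. The peak rate 7e-4 and the 4,000 warm-up steps follow the text; the function name and the exact form of the warm-up are a common convention assumed here rather than quoted from the authors' configuration.

```python
# Sketch of the learning-rate schedule described above: linear warm-up to the
# peak rate, then decay proportional to the inverse square root of the step.
def inverse_sqrt_lr(step, peak_lr=7e-4, warmup_steps=4000):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps                 # linear warm-up
    return peak_lr * (warmup_steps ** 0.5) / (step ** 0.5)   # ~ 1/sqrt(step) decay


# Example: inverse_sqrt_lr(4000) == 7e-4, inverse_sqrt_lr(16000) == 3.5e-4.
```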
|
{ |
|
"text": "All data is pre-processed with joint sentencepieces (Kudo and Richardson, 2018) of size 40k. In all cases, the baseline machine translation system is a neural English to German (En-De) transformer model (Vaswani et al., 2017) , initially trained on 5.2M sentences filtered from the the standard parallel data (Europarl-v9, CommonCrawl, NewsCommentary-v14, wikititles-v1 and Rapid-2019) from the WMT-19 shared task (Barrault et al., 2019) . We use WMT14 and WMT19 newtests as validation and test sets respectively. The baseline system scores 37.99 BLEU on the full WMT19 newstest which compares favorably with strong single system baselines at WMT19 shared task Junczys-Dowmunt, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 79, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 225, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 385, |
|
"text": "(Europarl-v9, CommonCrawl, NewsCommentary-v14, wikititles-v1 and Rapid-2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 437, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 661, |
|
"end": 683, |
|
"text": "Junczys-Dowmunt, 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture and Implementation Details", |
|
"sec_num": "4.2" |
|
}, |
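As an illustration of the preprocessing step above, a hypothetical sentencepiece invocation for a joint 40k vocabulary might look as follows; the file names are placeholders and the exact training options used by the authors are not specified in the paper.

```python
# Hypothetical preprocessing sketch: train a joint English-German sentencepiece
# model with a 40k vocabulary, as described above. File names are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.joint.en-de.txt",   # concatenation of both sides of the parallel data
    model_prefix="ende.joint.40k",
    vocab_size=40000,
)

sp = spm.SentencePieceProcessor(model_file="ende.joint.40k.model")
print(sp.encode("This is a test sentence.", out_type=str))
```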
|
{ |
|
"text": "For meta-learning, we use the MAML algorithm as described in Alg 1. To minimize memory consumption, we ignored the second order gradient terms from Eq 2. We implement the First-Order MAML approximation (FoMAML) as described in Finn et al. (2017) . We also experimented with the first-order meta-learning algorithm Reptile (Nichol et al., 2018) . We found that since Reptile doesn't directly account for the performance on the task query set, along with the large model capacity of the Transformer architecture, it can easily over-fit to the support set, thus achieving almost perfect performance on the support, while the performance on the query degrades significantly. Even after performing early stopping on the query set, Reptile didn't account correctly for learning rate scheduling, and finding suitable learning rates for optimizing the meta-learner and the task adaptation was difficult. In our experiments, we found it essential to match the behavior of the dropout layers when computing the meta-objective function in Eq 2 with the test-time behavior described in \u00a7 3.1. In particular, the model has to run in \"evaluation mode\" when computing the loss on the task query set to match the test-time behavior during evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 245, |
|
"text": "Finn et al. (2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 343, |
|
"text": "(Nichol et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture and Implementation Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our experimental setup operates as follows: using a collection of simulated machine translation adaptation tasks, we train an NMT model f \u03b8 using META-MT (Alg 1). This model learns to adapt faster to new domains, by fine-tuning on a tiny support set. Once f \u03b8 is learned and fixed, we follow the test-time behavior described in \u00a73.1. We evaluate META-MT on the collection of ten different domains described in \u00a74. We simulate domain adaptation problems by sub-sampling tasks with 4k English tokens for the support set, and 32k tokens for the query set. We study the effect of varying the size of the query and the support sets in \u00a74.5. We use N = 160 tasks for the meta-training dataset D meta-train , where we sample 16 tasks from each of the ten different domains. We use a meta-validation D meta-test and meta-test D meta-test sets of size 10, where we sample a single task from each domain. We report the mean and standard-deviation over three different meta-test sets. For evaluation, we use BLEU (Papineni et al., 2002) . We measure case-sensitive de-tokenized BLEU with SacreBLEU (Post, 2018) . All results use beam search with a beam of size five.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1002, |
|
"end": 1025, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1087, |
|
"end": 1099, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Tasks and Metrics", |
|
"sec_num": "4.3" |
|
}, |
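A small usage sketch of the evaluation setup above with SacreBLEU; the hypothesis and reference lists are placeholders, and decoding (beam size five) is assumed to have happened elsewhere.

```python
# Evaluation sketch: case-sensitive, detokenized BLEU with SacreBLEU.
import sacrebleu

hypotheses = ["Der schnelle braune Fuchs sprang."]        # system outputs (placeholder)
references = [["Der schnelle braune Fuchs sprang."]]      # one reference stream (placeholder)

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```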
|
{ |
|
"text": "Here, we describe our experimental results comparing the several algorithms from \u00a74.1. The overall results are shown in Table 1 and Figure 3 . Table 1 shows the BLEU scores on the meta-test dataset for all the different approaches across the ten domains. From these results we draw the following conclusions:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 127, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 140, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 150, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "1. The pre-trained En-De NMT model performs well on general domains. For instance, BLEU for WMT-News 3 , GlobalVoices, and ECB is at least 26 points. However, performance degrades on closed domains like Books, Quran, and Bible. [Column A]. 2. Domain adaptation by fine-tuning on a single task doesn't improve the BLEU score. This is expected, since we're only fine-tuning on 4k tokens (i.e. \u223c 200 \u2212 300 sentences) [A vs B] . 3. Significant leverage is gained by increasing the amount of fine-tuning data. Fine-tuning on all the available data used for meta-learning improves the BLEU score significantly across all domains. [B vs C] . To put this into perspective, this setup is tuned on all data aggregated from all tasks: 160 * 4k words which is approximately 40K sentences. 4. META-MT outperforms alternative domain adaptation approaches on all domains with negligible degradation on the baseline domain. META-MT is better than the non-adaptive baseline [A vs D] , and succeeds in learning to adapt faster given the same amount of finetuning data [B vs D, C vs D] . Both Finetuning on meta-train [C] and META-MT [D] have access to exactly the same amount of training data, and both use the same model architecture. The difference however is in the learning algorithm. META-MT uses MAML (Alg 1) to optimize the meta-objective function in Eq 2. This ensures that the learned model initialization can easily be fine-tuned to new domains with very few examples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 422, |
|
"text": "[A vs B]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 632, |
|
"text": "[B vs C]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 957, |
|
"end": 965, |
|
"text": "[A vs D]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1050, |
|
"end": 1066, |
|
"text": "[B vs D, C vs D]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "To evaluate the effectiveness of META-MT when adapting with small in-domain corpora, we further compare the performance of META-MT with classical fine-tuning on varying amounts of training data per adaptation task. In Figure 4 we plot the overall adaptation performance on the ten domains when using different data sizes for the support set. In this experiment, the only parameter that varies is the size of the task support set T support . We fix the size of the query set per task to 16k tokens, and we vary the size of the support set from 4k to 64k. To ensure that the total amount of meta-training data D meta-train is the same, we use N = 160 tasks for meta-training when the support size T support is 4k, N = 80 tasks when the support size is 8k, N = 40 tasks for support size of 16k, N = 20 tasks when the support size is 32k, and finally N = 10 meta-training tasks when the support size is 64k. This controlled setup ensures that no setting has any advantage by getting access to additional amounts of training data. We notice that for reasonably small size of the support set (4k and 8k), META-MT outperforms the classical fine-tuning baseline. However, when the data size increase (16k to 64), META-MT is outperformed by the fine-tuning baseline. This happens because for a larger support size, e.g. 64k, we only have access to 10 meta-training tasks in D meta-train , this is not enough to generalize to new unseen adaptation tasks, and META-MT over-fits to the training tasks from D meta-train , however, the performance degrades and doesn't generalize to D meta-test . To understand more directly the impact of the query set on META-MT's performance, in Figure 5 we show META-MT and fine-tuning adaptation performance on the meta-test set D meta-test on varying sizes for the query set. We fix the support size to 4k and vary the query set size from 16k to 64k. We observe that the edge of improvement of META-MT over fine-tuning adaptation increases as we increase the size of the query set. For instance, when we use a query set of size 64k, META-MT outperforms fine-tuning by 1.93 BLEU points, while the improvement is only 0.95 points when the query set is 16k.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 226, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1668, |
|
"end": 1676, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of Adaptation Task Size", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In our experiments, we used the Adapter Transformer architecture (Bapna et al., 2019) . This architecture fixes the parameters of the pre-trained Transformer model, and only adapts the feed-forward adapter module. Our model included \u223c 66M parameters, out of which we adapt only 405K (only 0.6%). We found this adaptation strategy to be more robust to meta-learning. To better understand this, Figure 6 shows the BLEU scores for the two different model architectures. We find that while the meta-learned Transformer architecture (Right) slightly outperforms the Adapter model (Left), it suffers from catastrophic forgetting: META-MT-0 shows the zero-shot BLEU score before fine-tuning the task on the support set. For the Transformer model, the score drops to zero and then quickly improves once the parameters are tuned on the support set. This is undesirable, since it hurts the performance of the pre-trained model, even on the general domain data. We notice that the Adapter model doesn't suffer from this problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 85, |
|
"text": "(Bapna et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 401, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of Model Architecture", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "We presented META-MT, a meta-learning approach for few shot NMT adaptation. We formulated few shot NMT adaptation as a meta-learning problem, and presented a strategy that learns better parameters for NMT systems that can be easily adapted to new domains. We validated the superiority of META-MT to alternative domain adaptation approaches. META-MT outperforms alternative strategies in most domains using only a small fraction of fine-tuning data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "colorblind friendly palette was selected fromNeuwirth and Brewer (2014).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is subset of the full test set to match the sizes of query sets from other domains", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank members of the Microsoft Machine Translation Team as well as members of the Computational Linguistics and Information Processing (CLIP) lab for reviewing earlier versions of this work. Part of this work was conducted when the first author was on a summer internship with Microsoft Research. This material is based upon work supported by the National Science Foundation under Grant No. 1618193. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A. No fine-tuning B. Fine-tuning on task C. Neural Machine Translation (NMT) is a sequence to sequence model that parametrizes the conditional probability of the source and target sequences as a neural network following encoderdecoder architecture (Bahdanau et al., 2016; . Initially, the encodedecoder architecture was represented by recurrent networks. Currently, this has been replaced by selfattention models aka Transformer models (Vaswani et al., 2017) ). Currently, Transformer models achieves state-of-the-art performance in NMT as well as many other language modeling tasks. While transformers models are performing quite well on large scale NMT tasks, the models have huge number of parameters and require large amount of training data which is really prohibitive for adaptation tasks especially in few-shot setup like ours.", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 271, |
|
"text": "(Bahdanau et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 458, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Traditional domain adaptation for NMT models assumes the availability of relatively large amount of in domain data. For instances most of the related work utilizing traditional fine-tuning experiment with hundred-thousand sentences in-domain. This setup in quite prohibitive, since practically the domain can be defined by few examples. In this work we focus on few-shot adaptation scenario where we can adapt to a new domain not seen during training time using just couple of hundreds of in-domain sentences. This introduces a new challenge where the models have to be quickly responsive to adaptation as well as robust to domain shift. Since we focus on the setting in which very few in-domain data is available, this renders many traditional domain adaptation approaches inappropriate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Few Shots Domain Adaptation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Meta-learning or Learn-to-Learn is widely used for few-shot learning in many applications where a model trained for a particular task can learn another task with a few examples. A number of approaches are used in Meta-learning, namely: Model-agnostic Meta-Learning (MAML) and its first order approximations like First-order MAML (FoMAML) (Finn et al., 2017) and Reptile (Nichol et al., 2018 this work, we focus on using MAML to enable few-shots adaptation of NMT transformer models.B Statistics of in-domain data sets Table 2 lists the sizes of various in-domain datasets from which we sample our in-domain data to simulate the few-shot adaptation setup.", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 357, |
|
"text": "(Finn et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 390, |
|
"text": "(Nichol et al., 2018", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 518, |
|
"end": 525, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.3 Meta-Learning", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.0473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2016. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Simple, scalable adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naveen", |
|
"middle": [], |
|
"last": "Arivazhagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.08478" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankur Bapna, Naveen Arivazhagan, and Orhan Firat. 2019. Simple, scalable adaptation for neural ma- chine translation. arXiv preprint arXiv:1909.08478.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Findings of the 2019 conference on machine translation (wmt19)", |
|
"authors": [ |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shervin", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, et al. 2019. Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A massively parallel corpus: the bible in 100 languages. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodouloupoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "375--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: the bible in 100 languages. Language resources and evaluation, 49(2):375-395.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A comprehensive empirical comparison of domain adaptation methods for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chenhui", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raj", |
|
"middle": [], |
|
"last": "Dabre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Information Processing", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "529--538", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2018. A comprehensive empirical comparison of domain adaptation methods for neural machine translation. Journal of Information Processing, 26:529-538.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A survey of domain adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chenhui", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1304--1319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of do- main adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304-1319, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Investigating meta-learning algorithms for low-resource natural language understanding tasks", |
|
"authors": [ |
|
{ |
|
"first": "Zi-Yi", |
|
"middle": [], |
|
"last": "Dou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keyi", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1192--1197", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1112" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1192- 1197, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The multitarget ted talks task", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Duh. 2018. The multitarget ted talks task. http://www.cs.jhu.edu/~kevinduh/a/ multitarget-tedtalks/.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Model-agnostic meta-learning for fast adaptation of deep networks", |
|
"authors": [ |
|
{ |
|
"first": "Chelsea", |
|
"middle": [], |
|
"last": "Finn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pieter", |
|
"middle": [], |
|
"last": "Abbeel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Levine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1126--1135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th In- ternational Conference on Machine Learning, vol- ume 70 of Proceedings of Machine Learning Re- search, pages 1126-1135, International Convention Centre, Sydney, Australia. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Fast domain adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. ArXiv, abs/1612.06897.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Meta-learning for lowresource neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [ |
|
"O", |
|
"K" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3622--3631", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1398" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for low- resource neural machine translation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622-3631, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Parameter-efficient transfer learning for NLP", |
|
"authors": [ |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Houlsby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Giurgiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislaw", |
|
"middle": [], |
|
"last": "Jastrzkebski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruna", |
|
"middle": [], |
|
"last": "Morrone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "De Laroussilhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Gesmundo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Attariyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Gelly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzkeb- ski, Bruna Morrone, Quentin de Laroussilhe, An- drea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. CoRR, abs/1902.00751.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt. 2019. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. In WMT.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Neural lattice search for domain adaptation in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "20--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huda Khayrallah, Gaurav Kumar, Kevin Duh, Matt Post, and Philipp Koehn. 2017. Neural lattice search for domain adaptation in machine translation. In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 2: Short Papers), pages 20-25, Taipei, Taiwan. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Six challenges for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Knowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Neural Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--39", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3204" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Stanford neural machine translation systems for spoken language domain", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spo- ken language domain. In International Workshop on Spoken Language Translation.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Meta-learning for low-resource natural language generation in task-oriented dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiyong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boi", |
|
"middle": [], |
|
"last": "Faltings", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI'19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3151--3157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. Meta-learning for low-resource natural lan- guage generation in task-oriented dialogue systems. In Proceedings of the 28th International Joint Con- ference on Artificial Intelligence, IJCAI'19, pages 3151-3157. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Extreme adaptation for personalized neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "312--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Michel and Graham Neubig. 2018. Extreme adap- tation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 312-318, Melbourne, Aus- tralia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Colorbrewer palettes. R package version", |
|
"authors": [ |
|
{ |
|
"first": "Erich", |
|
"middle": [], |
|
"last": "Neuwirth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R Color", |
|
"middle": [], |
|
"last": "Brewer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--1", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erich Neuwirth and R Color Brewer. 2014. Color- brewer palettes. R package version, pages 1-1.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Facebook FAIR's WMT19 news translation task submission", |
|
"authors": [ |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyra", |
|
"middle": [], |
|
"last": "Yee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5333" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Proceedings of the Fourth Conference on Machine Translation", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "314--319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "In Proceedings of the Fourth Conference on Ma- chine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "On first-order meta-learning algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Nichol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Achiam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Schulman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.02999" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A call for clarity in reporting bleu scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.08771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "86--96", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Advances in Neural Information Processing Systems", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Overcoming catastrophic forgetting during domain adaptation of neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Thompson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Gwinnup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2062--2068", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1209" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 2062-2068, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Parallel data, tools and interfaces in opus", |
|
"authors": [ |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jorg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Smaller alignment models for better translations: Unsupervised word alignment with the l0-norm", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Liang Huang, and David Chiang. 2012. Smaller alignment models for better trans- lations: Unsupervised word alignment with the l0- norm. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 311-319, Jeju Island, Korea. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Learning hidden unit contribution for adapting neural machine translation models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "500--505", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2080" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vilar. 2018. Learning hidden unit contribu- tion for adapting neural machine translation mod- els. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 2 (Short Papers), pages 500-505, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Enhanced meta-learning for cross-lingual named entity recognition with minimal resources", |
|
"authors": [ |
|
{ |
|
"first": "Qianhui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zijia", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoxin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00f6rje", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Karlsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Biqing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.06161" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, B\u00f6rje F Karlsson, Biqing Huang, and Chin-Yew Lin. 2019. Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. arXiv preprint arXiv:1911.06161.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Findings of the wmt 2017 biomedical translation shared task", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Jimeno Yepes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "N\u00e9v\u00e9ol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariana", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Verspoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Boyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Grozea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madeleine", |
|
"middle": [], |
|
"last": "Kittner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Lichtblau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "234--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonio Jimeno Yepes, Aur\u00e9lie N\u00e9v\u00e9ol, Mariana Neves, Karin Verspoor, Ondrej Bojar, Arthur Boyer, Cristian Grozea, Barry Haddow, Madeleine Kittner, Yvonne Lichtblau, et al. 2017. Findings of the wmt 2017 biomedical translation shared task. In Proceed- ings of the Second Conference on Machine Transla- tion, pages 234-247.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "t e x i t s h a 1 _ b a s e 6 4 = \"" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "H O i / P u f M x b C 0 4 + c 0 j + w P n 8 A a Y 7 j y 8 = < / l a t e x i t >" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "[Top-A] a training step of META-MT." |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "BLEU scores on meta-test split for different approaches evaluated across ten domains." |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "META-MT and fine-tuning adaptation performance on the meta-test set D meta-test vs different support set sizes per adaptation task." |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "BLEU scores reported for two different model architectures: Adapter Transformer (Bapna et al., 2019) (Left), and the Transformer base architecture (Vaswani et al., 2012) (Right)." |
|
} |
|
} |
|
} |
|
} |