{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:24:44.246805Z" }, "title": "Efficient Machine Translation Domain Adaptation", "authors": [ { "first": "Pedro", "middle": [ "Henrique" ], "last": "Martins", "suffix": "", "affiliation": { "laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)", "institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon", "location": { "country": "Portugal" } }, "email": "pedrohenriqueamartins@tecnico.ulisboa.pt" }, { "first": "Zita", "middle": [], "last": "Marinho", "suffix": "", "affiliation": { "laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)", "institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon", "location": { "country": "Portugal" } }, "email": "zmarinho@google.com" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "", "affiliation": { "laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)", "institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon", "location": { "country": "Portugal" } }, "email": "andre.t.martins@tecnico.ulisboa.pt." } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrievalaugmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 1", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrievalaugmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Modern neural machine translation models are mostly parametric (Bahdanau et al., 2015; Vaswani et al., 2017) , meaning that, for each input, the output depends only on a fixed number of model parameters, obtained using some training data, hopefully in the same domain. However, when running machine translation systems in the wild, it is often the case that the model is given input sentences or documents from domains that were not part of the training data, which frequently leads to subpar translations. One solution is training or fine-tuning the entire model or just part of it for each domain, but this can be expensive and may lead to catastrophic forgetting (Saunders, 2021) .", "cite_spans": [ { "start": 63, "end": 86, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 87, "end": 108, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF16" }, { "start": 666, "end": 682, "text": "(Saunders, 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, an approach that has achieved promising results is augmenting parametric models with a retrieval component, leading to semi-parametric models (Gu et al., 2018; Zhang et al., 2018; Bapna and Firat, 2019; Khandelwal et al., 2021; Meng et al., 2021; Jiang et al., 2021) . These models construct a datastore based on a set of source / target sentences or word-level contexts (translation memories) and retrieve similar examples from this datastore, using this information in the generation process. This allows having only one model that can be used for every domain. However, the model's runtime increases with the size of the domain's datastore and searching for related examples on large datastores can be computationally very expensive: for example, when retrieving 64 neighbors from the datastore, the model may become two orders of magnitude slower (Khandelwal et al., 2021) . Due to this, some recent works have proposed methods that aim to make this process more efficient. Meng et al. (2021) proposed constructing a different datastore for each source sentence, by first searching for the neighbors of the source tokens; and He et al. (2021) proposed several techniques -datastore pruning, adaptive retrieval, dimension reduction -for nearest neighbor language modeling.", "cite_spans": [ { "start": 152, "end": 169, "text": "(Gu et al., 2018;", "ref_id": "BIBREF3" }, { "start": 170, "end": 189, "text": "Zhang et al., 2018;", "ref_id": "BIBREF17" }, { "start": 190, "end": 212, "text": "Bapna and Firat, 2019;", "ref_id": "BIBREF2" }, { "start": 213, "end": 237, "text": "Khandelwal et al., 2021;", "ref_id": "BIBREF7" }, { "start": 238, "end": 256, "text": "Meng et al., 2021;", "ref_id": "BIBREF9" }, { "start": 257, "end": 276, "text": "Jiang et al., 2021)", "ref_id": "BIBREF5" }, { "start": 861, "end": 886, "text": "(Khandelwal et al., 2021)", "ref_id": "BIBREF7" }, { "start": 988, "end": 1006, "text": "Meng et al. (2021)", "ref_id": "BIBREF9" }, { "start": 1140, "end": 1156, "text": "He et al. (2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we adapt several methods proposed by He et al. (2021) to machine translation, and we further propose a new approach that increases the model's efficiency: the use of a retrieval distributions cache. 
By caching the kNN probability distributions, together with the corresponding decoder representations, for the previous steps of the generation of the current translation(s), the model can quickly retrieve the retrieval distribution when the current representation is similar to a cached one, instead of having to search for neighbors in the datastore at every single step.", "cite_spans": [ { "start": 52, "end": 68, "text": "He et al. (2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform a thorough analysis of the model's efficiency on a controlled setting, which shows that the combination of our proposed techniques results in a model, the efficient kNN-MT, which is approx-imately twice as fast as the vanilla kNN-MT. This comes without harming translation performance, which is, on average, more than 8 BLEU points and 5 COMET points better than the base MT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In sum, this paper presents the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We adapt the methods proposed by He et al. (2021) for efficient nearest neighbor language modeling to machine translation.", "cite_spans": [ { "start": 35, "end": 51, "text": "He et al. (2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a caching strategy to store the retrieval probability distributions, improving the translation speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We compare the efficiency and translation quality of the different methods, which show the benefits of the proposed and adapted techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When performing machine translation, the model is given a source sentence or document,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "x = [x 1 , . . . , x L ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": ", on one language, and the goal is to output a translation of the sentence in the desired language, y = [y 1 , . . . , y N ]. This is usually done using a parametric sequence-to-sequence model (Bahdanau et al., 2015; Vaswani et al., 2017) , in which the encoder receives the source sentence as input and outputs a set of hidden states. Then, at each step t, the decoder attends to these hidden states and outputs a probability distribution p NMT (y t |y \u03b1. However, we observed that this leads to results ( \u00a7A.3) similar to randomly selecting when to search the datastore. We posit that this occurs because it is difficult to predict when the model should perform retrieval, for domain adaptation (He et al., 2021) , and because in machine translation error propagation occurs more prominently than in language modeling.", "cite_spans": [ { "start": 479, "end": 496, "text": "(He et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Cache", "sec_num": "3.3" }, { "text": "Cache. 
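As a concrete illustration of the caching strategy described in this section, the sketch below keeps the decoder representations and retrieval distributions of previous steps and reuses a cached distribution whenever the current representation is close enough to a stored one. It is only a sketch under stated assumptions: the names (RetrievalCache, retrieval_distribution, search_datastore), the use of squared Euclidean distance, and the single threshold tau are illustrative choices, not the exact implementation used in the paper.

import numpy as np

class RetrievalCache:
    # Cache of kNN retrieval distributions, keyed by decoder representations.
    def __init__(self, tau):
        # tau: squared-distance threshold below which a cached entry is reused
        self.tau = tau
        self.keys = []      # cached decoder representations (1-D numpy arrays)
        self.values = []    # cached retrieval distributions over the vocabulary

    def lookup(self, query):
        # Return a cached distribution whose key is close enough to the query, else None.
        if not self.keys:
            return None
        dists = ((np.stack(self.keys) - query) ** 2).sum(axis=1)
        i = int(dists.argmin())
        return self.values[i] if dists[i] < self.tau else None

    def add(self, query, p_knn):
        # Store the representation and its retrieval distribution for later reuse.
        self.keys.append(query)
        self.values.append(p_knn)

def retrieval_distribution(query, search_datastore, cache):
    # Reuse a cached distribution when possible; otherwise run the expensive
    # nearest-neighbor search and cache its result for later decoding steps.
    p_knn = cache.lookup(query)
    if p_knn is None:
        p_knn = search_datastore(query)
        cache.add(query, p_knn)
    return p_knn

In a full decoder, search_datastore would wrap the nearest-neighbor search over the in-domain datastore (a FAISS-style index in typical kNN-MT implementations), and the cache would be kept only for the generation of the current translation(s), as described above.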
Because it is common to have similar contexts along the generation process, especially when using beam search, the model often retrieves similar neighbors at different steps, which is inefficient. To avoid repeating searches on the datastore for similar context vectors, f(x, y<t), the model keeps a cache of the retrieval distributions of the previous steps, together with the corresponding decoder representations, and reuses a cached distribution whenever the current representation is sufficiently close to a cached one, instead of searching the datastore again.

[Table: BLEU and COMET scores of the baselines and of the efficient kNN-MT variants on the four domains.]

                         BLEU                                          COMET
                         Medical   Law     IT      Koran   Average    Medical   Law     IT      Koran    Average
Baselines
  Base MT                40.01     45.64   37.91   16.35   34.98      .4702     .5770   .3942   -.0097   .3579
  kNN-MT                 54.47     61.23   45.96   21.02   45.67      .5760     .6781   .5163   .0480    .4546
  Fast kNN-MT            52.90     55.71   44.73   21.29   43.66      .5293     .5944   .5445   -.0455   .4057
Efficient kNN-MT
  cache                  53.30     59.12   45.39   20.67   44.62      .5625     .6403   .5085   .0346    .4365
  PCA + cache            53.58     58.57   46.29   20.67   44.78      .5457     .6379   .5311   -.0021   .4282
  PCA + pruning          53.23     60.38   45.16   20.52   44.82      .5658     .6639   .4981   .0298    .4394
  PCA + cache + pruning  51.90     57.82   44.44   20.11   43.57      .5513     .6260   .4909   -.0052   .4158

[Figure: Generation speed (log scale) as a function of batch size for base kNN-MT, fast kNN-MT, and efficient kNN-MT on the Medical, Law, IT, and Koran domains.]
" }, "TABREF1": { "num": null, "type_str": "table", "text": ".", "html": null, "content": "
         Medical   Law     IT      Koran   Average
kNN-MT   54.47     61.23   45.96   21.02   45.67
k = 1    53.60     60.23   45.03   20.81   44.92
k = 2    52.95     59.40   44.76   20.12   44.31
k = 5    51.63     57.55   44.07   19.29   43.14
" }, "TABREF2": { "num": null, "type_str": "table", "text": "BLEU scores on the multi-domains test set when performing datastore pruning with several values of k, for a batch size of 8.", "html": null, "content": "
         Medical     Law          IT          Koran
kNN-MT   6,903,141   19,061,382   3,602,862   524,374
k = 1    4,780,514   13,130,326   2,641,709   400,385
k = 2
" }, "TABREF3": { "num": null, "type_str": "table", "text": "Sizes of the in-domain datastores when performing datastore pruning with several values of k, for a batch size of 8.", "html": null, "content": "" }, "TABREF5": { "num": null, "type_str": "table", "text": "BLEU scores on the multi-domains test set when performing PCA with different dimension, d, values, for a batch size of 8.", "html": null, "content": "
" }, "TABREF7": { "num": null, "type_str": "table", "text": "BLEU scores on the multi-domains test set when performing adaptive retrieval for different values of the threshold \u03b1, for a batch size of 8.", "html": null, "content": "
            Medical   Law    IT     Koran
kNN-MT      100%      100%   100%   100%
\u03b1 = 0.25   78%       73%    38%    4%
\u03b1 = 0.5    96%       96%    60%    61%
\u03b1 = 0.75   98%       99%    92%    91%
" }, "TABREF8": { "num": null, "type_str": "table", "text": "", "html": null, "content": "
          Medical   Law     IT      Koran   Average
kNN-MT    54.47     61.23   45.96   21.02   45.67
\u03c4 = 2    54.47     61.23   45.93   20.98   45.65
\u03c4 = 4    54.17     61.10   46.07   21.00   45.58
\u03c4 = 6    53.30     59.12   45.39   20.67   44.62
\u03c4 = 8    30.06     23.01   25.53   16.08   23.67
" }, "TABREF9": { "num": null, "type_str": "table", "text": "BLEU scores on the multi-domains test set when using a retrieval distributions' cache for different values of the threshold \u03c4 , for a batch size of 8.", "html": null, "content": "
          Medical   Law    IT     Koran
kNN-MT    100%      100%   100%   100%
\u03c4 = 2    59%       51%    67%    64%
\u03c4 = 4    50%       42%    57%    53%
\u03c4 = 6    43%       35%    49%    45%
\u03c4 = 8    26%       16%    29%    31%
" } } } }