{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:53:20.080394Z" }, "title": "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings", "authors": [ { "first": "Phillip", "middle": [], "last": "Keung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": {} }, "email": "keung@amazon.com" }, { "first": "Julian", "middle": [], "last": "Salazar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": {} }, "email": "" }, { "first": "Yichao", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": {} }, "email": "yichaolu@amazon.com" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": {} }, "email": "nasmith@cs.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text. We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training. We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F 1 scores over previous unsupervised methods. We then improve an XLM-based unsupervised neural MT system pre-trained on Wikipedia by supplementing it with pseudo-parallel text mined from the same corpus, boosting unsupervised translation performance by up to 3.5 BLEU on the WMT'14 French-English and WMT'16 German-English tasks and outperforming the previous stateof-the-art. Finally, we enrich the IWSLT'15 English-Vietnamese corpus with pseudoparallel Wikipedia sentence pairs, yielding a 1.2 BLEU improvement on the low-resource MT task. We demonstrate that unsupervised bitext mining is an effective way of augmenting MT datasets and complements existing techniques like initializing with pre-trained contextual embeddings. 1 By unsupervised, we mean that no cross-lingual resources like parallel text or bilingual lexicons are used. Unsupervised techniques have been used to bootstrap MT systems for low-resource languages like Khmer and Burmese (Marie et al., 2019).", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text. We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training. We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F 1 scores over previous unsupervised methods. We then improve an XLM-based unsupervised neural MT system pre-trained on Wikipedia by supplementing it with pseudo-parallel text mined from the same corpus, boosting unsupervised translation performance by up to 3.5 BLEU on the WMT'14 French-English and WMT'16 German-English tasks and outperforming the previous stateof-the-art. Finally, we enrich the IWSLT'15 English-Vietnamese corpus with pseudoparallel Wikipedia sentence pairs, yielding a 1.2 BLEU improvement on the low-resource MT task. 
We demonstrate that unsupervised bitext mining is an effective way of augmenting MT datasets and complements existing techniques like initializing with pre-trained contextual embeddings. 1 By unsupervised, we mean that no cross-lingual resources like parallel text or bilingual lexicons are used. Unsupervised techniques have been used to bootstrap MT systems for low-resource languages like Khmer and Burmese (Marie et al., 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Large corpora of parallel sentences are prerequisites for training models across a diverse set of applications, such as neural machine translation (NMT; Bahdanau et al., 2015) , paraphrase generation (Bannard and Callison-Burch, 2005) , and aligned multilingual sentence embeddings (Artetxe and Schwenk, 2019b) . Systems that extract parallel corpora typically rely on various cross-lingual resources (e.g., bilingual lexicons, parallel cor-pora), but recent work has shown that unsupervised parallel sentence mining (Hangya et al., 2018) and unsupervised NMT (Artetxe et al., 2018; Lample et al., 2018a) produce surprisingly good results. 1 Existing approaches to unsupervised parallel sentence (or bitext) mining start from bilingual word embeddings (BWEs) learned via an unsupervised, adversarial approach (Lample et al., 2018b) . Hangya et al. (2018) created sentence representations by mean-pooling BWEs over content words. To disambiguate semantically similar but non-parallel sentences, Hangya and Fraser (2019) additionally proposed parallel segment detection by searching for paired substrings with high similarity scores per word. However, using word embeddings to generate sentence embeddings ignores sentential context, which may degrade bitext retrieval performance.", "cite_spans": [ { "start": 153, "end": 175, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF3" }, { "start": 200, "end": 234, "text": "(Bannard and Callison-Burch, 2005)", "ref_id": "BIBREF4" }, { "start": 282, "end": 310, "text": "(Artetxe and Schwenk, 2019b)", "ref_id": "BIBREF2" }, { "start": 517, "end": 538, "text": "(Hangya et al., 2018)", "ref_id": "BIBREF14" }, { "start": 560, "end": 582, "text": "(Artetxe et al., 2018;", "ref_id": "BIBREF0" }, { "start": 583, "end": 604, "text": "Lample et al., 2018a)", "ref_id": "BIBREF24" }, { "start": 640, "end": 641, "text": "1", "ref_id": null }, { "start": 809, "end": 831, "text": "(Lample et al., 2018b)", "ref_id": "BIBREF25" }, { "start": 834, "end": 854, "text": "Hangya et al. (2018)", "ref_id": "BIBREF14" }, { "start": 994, "end": 1018, "text": "Hangya and Fraser (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe a new unsupervised bitext mining approach based on contextual embeddings. We create sentence embeddings by mean-pooling the outputs of multilingual BERT (mBERT; Devlin et al., 2019) , which is pre-trained on unaligned Wikipedia sentences across 104 languages. For a pair of source and target languages, we find candidate translations by using nearest-neighbor search with margin-based similarity scores between pairs of mBERT-embedded source and target sentences. We bootstrap a dataset of positive and negative sentence pairs from these initial neighborhoods of candidates, then self-train mBERT on its own outputs. 
A final retrieval step gives a corpus of pseudo-parallel sentence pairs, which we expect to be a mix of actual translations and semantically related non-translations.", "cite_spans": [ { "start": 173, "end": 193, "text": "Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply our technique on the BUCC 2017 parallel sentence mining task (Zweigenbaum et al., 2017) . We achieve state-of-the-art F 1 scores on unsupervised bitext mining, with an improvement of up to 24.5 points (absolute) on published results (Hangya and Fraser, 2019) . Other work (e.g., Libovick\u00fd et al., 2019) has shown that retrieval performance varies substantially with the layer of mBERT used to generate sentence representations; using the optimal mBERT layer yields an improvement as large as 44.9 points.", "cite_spans": [ { "start": 70, "end": 96, "text": "(Zweigenbaum et al., 2017)", "ref_id": null }, { "start": 242, "end": 267, "text": "(Hangya and Fraser, 2019)", "ref_id": "BIBREF15" }, { "start": 288, "end": 311, "text": "Libovick\u00fd et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, our pseudo-parallel text improves unsupervised NMT (UNMT) performance. We build upon the UNMT framework of Lample et al. (2018c) and XLM (Lample and Conneau, 2019) by incorporating our pseudo-parallel text (also derived from Wikipedia) at training time. This boosts performance on WMT'14 En-Fr and WMT'16 En-De by up to 3.5 BLEU over the XLM baseline, outperforming the state-of-the-art on unsupervised NMT (Song et al., 2019) .", "cite_spans": [ { "start": 120, "end": 141, "text": "Lample et al. (2018c)", "ref_id": "BIBREF26" }, { "start": 150, "end": 176, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF22" }, { "start": 420, "end": 439, "text": "(Song et al., 2019)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we demonstrate the practical value of unsupervised bitext mining in the low-resource setting. We augment the English-Vietnamese corpus (133k pairs) from the IWSLT'15 translation task (Cettolo et al., 2015) with our pseudobitext from Wikipedia (400k pairs), and observe a 1.2 BLEU increase over the best published model (Nguyen and Salazar, 2019). When we reduced the amount of parallel and monolingual Vietnamese data by a factor of ten (13.3k pairs), the model trained with pseudo-bitext performed 7 BLEU points better than a model trained on the reduced parallel text alone.", "cite_spans": [ { "start": 192, "end": 214, "text": "(Cettolo et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our aim is to create a bilingual sentence embedding space where, for each source sentence embedding, a sufficiently close nearest neighbor among the target sentence embeddings is its translation. By aligning source and target sentence embeddings in this way, we can extract sentence pairs to create new parallel corpora. Artetxe and Schwenk (2019a) construct this space by training a joint encoder-decoder MT model over multiple language pairs and using the resulting encoder to generate sentence embeddings. A marginbased similarity score is then computed between embeddings for retrieval (Section 2.2). 
However, this approach requires large parallel corpora to train the encoder-decoder model in the first place.", "cite_spans": [ { "start": 321, "end": 348, "text": "Artetxe and Schwenk (2019a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "We investigate whether contextualized sentence embeddings created with unaligned text are useful for unsupervised bitext retrieval. Previous work explored the use of multilingual sentence encoders taken from machine translation models (e.g., Artetxe and Schwenk, 2019b; Lu et al., 2018) for zero-shot cross-lingual transfer. Our work is motivated by recent success in tasks like zero-shot text classification and named entity recognition (e.g., Keung et al., 2019; Mulcaire et al., 2019) with multilingual contextual embeddings, which exhibit cross-lingual properties despite being trained without parallel sentences.", "cite_spans": [ { "start": 242, "end": 269, "text": "Artetxe and Schwenk, 2019b;", "ref_id": "BIBREF2" }, { "start": 270, "end": 286, "text": "Lu et al., 2018)", "ref_id": "BIBREF29" }, { "start": 445, "end": 464, "text": "Keung et al., 2019;", "ref_id": "BIBREF19" }, { "start": 465, "end": 487, "text": "Mulcaire et al., 2019)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "We illustrate our method in Figure 1 . We first retrieve the candidate translation pairs:", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 Each source and target language sentence is converted into an embedding vector with mBERT via mean-pooling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 Margin-based scores are computed for each sentence pair using the k nearest neighbors of the source and target sentences (Sec. 2.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 Each source sentence is paired with its nearest neighbor in the target language based on this score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 We select a threshold score that keeps some top percentage of pairs (Sec. 2.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 Rule-based filters are applied to further remove mismatched sentence pairs (Sec. 2.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "The remaining candidate pairs are used to bootstrap a dataset for self-training mBERT as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 Each candidate pair (a source sentence and its closest nearest neighbor above the threshold) is taken as a positive example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 This source sentence is also paired with its next k \u2212 1 neighbors to give hard negative examples (we compare this with random negative samples in Sec. 
3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "\u2022 We finetune mBERT to produce sentence embeddings that discriminate between positive and negative pairs (Sec. 2.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "After self-training, the finetuned mBERT model is used to generate new sentence embeddings. Parallel sentences should be closer to each other in this new embedding space, which improves retrieval performance. Figure 1: Our self-training scheme. Left: We index sentences using our two encoders. For each source sentence, we retrieve k nearest-neighbor target sentences per the margin criterion (Eq. 1), depicted here for k = 4. If the nearest neighbor is within a threshold, it is treated with the source sentence as a positive pair, and the remaining k \u2212 1 are treated with the source sentence as negative pairs. Right: We refine one of the encoders such that the cosine similarity of the two embeddings is maximized on positive pairs and minimized on negative pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "2" }, { "text": "Nearest-neighbor Search", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Embeddings and", "sec_num": "2.1" }, { "text": "We use mBERT (Devlin et al., 2019) to create sentence embeddings for both languages by mean-pooling the representations from the final layer. We use FAISS (Johnson et al., 2017) to perform exact nearest-neighbor search on the embeddings. We compare every sentence in the source language to every sentence in the target language; we do not use links between Wikipedia articles or other metadata to reduce the size of the search space.", "cite_spans": [ { "start": 13, "end": 34, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 154, "end": 176, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Embeddings and", "sec_num": "2.1" }, { "text": "In our experiments, we retrieve the k = 4 closest target sentences for each source sentence; the source language is always non-English, while the target language is always English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Embeddings and", "sec_num": "2.1" }, { "text": "We compute a margin-based similarity score between each source sentence and its k nearest target neighbors. Following Artetxe and Schwenk (2019a), we use the ratio margin score, which calibrates the cosine similarity by dividing it by the average cosine similarity of each embedding's k nearest neighbors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Margin-based Score", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\operatorname{margin}(x, y) = \\frac{\\cos(x, y)}{\\sum_{z \\in \\mathrm{NN}_k^{\\mathrm{tgt}}(x)} \\frac{\\cos(x, z)}{2k} + \\sum_{z \\in \\mathrm{NN}_k^{\\mathrm{src}}(y)} \\frac{\\cos(y, z)}{2k}}", "eq_num": "(1)" } ], "section": "Margin-based Score", "sec_num": "2.2" }, { "text": "We remove the sentence pairs with margin scores below some pre-selected threshold. For BUCC, we do not have development data for tuning the threshold hyperparameter, so we simply use the prior probability. For example, the creators of the dataset estimate that \u223c2% of De sentences have an En translation, so we choose a score threshold such that we retrieve \u223c2% of the pairs. 
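To make the retrieval step concrete, the following is a minimal sketch of margin-based candidate retrieval (Eq. 1) using FAISS over mean-pooled mBERT embeddings. This is not the authors' released implementation; the function and variable names (margin_retrieve, src_emb, tgt_emb) are illustrative, and the embeddings are assumed to be precomputed, contiguous float32 arrays.

```python
# Sketch of margin-based candidate retrieval (Eq. 1); names are illustrative.
import faiss
import numpy as np

def margin_retrieve(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 4):
    """For each source sentence, return its best target neighbor and margin score."""
    # Normalize in place so that inner product equals cosine similarity.
    faiss.normalize_L2(src_emb)
    faiss.normalize_L2(tgt_emb)

    src_index = faiss.IndexFlatIP(src_emb.shape[1])  # exact (flat) search
    tgt_index = faiss.IndexFlatIP(tgt_emb.shape[1])
    src_index.add(src_emb)
    tgt_index.add(tgt_emb)

    # k nearest targets of each source, and k nearest sources of each target.
    sim_s2t, nn_s2t = tgt_index.search(src_emb, k)   # cos(x, z), z in NN_k^tgt(x)
    sim_t2s, _      = src_index.search(tgt_emb, k)   # cos(y, z), z in NN_k^src(y)

    # Denominator terms of Eq. 1: sum of cosine similarities over 2k.
    src_avg = sim_s2t.mean(axis=1) / 2.0
    tgt_avg = sim_t2s.mean(axis=1) / 2.0

    pairs = []
    for i in range(src_emb.shape[0]):
        margins = [sim_s2t[i, j] / (src_avg[i] + tgt_avg[nn_s2t[i, j]])
                   for j in range(k)]
        best = int(np.argmax(margins))
        pairs.append((i, int(nn_s2t[i, best]), float(margins[best])))
    return pairs  # apply the score threshold and filters afterwards (Sec. 2.2, 2.3)
```

Because the vectors are normalized, the inner-product index returns cosine similarities directly, which is all that the exact search described above requires.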
We set the threshold in the same way for the other BUCC pairs. For UNMT with Wikipedia bitext mining, we set the threshold such that we always retrieve 2.5 million sentence pairs for each language pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Margin-based Score", "sec_num": "2.2" }, { "text": "We also apply two simple filtering steps before finalizing the candidate pairs list:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Filtering", "sec_num": "2.3" }, { "text": "\u2022 Digit filtering: Sentence pairs that are translations of each other must have digit sequences that match exactly. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Filtering", "sec_num": "2.3" }, { "text": "\u2022 Edit distance: Sentences from English Wikipedia sometimes appear in non-English pages and vice versa. We remove sentence pairs where the content of the source and target share substantial overlap (i.e., the character-level edit distance is \u226450%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Filtering", "sec_num": "2.3" }, { "text": "We devise an unsupervised self-training technique to improve mBERT for bitext retrieval using mBERT's own outputs. For each source sentence, if the nearest target sentence is within the threshold and not filtered out, the pair is treated as a positive example. We then pair the source sentence with its next k \u2212 1 nearest neighbors to form negative examples. Altogether, these give us a training set of examples which are labeled as positive or negative pairs. We train mBERT to discriminate between positive and negative sentence pairs as a binary classification task. We distinguish the mBERT encoders for the source and target languages as f src , f tgt respectively. Our training objective is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self-training", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(X, Y; \\Theta_{\\mathrm{src}}) = \\left| \\frac{f_{\\mathrm{src}}(X; \\Theta_{\\mathrm{src}})^{\\top} f_{\\mathrm{tgt}}(Y)}{\\left\\| f_{\\mathrm{src}}(X; \\Theta_{\\mathrm{src}}) \\right\\| \\left\\| f_{\\mathrm{tgt}}(Y) \\right\\|} - \\mathrm{Par}(X, Y) \\right|", "eq_num": "(2)" } ], "section": "Self-training", "sec_num": "2.4" }, { "text": "where f src (X) and f tgt (Y ) are the mean-pooled representations of the source sentence X and target sentence Y , and where Par(X, Y ) is 1 if X, Y are parallel and 0 otherwise. This loss encourages the cosine similarity between the source and target embeddings to increase for positive pairs and decrease otherwise. The process is depicted in Figure 1 . Note that we only finetune f src (parameters \u0398 src ) and we hold f tgt fixed. If both f src and f tgt are updated, then the training process collapses to a trivial solution, since the model will map all pseudo-parallel pairs to one representation and all non-parallel pairs to another. We hold f tgt fixed, which forces f src to align its outputs to the target (in our experiments, always English) mBERT embeddings.", "cite_spans": [], "ref_spans": [ { "start": 346, "end": 354, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Self-training", "sec_num": "2.4" }, { "text": "After finetuning, we use the updated f src to generate new non-English sentence embeddings. 
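As an illustration of the objective in Eq. 2, the sketch below expresses the self-training loss in PyTorch; the experiments in this paper were run with GluonNLP/MXNet, so this is an assumed re-implementation rather than the original code. Here f_src and f_tgt stand for the source and target mBERT encoders with mean-pooling, and only f_src receives gradients.

```python
# Sketch of the self-training objective (Eq. 2); PyTorch is used for illustration.
import torch
import torch.nn.functional as F

def self_training_loss(f_src, f_tgt, src_batch, tgt_batch, labels):
    """labels[i] = 1.0 for a positive (pseudo-parallel) pair, 0.0 for a negative."""
    src_vec = f_src(**src_batch)        # mean-pooled embeddings, receives gradients
    with torch.no_grad():               # hold the target encoder fixed
        tgt_vec = f_tgt(**tgt_batch)
    cos = F.cosine_similarity(src_vec, tgt_vec, dim=-1)
    # |cos(f_src(X), f_tgt(Y)) - Par(X, Y)|: push cosine toward 1 for positives
    # and toward 0 for negatives.
    return (cos - labels).abs().mean()
```

Wrapping the target encoder in torch.no_grad() is what keeps f_tgt fixed and prevents the degenerate collapse described above.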
We then repeat the retrieval process with FAISS, yielding a final set of pseudo-parallel pairs after thresholding and filtering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self-training", "sec_num": "2.4" }, { "text": "We apply our method to the BUCC 2017 shared task, ''Spotting Parallel Sentences in Comparable Corpora'' (Zweigenbaum et al., 2017). The task involves retrieving parallel sentences from monolingual corpora derived from Wikipedia. Parallel sentences were inserted into the corpora in a contextually appropriate manner by the task organizers. The shared task assessed retrieval systems for precision, recall, and F 1 -score on four language pairs: De-En, Fr-En, Ru-En, and Zh-En. Prior work on unsupervised bitext mining has generally studied the European language pairs to avoid dealing with Chinese word segmentation (Hangya et al., 2018; Hangya and Fraser, 2019) .", "cite_spans": [ { "start": 616, "end": 637, "text": "(Hangya et al., 2018;", "ref_id": "BIBREF14" }, { "start": 638, "end": 662, "text": "Hangya and Fraser, 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Bitext Mining", "sec_num": "3" }, { "text": "For each BUCC language pair, we take the corresponding source and target monolingual corpus, which have been pre-split into training, sample, and test sets at a ratio of 49%-2%-49%. The identity of the parallel sentence pairs for the test set were not publicly released, and are only available for the training set. Following the convention established in Hangya and Fraser (2019) and Artetxe and Schwenk (2019a) , we use the test portion for unsupervised system development and evaluate on the training portion.", "cite_spans": [ { "start": 356, "end": 380, "text": "Hangya and Fraser (2019)", "ref_id": "BIBREF15" }, { "start": 385, "end": 412, "text": "Artetxe and Schwenk (2019a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "3.1" }, { "text": "We use the reference FAISS implementation 3 for nearest-neighbor search. We used the GluonNLP toolkit (Guo et al., 2020) with pretrained mBERT weights 4 for inference and self-training. We compute the margin similarity score in Eq. 1 with k = 4 nearest neighbors. We set a threshold on the score such that we retrieve the prior proportion (e.g., \u223c2%) of parallel pairs in each language.", "cite_spans": [ { "start": 102, "end": 120, "text": "(Guo et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "3.1" }, { "text": "We then finetune mBERT via self-training. We take minibatches of 100 sentence pairs. We use the Adam optimizer with a constant learning rate of 0.00001 for 2 epochs. To avoid noisy translations, we finetune on the top 50% of the highest-scoring pairs from the retrieved bitext (e.g., if the prior proportion is 2%, then we would use the top 1% of sentence pairs for self-training).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "3.1" }, { "text": "We considered performing more than one round of self-training but found it was not helpful for the BUCC task. BUCC has very few parallel pairs (e.g., 9,000 pairs for Fr-En) per language and thus few positive pairs for our unsupervised method to find. The size of the self-training corpus is limited by the proportion of parallel sentences, and mBERT rapidly overfits to small datasets. 
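A small sketch of the threshold heuristic used here, assuming one scored candidate per source sentence: keep the top prior fraction of margin-scored candidates (e.g., ~2% for BUCC De-En), then self-train on the highest-scoring half of those. The helper name select_pairs and its arguments are illustrative rather than taken from the paper's code.

```python
# Sketch of unsupervised threshold selection by prior proportion; names illustrative.
import numpy as np

def select_pairs(pairs, prior=0.02, self_train_fraction=0.5):
    """pairs: list of (src_id, tgt_id, margin_score), one candidate per source sentence."""
    scores = np.array([s for _, _, s in pairs])
    # Score threshold such that roughly `prior` of source sentences keep their pair.
    threshold = np.quantile(scores, 1.0 - prior)
    kept = sorted((p for p in pairs if p[2] >= threshold),
                  key=lambda p: p[2], reverse=True)
    # The highest-scoring half of the retained pairs is used for self-training.
    self_train = kept[: int(len(kept) * self_train_fraction)]
    return kept, self_train
```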
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "3.1" }, { "text": "In the strange new world of today, the modern and the pre-modern depend on each other. Table 2 : Examples of parallel sentences that were extracted by our method on the BUCC 2017 shared task.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 94, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Zh-En", "sec_num": null }, { "text": "We provide a few examples of the bitext we retrieved in Table 2 . The examples were chosen from the high-scoring pairs and verified to be correct translations.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Our retrieval results are in Table 1 . We compare our results with strictly unsupervised techniques, which do not use bilingual lexicons, parallel text, or other cross-lingual resources.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Using mBERT as-is with the margin-based score works reasonably well, giving F 1 scores in the range of 35.8 to 45.8, which is competitive with the previous state-of-the-art for some pairs, and outperforming by 12 points in the case of Ru-En. Furthermore, applying simple rule-based filters (Sec. 2.3) on the candidate translation pairs adds a few more points, although the edit distance filter has a negligible effect when compared with the digit filter. We see that finetuning mBERT on its own chosen sentence pairs (i.e., unsupervised selftraining) yields significant improvements, adding another 8 to 14 points to the F 1 score on top of filtering. In all, these F 1 scores represent a 34% to 98% relative improvement over existing techniques in unsupervised parallel sentence extraction for these language pairs. Libovick\u00fd et al. (2019) explored bitext mining with mBERT in the supervised context and found that retrieval performance significantly varies with the mBERT layer used to create sentence embeddings. In particular, they found layer 8 embeddings gave the highest precision-at-1. We also observe an improvement (Table 1) in unsupervised retrieval of another 13 to 20 points by using the 8th layer instead of the default final layer (12th). We include these results but do not consider them unsupervised, as we would not know a priori which layer was best to use.", "cite_spans": [ { "start": 817, "end": 840, "text": "Libovick\u00fd et al. (2019)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1125, "end": 1134, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Other authors (e.g., Guo et al., 2018) have noted that the choice of negative examples has a considerable impact on metric learning. Specifically, using negative examples which are difficult to distinguish from the positive nearest neighbor is often beneficial for performance. We examine the impact of taking random sentences instead of the remaining k \u22121 nearest neighbors as the negatives during self-training.", "cite_spans": [ { "start": 21, "end": 38, "text": "Guo et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Choosing Negative Sentence Pairs", "sec_num": "3.3" }, { "text": "Our results are in Table 3 . 
While self-training with random negatives still greatly improves the untuned baseline, the use of hard negative examples mined from the k-nearest neighborhood can make a significant difference to the final F 1 score.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Choosing Negative Sentence Pairs", "sec_num": "3.3" }, { "text": "A major application of bitext mining is to create new corpora for machine translation. We conduct an extrinsic evaluation of our unsupervised bitext mining approach on unsupervised (WMT'14 French-English, WMT'16 German-English) and low-resource (IWSLT'15 English-Vietnamese) translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bitext for Neural Machine Translation", "sec_num": "4" }, { "text": "We perform large-scale unsupervised bitext extraction on the October 2019 Wikipedia dumps in various languages. We use wikifil.pl 5 to extract paragraphs from Wikipedia and remove markup. We then use the syntok 6 package for sentence segmentation. Finally, we reduce the size of the corpus by removing sentences that aren't part of the body of Wikipedia pages. Sentences that contain *, =, //, ::, #, www, (talk), or the pattern [0-9]{2}:[0-9]{2} are filtered out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bitext for Neural Machine Translation", "sec_num": "4" }, { "text": "We index, retrieve, and filter candidate sentence pairs with the procedure in Sec. 3. Unlike BUCC, the Wikipedia dataset does not fit in GPU memory. The processed corpus is quite large, with 133 million, 67 million, 36 million, and 6 million sentences in English, German, French, and Vietnamese respectively. We therefore shard the dataset into chunks of 32,768 sentences and perform nearest-neighbor comparisons in chunks for each language pair. We use a simple mapreduce algorithm to merge the intermediate results back together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bitext for Neural Machine Translation", "sec_num": "4" }, { "text": "We follow the approach outlined in Sec. 2 for Wikipedia bitext mining. For each source sentence, we retrieve the four nearest target neighbors across the millions of sentences that we extracted from Wikipedia and compute the margin-based scores for each pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bitext for Neural Machine Translation", "sec_num": "4" }, { "text": "We show that our pseudo-parallel text can complement existing techniques for unsupervised translation (Artetxe et al., 2018; Lample et al., 2018c) . In line with existing work on UNMT, we evaluate our approach on the WMT'14 Fr-En and WMT'16 De-En test sets.", "cite_spans": [ { "start": 102, "end": 124, "text": "(Artetxe et al., 2018;", "ref_id": "BIBREF0" }, { "start": 125, "end": 146, "text": "Lample et al., 2018c)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised NMT", "sec_num": "4.1" }, { "text": "Our UNMT experiments build upon the reference implementation 7 of XLM (Lample and Conneau, 2019 ). The UNMT model is trained by alternating between two steps: a denoising autoencoder step and a backtranslation step (refer to Lample et al., 2018c for more details). The backtranslation step generates pseudo-parallel training data, and we incorporate our bitext during UNMT training in the same way, as another set of pseudo-parallel sentences. 
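Before turning to the translation results, here is a minimal sketch of the sharded nearest-neighbor search described above, in which the target side is processed in chunks of 32,768 sentences and the per-chunk top-k lists are merged (map-reduce style) into a global top-k per source sentence. The helper name chunked_topk and the shard iterator are assumptions, not the authors' implementation, and embeddings are assumed to be L2-normalized float32 arrays.

```python
# Sketch of chunked nearest-neighbor search with a top-k merge ("reduce") step.
import faiss
import numpy as np

def chunked_topk(src_emb, tgt_shards, k=4):
    """tgt_shards: iterable of (offset, shard) pairs, where shard is a float32
    array of normalized target embeddings (e.g., 32,768 rows per shard)."""
    n_src, dim = src_emb.shape
    best_sim = np.full((n_src, k), -np.inf, dtype=np.float32)
    best_idx = np.full((n_src, k), -1, dtype=np.int64)

    for offset, shard in tgt_shards:
        index = faiss.IndexFlatIP(dim)       # exact search within the shard
        index.add(shard)
        sim, idx = index.search(src_emb, k)  # top-k within this shard
        idx += offset                        # map shard-local ids to global ids

        # Merge the shard's top-k with the running global top-k.
        sim = np.concatenate([best_sim, sim], axis=1)
        idx = np.concatenate([best_idx, idx], axis=1)
        order = np.argsort(-sim, axis=1)[:, :k]
        best_sim = np.take_along_axis(sim, order, axis=1)
        best_idx = np.take_along_axis(idx, order, axis=1)
    return best_sim, best_idx
```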
We also use the same initialization as Lample and Conneau (2019) , where the UNMT models have encoders and decoders that are initialized with contextual embeddings trained on the source and target language Wikipedia corpora with the masked language model (MLM) objective; no parallel data is used. We performed the exhaustive (Fr Wiki)-(En Wiki) and (De Wiki)-(En Wiki) nearest-neighbor comparison on eight V100 GPUs, which requires 3 to 4 days to complete per language pair. We retained the top 2.5 million pseudo-parallel Fr-En and De-En sentence pairs after mining.", "cite_spans": [ { "start": 70, "end": 95, "text": "(Lample and Conneau, 2019", "ref_id": "BIBREF22" }, { "start": 483, "end": 508, "text": "Lample and Conneau (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised NMT", "sec_num": "4.1" }, { "text": "Our results are in Table 4 . The addition of mined bitext consistently increases the BLEU score in both directions for WMT'14 Fr-En and WMT'16 De-En. Much of the existing work on improving UNMT focuses on improved initialization with contextual embeddings like XLM or MASS (Song et al., 2019) . These embeddings were already pretrained on Wikipedia data, so it is surprising that adding our pseudo-parallel Wikipedia sentences leads to a 2 to 3 BLEU improvement. In other words, our approach is complementary to pretrained initialization techniques.", "cite_spans": [ { "start": 273, "end": 292, "text": "(Song et al., 2019)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Previously (in Table 1 ), we saw that selftraining improved the F 1 score for BUCC bitext retrieval. The improvement in bitext quality carries over to UNMT, and providing better pseudoparallel text yields a consistent improvement for all translation directions.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Our results are state-of-the-art in UNMT, but they should be interpreted relative to the strength of our XLM baseline. We are building on top of the XLM initialization, and the effectiveness of the initialization (and the various hyperparameters used during training and decoding) affects the strength of our final results. For example, we adjusted the beam width on our XLM baselines to attain BLEU scores which are similar to what others have published. One can apply our method to MASS, which performs better than XLM on UNMT, but we chose to report results on XLM because it has been validated on a wider range of tasks and languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We also trained a standard 6-layer transformer encoder-decoder model directly on the pseudoparallel text. We used the standard implementation in Sockeye (Hieber et al., 2018) as-is, and trained models for French and German on 2.5 million Wikipedia sentence pairs. We withheld 10k pseudo-parallel pairs per language pair to serve as a development set. We achieved BLEU scores of 20.8, 21.1, 28.2, and 28.0 on En-De, De-En, En-Fr, and Fr-En respectively. BLEU scores were computed with SacreBLEU (Post, 2018) . This compares favorably with the best UNMT results in Lample et al. 
(2018c) , while avoiding the use of parallel development data altogether.", "cite_spans": [ { "start": 153, "end": 174, "text": "(Hieber et al., 2018)", "ref_id": "BIBREF16" }, { "start": 494, "end": 506, "text": "(Post, 2018)", "ref_id": "BIBREF34" }, { "start": 563, "end": 584, "text": "Lample et al. (2018c)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "French and German are high-resource languages and are linguistically close to English. We therefore evaluate our mined bitext on a low-resource, linguistically distant language pair. The IWSLT'15 English-Vietnamese MT task (Cettolo et al., 2015) provides 133k sentence pairs derived from translated TED talk transcripts and is a common benchmark for low-resource MT. We take supervised training data from the IWSLT task and augment it with different amounts of pseudo-parallel text mined from English and Vietnamese Wikipedia. Furthermore, we construct a very low-resource setting by downsampling the parallel text and monolingual Vietnamese Wikipedia text by a factor of ten (13.3k sentence pairs).", "cite_spans": [ { "start": 222, "end": 244, "text": "(Cettolo et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Low-resource NMT", "sec_num": "4.3" }, { "text": "We use the reference implementation 8 for the state-of-the-art model (Nguyen and Salazar, 2019), which is a highly regularized 6+6-layer transformer with pre-norm residual connections, scale normalization, and normalized word embeddings. We use the same hyperparameters (except for the dropout rate) but train on our augmented datasets. To mitigate domain shift, we finetune the best checkpoint for 75k more steps using only the IWSLT training data, in the spirit of ''trivial'' transfer learning for low-resource NMT (Kocmi and Bojar, 2018) .", "cite_spans": [ { "start": 518, "end": 541, "text": "(Kocmi and Bojar, 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Low-resource NMT", "sec_num": "4.3" }, { "text": "In Table 5 , we show BLEU scores as more pseudo-parallel text is included during training. As in previous works on En-Vi (cf. Luong and Manning, 2015), we use tst2012 (1,553 pairs) and tst2013 (1,268 pairs) as our development and test sets respectively, we tokenize all data with Moses, and we report tokenized BLEU via multi-bleu.perl. The BLEU score increases monotonically with the size of the pseudo-parallel corpus and exceeds the state-of-the-art system's BLEU by 1.2 points. This result is consistent with improvements observed with other types of monolingual data augmentation like pre-trained UNMT initialization, various forms of backtranslation (Hoang et al., 2018; Zhou and Keung, 2020) , and cross-view training (CVT; Clark et al., 2018):", "cite_spans": [ { "start": 656, "end": 676, "text": "(Hoang et al., 2018;", "ref_id": "BIBREF17" }, { "start": 677, "end": 698, "text": "Zhou and Keung, 2020)", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Low-resource NMT", "sec_num": "4.3" }, { "text": "8 https://github.com/tnq177/transformers_without_tears.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-resource NMT", "sec_num": "4.3" }, { "text": "Prior published results on this task (from Table 5): Luong and Manning (2015), 26.4 BLEU; Clark et al. (2018), 28.9 BLEU. We describe our hyperparameter tuning and infrastructure following Dodge et al. (2019) . 
The translation sections of this work mostly used default parameters, but we did tune the dropout rate (at 0.2 and 0.3) for each amount of mined bitext for the supervised En-Vi task (at 100k, 200k, 300k , and 400k sentence pairs). We include development scores for our best models; dropout of 0.3 did best for 0k and 100k, while 0.2 did best otherwise. Training takes less than a day on one V100 GPU.", "cite_spans": [ { "start": 27, "end": 46, "text": "Clark et al. (2018)", "ref_id": "BIBREF7" }, { "start": 137, "end": 156, "text": "Dodge et al. (2019)", "ref_id": "BIBREF10" }, { "start": 341, "end": 361, "text": "(at 100k, 200k, 300k", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "En-Vi", "sec_num": null }, { "text": "To simulate a very low-resource task, we use one-tenth of the training data by downsampling the IWSLT En-Vi train set to 13.3k sentence pairs. Furthermore, we mine bitext from one-tenth of the monolingual Wiki Vi text and extract proportionately fewer sentence pairs (i.e., 10k, 20k, 30k, and 40k pairs). We use the implementation and hyperparameters for the regularized 4+4-layer transformer used by Nguyen and Salazar (2019) in a similar setting. We tune the dropout rate (0.2, 0.3, 0.4) to maximize development performance; 0.4 was best for 0k, 0.3 for 10k and 20k, and 0.2 for 30k and 40k. In Table 6 , we see larger improvements in BLEU (4+ points) for the same relative increases in mined data (as compared to Table 5 ). In both cases, the rate of improvement tapers off as the quality and relative quantity of mined pairs degrades at each increase. tst2013, where the bitext was mined from one-tenth of the monolingual Vietnamese data. Development scores on tst2012 in parentheses.", "cite_spans": [], "ref_spans": [ { "start": 597, "end": 604, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 716, "end": 723, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "En-Vi", "sec_num": null }, { "text": "XLM embeddings were created prior to January 2019. Hence, it is possible that the UNMT BLEU increase would be smaller if the bitext were mined from the same corpus used for pre-training. We ran an ablation study to show the effect (or lack thereof) of the overlap between the pre-training and pseudo-parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UNMT Ablation", "sec_num": "4.4" }, { "text": "For the En-Vi language pair, we used 5 million English and 5 million Vietnamese Wiki sentences to pre-train the XLM model. We only use text from the October 2019 Wiki snapshot. We mined 300k pseudo-parallel sentence pairs using our approach (Sec. 2) from the same Wiki snapshot. We created two datasets for XLM pre-training: a 10 millionsentence corpus that is disjoint from the 600k sentences of the mined bitext, and a 10 millionsentence corpus that contains all 600k sentences of the bitext. In Table 7 , we show the BLEU increase on the IWSLT En-Vi task with and without using the mined bitext as parallel data, using each of the two XLM models as the initialization.", "cite_spans": [], "ref_spans": [ { "start": 498, "end": 505, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "UNMT Ablation", "sec_num": "4.4" }, { "text": "The benefit of using pseudo-parallel text is very clear; even if the pre-trained XLM model saw the pseudo-parallel sentences during pre-training, using mined bitext still significantly improves UNMT performance (23.1 vs. 28.3 BLEU). 
In addition, the baseline UNMT performance without the mined bitext is similar between the two XLM initializations (23.1 vs. 23.2 BLEU), which suggests that removing some of the parallel text present during pre-training does not have a major effect on UNMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UNMT Ablation", "sec_num": "4.4" }, { "text": "Finally, we trained a standard encoder-decoder model on the 300k pseudo-parallel pairs only, using the same Sockeye recipe in Sec. 4.2. This yielded a BLEU score of 27.5 on En-Vi, which is lower than the best XLM-based result (i.e., 28.9), which suggests that the XLM initialization improves unsupervised NMT. A similar outcome was also reported in Lample and Conneau (2019) . Table 7 : Tokenized UNMT BLEU scores on IWSLT'15 English-Vietnamese (tst2013) with XLM initialization. We mined 300k pseudoparallel (PP) sentence pairs from En and Vi Wikipedia (Oct. 2019). We created two XLM models, with the pre-training corpus including or excluding the PP pairs. We compare their downstream UNMT performance with and without PP pairs as ''bitext'' during UNMT training.", "cite_spans": [ { "start": 349, "end": 374, "text": "Lample and Conneau (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 377, "end": 384, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "UNMT Ablation", "sec_num": "4.4" }, { "text": "5 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UNMT Ablation", "sec_num": "4.4" }, { "text": "Approaches to parallel sentence (or bitext) mining have been historically driven by the data requirements of statistical machine translation. Some of the earliest work in mining the Web for large-scale parallel corpora can be found in Resnik (1998) and Resnik and Smith (2003) . Recent interest in the field is reflected by new shared tasks on parallel extraction and filtering (Zweigenbaum et al., 2017; and the creation of massively multilingual parallel corpora mined from the Web, like WikiMatrix and CCMatrix . Existing parallel corpora have been exploited in many ways to create sentence representations for supervised bitext mining. One approach involves a joint encoder with a shared wordpiece vocabulary, trained as part of multiple encoder-decoder translation models on parallel corpora (Schwenk, 2018) . Artetxe and Schwenk (2019b) apply this approach at scale, and shared a single encoder and joint vocabulary across 93 languages. Another approach uses negative sampling to align the encoders' sentence representations for nearestneighbor retrieval (Gr\u00e9goire and Langlais, 2018; Guo et al., 2018) .", "cite_spans": [ { "start": 235, "end": 248, "text": "Resnik (1998)", "ref_id": "BIBREF35" }, { "start": 253, "end": 276, "text": "Resnik and Smith (2003)", "ref_id": "BIBREF36" }, { "start": 378, "end": 404, "text": "(Zweigenbaum et al., 2017;", "ref_id": null }, { "start": 797, "end": 812, "text": "(Schwenk, 2018)", "ref_id": "BIBREF37" }, { "start": 815, "end": 842, "text": "Artetxe and Schwenk (2019b)", "ref_id": "BIBREF2" }, { "start": 1061, "end": 1090, "text": "(Gr\u00e9goire and Langlais, 2018;", "ref_id": "BIBREF11" }, { "start": 1091, "end": 1108, "text": "Guo et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Mining", "sec_num": "5.1" }, { "text": "However, these approaches require training with initial parallel corpora. In contrast, Hangya et al. 
(2018) and Hangya and Fraser (2019) proposed unsupervised methods for parallel sentence extraction that use bilingual word embeddings induced in an unsupervised manner. Our work is the first to explore using contextual representations (mBERT; Devlin et al., 2019) in an unsupervised manner to mine for bitext, and to show improvements over the latest UNMT systems (Lample and Conneau, 2019; Song et al., 2019) , for which transformers and encoder/ decoder pre-training have doubled or tripled BLEU scores on unsupervised WMT'16 En-De since Artetxe et al. (2018) and Lample et al. (2018c) .", "cite_spans": [ { "start": 87, "end": 107, "text": "Hangya et al. (2018)", "ref_id": "BIBREF14" }, { "start": 112, "end": 136, "text": "Hangya and Fraser (2019)", "ref_id": "BIBREF15" }, { "start": 344, "end": 364, "text": "Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 465, "end": 491, "text": "(Lample and Conneau, 2019;", "ref_id": "BIBREF22" }, { "start": 492, "end": 510, "text": "Song et al., 2019)", "ref_id": "BIBREF43" }, { "start": 641, "end": 662, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF0" }, { "start": 667, "end": 688, "text": "Lample et al. (2018c)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Mining", "sec_num": "5.1" }, { "text": "Self-training refers to techniques that use the outputs of a model to provide labels for its own training. Yarowsky (1995) proposed a semisupervised strategy where a model is first trained on a small set of labeled data and then used to assign pseudo-labels to unlabeled data. Semisupervised self-training has been used to improve sentence encoders that project sentences into a common semantic space. For example, Clark et al. (2018) proposed cross-view training (CVT) with labeled and unlabeled data to achieve state-of-theart results on a set of sequence tagging, MT, and dependency parsing tasks.", "cite_spans": [ { "start": 107, "end": 122, "text": "Yarowsky (1995)", "ref_id": "BIBREF46" }, { "start": 415, "end": 434, "text": "Clark et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Self-training Techniques", "sec_num": "5.2" }, { "text": "Semi-supervised methods require some annotated data, even if it is not directly related to the target task. Our work is the first to apply unsupervised self-training for generating cross-lingual sentence embeddings. The most similar approach to ours is the prevailing scheme for unsupervised NMT (Lample et al., 2018c) , which relies on multiple iterations of backtranslation (Sennrich et al., 2016) to create a sequence of pseudoparallel sentence pairs with which to bootstrap an MT model.", "cite_spans": [ { "start": 296, "end": 318, "text": "(Lample et al., 2018c)", "ref_id": "BIBREF26" }, { "start": 376, "end": 399, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Self-training Techniques", "sec_num": "5.2" }, { "text": "In this work, we describe a novel approach for state-of-the-art unsupervised bitext mining using multilingual contextual representations. We extract pseudo-parallel sentences from unaligned corpora to create models that achieve state-of-theart performance on unsupervised and low-resource translation tasks. Our approach is complementary to the improvements derived from initializing MT models with pre-trained encoders and decoders, and helps narrow the gap between unsupervised and supervised MT. 
We focused on mBERTbased embeddings in our experiments, but we expect unsupervised self-training to improve the unsupervised bitext mining and downstream UNMT performance of other forms of multilingual contextual embeddings as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Our findings are in line with recent work showing that multilingual embeddings are very useful for cross-lingual zero-shot and zero-resource tasks. Even without using aligned corpora, mBERT can embed sentences across different languages in a consistent fashion according to their semantic content. More work will be needed to understand how contextual embeddings discover these crosslingual correspondences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Jiawei Zhou and Phillip Keung. 2020. Improving non-autoregressive neural machine translation with monolingual data. In ACL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2017. Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 10th Workshop on Building and Using Comparable Corpora, pages 60-67. Vancouver, Canada. Association for Computational Linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In Python, set(re.findall(\"[0-9]+\",sent1)) == set(re.findall(\"[0-9]+\",sent2)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/faiss. 4 https://github.com/google-research /bert/blob/master/multilingual.md.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "h t t p s : / / g i t hub.com/facebookresearch /fastText/blob/master/wikifil.pl.6 https://github.com/fnl/syntok. 7 https://github.com/facebookresearch/xlm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their thoughtful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised neural machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D18-1399" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. 
DOI: https://doi.org /10.18653/v1/D18-1399", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Margin-based parallel corpus mining with multilingual sentence embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3197--3203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019a. Margin-based parallel corpus mining with multilingual sentence embeddings. In Pro- ceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197-3203. Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "597--610", "other_ids": { "DOI": [ "10.1162/tacl_a_00288" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019b. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computa- tional Linguistics, 7:597-610. DOI: https:// doi.org/10.1162/tacl a 00288", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Repre- sentations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Paraphrasing with bilingual parallel corpora", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Association for Computational Linguistics", "authors": [], "year": null, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "597--604", "other_ids": { "DOI": [ "10.3115/1219840.1219914" ] }, "num": null, "urls": [], "raw_text": "In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguis- tics (ACL'05), pages 597-604. Ann Arbor, Michigan. Association for Computational Lin- guistics. 
DOI: https://doi.org/10.3115 /1219840.1219914", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The IWSLT 2015 evaluation campaign", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Niehues", "middle": [], "last": "Jan", "suffix": "" }, { "first": "St\u00fcker", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Roldano", "middle": [], "last": "Cattoni", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 12th International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "2--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Niehues Jan, St\u00fcker Sebastian, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2015. The IWSLT 2015 evaluation campaign. In Proceedings of the 12th International Workshop on Spoken Language Translation, pages 2-14. Da Nang, Vietnam.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semi-supervised sequence modeling with cross-view training", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1914--1925", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1914-1925. Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Show your work: Improved reporting of experimental results", "authors": [ { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Suchin", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2185--2194", "other_ids": { "DOI": [ "10.18653/v1/D19-1224" ] }, "num": null, "urls": [], "raw_text": "Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2185-2194. Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1224", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Extracting parallel sentences with bidirectional recurrent neural networks to improve machine translation", "authors": [ { "first": "Francis", "middle": [], "last": "Gr\u00e9goire", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1442--1453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis Gr\u00e9goire and Philippe Langlais. 2018. Extracting parallel sentences with bidirectional recurrent neural networks to improve machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1442-1453. Santa Fe, New Mexico, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "GluonCV and GluonNLP: Deep learning in computer vision and natural language processing", "authors": [ { "first": "Jian", "middle": [], "last": "Guo", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Tong", "middle": [], "last": "He", "suffix": "" }, { "first": "Leonard", "middle": [], "last": "Lausen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haibin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xingjian", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Junyuan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Aston", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongyue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "7", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, and Yi Zhu. 2020. GluonCV and GluonNLP: Deep learning in computer vision and natural language processing. Journal of Machine Learning Research, 21:23:1-23:7.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Effective parallel corpus mining using bilingual sentence embeddings", "authors": [ { "first": "Mandy", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Heming", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Gustavo", "middle": [ "Hernandez" ], "last": "Abrego", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Yun-Hsuan", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Strope", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Kurzweil", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "165--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165-176. Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised parallel sentence extraction from comparable corpora", "authors": [ { "first": "Viktor", "middle": [], "last": "Hangya", "suffix": "" }, { "first": "Fabienne", "middle": [], "last": "Braune", "suffix": "" }, { "first": "Yuliya", "middle": [], "last": "Kalasouskaya", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 15th International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "7--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viktor Hangya, Fabienne Braune, Yuliya Kalasouskaya, and Alexander Fraser. 2018. Unsupervised parallel sentence extraction from comparable corpora. In Proceedings of the 15th International Workshop on Spoken Language Translation, pages 7-13. Bruges, Belgium.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised parallel sentence extraction with parallel segment detection helps machine translation", "authors": [ { "first": "Viktor", "middle": [], "last": "Hangya", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "19--1118", "other_ids": { "DOI": [ "10.18653/v1/P19-1118" ] }, "num": null, "urls": [], "raw_text": "Viktor Hangya and Alexander Fraser. 2019. Unsupervised parallel sentence extraction with parallel segment detection helps machine trans- lation. In Proceedings of the 57th Annual Meeting of the Association for Computatio- nal Linguistics, pages 1224-1234. Florence, Italy. Association for Computational Linguis- tics. DOI: https://doi.org/10.18653 /v1/P19-1118", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Sockeye neural machine translation toolkit at AMTA 2018", "authors": [ { "first": "Felix", "middle": [], "last": "Hieber", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Domhan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Clifton", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas", "volume": "1", "issue": "", "pages": "200--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The Sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 200-207. Boston, MA. 
Association for Machine Translation in the Americas.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Iterative back-translation for neural machine translation", "authors": [ { "first": "Duy", "middle": [], "last": "Vu Cong", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine transla- tion. In Proceedings of the 2nd Workshop on Neural Machine Translation and Gen- eration, pages 18-24. Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Billion-scale similarity search with GPUs. CoRR", "authors": [ { "first": "Jeff", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/TBDATA.2019.2921572" ] }, "num": null, "urls": [], "raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with GPUs. CoRR, abs/1702.08734v1. DOI: https:// doi.org/10.1109/TBDATA.2019.2921572", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER", "authors": [ { "first": "Phillip", "middle": [], "last": "Keung", "suffix": "" }, { "first": "Yichao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Bhardwaj", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1355--1360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1355-1360. Hong Kong, China. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Trivial transfer learning for low-resource neural machine translation", "authors": [ { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "244--252", "other_ids": { "DOI": [ "10.18653/v1/W18-6325" ] }, "num": null, "urls": [], "raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial trans- fer learning for low-resource neural machine translation. In Proceedings of the Third Con- ference on Machine Translation: Research Papers, pages 244-252. 
Brussels, Belgium. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1 /W18-6325", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Findings of the WMT 2018 shared task on parallel corpus filtering", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Mikel", "middle": [ "L" ], "last": "Forcada", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "726--739", "other_ids": { "DOI": [ "10.18653/v1/W18-6453" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 726-739. Belgium, Brussels. Association for Computational Lin- guistics. DOI: https://doi.org/10 .18653/v1/W18-6453", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "authors": [ { "first": "M", "middle": [], "last": "Hanna", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "", "middle": [], "last": "Beygelzimer", "suffix": "" }, { "first": "Emily", "middle": [ "B" ], "last": "Florence D'alch\u00e9-Buc", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Fox", "suffix": "" }, { "first": "", "middle": [], "last": "Garnett", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "7057--7067", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett, editors, In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8- 14 December 2019, Vancouver, BC, Canada, pages 7057-7067.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using mono- lingual corpora only. 
In 6th International Con- ference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word translation without parallel data", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018b. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Phrase-based & neural unsupervised machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "5039--5049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 5039-5049.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Association for Computational Linguistics", "authors": [ { "first": "Belgium", "middle": [], "last": "Brussels", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brussels, Belgium. Association for Computa- tional Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How language-neutral is multilingual BERT? CoRR", "authors": [ { "first": "Jindrich", "middle": [], "last": "Libovick\u00fd", "suffix": "" }, { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jindrich Libovick\u00fd, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multi- lingual BERT? 
CoRR, abs/1911.03310v1.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A neural interlingua for multilingual machine translation", "authors": [ { "first": "Yichao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Keung", "suffix": "" }, { "first": "Faisal", "middle": [], "last": "Ladhak", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Bhardwaj", "suffix": "" }, { "first": "Shaonan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "84--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 84-92. Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Stanford neural machine translation systems for spoken language domains", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 12th International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "76--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the 12th International Workshop on Spoken Language Translation, pages 76-79. Da Nang, Vietnam.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Supervised and unsupervised machine translation for Myanmar-English and Khmer-English", "authors": [ { "first": "Benjamin", "middle": [], "last": "Marie", "suffix": "" }, { "first": "Hour", "middle": [], "last": "Kaing", "suffix": "" }, { "first": "Aye", "middle": [], "last": "Myat Mon", "suffix": "" }, { "first": "Chenchen", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujita", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th Workshop on Asian Translation", "volume": "", "issue": "", "pages": "68--75", "other_ids": { "DOI": [ "10.18653/v1/D19-5206" ] }, "num": null, "urls": [], "raw_text": "Benjamin Marie, Hour Kaing, Aye Myat Mon, Chenchen Ding, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2019. Super- vised and unsupervised machine translation for Myanmar-English and Khmer-English. In Proceedings of the 6th Workshop on Asian Translation, pages 68-75. Hong Kong, China. Association for Computational Linguistics. 
DOI: https://doi.org/10.18653/v1 /D19-5206", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Polyglot contextual representations improve crosslingual transfer", "authors": [ { "first": "Phoebe", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3912--3918", "other_ids": { "DOI": [ "10.18653/v1/N19-1392" ] }, "num": null, "urls": [], "raw_text": "Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912-3918. Minneapolis, Minnesota. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/N19-1392", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Transformers without tears: Improving the normalization of self-attention", "authors": [ { "first": "Q", "middle": [], "last": "Toan", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Salazar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 16th International Workshop on Spoken Language Translation. Hong Kong, China. Zenodo", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toan Q. Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. In Proceedings of the 16th International Workshop on Spoken Language Translation. Hong Kong, China. Zenodo.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Re- search Papers, pages 186-191. Brussels, Belgium. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/W18-6319", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Parallel strands: A preliminary investigation into mining the web for bilingual text", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1998, "venue": "Machine Translation and the Information Soup, Third Conference of the Association for Machine Translation in the Americas, AMTA '98", "volume": "1529", "issue": "", "pages": "72--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1998. Parallel strands: A preliminary investigation into mining the web for bilingual text. David Farwell, Laurie Gerber, and Eduard H. Hovy, editors, In Machine Translation and the Information Soup, Third Conference of the Association for Machine Translation in the Americas, AMTA '98, Langhorne, PA, USA, October 28-31, 1998, Proceedings, volume 1529 of Lecture Notes in Computer Science, pages 72-82. 
Springer.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "The web as a parallel corpus", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "3", "pages": "349--380", "other_ids": { "DOI": [ "10.1162/089120103322711578" ] }, "num": null, "urls": [], "raw_text": "Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Lin- guistics, 29(3):349-380. DOI: https://doi .org/10.1162/089120103322711578", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Filtering and mining parallel data in a joint multilingual space", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "228--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2018. Filtering and mining parallel data in a joint multilingual space. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228-234.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Association for Computational Linguistics", "authors": [ { "first": "Australia", "middle": [], "last": "Melbourne", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P18-2037" ] }, "num": null, "urls": [], "raw_text": "Melbourne, Australia. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/P18-2037", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. CoRR", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2019a. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. CoRR, abs/1907.05791v2.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "CCMatrix: Mining billions of highquality parallel sentences on the", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019b. CCMatrix: Mining billions of high- quality parallel sentences on the WEB. 
CoRR, abs/1911.04944v2.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine trans- lation models with monolingual data. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Association for Computational Linguistics", "authors": [ { "first": "Germany", "middle": [], "last": "Berlin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Berlin, Germany. Association for Computa- tional Linguistics. DOI: https://doi.org /10.18653/v1/P16-1009", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "MASS: Masked sequence to sequence pre-training for language generation", "authors": [ { "first": "Kaitao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019", "volume": "97", "issue": "", "pages": "5926--5936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Understanding and improving layer normalization", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guangxiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Junyang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4383--4393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. 2019. Understanding and improving layer normalization. 
In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8- 14 December 2019, Vancouver, BC, Canada, pages 4383-4393.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Unsupervised neural machine translation with weight sharing", "authors": [ { "first": "Zhen", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "46--55", "other_ids": { "DOI": [ "10.18653/v1/P18-1005" ] }, "num": null, "urls": [], "raw_text": "Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 46-55. Melbourne, Australia. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P18 -1005", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Figure 1: Our self-training scheme. Left: We index sentences using our two encoders. For each source sentence, we retrieve k nearest-neighbor target sentences per the margin criterion (Eq. 1), depicted here for k = 4. If the nearest neighbor is within a threshold, it is treated with the source sentence as a positive pair, and the remaining k \u2212 1 are treated with the source sentence as negative pairs. Right: We refine one of the encoders such that the cosine similarity of the two embeddings is maximized on positive pairs and minimized on negative pairs." }, "TABREF0": { "text": "Table 1: F 1 scores for unsupervised bitext retrieval on BUCC 2017. Results with mBERT are from our method (Sec. 2) using the final (12th) layer. We also include results for the 8th layer (e.g.,Libovick\u00fd et al., 2019), but do not consider this part of the unsupervised setting as we would not have known a priori which layer was best to use. Thessaly and small parts of Epirus were ceded to Greece as part of the Treaty of Berlin.", "num": null, "type_str": "table", "html": null, "content": "
MethodDe-EnFr-EnRu-EnZh-En
Hangya and Fraser (2019)
avg.30.9644.8119.80\u2212
align-static42.8142.2124.53\u2212
align-dyn.43.3543.4424.97\u2212
Our method
mBERT (final layer)42.145.836.935.8
+ digit filtering (DF)47.049.341.238.0
+ edit distance (ED)47.049.341.238.0
+ self-training (ST)60.660.249.545.7
mBERT (layer 8)67.065.359.353.3
+ DF, ED, ST74.973.069.960.1
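To make the retrieval step behind these scores concrete, the following is a minimal sketch of margin-scored nearest-neighbor search over precomputed, mean-pooled mBERT sentence embeddings, using the ratio form of the margin score (following Artetxe and Schwenk, 2019a). The use of FAISS and the helper names (`l2_normalize`, `margin_knn`) are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np
import faiss  # similarity-search library cited in the paper (Johnson et al., 2017)

def l2_normalize(x):
    # Unit-length rows so that inner product equals cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def margin_knn(src_emb, tgt_emb, k=4):
    """For every source sentence, return its k nearest target sentences and
    their ratio-margin scores: cosine similarity divided by the average
    similarity of each sentence to its own k-neighborhood."""
    src = l2_normalize(src_emb).astype("float32")
    tgt = l2_normalize(tgt_emb).astype("float32")

    src_index = faiss.IndexFlatIP(src.shape[1])
    tgt_index = faiss.IndexFlatIP(tgt.shape[1])
    src_index.add(src)
    tgt_index.add(tgt)

    fwd_sim, fwd_idx = tgt_index.search(src, k)   # source -> target neighborhoods
    bwd_sim, _ = src_index.search(tgt, k)          # target -> source neighborhoods

    src_avg = fwd_sim.mean(axis=1)                 # mean similarity of each source to its k neighbors
    tgt_avg = bwd_sim.mean(axis=1)                 # mean similarity of each target to its k neighbors

    # margin(x, y) = cos(x, y) / (0.5 * (avg_sim(x) + avg_sim(y)))
    margins = fwd_sim / (0.5 * (src_avg[:, None] + tgt_avg[fwd_idx]))
    return fwd_idx, margins
```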
Language pair  Parallel sentence pair
De-En          Beide Elemente des amerikanischen Traums haben heute einen Teil ihrer Anziehungskraft verloren.
               Both elements of the American dream have now lost something of their appeal.
Fr-En          L'Allemagne à elle seule s'attend à recevoir pas moins d'un million de demandeurs d'asile cette année.
               Germany alone expects as many as a million asylum-seekers this year.
Ru-En          Nevertheless, in 1881, Thessaly and small parts of Epirus were ceded to Greece as part of the Treaty of Berlin.
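The pseudo-parallel examples above are the end product of the self-training loop in Figure 1: a source sentence and its best-scoring neighbor form a positive pair when the score clears a threshold, the remaining k − 1 neighbors form negative pairs, and one encoder is refined so that cosine similarity rises on positives and falls on negatives. Below is a hedged sketch of that bookkeeping and objective; the threshold value, the `build_pairs`/`pair_loss` names, and the surrounding training loop are assumptions rather than the paper's exact implementation.

```python
import numpy as np
import torch.nn.functional as F

def build_pairs(neighbor_idx, neighbor_scores, threshold=1.0):
    """For each source sentence i, keep its best-scoring target neighbor as a
    positive pair if the margin score clears `threshold`; the remaining
    neighbors become negative pairs for the same source sentence."""
    positives, negatives = [], []
    for i, (nbrs, scores) in enumerate(zip(neighbor_idx, neighbor_scores)):
        order = np.argsort(scores)[::-1]          # best-scoring neighbor first
        if scores[order[0]] >= threshold:
            positives.append((i, nbrs[order[0]]))
            negatives.extend((i, nbrs[j]) for j in order[1:])
    return positives, negatives

def pair_loss(src_vec, tgt_vec, labels):
    """labels: tensor of +1 (positive pair) or -1 (negative pair).
    Pushes cosine similarity up for positives and down for negatives."""
    cos = F.cosine_similarity(src_vec, tgt_vec, dim=-1)
    return (-labels * cos).mean()
```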
" }, "TABREF2": { "text": "F 1 scores for bitext retrieval on BUCC 2017 using random sentences as negative samples instead of nearest neighbors.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF4": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF5": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
: Tokenized BLEU scores on tst2013 for the low-resource IWSLT'15 English-Vietnamese translation task using bitext mined with our method. Added pairs are sorted by their score. Development scores on tst2012 in parentheses.
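Because added pairs are sorted by their score, one straightforward way to realize this augmentation is to append only the top-scoring mined pairs to the existing IWSLT training data before NMT training. The cutoff and data layout in this sketch are illustrative assumptions, not the exact pipeline used for the reported numbers.

```python
def augment_training_data(gold_pairs, mined_pairs, top_n=100_000):
    """gold_pairs: list of (src, tgt) sentences from the supervised corpus.
    mined_pairs: list of (margin_score, src, tgt) pseudo-parallel candidates.
    Keeps the top_n highest-scoring mined pairs and appends them to the gold data."""
    best = sorted(mined_pairs, key=lambda p: p[0], reverse=True)[:top_n]
    return gold_pairs + [(src, tgt) for _, src, tgt in best]
```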
" }, "TABREF7": { "text": "Tokenized BLEU scores", "num": null, "type_str": "table", "html": null, "content": "" } } } }