{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:50.236114Z" }, "title": "The Unreasonable Volatility of Neural Machine Translation Models", "authors": [ { "first": "Marzieh", "middle": [], "last": "Fadaee", "suffix": "", "affiliation": {}, "email": "marzieh.f@gmail.com" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "", "affiliation": {}, "email": "c.monz@uva.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent works have shown that while Neural Machine Translation (NMT) models achieve impressive performance, questions about understanding the behaviour of these models remain unanswered. We investigate the unexpected volatility of NMT models where the input is semantically and syntactically correct. We discover that with trivial modifications of source sentences, we can identify cases where unexpected changes happen in the translation and in the worst case lead to mistranslations. This volatile behaviour of translating extremely similar sentences in surprisingly different ways highlights the underlying generalization problem of current NMT models. We find that both RNN and Transformer models display volatile behaviour in 26% and 19% of sentence variations, respectively.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recent works have shown that while Neural Machine Translation (NMT) models achieve impressive performance, questions about understanding the behaviour of these models remain unanswered. We investigate the unexpected volatility of NMT models where the input is semantically and syntactically correct. We discover that with trivial modifications of source sentences, we can identify cases where unexpected changes happen in the translation and in the worst case lead to mistranslations. 
This volatile behaviour of translating extremely similar sentences in surprisingly different ways highlights the underlying generalization problem of current NMT models. We find that both RNN and Transformer models display volatile behaviour in 26% and 19% of sentence variations, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The performance of Neural Machine Translation (NMT) models has dramatically improved in recent years, and with sufficient and clean data these models outperform more traditional models. Challenges when sufficient data is not available include translations of rare words (Pham et al., 2018) and idiomatic phrases (Fadaee et al., 2018) as well as domain mismatches between training and testing (Koehn and Knowles, 2017; Khayrallah and Koehn, 2018) .", "cite_spans": [ { "start": 270, "end": 289, "text": "(Pham et al., 2018)", "ref_id": "BIBREF24" }, { "start": 312, "end": 333, "text": "(Fadaee et al., 2018)", "ref_id": "BIBREF8" }, { "start": 392, "end": 417, "text": "(Koehn and Knowles, 2017;", "ref_id": "BIBREF16" }, { "start": 418, "end": 445, "text": "Khayrallah and Koehn, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, several approaches investigated NMT models when encountering noisy input and how worst-case examples of noisy input can 'break' state-of-the-art NMT models (Goodfellow et al., 2015; Michel and Neubig, 2018) . Belinkov and Bisk (2018) show that character-level noise in the input leads to poor translation performance. Lee et al. 
(2018) randomly insert words in different positions in the source sentence and observe that in some cases the translations are completely unrelated to the input.", "cite_spans": [ { "start": 166, "end": 191, "text": "(Goodfellow et al., 2015;", "ref_id": "BIBREF11" }, { "start": 192, "end": 216, "text": "Michel and Neubig, 2018)", "ref_id": "BIBREF21" }, { "start": 219, "end": 243, "text": "Belinkov and Bisk (2018)", "ref_id": "BIBREF2" }, { "start": 328, "end": 345, "text": "Lee et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Marzieh is now affiliated with Zeta Alpha Vector. While it is to some extent expected that the performance of NMT models that are trained on predominantly clean data but tested on noisy data deteriorates, other changes are more unexpected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we explore unexpected and erroneous changes in the output of NMT models. Consider the simple example in Table 1 where the Transformer model (Vaswani et al., 2017) is used to translate very similar sentences. Surprisingly, we observe that by simply altering one word in the source sentence (inserting the German word sehr, English: very), an unrelated change occurs in the translation. In principle, an NMT model that generates the translation of the word erleichtert (English: relieved) in one context should also be able to generalize and translate it correctly in a very similar context. Note that there are no infrequent words in the source sentence and after each modification, the input is still syntactically correct and semantically plausible. 
We call a model volatile if it displays inconsistent behaviour across similar input sentences during inference.", "cite_spans": [ { "start": 155, "end": 176, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate to what extent well-established NMT models are volatile during inference. Specifically, we locally modify sentence pairs in the test set and identify examples where a trivial modification in the source sentence causes an 'unexpected change' in the translation. These modifications are generated conservatively to avoid insertion of any noise or rare words in the data (Section 2.2). Our goal is not to fool NMT models, but instead to identify common cases where the models exhibit unexpected behaviour and, in the worst cases, produce incorrect translations. We observe that our modifications expose volatilities of both RNN and Transformer translation models in 26% and 19% of sentence variations, respectively. Our findings show how vulnerable current NMT models are to trivial linguistic variations, calling into question the generalizability of these models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Noisy input text can cause mistranslations in most MT systems, and there has been growing research interest in studying the behaviour of MT systems when encountering noisy input (Li et al., 2019). Belinkov and Bisk (2018) propose to swap or randomize letters in a word in the input sentence. For instance, they change the word noise in the source sentence into iones. Lee et al. (2018) examine how the insertion of a random word in a random position in the source sentence leads to mistranslations. 
Michel and Neubig (2018) propose a benchmark dataset for the translation of noisy input sentences, consisting of noisy, user-generated comments on Reddit. The types of noisy input text they observe include spelling or typographical errors, word omission/insertion/repetition, and grammatical errors.", "cite_spans": [ { "start": 178, "end": 195, "text": "(Li et al., 2019)", "ref_id": "BIBREF19" }, { "start": 198, "end": 222, "text": "Belinkov and Bisk (2018)", "ref_id": "BIBREF2" }, { "start": 369, "end": 386, "text": "Lee et al. (2018)", "ref_id": "BIBREF17" }, { "start": 500, "end": 524, "text": "Michel and Neubig (2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Is this another noisy text translation problem?", "sec_num": "2.1" }, { "text": "In these previous works, the focus is on showing that MT systems are not robust when handling noisy input text. In these approaches, the input sentences are semantically or syntactically incorrect, which leads to mistranslations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is this another noisy text translation problem?", "sec_num": "2.1" }, { "text": "However, in this paper, our focus is on input text that does not contain any type of noise. We modify input sentences such that the outcomes are still syntactically and semantically correct. We investigate how MT systems exhibit volatile behaviour in translating sentences that are extremely similar and differ in only one word, without any noise injection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is this another noisy text translation problem?", "sec_num": "2.1" }, { "text": "While there are various ways to automatically modify sentences, we are interested in simple semantic and syntactic modifications. 
These trivial linguistic variations should have almost no effect on the translation of the rest of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variation generation", "sec_num": "2.2" }, { "text": "We define a set of rules to slightly modify the source and target sentences in the test data while keeping the sentences syntactically correct and semantically plausible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variation generation", "sec_num": "2.2" }, { "text": "DEL A conservative approach to modifying a sentence automatically without breaking its grammaticality is to remove adverbs. We identify a list of the 50 most frequent adverbs in English and their translations in German * . For every sentence in the WMT test sets, if we find a sentence pair containing both a word and its translation from this list, we remove both words and create a new sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variation generation", "sec_num": "2.2" }, { "text": "Another simple yet effective approach to safely modify sentences is to substitute numbers with other numbers. In this approach, we select every sentence pair from the test sets that contains a number and substitute the number i in both source and target sentences with i + k, where 1 \u2264 k \u2264 5. We choose a small range of change so that the sentences remain semantically correct for the most part. [Figure 1 caption, Transformer panel: The majority of sentence variations fall into the category of minor changes between translations (blue area). However, a surprising number of cases show significant changes (red area).] 
[Figure 1 caption, RNN panel: RNN exhibits a slightly more unstable pattern, i.e., sentence variations with large edit differences and large spans of change.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "These substitutions result in few implausible sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "INS Randomly inserting words in a sentence has a high chance of producing a syntactically incorrect sentence. To ensure that sentences remain grammatical and semantically plausible after modification, we define a bidirectional n-gram probability for inserting new words as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "P(w_3 | w_1 w_2 w_4 w_5) = C(w_1 w_2 w_3 w_4 w_5) / C(w_1 w_2 \u2022 w_4 w_5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "w_3 is inserted in the middle of the phrase w_1 w_2 w_4 w_5 if the conditional probability is greater than a predefined threshold, where C counts n-gram occurrences and \u2022 matches any word. The probabilities are computed on the WMT data. This simple approach, instead of a more complex language model, serves our purposes since we are interested in inserting very common words that are already captured by the n-grams in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "SUBGEN Finally, a local modification is to change the gender of the person in the sentence. The goal of this modification is to investigate the existence and severity of gender bias in our models. 
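The INS rule above can be sketched in code. This is a minimal illustration under our own naming (the paper computes these counts over the WMT training data; a toy corpus stands in here), not the authors' implementation:

```python
from collections import Counter

def build_ngram_counts(corpus_sentences, n=5):
    """Count full 5-grams and 'gapped' 5-grams (middle word wildcarded)
    over a corpus of tokenized sentences."""
    full, gapped = Counter(), Counter()
    for tokens in corpus_sentences:
        for i in range(len(tokens) - n + 1):
            w = tuple(tokens[i:i + n])
            full[w] += 1
            # drop the middle word w3, keeping the bidirectional context
            gapped[(w[0], w[1], w[3], w[4])] += 1
    return full, gapped

def insertion_probability(w1, w2, w3, w4, w5, full, gapped):
    """P(w3 | w1 w2 _ w4 w5) = C(w1 w2 w3 w4 w5) / C(w1 w2 * w4 w5)."""
    denom = gapped[(w1, w2, w4, w5)]
    if denom == 0:
        return 0.0
    return full[(w1, w2, w3, w4, w5)] / denom
```

A candidate word w3 would then be inserted only when this probability exceeds the predefined threshold.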
This is inspired by recent approaches that have shown that NMT models learn social stereotypes such as gender bias from training data (Escud\u00e9 Font and Costa-juss\u00e0, 2019; Stanovsky et al., 2019).", "cite_spans": [ { "start": 339, "end": 366, "text": "Font and Costa-juss\u00e0, 2019;", "ref_id": "BIBREF7" }, { "start": 367, "end": 390, "text": "Stanovsky et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "Note that in a minority of cases these procedures can lead to semantically incorrect sentences; for instance, by substituting numbers we can potentially generate sentences such as \"She was born on October 34th\". While this can cause problems for a reasoning task, it barely affects the translation task, as long as the modifications are consistent on the source and target side. Table 2 shows examples of generated variations. We emphasize that only modifications with local consequences have been selected and we intentionally ignore cases such as negation, which can result in wider structural changes in the translation of the sentence. We generate 10k sentence variations by applying these modifications to all sentence pairs in the WMT test sets 2013-2018 (Bojar et al., 2018). We use RNN and Transformer models to translate sentences and their variations.", "cite_spans": [ { "start": 756, "end": 776, "text": "(Bojar et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 379, "end": 386, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "SUBNUM", "sec_num": null }, { "text": "In the translation experiments, we use the standard EN\u2192DE WMT-2017 training data (Bojar et al., 2018). We perform NMT experiments with two different architectures: RNN (Luong et al., 2015) and Transformer (Vaswani et al., 2017). We preprocess the training data with Byte-Pair Encoding (BPE) using 32K merge operations (Sennrich et al., 2016). 
During inference, we use beam search with a beam size of 5. Table 3 shows the case-sensitive BLEU scores. Table 4: A random sample of sentences from the WMT test sets and our proposed variations, shown with 'unexpected change' annotations (\u2206Translation). The cases where the unexpected change leads to a change in translation quality are marked in column \u2206Quality. [w_i\\w_j] indicates that w_i in the original sentence is replaced by w_j. S is the original and modified source sentence, R is the original and modified reference translation, T is the translation of the original sentence, and T_m is the translation of the modified sentence. Differences in translations related to annotations in the original and the modified translations are in red and orange, respectively. Note that we are interested in unexpected changes and do not highlight the changes that are a direct consequence of the modifications. T: In the event of an accident involving a coach with 43 senior citizens as passengers, eight people were injured on Thursday in Krummaudin (County Aurich). T_m: In the event of an accident involving a 45-year-old coach as a passenger, eight people were injured on Thursday in the district of Aurich. RNN As the first NMT system, we use a 2-layer bidirectional attention-based LSTM model implemented in OpenNMT (Klein et al., 2017), trained with an embedding size of 512, hidden dimension size of 1024, and batch size of 64 sentences. 
We use Adam (Kingma and Ba, 2015) for optimization.", "cite_spans": [ { "start": 81, "end": 101, "text": "(Bojar et al., 2018)", "ref_id": "BIBREF4" }, { "start": 169, "end": 189, "text": "(Luong et al., 2015)", "ref_id": "BIBREF20" }, { "start": 206, "end": 228, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 320, "end": 343, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF25" }, { "start": 1682, "end": 1702, "text": "(Klein et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 406, "end": 413, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 468, "end": 475, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "2.3" }, { "text": "Transformer We also experiment with the Transformer model (Vaswani et al., 2017) implemented in OpenNMT. We train a model with 6 layers; the hidden size is set to 512 and the filter size to 2048. The multi-head attention has 8 attention heads. We use Adam (Kingma and Ba, 2015) for optimization. All parameters are set based on the suggestions in Klein et al. (2017) to replicate the results of the original paper.", "cite_spans": [ { "start": 58, "end": 80, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 355, "end": 374, "text": "Klein et al. (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "2.3" }, { "text": "The modifications described above generate sentences that are extremely similar and hence are expected to have a very similar difficulty of translation. We evaluate the NMT models on how robust and consistent they are in translating these sentence variations rather than on their absolute quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of unexpected and erroneous changes", "sec_num": "3" }, { "text": "The variations are designed to have minimal effect on the meaning of the sentences. 
Hence, major changes in the translations of these variations can be an indication of volatility in the model. To assess whether the proposed sentence variations result in major changes in the translations, we measure the Levenshtein distance (Levenshtein, 1966) between the translations of sentence variations. Specifically, Levenshtein distance measures the edit distance between the two translations. We also use the first and last positions of change in the translations, which represent the span of change. Ideally, with our simple modifications, we expect a value of zero for the span of change and a value of at most 2 for the Levenshtein distance for a translation pair. This indicates that there is only one token difference between the translation of the original sentence and that of the modified sentence. We define two types of changes based on these measures: minor and major. We choose the thresholds that distinguish minor from major changes conservatively, to allow for more variation in the translations. The change in translations is empirically considered major if both metrics are greater than 10, and minor if both are less than 10. Note that edit distances and spans are based on BPE subword units.", "cite_spans": [ { "start": 384, "end": 403, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Deviations from Original Translations", "sec_num": "3.1" }, { "text": "With two very similar source sentences, we expect the Levenshtein distance and span of change between the translations of these sentences to be small. Figure 1 shows the results for the RNN and Transformer model. While the majority of sentence variations have minor changes, a substantial number of sentences, 18% of RNN and 13% of Transformer translations, result in translations with major differences. 
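The two measures from Section 3.1 can be sketched as follows. This is a minimal illustration with our own function names, not the authors' code; in the paper the token sequences would be BPE subword units and the threshold is 10 for both metrics:

```python
def levenshtein(a, b):
    """Token-level edit distance between two sequences (one-row DP)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, substitution/match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

def change_span(a, b):
    """Distance between the first and last differing positions."""
    if a == b:
        return 0
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    j = 0
    while j < min(len(a), len(b)) - i and a[-1 - j] == b[-1 - j]:
        j += 1
    return max(len(a), len(b)) - i - j

def is_major_change(t1, t2, threshold=10):
    """A change is 'major' when both metrics exceed the threshold."""
    return levenshtein(t1, t2) > threshold and change_span(t1, t2) > threshold
```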
This is surprising and an indication of volatility, since these trivial modifications, in principle, should only result in minor and local changes in the translations.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Deviations from Original Translations", "sec_num": "3.1" }, { "text": "In this section, we look into various sentence-level metrics to further analyze the observed behaviour. In particular, we focus on the SUBNUM modification because with this modification we can generate numerous variations of the same sentence. Having a high number of variations for each sentence gives us the opportunity to observe oscillations of various string matching metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Oscillations of Variation in Translations", "sec_num": "3.2" }, { "text": "We use sentence-level BLEU, METEOR (Denkowski and Lavie, 2011), TER (Snover et al., 2006), and LengthRatio to quantify changes in the translations. LengthRatio represents the translation length over the reference length as a percentage. For a given source sentence, we define the oscillation range as the range of the sentence-level metric across the translations of that sentence's variations.", "cite_spans": [ { "start": 68, "end": 89, "text": "(Snover et al., 2006)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Oscillations of Variation in Translations", "sec_num": "3.2" }, { "text": "While sentence-level metrics are not reliable indicators of translation quality, they do capture fluctuations in translations. With the variations we introduce, in theory there should be no fluctuations in the translations. Table 5 and Figure 3 provide the results. We observe that even though these sentence variations differ by only one number, there are many cases where an insignificant change in the sentence results in unexpectedly large oscillations. 
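The oscillation range and the LengthRatio metric described in Section 3.2 can be sketched as below. A minimal illustration with our own names; the per-variation scores would come from sentence-level BLEU, METEOR, or TER, which we do not reimplement here:

```python
def oscillation_range(scores):
    """Range of a sentence-level metric across the translations of all
    variations of one source sentence (larger = more volatile)."""
    return max(scores) - min(scores)

def length_ratio(translation_tokens, reference_tokens):
    """Translation length over reference length, as a percentage."""
    return 100.0 * len(translation_tokens) / len(reference_tokens)
```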
Both RNN and Transformer exhibit this behaviour to a certain extent.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 236, "end": 244, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Oscillations of Variation in Translations", "sec_num": "3.2" }, { "text": "While edit distances and spans of change provide some indication of volatility, they do not capture all aspects of this unexpected behaviour. It is also not entirely clear what effect these unexpected changes have on translation quality. To investigate this further, we also perform manual evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effect of Volatility on Translation Quality", "sec_num": "3.3" }, { "text": "In the first evaluation, we provide annotators with a pair of sentence variations and their corresponding translations and ask them to identify the differences between the two sentence pairs. In the second evaluation, we additionally provide the source sentences and reference translations, and ask the annotators to rank the sentence variations by translation quality, similar to Bojar et al. (2016). In total, the annotators evaluated 400 randomly selected sentence quadruplets.", "cite_spans": [ { "start": 390, "end": 409, "text": "Bojar et al. (2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Effect of Volatility on Translation Quality", "sec_num": "3.3" }, { "text": "The annotators identified 71% and 68% of changes in the variation translations as expected for the RNN and Transformer model, respectively. The main types of unexpected changes identified by the annotators are a change of word form (e.g., verb tense), reordering of phrases, paraphrasing parts of the sentence, and an 'other' category (e.g., prepositions). A sentence pair can have multiple labels based on the types of changes. 
Table 4 provides examples from the test data.", "cite_spans": [], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Effect of Volatility on Translation Quality", "sec_num": "3.3" }, { "text": "Statistics for each category of unexpected change are shown in Figure 2. Our first observation is that, as is to be expected, there are very few 'unexpected changes' when two variations lead to translations with minor differences. Interestingly, the vast majority of changes are due to paraphrasing and the dropping of words. Comparing the performance of the RNN and Transformer model, we see that both display inconsistent translation behaviour. While the Transformer has slightly fewer sentences with major changes, it has a higher number of sentence variations in the major category that result in a change in translation quality. From the annotators' assessments, we find that in 26% and 19% of sentence variations, the modification results in a change in translation quality for the RNN and Transformer model, respectively.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Effect of Volatility on Translation Quality", "sec_num": "3.3" }, { "text": "Because of their ability to generalize beyond their training data, deep learning models achieve exceptional performance in numerous tasks. This generalization ability allows MT systems to generate long sentences not seen before. Recently, there has been some interest in understanding whether this performance depends on recognizing shallow patterns, or whether the networks are indeed capturing and generalizing linguistic rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization and Compositionality", "sec_num": "3.4" }, { "text": "In simple terms, compositionality is the ability to construct larger linguistic expressions by combining simpler parts. 
For instance, if a model has learned the compositional rules needed to understand 'John loves Mary', it must also understand 'Mary loves John' (Fodor and LePore, 2002). Investigating the compositional behaviour of neural networks in real-world natural language problems is a challenging task. Recently, several works have studied deep learning models' understanding of compositionality in natural language by using synthetic and simplified languages (Andreas, 2019; Chevalier-Boisvert et al., 2019). Baroni (2019) shows that, to a certain extent, neural networks can be productive without being compositional.", "cite_spans": [ { "start": 264, "end": 288, "text": "(Fodor and LePore, 2002)", "ref_id": "BIBREF9" }, { "start": 572, "end": 587, "text": "(Andreas, 2019;", "ref_id": "BIBREF0" }, { "start": 588, "end": 620, "text": "Chevalier-Boisvert et al., 2019)", "ref_id": "BIBREF5" }, { "start": 623, "end": 636, "text": "Baroni (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization and Compositionality", "sec_num": "3.4" }, { "text": "Although we do not specifically look into the compositional potential of MT systems, we are inspired by compositionality in defining our modifications. We argue that the volatile behaviour of the MT systems observed in this paper is a side effect of current models not being compositional. If an MT system has a good 'understanding' of the underlying structures of the sentences 'Mary is 10 years old' and 'Mary is 11 years old', it must also translate them very similarly, regardless of the accuracy of the translation. 
While current evaluation metrics capture the accuracy of NMT models, these volatilities go unnoticed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization and Compositionality", "sec_num": "3.4" }, { "text": "Current neural models are successful in generalizing without learning any explicit compositional rules; however, our findings signal that they still lack robustness. We highlight this lack of robustness and suspect that it is associated with these models' lack of understanding of the compositional nature of language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization and Compositionality", "sec_num": "3.4" }, { "text": "In this paper, we showed the unexpected volatility of NMT models by using a simple approach to modifying standard test sentences without introducing noise, i.e., by generating semantically and syntactically correct variations. We showed that even with trivial linguistic modifications of source sentences, we can effectively identify a substantial number of cases where extremely similar sentences are translated in surprisingly different ways (see Figure 1). Our manual analyses show that both RNN and Transformer models exhibit volatile behaviour, with changes in translation quality for 26% and 19% of sentence variations, respectively. This highlights the generalization problem of current NMT models, and we hope that our insights will be useful for developing more robust NMT models.", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 454, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "We thank Arianna Bisazza for helpful discussions. This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project numbers 639.022.213 and 612.001.218. 
We also thank NVIDIA for their hardware support and the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Measuring compositionality in representation learning", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas. 2019. Measuring compositionality in representation learning. CoRR, abs/1902.07181.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Linguistic generalization and compositionality in modern artificial neural networks", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni. 2019. Linguistic generalization and compositionality in modern artificial neural networks. CoRR, abs/1904.00157.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Synthetic and natural noise both break neural machine translation", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. 
In Proceedings of the International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the 2016 conference on machine translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Antonio", "middle": [ "Jimeno" ], "last": "Yepes", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Aurelie", "middle": [], "last": "Neveol", "suffix": "" }, { "first": "Mariana", "middle": [], "last": "Neves", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Verspoor", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "", "issue": "", "pages": "131--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, 
Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131-198, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Findings of the 2018 conference on machine translation (wmt18)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation", "volume": "2", "issue": "", "pages": "272--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 272-307, Brussels, Belgium.
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BabyAI: First steps towards grounded language learning with a human in the loop", "authors": [ { "first": "Maxime", "middle": [], "last": "Chevalier-Boisvert", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Salem", "middle": [], "last": "Lahlou", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Willems", "suffix": "" }, { "first": "Chitwan", "middle": [], "last": "Saharia", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Huu Nguyen", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2019. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "85--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Edinburgh, Scotland.
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Equalizing gender bias in neural machine translation with word embeddings techniques", "authors": [ { "first": "Joel", "middle": [ "Escud\u00e9" ], "last": "Font", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "147--154", "other_ids": { "DOI": [ "10.18653/v1/W19-3821" ] }, "num": null, "urls": [], "raw_text": "Joel Escud\u00e9 Font and Marta R. Costa-juss\u00e0. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147-154, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Examining the tip of the iceberg: A data set for idiom translation", "authors": [ { "first": "Marzieh", "middle": [], "last": "Fadaee", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2018. Examining the tip of the iceberg: A data set for idiom translation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).
European Language Resource Association.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Compositionality Papers", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Fodor", "suffix": "" }, { "first": "E", "middle": [], "last": "Lepore", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.A. Fodor and E. LePore. 2002. The Compositionality Papers. Clarendon Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "\u00dcber Sinn und Bedeutung", "authors": [ { "first": "G", "middle": [], "last": "Frege", "suffix": "" } ], "year": null, "venue": "Funktion - Begriff - Bedeutung", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Frege. 1892. \u00dcber Sinn und Bedeutung. In Mark Textor, editor, Funktion - Begriff - Bedeutung, volume 4 of Sammlung Philosophie. Vandenhoeck & Ruprecht, G\u00f6ttingen.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "J", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples.
In Proceedings of the International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Frege, contextuality and compositionality", "authors": [ { "first": "M", "middle": [ "V" ], "last": "Theo", "suffix": "" }, { "first": "", "middle": [], "last": "Janssen", "suffix": "" } ], "year": 2001, "venue": "Journal of Logic, Language and Information", "volume": "10", "issue": "1", "pages": "115--136", "other_ids": { "DOI": [ "10.1023/A:1026542332224" ] }, "num": null, "urls": [], "raw_text": "Theo M. V. Janssen. 2001. Frege, contextuality and compositionality. Journal of Logic, Language and Information, 10(1):115-136.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "On the impact of various types of noise on neural machine translation", "authors": [ { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "74--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [ "Lei" ], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Lei Ba. 2015. 
Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Six challenges for neural machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Knowles", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03872" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation.
arXiv preprint arXiv:1706.03872.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Hallucinations in neural machine translation", "authors": [ { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Fannjiang", "suffix": "" }, { "first": "David", "middle": [], "last": "Sussillo", "suffix": "" } ], "year": 2018, "venue": "Neural Information Processing Systems (NeurIPS) Workshop on Interpretability and Robustness for Audio, Speech, and Language. NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. In Neural Information Processing Systems (NeurIPS) Workshop on Interpretability and Robustness for Audio, Speech, and Language. NeurIPS.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Binary codes capable of correcting deletions, insertions, and reversals. Soviet physics doklady", "authors": [ { "first": "", "middle": [], "last": "Vladimir I Levenshtein", "suffix": "" } ], "year": 1966, "venue": "", "volume": "10", "issue": "", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals.
Soviet Physics Doklady, 10(8):707-710.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Findings of the first shared task on machine translation robustness", "authors": [ { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "91--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 91-102, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "MTNT: A testbed for machine translation of noisy text", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "543--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 543-553. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Universal grammar.
In Formal Philosophy: Selected Papers of Richard Montague, number 222-247 in Theoria", "authors": [ { "first": "Richard", "middle": [], "last": "Montague", "suffix": "" } ], "year": 1974, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Montague. 1974. Universal grammar. In Formal Philosophy: Selected Papers of Richard Montague, number 222-247 in Theoria, New Haven, London. Yale University Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The principle of semantic compositionality", "authors": [ { "first": "Francis Jeffry", "middle": [], "last": "Pelletier", "suffix": "" } ], "year": 1994, "venue": "", "volume": "13", "issue": "", "pages": "11--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis Jeffry Pelletier. 1994. The principle of semantic compositionality. Topoi, 13:11-24.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Towards one-shot learning for rare-word translation with external experts", "authors": [ { "first": "Ngoc-Quan", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Niehues", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "100--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc-Quan Pham, Jan Niehues, and Alexander Waibel. 2018. Towards one-shot learning for rare-word translation with external experts. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 100-109.
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation.
In Proceedings of Association for Machine Translation in the Americas, pages 223-231.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluating gender bias in machine translation", "authors": [ { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1679--1684", "other_ids": { "DOI": [ "10.18653/v1/P19-1164" ] }, "num": null, "urls": [], "raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S.
Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Levenshtein distance and span of change between translations of sentence variations for RNN and Transformer models.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Categories of unexpected changes in the translation of sentence variations as provided by annotators. The percentages of sentence variations with minor and major edit differences, as defined in 3.1, are shown separately. The hatched pattern indicates the ratio of sentence variations for which the translation quality changes. Note that expected changes are not plotted here.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Oscillations of various sentence-level attributes for randomly sampled sentences from our test data and their SUBNUM variations. The data points are the mean values for all variations of each sentence, and the error bars indicate the range of oscillation of the metrics. The x-axis represents test sentence instances, sorted based on the corresponding metric. Ideally each data point should have zero oscillation.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "text": "Insertion of the German word sehr (English: very) in different positions in the source sentence results in substantially different translations. ; indicates the original sentence from WMT 2017.", "num": null, "html": null, "content": "" }, "TABREF2": { "type_str": "table", "text": "DEL Some 500 years after the Reformation, Rome [now\\\u03c6] has a Martin Luther Square.
SUBNUM I'm very pleased for it to have happened at Newmarket because this is where I landed [30\\31] years ago. INS I loved Amy and she is [\u03c6\\also] the only person who ever loved me. SUBGEN [He\\She] received considerable appreciation and praise for this.", "num": null, "html": null, "content": "
Modification Sentence variations
" }, "TABREF3": { "type_str": "table", "text": "Examples of different variations from WMT. [w i \\w j ] indicates that w i in the original sentence is replaced by w j . \u03c6 is an empty string.", "num": null, "html": null, "content": "" }, "TABREF5": { "type_str": "table", "text": "BLEU scores for different models on the WMT data for translation DE\u00d8EN.", "num": null, "html": null, "content": "
" }, "TABREF6": { "type_str": "table", "text": "Coes letztes Buch \"Chop Suey\" handelte von der chinesischen K\u00fcche in den USA, w\u00e4hrend Ziegelman in ihrem Buch \"[97\\101] Orchard\"\u00fcber das Leben in einem Wohnhaus an der Lower East Side aus der Lebensmittelperspektive erz\u00e4hlt. R Mr. Coe's last book, \"Chop Suey,\" was about Chinese cuisine in America, while Ms. Ziegelman told the story of life in a Lower East Side tenement through food in her book \"[97\\101] Orchard.\" T Coes's last book, \"Chop Suey,\" was about Chinese cuisine in the US, while Ziegelman, in her book \"97 Orchard\" talks about living in a lower East Side. Tm Coes last book \"Chop Suey\" was about Chinese cuisine in the United States, while Ziegelman writes in her book \"101 Orchard\" about living in a lower East Side. You are [already\\\u03c6 \u03c6 \u03c6] on the lookout for a park bench, a dog, and boys and girls playing football. Tm Look for Parkbank, dog and football playing boys and girls.", "num": null, "html": null, "content": "
\u2206Translation: [reordered] [paraphrased]
\u2206Quality: No
\u2206Translation: [word form] [add/remove]
\u2206Quality: Yes
" }, "TABREF7": { "type_str": "table", "text": "It's a backbreaking pace, but village musicians [usually\\\u03c6 \u03c6 \u03c6] help keep the team motivated. T It's a demanding child, but the village musicians usually help keep the team motivated. Tm It is a hard-to-use, but the village musician helps to keep the team motivated.", "num": null, "html": null, "content": "
\u2206Translation: [word form] [other]
\u2206Quality: Yes
" }, "TABREF8": { "type_str": "table", "text": "Mean oscillations for SUBNUM variations. In theory the variations should result in zero oscillations for every metric.", "num": null, "html": null, "content": "
Model BLEU METEOR TER LengthRatio
RNN4.03.85.25.3
Transformer3.83.34.23.4
" } } } }