{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:16:42.900477Z"
},
"title": "Addressing the Vulnerability of NMT in Input Perturbations",
"authors": [
{
"first": "Weiwen",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "wwxu@se.cuhk.edu.hk"
},
{
"first": "Ai",
"middle": [
"Ti"
],
"last": "Aw",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {},
"email": "ding_yang@i2r.a-star.edu.sg"
},
{
"first": "Kui",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {}
},
"email": "srjoty@ntu.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural Machine Translation (NMT) has achieved significant breakthrough in performance but is known to suffer vulnerability to input perturbations. As real input noise is difficult to predict during training, robustness is a big issue for system deployment. In this paper, we improve the robustness of NMT models by reducing the effect of noisy words through a Context-Enhanced Reconstruction (CER) approach. CER trains the model to resist noise in two steps: (1) perturbation step that breaks the naturalness of input sequence with madeup words; (2) reconstruction step that defends the noise propagation by generating better and more robust contextual representation. Experimental results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks demonstrate robustness improvement on both news and social media text. Further finetuning experiments on social media text show our approach can converge at a higher position and provide a better adaptation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural Machine Translation (NMT) has achieved significant breakthrough in performance but is known to suffer vulnerability to input perturbations. As real input noise is difficult to predict during training, robustness is a big issue for system deployment. In this paper, we improve the robustness of NMT models by reducing the effect of noisy words through a Context-Enhanced Reconstruction (CER) approach. CER trains the model to resist noise in two steps: (1) perturbation step that breaks the naturalness of input sequence with madeup words; (2) reconstruction step that defends the noise propagation by generating better and more robust contextual representation. Experimental results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks demonstrate robustness improvement on both news and social media text. Further finetuning experiments on social media text show our approach can converge at a higher position and provide a better adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent techniques (Bahdanau et al., 2014; Vaswani et al., 2017) in NMT have gained remarkable improvement in translation quality. However, robust NMT that is immune to real input noise remains a big challenge for NMT researchers. Real input noises can exhibit in many forms such as spelling and grammatical errors, homophones replacement, Internet slang, new words or even a valid word used in an unfamiliar or a new context. Unlike humans who can easily comprehend and translate such texts, most NMT models are not robust to generate appropriate and meaningful translations in the presence of such noises, challenging the deployment of NMT system in real scenarios. : Examples of NMT's vulnerability in translating text containing noisy words (\"zei\" \u2192 \"thief\", \"chengfa\" \u2192 \"punishment\"). CER mitigates the effect of noisy words.",
"cite_spans": [
{
"start": 18,
"end": 41,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 42,
"end": 63,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Noisy words have long been discussed in previous work. Aw et al. (2006) proposed the normalization approach to reduce the noise before translation. Tan et al. (2020a,b) addressed the character-level noise directly in the NMT model. Though these approaches addressed the effect of noisy words to some extent, they are limited to spelling errors, inflectional variations, and other noises definable during training. In addition, strong external supervision like a parallel corpus of noisy text translation or dictionary containing the translation of those noisy words are hard and expensive to obtain; they are also not practical in handling real noises as noisy words can exhibit in random forms and cannot be fully anticipated during training. Belinkov and Bisk (2018) pointed out NMT models are sensitive to small input perturbations and if this issue is not addressed, it will continue to bottleneck the translation quality. In such cases, not only the word embeddings of perturbations may cause irregularities with the local context, the contextual representation of other words may also get affected by such perturbations (Liu et al., 2019) . This phenomenon applies to valid words in unfamiliar context as well, which will also cause the translation to fail as illustrated in Table 1 (case 2) .",
"cite_spans": [
{
"start": 55,
"end": 71,
"text": "Aw et al. (2006)",
"ref_id": "BIBREF1"
},
{
"start": 148,
"end": 168,
"text": "Tan et al. (2020a,b)",
"ref_id": null
},
{
"start": 744,
"end": 768,
"text": "Belinkov and Bisk (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1126,
"end": 1144,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1281,
"end": 1297,
"text": "Table 1 (case 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we define \"noisy word\" as a valid or invalid word that is uncommonly used in the context or not observed frequently enough in the training data. When encoding a sentence with such a noisy word, the contextual representation of other words in the sentence are affected by the \"less jointly trained\" noisy word embeddings. We refer this process as \"noise propagation\". Noise propagation can extend to the decoder and finally distort the overall translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main intuition of our proposed method is to minimize this noise propagation and reduce the irregularities in contextual representation due to these words via a Context-Enhanced Reconstruction (CER) approach. To reduce the sensitivity of contextual towards noisy words in the encoder, we inject made-up words randomly to the source side of the training data to break the text naturalness. We then use a Noise Adaptation Layer (NAL) to enable a more stable contextual representation by minimizing the reconstruction loss. In the decoder, we add perturbations with a semantic constraint and apply the same reconstruction loss. Unlike adversarial examples which are crafted to cause the target model to fail, our perturbation process does not have such constraint and does not rely on a target model. Our input perturbations are randomly generated, representing any types of noises that can be observed in real-world usage. This makes the perturbation process generic, easy and fast. Following (Cheng et al., 2018) , we generate semantically related perturbations in the decoder to increase the diversity of the translations.",
"cite_spans": [
{
"start": 994,
"end": 1014,
"text": "(Cheng et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Together with NAL, our model shows its ability to resist noises in the input and produce more robust translations. Results on ZH-EN and FR-EN translation significantly improve over the baseline by +1.24 (MT03) and +1.4 (N15) BLEU on news domain, and +1.63 (Social), +1.3 (mtnt18) on social media domain respectively. Further fine-tuning experiments on FR-EN social media text even witness an average improvement of +1.25 BLEU over the best approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Robust Training: Robust training has shown to be effective to improve the robustness of the models in computer vision (Szegedy et al., 2013) . In Natural Language Processing, it involves augmenting the training data with carefully crafted noisy examples: semantically equivalent word substitu-tions (Alzantot et al., 2018) , paraphrasing (Iyyer et al., 2018; Ribeiro et al., 2018) , character-level noise (Ebrahimi et al., 2018b; Tan et al., 2020a,b) , or perturbations at embedding space (Miyato et al., 2016; Liang et al., 2020) . Inspired by Lei et al. (2017) that nicely captures the semantic interactions in discourse relation, we regard noise as a disruptor to break semantic interactions and propose our CER approach to mitigate this phenomenon. We make up \"noisy\" words randomly to act as random noise in the input to break the text naturalness. Our experiment demonstrates its superiority in multiple dimensions.",
"cite_spans": [
{
"start": 118,
"end": 140,
"text": "(Szegedy et al., 2013)",
"ref_id": "BIBREF33"
},
{
"start": 299,
"end": 322,
"text": "(Alzantot et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 358,
"text": "(Iyyer et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 359,
"end": 380,
"text": "Ribeiro et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 405,
"end": 429,
"text": "(Ebrahimi et al., 2018b;",
"ref_id": "BIBREF11"
},
{
"start": 430,
"end": 450,
"text": "Tan et al., 2020a,b)",
"ref_id": null
},
{
"start": 489,
"end": 510,
"text": "(Miyato et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 511,
"end": 530,
"text": "Liang et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 545,
"end": 562,
"text": "Lei et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Robust Neural Machine Translation: Methods have been proposed to make NMT models resilient not only to adequacy errors (Lei et al., 2019) but also to both natural and synthetic noise. Incorporating monolingual data into NMT has the capacity to improve the robustness (Sennrich et al., 2016a; Edunov et al., 2018; Cheng et al., 2016) . Some non data-driven approaches that specifically designed to address the robustness problem of NMT (Sperber et al., 2017; Ebrahimi et al., 2018a; Wang et al., 2018; Karpukhin et al., 2019; Cheng et al., 2019 Cheng et al., , 2020 explored effective ways to synthesize adversarial examples into the training data. Belinkov and Bisk (2018) showed a structure-invariant word representation capable of addressing multiple typo noise. Cheng et al. (2018) used adversarial stability training strategy to make NMT resilient to arbitrary noise. Liu et al. (2019) added an additional phonetic embedding to overcome homophone noise.",
"cite_spans": [
{
"start": 119,
"end": 137,
"text": "(Lei et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 267,
"end": 291,
"text": "(Sennrich et al., 2016a;",
"ref_id": "BIBREF30"
},
{
"start": 292,
"end": 312,
"text": "Edunov et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 313,
"end": 332,
"text": "Cheng et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 435,
"end": 457,
"text": "(Sperber et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 458,
"end": 481,
"text": "Ebrahimi et al., 2018a;",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 500,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 501,
"end": 524,
"text": "Karpukhin et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 525,
"end": 543,
"text": "Cheng et al., 2019",
"ref_id": "BIBREF6"
},
{
"start": 544,
"end": 564,
"text": "Cheng et al., , 2020",
"ref_id": "BIBREF7"
},
{
"start": 765,
"end": 784,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 872,
"end": 889,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Meanwhile, Michel and Neubig (2018) released a dataset for evaluating NMT on social media text. This dataset was used as a benchmark for WMT 19 Robustness shared task (Li et al., 2019) to improve the robustness of NMT models on noisy text. We show our approach also benefits the fine-tuning process using additional social media data.",
"cite_spans": [
{
"start": 167,
"end": 184,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose a Context-Enhanced Reconstruction (CER) approach to learn robust contextual representation in the presence of noisy words through a perturbation step and a reconstruction step in both encoder and decoder during model training. The perturbation step automatically inserts made-up words in the input sequence x to generate a noisy example x . The noisy example mimics input where text naturalness is broken due to the noisy words. Similarly, we perturb the output sequence y to y using a semantic constraint to generate noisy examples for the decoder to have more diversity in the translations. The reconstruction step in the model aims to restore the contextual representation c x of x to be similar to its corresponding original contextual representation c x in the encoder. Specifically, under the Transformer architecture (Figure 1 ), the reconstruction step aims to stabilize and minimize the disruption of attention distribution for a word over the whole input in the presence of inserted noise. The stabilization is needed for both clean and noisy words as both of their contextual representations are affected. For a noisy word, reconstruction reduces the attention to itself and encourages the construction of the contextual representation to leverage more on its clean neighbors. For clean words, reconstruction works as a denoise module to mitigate the interference of noisy words. For c y in the decoder, the aim is to generate more examples with similar context as c y . The reconstruction helps to normalize the contextual representation of semantically similar words.",
"cite_spans": [],
"ref_spans": [
{
"start": 835,
"end": 844,
"text": "(Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Approaches",
"sec_num": "3"
},
{
"text": "We insert made-up words, representing any kinds of noise, to disturb the contextual representation during training. To create those words, we build a made-up dictionary D \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "x with M made-up words. As shown in Figure 1(a) , made-up words are simply indexed slots in D \u2212",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 47,
"text": "Figure 1(a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "x , whose embeddings are randomly initialized with no prior restriction and updated during training just as valid words. During the perturbation step, we randomly select multiple positions in each input sequence based on probability \u03c3 x and replace the words with any arbitrary made-up words in D \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "x . For the decoder, as the aim is not to insert noise but to increase the diversity of translation, we add small perturbations with a semantic constraint to make the model robust. Specifically, we randomly select multiple positions in each target sequence with a probability \u03c3 y and perturb the corresponding words. For the word y i chosen to be perturbed, we create a dynamic set V y i consisting of m words having the highest cosine similarity with it (excluding y i ). We average the embeddings of the words in V y i as the perturbation for y i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Vy i = top_m y j \u2208Dy ,j =i (cos(e y i , e y j ))",
"eq_num": "(1)"
}
],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e y i = 1 m y j \u2208Vy i e y j",
"eq_num": "(2)"
}
],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
{
"text": "Where D y is the target dictionary, e y j is the target word embedding for y j and e y i is the perturbed embedding for y i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbing Input Text with Noise",
"sec_num": "3.1"
},
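To make the perturbation step concrete, the following is a minimal PyTorch sketch (our own illustration, not the released CER-MT code); the helper names perturb_source and perturb_target_embeddings, and the assumption that made-up word ids are appended after the real vocabulary, are ours.

```python
# Minimal sketch of CER's two perturbation steps; illustrative only, not the released code.
import torch
import torch.nn as nn

def perturb_source(src_ids, vocab_size, madeup_size=10000, sigma_x=0.1):
    """Encoder side: with probability sigma_x, replace a token id with a random made-up
    word id. Made-up words are assumed to occupy ids [vocab_size, vocab_size + madeup_size)."""
    mask = torch.rand(src_ids.shape) < sigma_x
    madeup_ids = torch.randint(vocab_size, vocab_size + madeup_size, src_ids.shape)
    return torch.where(mask, madeup_ids, src_ids)

def perturb_target_embeddings(tgt_ids, embedding, m=3, sigma_y=0.1):
    """Decoder side (Eq. 1-2): with probability sigma_y, replace a word's embedding by the
    mean embedding of its m most cosine-similar words, excluding the word itself."""
    emb = embedding(tgt_ids)                                        # (B, T, d)
    vocab = nn.functional.normalize(embedding.weight, dim=-1)       # (|V|, d)
    sims = nn.functional.normalize(emb, dim=-1) @ vocab.t()         # (B, T, |V|)
    sims = sims.scatter(-1, tgt_ids.unsqueeze(-1), float("-inf"))   # exclude y_i itself
    neighbours = sims.topk(m, dim=-1).indices                       # (B, T, m)
    perturbed = embedding(neighbours).mean(dim=-2)                  # average over V_{y_i}
    mask = (torch.rand(tgt_ids.shape) < sigma_y).unsqueeze(-1)
    return torch.where(mask, perturbed, emb)

if __name__ == "__main__":
    vocab_size, d = 100, 8
    tgt_embedding = nn.Embedding(vocab_size, d)
    src = torch.randint(0, vocab_size, (2, 5))
    tgt = torch.randint(0, vocab_size, (2, 6))
    print(perturb_source(src, vocab_size).shape)
    print(perturb_target_embeddings(tgt, tgt_embedding).shape)
```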
{
"text": "As the injected noise in x affects the self-attention mechanism in producing correct contextual representation, we regularize the contextual representation using a Noise Adaptation Layer (NAL) immediately after the self-attention layer as depicted in Figure 1(a) . This NAL is trained together with the NMT model and used as a reconstruction module during testing (See Figure 1(b) ,(c)). Formally, let c x l and c x l be the outputs of the self-attention in the l-th encoder layer for x and x respectively. We train the NAL by:",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 262,
"text": "Figure 1(a)",
"ref_id": "FIGREF1"
},
{
"start": 369,
"end": 380,
"text": "Figure 1(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Reconstructing Contextual Representation",
"sec_num": "3.2"
},
{
"text": "L x nal (\u03b8 x nal ) = 1 |S| (x,y)\u2208S N l=1 ||c x l \u2212 NAL(c x l )|| 2 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reconstructing Contextual Representation",
"sec_num": "3.2"
},
{
"text": "Where \u03b8 x nal are parameters of NAL, S is the training corpus and N is the encoder layer size. Given c x , NAL attempts to output a more correct contextual representation guided by c x . We use a single layer feed-forward network (FFN) in (Vaswani et al., 2017) as our NAL implementation. Similarly, the reconstruction loss for decoder is:",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reconstructing Contextual Representation",
"sec_num": "3.2"
},
{
"text": "L y nal (\u03b8 y nal ) = 1 |S| (x,y)\u2208S N l=1 ||c y l \u2212 NAL(c y l )|| 2 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reconstructing Contextual Representation",
"sec_num": "3.2"
},
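As a concrete illustration of Eq. (3) and (4), here is a small sketch of our own (not the authors' code): the class name NoiseAdaptationLayer, the helper reconstruction_loss, and the FFN width are assumptions, and the corpus average 1/|S| is approximated by averaging over a batch.

```python
# Minimal sketch of the NAL and the per-layer reconstruction loss; illustrative only.
import torch
import torch.nn as nn

class NoiseAdaptationLayer(nn.Module):
    """Single-layer feed-forward network mapping a noisy self-attention output
    back towards the clean one (the hidden width of 2048 is an assumption)."""
    def __init__(self, d_model, d_ff=2048):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

    def forward(self, noisy_context):
        return self.ffn(noisy_context)

def reconstruction_loss(clean_contexts, noisy_contexts, nals):
    """Eq. (3)/(4): sum over layers of the L2 distance between the clean context c_l
    and NAL(c'_l), averaged over tokens in the batch."""
    loss = 0.0
    for c_clean, c_noisy, nal in zip(clean_contexts, noisy_contexts, nals):
        loss = loss + (c_clean - nal(c_noisy)).norm(p=2, dim=-1).mean()
    return loss

if __name__ == "__main__":
    d_model, n_layers = 512, 6
    nals = nn.ModuleList(NoiseAdaptationLayer(d_model) for _ in range(n_layers))
    clean = [torch.randn(2, 5, d_model) for _ in range(n_layers)]
    noisy = [torch.randn(2, 5, d_model) for _ in range(n_layers)]
    print(reconstruction_loss(clean, noisy, nals))
```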
{
"text": "We apply the perturbation step at the embedding layer, see Figure 1 . The inserted noise in x and y would also receive gradient from the final loss function and update just like other clean words. NAL is added at each Transformer layer where the outputs are only used to calculate the reconstruction loss and not passed to the next layer. On the other hand, the output of FFN is propagated to the next layer as usual. The reconstruction step mainly serves as a stabilizer to prevent the noise from propagating. The final training objective L is the combination of the above three loss functions, the original translation loss, the reconstruction loss for the encoder and the reconstruction loss for the decoder. Both \u03bb x and \u03bb y are set empirically to count for the relative importance.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.3"
},
{
"text": "L = Lnmt(\u03b8nmt) + \u03bbxL x nal (\u03b8 x nal ) + \u03bbyL y nal (\u03b8 y nal ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.3"
},
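A toy rendering of Eq. (5), with placeholder scalar losses rather than outputs of a real model; the function name combined_loss is ours.

```python
# Toy illustration of the combined training objective in Eq. (5).
import torch

def combined_loss(nmt_loss, enc_recon_loss, dec_recon_loss, lambda_x=1.0, lambda_y=1.0):
    """L = L_nmt + lambda_x * L^x_nal + lambda_y * L^y_nal (lambda_x = lambda_y = 1 in the paper)."""
    return nmt_loss + lambda_x * enc_recon_loss + lambda_y * dec_recon_loss

print(combined_loss(torch.tensor(3.2), torch.tensor(0.4), torch.tensor(0.3)))
```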
{
"text": "Experiments are conducted on ZH-EN and FR-EN translation tasks for both news and social media domains. We also use social media text to fine-tune the NMT systems on FR-EN. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4"
},
{
"text": "We use the same datasets as Michel and Neubig (2018) . The training set consists of 2.16M sentence pairs extracted from europarl-v7 and news-commentary-v10. We use the newsdiscuss-dev2015 as development set and evaluate the model on two news test sets, newstest2014 (N14) and newsdiscusstest2015 (N15). We also evaluate on two social media test sets: mtnt18 (Michel and Neubig, 2018) and mtnt19 (Li et al., 2019) . FR-EN Fine-Tuning: We use the noisy training set (mtnttrain) provided by Michel and Neubig (2018) to fine-tune the FR-EN model. We use fairseq's implementation of Transformer (Ott et al., 2019) . In evaluation, we report case-insensitive tokenized BLEU for ZH-EN (Papineni et al., 2002) and sacre-BLEU (Post, 2018) for FR-EN. Following Michel and Neubig (2018), we do not use development set but only report best results on three social media test sets.",
"cite_spans": [
{
"start": 28,
"end": 52,
"text": "Michel and Neubig (2018)",
"ref_id": "BIBREF23"
},
{
"start": 358,
"end": 383,
"text": "(Michel and Neubig, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 395,
"end": 412,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 488,
"end": 512,
"text": "Michel and Neubig (2018)",
"ref_id": "BIBREF23"
},
{
"start": 590,
"end": 608,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 678,
"end": 701,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
},
{
"start": 717,
"end": 729,
"text": "(Post, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FR-EN:",
"sec_num": null
},
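For reference, a minimal scoring sketch of the two kinds of BLEU reported (sacreBLEU for FR-EN, and case-insensitive BLEU on already-tokenized text approximating the ZH-EN setting); the hypothesis and reference strings are placeholders, and the paper's exact scoring configuration may differ.

```python
# Minimal BLEU scoring sketch; placeholder sentences, not the paper's outputs.
import sacrebleu

hyps = ["the cat sat on the mat"]
refs = [["the cat is on the mat"]]   # one reference stream

# Detokenized sacreBLEU, as reported for FR-EN (Post, 2018).
print(sacrebleu.corpus_bleu(hyps, refs).score)

# Case-insensitive BLEU on pre-tokenized text, approximating the ZH-EN setting.
print(sacrebleu.corpus_bleu(hyps, refs, lowercase=True, tokenize="none").score)
```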
{
"text": "We segment the Chinese words using THU-LAC (Li and Sun, 2009) and tokenize both French and English words using tokenize.perl 2 . We apply BPE (Sennrich et al., 2016b) to get sub-word vocabularies for the encoder and decoder, both with 20K merge operations.",
"cite_spans": [
{
"start": 43,
"end": 61,
"text": "(Li and Sun, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 142,
"end": 166,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FR-EN:",
"sec_num": null
},
{
"text": "The hyper-parameters setting is the same as transformer-base in (Vaswani et al., 2017) except that we set dropout rate as 0.4 in all our experiments. Our proposed models are trained on top of Transformer baseline for efficiency purpose, where additional parameters from the embeddings of D \u2212",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FR-EN:",
"sec_num": null
},
{
"text": "x and ReL are uniformly initialized. The madeup dictionary size M is set to 10,000. The size of dynamic set m is set to 3. The probability \u03c3 x and \u03c3 y are both set to 0.1 and balance coefficient \u03bb x and \u03bb y are both set to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FR-EN:",
"sec_num": null
},
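Collecting the stated hyper-parameters in one place; the dict below is just an illustrative container of the values reported in this section, not a config format used by the authors.

```python
# Hyper-parameters reported in Section 4, gathered for convenience.
cer_config = {
    "architecture": "transformer-base",  # Vaswani et al. (2017)
    "dropout": 0.4,
    "madeup_dict_size": 10000,   # M, size of D^-_x
    "dynamic_set_size": 3,       # m, neighbours used for decoder perturbation
    "sigma_x": 0.1,              # source perturbation probability
    "sigma_y": 0.1,              # target perturbation probability
    "lambda_x": 1.0,             # weight of the encoder reconstruction loss
    "lambda_y": 1.0,             # weight of the decoder reconstruction loss
}
print(cer_config)
```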
{
"text": "We use Transformer as our baseline. ZH-EN: We compare with Wang et al. (2018) ; Cheng et al. (2018 Cheng et al. ( , 2019 . Wang et al. (2018) use a data augmentation approach by randomly replacing words in source and target sentences with other in-dictionary words. Cheng et al. (2018) use adversarial stability training to make NMT resilient to noise. Cheng et al. (2019) Michel and Neubig (2018) do the first benchmark of the noisy text translation tasks in three languages. Vaibhav et al. (2019) leverage effective synthetic noise to make NMT resilient to noisy text. We implement their approach on Transformer backbone. For a fair comparison, we limit the data to train back-translation models only with mtnttrain. Zhou et al. (2019) adopt a multitask transformer architecture with two decoders, where the first decoder learns to denoise and the second decoder learns to translate from the denoised text. They adopt the approach proposed by Vaibhav et al. (2019) to synthesize the noisy text for their first decoder. We do not compare our model with (Berard et al., 2019; Helcl et al., 2019) as they use much more out-domain data, a great number of monolingual data and a bigger Transformer model, and hence not comparable with our experimental settings. Table 2 and Table 3 show the performance on ZH-EN and FR-EN tasks. We show the results of applying CER only to the encoder (+ CER-Enc), and to both the encoder and decoder (+ CER).",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF38"
},
{
"start": 80,
"end": 98,
"text": "Cheng et al. (2018",
"ref_id": "BIBREF8"
},
{
"start": 99,
"end": 120,
"text": "Cheng et al. ( , 2019",
"ref_id": "BIBREF6"
},
{
"start": 123,
"end": 141,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF38"
},
{
"start": 266,
"end": 285,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 353,
"end": 372,
"text": "Cheng et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 373,
"end": 397,
"text": "Michel and Neubig (2018)",
"ref_id": "BIBREF23"
},
{
"start": 477,
"end": 498,
"text": "Vaibhav et al. (2019)",
"ref_id": "BIBREF36"
},
{
"start": 945,
"end": 966,
"text": "Vaibhav et al. (2019)",
"ref_id": "BIBREF36"
},
{
"start": 1054,
"end": 1075,
"text": "(Berard et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 1076,
"end": 1095,
"text": "Helcl et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1259,
"end": 1278,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.2"
},
{
"text": "As illustrated, our approach improves the news Table 2 and Table 3 when applying noise-insertion methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 66,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": "5.1"
},
{
"text": "text translations on all test sets for both ZH-EN and FR-EN and outperforms the Transformer baseline in terms of average BLEU by +1.01 and +1.2 on ZH-EN and FR-EN respectively, illustrating the superiority of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": "5.1"
},
{
"text": "The performance on social media test sets shows significant improvement with up to +1.63 BLEU over Transformer and +0.84 BLEU over the best approach (Wang et al., 2018) on ZH-EN. For FR-EN, our model outperforms Wang et al. (2018) by +1.5 and +1.0 BLEU on mtnt18 and mtnt19 respectively. Zhou et al. (2019) use mtnttrain and TED (Qi et al., 2018) to synthesize noisy sentences for their first decoder, hence effectively they are exploiting indomain data during training and thus not quite a fair comparison in the evaluation. Nevertheless, CER still significantly outperforms Zhou et al. (2019) by +2.0 BLEU on mtnt18.",
"cite_spans": [
{
"start": 149,
"end": 168,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 212,
"end": 230,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF38"
},
{
"start": 288,
"end": 306,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF40"
},
{
"start": 329,
"end": 346,
"text": "(Qi et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 576,
"end": 594,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": "5.1"
},
{
"text": "We investigate the effect of different noise-insertion methods by dynamically inserting noise into the source side of the original training set using different strategies with a same probability \u03c3 x . Madeup: Our approach to add made-up words. Semantics: We test our semantic constraint in the decoder to assess if it benefits the encoder. Dropout: We replace word embeddings with all-0 vectors, similar to enlarging the dropout rate. Gaussian: Following the feature-level perturbations of Cheng et al. (2018) , we add the Gaussian noise to a word embedding to simulate the noise. Random: We replace a word with an arbitrary word in the dictionary. This would result in a valid word being placed in an unreasonable context. Figure 2 shows the BLEU improvement of various noise-insertion methods on social media test sets. We find that nearly all kinds of noise-insertion methods improve the robustness of MT with the exception of Dropout. Since we have already set the dropout rate to an optimal rate, inserting additional Dropout noise does not increase but decreases the performance. As shown, Madeup improves the performance nearly twice than the rest of the noise-insertion methods. We conjecture Semantics, Dropout and Gaussian may be small and not diverse enough to simulate the real noisy words. Both Random and Madeup can break the text coherence. However, Random uses a random in-dictionary word, which can place a valid word in an unreasonable context and cause its embedding to update in a wrong direction. In fact, this method improves the robustness of NMT models at the cost of those replaced words. Our Madeup can entirely avoid this cost as we use made-up words to work as noisy words and does not cause any context change of all in-dictionary words.",
"cite_spans": [
{
"start": 490,
"end": 509,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 724,
"end": 732,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of Noise",
"sec_num": "5.2"
},
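To make the compared strategies concrete, here is a small sketch of our own (the Gaussian standard deviation and the function names are assumptions) of how the Dropout, Gaussian and Random noise could be injected, alongside the Madeup replacement shown earlier.

```python
# Sketch of the alternative noise-insertion strategies compared above; illustrative only.
import torch

def dropout_noise(emb, mask):
    """Dropout: set the embeddings of selected positions to all-zero vectors."""
    return emb.masked_fill(mask.unsqueeze(-1), 0.0)

def gaussian_noise(emb, mask, std=0.01):
    """Gaussian: add feature-level Gaussian noise to selected positions (std is assumed)."""
    return torch.where(mask.unsqueeze(-1), emb + std * torch.randn_like(emb), emb)

def random_word_noise(ids, mask, vocab_size):
    """Random: replace selected tokens with arbitrary in-dictionary words."""
    return torch.where(mask, torch.randint(0, vocab_size, ids.shape), ids)

if __name__ == "__main__":
    emb = torch.randn(2, 5, 8)
    ids = torch.randint(0, 100, (2, 5))
    mask = torch.rand(2, 5) < 0.1   # same selection probability sigma_x as in the paper
    print(dropout_noise(emb, mask).shape)
    print(gaussian_noise(emb, mask).shape)
    print(random_word_noise(ids, mask, 100).shape)
```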
{
"text": "To further gain insights on how NAL helps improve the robustness of NMT models. We create three variants to aid our analysis: CER-inactive: We do not activate NAL at testing time. The contextual representation is feed directly into later FFN. This variant is to test the effectiveness of NAL. CER-con: We remove NAL but only add a con- straint to ensure {c x , c x } and {c y , c y } to be close respectively at training time. This forces the selfattention layer to reconstruct the correct contextual representation itself. This variant is to demonstrate the necessity to set apart the context generation module (self-attention layer) and the reconstruction module (NAL).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of NAL",
"sec_num": "5.3"
},
{
"text": "We borrow the adversarial stability training strategy proposed in Cheng et al. (2018) here. In this variant, NAL is replaced by a discriminator and \u03b8 x nal and \u03b8 y nal are changed to the adversarial learning loss in Cheng et al. (2018) . The purpose is to assess the effectiveness of NAL and the discriminator in context reconstruction. Figure 3 shows the results of the three variants on three social media test sets. From the figure, we make the following observations.",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 216,
"end": 235,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "CER-D:",
"sec_num": null
},
{
"text": "NAL is effective at Test Time. The activation of NAL at test time helps to produce more reliable contextual representation. Notably, NAL gains +1.19 BLEU on Social. NAL needs to be learnt separately. As shown in CER-con, by forcing self-attention layer to do both tasks (context generation and reconstruction), the performance improvement gets affected by at least 0.4 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CER-D:",
"sec_num": null
},
{
"text": "NAL is more effective than a discriminator to guide reconstruction. The improvements are less significant in all test sets when using a discriminator (CER-D) comparing to CER. Therefore, we can conclude that NAL is more effective than a discriminator to reconstruct the perturbed contextual representation and CER outperforms all variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CER-D:",
"sec_num": null
},
{
"text": "We fine-tune the same Transformer model in Table 3 with the social media data mtnttrain (+FT) and further include CER in the fine-tuning (+FT w/ CER). Table 4 shows our performance (+FT w/ CER) with other four fine-tuning approaches on mtnttrain. It shows that our CER also bene- fits the fine-tuning process and outperforms all the approaches in two noisy test sets. Specifically, it gains +2.1 and +1.3 BLEU over +FT on mtnt18 and mtnt19 and outperforms Vaibhav et al. (2019) by +1.3 and +1.2 BLEU respectively.",
"cite_spans": [
{
"start": 456,
"end": 477,
"text": "Vaibhav et al. (2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "FR-EN Fine-Tuning on Social Media Text",
"sec_num": "5.4"
},
{
"text": "We first train a ZH-EN baseline model using 25M sentence pairs, which are mainly in news domain. Similar to the setting in Table 4 , we apply both simple finetuning (+FT) and our CER (+ FT w/ CER) approach using 125K social media training data. We evaluate those models on Social. We also include the performance of Google Translate 3 here to show the competitiveness of our baseline model. As shown in Table 5 , our CER approach can still benefit the fine-tuning process even on the strong baseline. It should be noted that the baseline has already maintained high robustness with large-scale training data where improvement in such a model is hard to obtain. In fact, 125K in-domain data can only contribute to 1.55 BLEU improvement. Under this circumstance, the 0.26 BLEU improvement brought by CER should be highly valued considered no additional fine-tuning data is used.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 403,
"end": 410,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Experiments on Large-Scale Datasets",
"sec_num": "5.5"
},
{
"text": "In this work, we propose an approach to reduce the vulnerability of NMT models to input perturbations. Our input perturbation is easy, fast and not specific to a target victim model. Experimental results show our proposed approach improves the robustness on both news and social media text and helped to improve the translation of real input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Available at https://github.com/wwxu21/ CER-MT.2 https://github.com/moses-smt/mosesdecoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://translate.google.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work was supported in part by Defence Science and Technology Agency (DSTA), Singapore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating natural language adversarial examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07998"
]
},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. arXiv preprint arXiv:1804.07998.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A phrase-based statistical model for SMS text normalization",
"authors": [
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normal- ization. In Proceedings of the COLING/ACL 2006",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Main Conference Poster Sessions, pages 33-40, Syd- ney, Australia. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Naver labs europe's systems for the wmt19 machine translation robustness task",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Berard",
"suffix": ""
},
{
"first": "Ioan",
"middle": [],
"last": "Calapodescu",
"suffix": ""
},
{
"first": "Claude",
"middle": [],
"last": "Roux",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "526--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Berard, Ioan Calapodescu, and Claude Roux. 2019. Naver labs europe's systems for the wmt19 machine translation robustness task. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 526-532, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Robust neural machine translation with doubly adversarial inputs",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4324--4333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly ad- versarial inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 4324-4333, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "AdvAug: Robust adversarial augmentation for neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Lu Jiang, Wolfgang Macherey, and Jacob Eisenstein. 2020. AdvAug: Robust adversarial aug- mentation for neural machine translation. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards robust neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1756--1766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756- 1766, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semisupervised learning for neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1965--1974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1965-1974, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On adversarial examples for character-level neural machine translation",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "653--663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018a. On adversarial examples for character-level neural machine translation. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 653-663, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "HotFlip: White-box adversarial examples for text classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018b. HotFlip: White-box adversarial exam- ples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cuni system for the wmt19 robustness task",
"authors": [
{
"first": "Jindich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "Jindich",
"middle": [],
"last": "Libovick",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "539--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jindich Helcl, Jindich Libovick, and Martin Popel. 2019. Cuni system for the wmt19 robustness task. In Proceedings of the Fourth Conference on Ma- chine Translation (Volume 2: Shared Task Papers, Day 1), pages 539-543, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1875--1885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Training on synthetic noise improves robustness to natural noise in machine translation",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.01509"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in ma- chine translation. arXiv preprint arXiv:1902.01509.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Swim: A simple word interaction model for implicit discourse relation recognition",
"authors": [
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Meichun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ilija",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17",
"volume": "",
"issue": "",
"pages": "4026--4032",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/562"
]
},
"num": null,
"urls": [],
"raw_text": "Wenqiang Lei, Xuancong Wang, Meichun Liu, Ilija Ilievski, Xiangnan He, and Min-Yen Kan. 2017. Swim: A simple word interaction model for implicit discourse relation recognition. In Proceedings of the Twenty-Sixth International Joint Conference on Arti- ficial Intelligence, IJCAI-17, pages 4026-4032.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Revisit automatic error detection for wrong and missing translation -a supervised approach",
"authors": [
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Weiwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ai",
"middle": [
"Ti"
],
"last": "Aw",
"suffix": ""
},
{
"first": "Yuanxin",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Tat Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "942--952",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1087"
]
},
"num": null,
"urls": [],
"raw_text": "Wenqiang Lei, Weiwen Xu, Ai Ti Aw, Yuanxin Xiang, and Tat Seng Chua. 2019. Revisit automatic error detection for wrong and missing translation -a su- pervised approach. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 942-952, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Findings of the first shared task on machine translation robustness",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "91--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Has- san Sajjad. 2019. Findings of the first shared task on machine translation robustness. In Proceedings of the Fourth Conference on Machine Translation (Vol- ume 2: Shared Task Papers, Day 1), pages 91-102,",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Italy",
"middle": [],
"last": "Florence",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Punctuation as implicit annotations for Chinese word segmentation",
"authors": [
{
"first": "Zhongguo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "505--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for Chinese word segmentation. Computational Linguistics, 35(4):505-512.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Pirhdy: Learning pitch-, rhythm-, and dynamics-aware embeddings for symbolic music",
"authors": [
{
"first": "Hongru",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Yaozhu"
],
"last": "Chan",
"suffix": ""
},
{
"first": "Zhenglu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2020,
"venue": "MM '20: The 28th ACM International Conference on Multimedia",
"volume": "",
"issue": "",
"pages": "574--582",
"other_ids": {
"DOI": [
"10.1145/3394171.3414032"
]
},
"num": null,
"urls": [],
"raw_text": "Hongru Liang, Wenqiang Lei, Paul Yaozhu Chan, Zhenglu Yang, Maosong Sun, and Tat-Seng Chua. 2020. Pirhdy: Learning pitch-, rhythm-, and dynamics-aware embeddings for symbolic music. In MM '20: The 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, Oc- tober 12-16, 2020, pages 574-582. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Robust neural machine translation with joint textual and phonetic embedding",
"authors": [
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3044--3049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust neural machine translation with joint textual and phonetic embed- ding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044-3049, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "MTNT: A testbed for machine translation of noisy text",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "543--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 543- 553, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial training methods for semi-supervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07725"
]
},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Good- fellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A call for clarity in reporting bleu scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08771"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "When and why are pre-trained word embeddings useful for neural machine translation?",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Devendra",
"middle": [],
"last": "Sachan",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Felix",
"suffix": ""
},
{
"first": "Sarguna",
"middle": [],
"last": "Padmanabhan",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "529--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neu- ral machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529-535, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Semantically equivalent adversarial rules for debugging NLP models",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "856--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Toward robust neural machine translation for noisy input sequences",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2017,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In International Workshop on Spo- ken Language Translation (IWSLT).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6199"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "It's morphin' time! Combating linguistic discrimination with inflectional perturbations",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2920--2935",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.263"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020a. It's morphin' time! Combating linguistic discrimination with inflectional perturba- tions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Mind your inflections! Improving NLP for non-standard Englishes with Base-Inflection Encoding",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Lav",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5647--5663",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.455"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Tan, Shafiq Joty, Lav Varshney, and Min-Yen Kan. 2020b. Mind your inflections! Improving NLP for non-standard Englishes with Base-Inflection En- coding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 5647-5663, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Improving robustness of machine translation with synthetic noise",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Vaibhav",
"suffix": ""
},
{
"first": "Sumeet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1916--1920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, and Graham Neubig. 2019. Improving robustness of ma- chine translation with synthetic noise. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1916-1920, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "SwitchOut: an efficient data augmentation algorithm for neural machine translation",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "856--861",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. SwitchOut: an efficient data aug- mentation algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 856-861, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Improving robustness of neural machine translation with multi-task learning",
"authors": [
{
"first": "Shuyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiangkai",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yingqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "565--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuyan Zhou, Xiangkai Zeng, Yingqi Zhou, Antonios Anastasopoulos, and Graham Neubig. 2019. Im- proving robustness of neural machine translation with multi-task learning. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 565-571, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure 1 shows the architecture.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "The architecture of CER (a), and the use of NAL in training (b) and testing (c). The solid lines indicate the flow for original input, while the dotted lines for noisy input, generated in the perturbation step.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "BLEU improvements compared to Transformer baseline shown in",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "BLEU scores of CER variants.",
"type_str": "figure"
},
"TABREF0": {
"text": "Input \u901a\u5bb5\u6253\u6e38\u620f\u4e0a\u5206\u8d3c\u5feb Ref. It's super-fast to gain scores when playing games over the night. MT Play the game all night and take points thief fast. CER Play games all night to score points quickly. Input \u6211\u5df2\u526a\u77ed\u4e86\u6211\u7684\u53d1,\u526a\u65ad\u4e86\u60e9\u7f5a,\u526a\u4e00\u5730\u4f24\u900f\u6211 \u7684\u5c34\u5c2c\u3002\u3002\u3002\u3002 Ref. I have cut my hair, i cut off the punishment, i away the awkwardness that hurt me. MT I got my punishment, got rid of my embarrassment. CER I cut short my hair , cut off punishment , and cut off my embarrassment that hurts me.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF1": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF4": {
"text": "Case-insensitive BLEU scores (%) on ZH-EN translation. MT02 is our development set.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td>N14</td><td colspan=\"3\">N15 mtnt18 mtnt19</td></tr><tr><td colspan=\"2\">Exising systems</td><td/><td/><td/></tr><tr><td>Wang et al.</td><td>29.2</td><td>31.1</td><td>25.0</td><td>28.1</td></tr><tr><td>Michel and Neubig</td><td>28.9</td><td>30.8</td><td>23.3</td><td>26.2</td></tr><tr><td>Zhou et al.*</td><td colspan=\"2\">N.A. N.A.</td><td>24.5</td><td>30.3</td></tr><tr><td colspan=\"2\">Our systems</td><td/><td/><td/></tr><tr><td>Transformer</td><td>29.7</td><td>31.0</td><td>25.2</td><td>28.0</td></tr><tr><td>+ CER-Enc</td><td>30.4</td><td>31.7</td><td>26.1</td><td>28.7</td></tr><tr><td>+ CER</td><td>30.7</td><td>32.4</td><td>26.5</td><td>29.1</td></tr></table>",
"num": null
},
"TABREF5": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF7": {
"text": "sacreBLEU on FR-EN fine-tuning task.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF8": {
"text": "CER 40.82 (+4.64%)",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td/><td>Social</td></tr><tr><td colspan=\"2\">Google Translate</td><td>38.59</td></tr><tr><td/><td>Baseline</td><td>39.01</td></tr><tr><td>Ours</td><td>+FT</td><td>40.56 (+3.97%)</td></tr><tr><td/><td>+FT w/</td><td/></tr></table>",
"num": null
},
"TABREF9": {
"text": "Case-insensitive BLEU scores (relative improvement) on large-scale ZH-EN translation system.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}