{ "paper_id": "P15-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:12:03.744632Z" }, "title": "Addressing the Rare Word Problem in Neural Machine Translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "ilyasu@google.com" }, { "first": "Google", "middle": [], "last": "Quoc", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "V", "middle": [], "last": "Le", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "vinyals@google.com" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "woj.zaremba@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT'14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT'14 contest task. * Work done while the authors were in Google. \u2020 indicates equal contribution.", "pdf_parse": { "paper_id": "P15-1002", "_pdf_hash": "", "abstract": [ { "text": "Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. 
Our experiments on the WMT'14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT'14 contest task. * Work done while the authors were in Google. \u2020 indicates equal contribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) is a novel approach to MT that has achieved promising results (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Jean et al., 2015 ). An NMT system is a conceptually simple large neural network that reads the en-tire source sentence and produces an output translation one word at a time. NMT systems are appealing because they use minimal domain knowledge which makes them well-suited to any problem that can be formulated as mapping an input sequence to an output sequence (Sutskever et al., 2014) . In addition, the natural ability of neural networks to generalize implies that NMT systems will also generalize to novel word phrases and sentences that do not occur in the training set. In addition, NMT systems potentially remove the need to store explicit phrase tables and language models which are used in conventional systems. Finally, the decoder of an NMT system is easy to implement, unlike the highly intricate decoders used by phrase-based systems (Koehn et al., 2003) .", "cite_spans": [ { "start": 95, "end": 127, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF10" }, { "start": 128, "end": 151, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF17" }, { "start": 152, "end": 169, "text": "Cho et al., 2014;", "ref_id": "BIBREF4" }, { "start": 170, "end": 192, "text": "Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 193, "end": 210, "text": "Jean et al., 2015", "ref_id": "BIBREF9" }, { "start": 554, "end": 578, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF17" }, { "start": 1039, "end": 1059, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite these advantages, conventional NMT systems are incapable of translating rare words because they have a fixed modest-sized vocabulary 1 which forces them to use the unk symbol to represent the large number of out-of-vocabulary (OOV) words, as illustrated in Figure 1 . Unsurprisingly, both Sutskever et al. (2014) and Bahdanau et al. (2015) have observed that sentences with many rare words tend to be translated much more poorly than sentences containing mainly frequent words. Standard phrase-based systems (Koehn et al., 2007; Chiang, 2007; Cer et al., 2010; Dyer et al., 2010) , on the other hand, do not suffer from the rare word problem to the same extent because they can support a much larger vocabulary, and because their use of explicit alignments and phrase tables allows them to memorize the translations of even extremely rare words.", "cite_spans": [ { "start": 297, "end": 320, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF17" }, { "start": 325, "end": 347, "text": "Bahdanau et al. 
(2015)", "ref_id": "BIBREF1" }, { "start": 516, "end": 536, "text": "(Koehn et al., 2007;", "ref_id": "BIBREF12" }, { "start": 537, "end": 550, "text": "Chiang, 2007;", "ref_id": "BIBREF3" }, { "start": 551, "end": 568, "text": "Cer et al., 2010;", "ref_id": "BIBREF2" }, { "start": 569, "end": 587, "text": "Dyer et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 265, "end": 273, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by the strengths of standard phrase- 1 Due to the computationally intensive nature of the softmax, NMT systems often limit their vocabularies to be the top 30K-80K most frequent words in each language. However, Jean et al. (2015) has very recently proposed an efficient approximation to the softmax that allows for training NTMs with very large vocabularies. As discussed in Section 2, this technique is complementary to ours.", "cite_spans": [ { "start": 47, "end": 48, "text": "1", "ref_id": null }, { "start": 221, "end": 239, "text": "Jean et al. (2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "en: The ecotax portico in Pont-de-Buis , . . . [truncated] . . . , was taken down on Thursday morning fr: Le portique\u00e9cotaxe de Pont-de-Buis , . . . [truncated] . . . , a\u00e9t\u00e9 d\u00e9mont\u00e9 jeudi matin nn: Le unk de unk\u00e0 unk , . . . [truncated] . . . , a\u00e9t\u00e9 pris le jeudi matin", "cite_spans": [ { "start": 47, "end": 58, "text": "[truncated]", "ref_id": null }, { "start": 149, "end": 160, "text": "[truncated]", "ref_id": null }, { "start": 225, "end": 236, "text": "[truncated]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u2746 \u2746 \u2745 \u2745 \u2702 \u2702 \u2711 \u2711 \u2711 \u271f \u271f \u271f \u271f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Example of the rare word problem -An English source sentence (en), a human translation to French (fr), and a translation produced by one of our neural network systems (nn) before handling OOV words. We highlight words that are unknown to our model. The token unk indicates an OOV word. We also show a few important alignments between the pair of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "based system, we propose and implement a novel approach to address the rare word problem of NMTs. Our approach annotates the training corpus with explicit alignment information that enables the NMT system to emit, for each OOV word, a \"pointer\" to its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates the OOV words using a dictionary or with the identity translation, if no translation is found. Our experiments confirm that this approach is effective. On the English to French WMT'14 translation task, this approach provides an improvement of up to 2.8 (if the vocabulary is relatively small) BLEU points over an equivalent NMT system that does not use this technique. 
Moreover, our system is the first NMT system that outperforms the winner of a WMT'14 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A neural machine translation system is any neural network that maps a source sentence, s 1 , . . . , s n , to a target sentence, t 1 , . . . , t m , where all sentences are assumed to terminate with a special \"end-of-sentence\" token. More concretely, an NMT system uses a neural network to parameterize the conditional distributions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(t_j | t_1, . . . , t_{j-1}, s_1, . . . , s_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "None of the early work in neural machine translation systems has addressed the rare word problem, but the recent work of Jean et al. (2015) has tackled it with an efficient approximation to the softmax to accommodate a very large vocabulary (500K words). However, even with a large vocabulary, the problem with rare words, e.g., names, numbers, etc., still persists, and Jean et al. (2015) found that using techniques similar to ours is beneficial and complementary to their approach.", "cite_spans": [ { "start": 118, "end": 136, "text": "Jean et al. (2015)", "ref_id": "BIBREF9" }, { "start": 372, "end": 390, "text": "Jean et al. (2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "Despite the relatively large amount of work done on pure neural machine translation systems, there has been no work addressing the OOV problem in NMT systems, with the notable exception of Jean et al. (2015)'s work mentioned earlier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rare Word Models", "sec_num": "3" }, { "text": "We propose to address the rare word problem by training the NMT system to track the origins of the unknown words in the target sentences. If we knew the source word responsible for each unknown target word, we could introduce a post-processing step that would replace each unk in the system's output with a translation of its source word, using either a dictionary or the identity translation. For example, in Figure 1 , if the model knows that the second unknown token in the NMT translation (line nn) originates from the source word ecotax, it can perform a word dictionary lookup to replace that unknown token by \u00e9cotaxe. Similarly, an identity translation of the source word Pont-de-Buis can be applied to the third unknown token. en: The unk 1 portico in unk 2 . . . fr: Le unk \u2205 unk 1 de unk 2 . . . Figure 2 : Copyable Model -an annotated example with two types of unknown tokens: \"copyable\" unk n and null unk \u2205 .", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 268, "text": "Figure 2", "ref_id": null }, { "start": 597, "end": 605, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Rare Word Models", "sec_num": "3" }, { "text": "We present three annotation strategies that can easily be applied to any NMT system (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) . We treat the NMT system as a black box and train it on a corpus annotated by one of the models below. First, the alignments are produced with an unsupervised aligner. 
Next, we use the alignment links to construct a word dictionary that will be used for the word translations in the post-processing step. 2 If a word does not appear in our dictionary, then we apply the identity translation.", "cite_spans": [ { "start": 84, "end": 116, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF10" }, { "start": 117, "end": 140, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF17" }, { "start": 141, "end": 158, "text": "Cho et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Rare Word Models", "sec_num": "3" }, { "text": "The first few words of the sentence pair in Figure 1 (lines en and fr) illustrate our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rare Word Models", "sec_num": "3" }, { "text": "In this approach, we introduce multiple tokens to represent the various unknown words in the source and in the target language, as opposed to using only one unk token. We annotate the OOV words in the source sentence with unk 1 , unk 2 , unk 3 , in that order, while assigning repeating unknown words identical tokens. The annotation of the unknown words in the target language is slightly more elaborate: (a) each unknown target word that is aligned to an unknown source word is assigned the same unknown token (hence, the \"copy\" model) and (b) an unknown target word that has no alignment or that is aligned with a known word uses the special null token unk \u2205 . See Figure 2 for an example. This annotation enables us to translate every non-null unknown token.", "cite_spans": [], "ref_spans": [ { "start": 701, "end": 709, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Copyable Model", "sec_num": "3.1" }, { "text": "en: The unk portico in unk . . . fr: Le p 0 unk p \u22121 unk p 1 de p \u2205 unk p \u22121 . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional All Model (PosAll)", "sec_num": "3.2" }, { "text": "The copyable model is limited by its inability to translate unknown target words that are aligned to known words in the source sentence, such as the pair of words, \"portico\" and \"portique\", in our running example. The former word is in the source vocabulary, whereas the latter is not in the target vocabulary, so it is labelled with unk \u2205 . This happens often because the source vocabularies of our models tend to be much larger than the target vocabulary, since a large source vocabulary is cheap. This limitation motivated us to develop an annotation model that includes the complete alignments between the source and the target sentences, which is straightforward to obtain since the complete alignments are available at training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional All Model (PosAll)", "sec_num": "3.2" }, { "text": "Specifically, we return to using only a single universal unk token. However, on the target side, we insert a positional token p d after every word. Here, d indicates a relative position (d = \u22127, . . . , \u22121, 0, 1, . . . , 7) to denote that a target word at position j is aligned to a source word at position i = j \u2212 d. Aligned words that are too far apart are considered unaligned, and unaligned words are annotated with a null token p n . 
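A minimal sketch of this target-side annotation is given below; it is illustrative rather than the original implementation, and the alignment dictionary, the MAX_DIST cutoff name, and the token spellings are assumptions.

```python
# Sketch of the PosAll target-side annotation (illustrative only).
# `alignment` maps target positions to aligned source positions, e.g. as produced
# by an unsupervised word aligner; the token spellings are made-up placeholders.
MAX_DIST = 7
NULL_POS = 'p_null'   # stands in for the null positional token

def annotate_posall(target_tokens, alignment):
    annotated = []
    for j, word in enumerate(target_tokens):
        annotated.append(word)
        i = alignment.get(j)              # aligned source position, if any
        if i is None or abs(j - i) > MAX_DIST:
            annotated.append(NULL_POS)    # unaligned, or aligned too far away
        else:
            annotated.append('p_%d' % (j - i))
    return annotated

# Toy usage, roughly mirroring the example of Figure 3:
print(annotate_posall(['Le', '<unk>', '<unk>', 'de', '<unk>'],
                      {0: 0, 1: 2, 2: 1, 4: 5}))
# ['Le', 'p_0', '<unk>', 'p_-1', '<unk>', 'p_1', 'de', 'p_null', '<unk>', 'p_-1']
```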
Our annotation is illustrated in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 472, "end": 480, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Positional All Model (PosAll)", "sec_num": "3.2" }, { "text": "The main weakness of the PosAll model is that it doubles the length of the target sentence. This makes learning more difficult and slows the speed of parameter updates by a factor of two. However, given that our post-processing step is concerned only with the alignments of the unknown words, it is more sensible to annotate only the unknown words. This motivates our positional unknown model, which uses unkpos d tokens (for d in \u22127, . . . , 7 or \u2205) to simultaneously denote (a) the fact that a word is unknown and (b) its relative position d with respect to its aligned source word. Like the PosAll model, we use the symbol unkpos \u2205 for unknown target words that do not have an alignment. We use the universal unk for all unknown tokens in the source language. See Figure 4 for an annotated example.", "cite_spans": [], "ref_spans": [ { "start": 769, "end": 777, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Positional Unknown Model (PosUnk)", "sec_num": "3.3" }, { "text": "en: The unk portico in unk . . . fr: Le unkpos 1 unkpos \u22121 de unkpos 1 . . . It is possible that despite its slower speed, the PosAll model will learn better alignments because it is trained on many more examples of words and their alignments. However, we show that this is not the case (see \u00a75.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional Unknown Model (PosUnk)", "sec_num": "3.3" }, { "text": "We evaluate the effectiveness of our OOV models on the WMT'14 English-to-French translation task. Translation quality is measured with the BLEU metric (Papineni et al., 2002) on the newstest2014 test set (which has 3003 sentences).", "cite_spans": [ { "start": 151, "end": 174, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To be comparable with the results reported by previous work on neural machine translation systems (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015) , we train our models on the same training data of 12M parallel sentences (348M French and 304M English words), obtained from (Schwenk, 2014) . The 12M subset was selected from the full WMT'14 parallel corpora using the method proposed in Axelrod et al. (2011) .", "cite_spans": [ { "start": 98, "end": 122, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF17" }, { "start": 123, "end": 140, "text": "Cho et al., 2014;", "ref_id": "BIBREF4" }, { "start": 141, "end": 163, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 290, "end": 305, "text": "(Schwenk, 2014)", "ref_id": null }, { "start": 403, "end": 424, "text": "Axelrod et al. (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "Due to the computationally intensive nature of the naive softmax, we limit the French vocabulary (the target language) to either the 40K or the 80K most frequent French words. On the source side, we can afford a much larger vocabulary, so we use the 200K most frequent English words. The model treats all other words as unknowns. 3 We annotate our training data using the three schemes described in the previous section. 
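As an illustration of one of these schemes, a rough sketch of the PosUnk annotation applied to a single target sentence is shown below; it is our reconstruction rather than the released pipeline, and the vocab set, the alignment dictionary, and the token spellings are assumptions.

```python
# Rough sketch of the PosUnk annotation of one training pair (our reconstruction,
# not the released pipeline). `vocab` is the target vocabulary and `alignment`
# maps target positions to source positions; only OOV target words are rewritten.
MAX_DIST = 7

def annotate_posunk(target_tokens, alignment, vocab):
    annotated = []
    for j, word in enumerate(target_tokens):
        if word in vocab:
            annotated.append(word)
            continue
        i = alignment.get(j)
        if i is None or abs(j - i) > MAX_DIST:
            annotated.append('unkpos_null')        # no usable alignment
        else:
            annotated.append('unkpos_%d' % (j - i))
    return annotated

# Toy usage with a tiny vocabulary and a made-up alignment:
vocab = {'Le', 'de'}
print(annotate_posunk(['Le', 'portique', 'écotaxe', 'de', 'Pont-de-Buis'],
                      {1: 2, 2: 1, 4: 4}, vocab))
# ['Le', 'unkpos_-1', 'unkpos_1', 'de', 'unkpos_0']
```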
The alignment is computed with the Berkeley aligner (Liang et al., 2006) using its default settings. We discard sentence pairs in which the source or the target sentence exceed 100 tokens.", "cite_spans": [ { "start": 334, "end": 335, "text": "3", "ref_id": null }, { "start": 477, "end": 497, "text": "(Liang et al., 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "Our training procedure and hyperparameter choices are similar to those used by Sutskever et al. (2014) . In more details, we train multi-layer deep LSTMs, each of which has 1000 cells, with 1000 dimensional embeddings. Like Sutskever et al. 2014, we reverse the words in the source sentences which has been shown to improve LSTM memory utilization and results in better translations of long sentences. Our hyperparameters can be summarized as follows: (a) the parameters are initialized uniformly in [-0.08, 0.08] for 4-layer models and [-0.06, 0.06] for 6-layer models, (b) SGD has a fixed learning rate of 0.7, (c) we train for 8 epochs (after 5 epochs, we begin to halve the learning rate every 0.5 epoch), (d) the size of the mini-batch is 128, and (e) we rescale the normalized gradient to ensure that its norm does not exceed 5 (Pascanu et al., 2012) .", "cite_spans": [ { "start": 79, "end": 102, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF17" }, { "start": 834, "end": 856, "text": "(Pascanu et al., 2012)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.2" }, { "text": "We also follow the GPU parallelization scheme proposed in (Sutskever et al., 2014) , allowing us to reach a training speed of 5.4K words per second to train a depth-6 model with 200K source and 80K target vocabularies ; whereas Sutskever et al. (2014) achieved 6.3K words per second for a depth-4 models with 80K source and target vocabularies. Training takes about 10-14 days on an 8-GPU machine.", "cite_spans": [ { "start": 58, "end": 82, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.2" }, { "text": "We report BLEU scores based on both: (a) detokenized translations, i.e., WMT'14 style, to be comparable with results reported on the WMT website 4 and (b) tokenized translations, so as to be consistent with previous work (Cho et al., 2014; Bahdanau et al., 2015; Schwenk, 2014; Sutskever et al., 2014; Jean et al., 2015) . 5 The existing WMT'14 state-of-the-art system (Durrani et al., 2014) achieves a detokenized BLEU score of 35.8 on the newstest2014 test set for English to French language pair (see Table 2 ). 
In terms of the tokenized BLEU, its performance is 37.0 points (see Table 1 ).", "cite_spans": [ { "start": 221, "end": 239, "text": "(Cho et al., 2014;", "ref_id": "BIBREF4" }, { "start": 240, "end": 262, "text": "Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 263, "end": 277, "text": "Schwenk, 2014;", "ref_id": null }, { "start": 278, "end": 301, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF17" }, { "start": 302, "end": 320, "text": "Jean et al., 2015)", "ref_id": "BIBREF9" }, { "start": 323, "end": 324, "text": "5", "ref_id": null }, { "start": 369, "end": 391, "text": "(Durrani et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 2", "ref_id": null }, { "start": 583, "end": 590, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "A note on BLEU scores", "sec_num": "4.3" }, { "text": "Vocab Corpus BLEU State of the art in WMT'14 (Durrani et al., Table 1 : Tokenized BLEU on newstest2014 -Translation results of various systems which differ in terms of: (a) the architecture, (b) the size of the vocabulary used, and (c) the training corpus, either using the full WMT'14 corpus of 36M sentence pairs or a subset of it with 12M pairs. We highlight the performance of our best system in bolded text and state the improvements obtained by our technique of handling rare words (namely, the PosUnk model). Notice that, for a given vocabulary size, the more accurate systems achieve a greater improvement from the post-processing step. This is the case because the more accurate models are able to pin-point the origin of an unknown word with greater accuracy, making the post-processing more useful.", "cite_spans": [ { "start": 45, "end": 61, "text": "(Durrani et al.,", "ref_id": null } ], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "System BLEU Existing SOTA (Durrani et al., 2014) 35.8 Ensemble of 8 LSTMs + PosUnk 36.6 Table 2 : Detokenized BLEU on newstest2014translation results of the existing state-of-the-art system and our best system.", "cite_spans": [ { "start": 26, "end": 48, "text": "(Durrani et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "We compare our systems to others, including the current state-of-the-art MT system (Durrani et al., 2014) , recent end-to-end neural systems, as well as phrase-based baselines with neural components. The results shown in Table 1 demonstrate that our unknown word translation technique (in particular, the PosUnk model) significantly improves the translation quality for both the individual (nonensemble) LSTM models and the ensemble mod-els. 6 For 40K-word vocabularies, the performance gains are in the range of 2.3-2.8 BLEU points. With larger vocabularies (80K), the performance gains are diminished, but our technique can still provide a nontrivial gains of 1.6-1.9 BLEU points.", "cite_spans": [ { "start": 83, "end": 105, "text": "(Durrani et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Main Results", "sec_num": "4.4" }, { "text": "It is interesting to observe that our approach is more useful for ensemble models as compared to the individual ones. 
This is because the usefulness of the PosUnk model directly depends on the ability of the NMT to correctly locate, for a given OOV target word, its corresponding word in the source sentence. An ensemble of large models identifies these source words with greater accuracy. This is why, for the same vocabulary size, better models obtain a greater performance gain from our post-processing step. Except for the very recent work of Jean et al. (2015) that employs a similar unknown treatment strategy 7 as ours, our best result of 37.5 BLEU outperforms all other NMT systems by a large margin, and more importantly, our system has established a new record on the WMT'14 English to French translation.", "cite_spans": [ { "start": 543, "end": 561, "text": "Jean et al. (2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "4.4" }, { "text": "We analyze and quantify the improvement obtained by our rare word translation approach and provide a detailed comparison of the different rare word techniques proposed in Section 3. We also examine the effect of depth on the LSTM architectures and demonstrate a strong correlation between perplexities and BLEU scores. We also highlight a few translation examples where our models succeed in correctly translating OOV words, and present several failures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "To analyze the effect of rare words on translation quality, we follow Sutskever et al. (Sutskever et al., 2014) and sort sentences in newstest2014 by the average inverse frequency of their words. We split the test sentences into groups where the sentences within each group have a comparable number of rare words and evaluate each group independently. We evaluate our systems before and after translating the OOV words and compare with the standard MT systems -we use the best system from the WMT'14 contest (Durrani et al., 2014) , and neural MT systems -we use the ensemble systems described in (Sutskever et al., 2014) and Section 4.", "cite_spans": [ { "start": 87, "end": 111, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF17" }, { "start": 508, "end": 530, "text": "(Durrani et al., 2014)", "ref_id": "BIBREF5" }, { "start": 597, "end": 621, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Rare Word Analysis", "sec_num": "5.1" }, { "text": "Rare word translation is challenging for neural machine translation systems as shown in Figure 5 . Specifically, the translation quality of our model before applying the post-processing step is shown by the green curve, and the current best NMT system (Sutskever et al., 2014 ) is the purple curve. While (Sutskever et al., 2014) produces better translations for sentences with frequent words (the left part of the graph), they are worse than the best 7 Their unknown replacement method and ours both track the locations of target unknown words and use a word dictionary to post-process the translation. However, the mechanism used to achieve the \"tracking\" behavior is different. Jean et al. (2015)'s method uses the attentional mechanism to track the origins of all target words, not just the unknown ones. In contrast, we only focus on tracking unknown words using unsupervised alignments. Our method can be easily applied to any sequence-to-sequence model since we treat any model as a black box and manipulate only at the input and output levels. 
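For reference, the sentence bucketing used in this analysis can be sketched as follows; this is our reconstruction rather than a released evaluation script, and train_counts, the group size, and the (source, reference, hypothesis) item format are assumptions.

```python
# Sketch of the rare-word bucketing used in this analysis (our reconstruction).
# `train_counts` holds word frequencies estimated from the training data; each
# test item is a (source, reference, hypothesis) triple.
def rarity(sentence, train_counts):
    # average inverse training frequency; unseen words count as frequency 1
    words = sentence.split()
    return sum(1.0 / max(train_counts.get(w, 0), 1) for w in words) / max(len(words), 1)

def rare_word_groups(test_items, train_counts, group_size=500):
    # sort from the most frequent to the rarest vocabulary and cut into buckets,
    # each of which can then be scored independently with any BLEU implementation
    ranked = sorted(test_items, key=lambda item: rarity(item[0], train_counts))
    return [ranked[k:k + group_size] for k in range(0, len(ranked), group_size)]
```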
Figure 5 : Rare word translation -On the x-axis, we order newstest2014 sentences by their average frequency rank and divide the sentences into groups of sentences with a comparable prevalence of rare words. We compute the BLEU score of each group independently. system (red curve) on sentences with many rare words (the right side of the graph). When applying our unknown word translation technique (purple curve), we significantly improve the translation quality of our NMT: for the last group of 500 sentences which have the greatest proportion of OOV words in the test set, we increase the BLEU score of our system by 4.8 BLEU points. Overall, our rare word translation model interpolates between the SOTA system and the system of Sutskever et al. (2014) , which allows us to outperform the winning entry of WMT'14 on sentences that consist predominantly of frequent words and approach its performance on sentences with many OOV words.", "cite_spans": [ { "start": 251, "end": 274, "text": "(Sutskever et al., 2014", "ref_id": "BIBREF17" }, { "start": 304, "end": 328, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF17" }, { "start": 447, "end": 448, "text": "7", "ref_id": null }, { "start": 1774, "end": 1797, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 5", "ref_id": null }, { "start": 1040, "end": 1048, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Rare Word Analysis", "sec_num": "5.1" }, { "text": "We examine the effect of the different rare word models presented in Section 3, namely: (a) Copyable -which aligns the unknown words on both the input and the target side by learning to copy indices, (b) the Positional All (PosAll) -which predicts the aligned source positions for every target word, and (c) the Positional Unknown (PosUnk) -which predicts the aligned source positions for only the unknown target words. 8 It is also interest- Figure 6 : Rare word models -translation performance of 6-layer LSTMs: a model that uses no alignment (NoAlign) and the other rare word models (Copyable, PosAll, PosUnk) . For each model, we show results before (left) and after (right) the rare word translation as well as the perplexity (in parentheses). For PosAll, we report the perplexities of predicting the words and the positions.", "cite_spans": [ { "start": 586, "end": 612, "text": "(Copyable, PosAll, PosUnk)", "ref_id": null } ], "ref_spans": [ { "start": 443, "end": 451, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Rare Word Models", "sec_num": "5.2" }, { "text": "ing to measure the improvement obtained when no alignment information is used during training. As such, we include a baseline model with no alignment knowledge (NoAlign) in which we simply assume that the i th unknown word on the target sentence is aligned to the i th unknown word in the source sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rare Word Models", "sec_num": "5.2" }, { "text": "From the results in Figure 6 , a simple monotone alignment assumption for the NoAlign model yields a modest gain of 0.8 BLEU points. If we train the model to predict the alignment, then the Copyable model offers a slightly better gain of 1.0 BLEU. Note, however, that English and French have similar word order structure, so it would be interesting to experiment with other language pairs, such as English and Chinese, in which the word order is not as monotonic. 
These harder language pairs potentially imply a smaller gain for the NoAlign model and a larger gain for the Copyable model. We leave it for future work.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 28, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Rare Word Models", "sec_num": "5.2" }, { "text": "The positional models (PosAll and PosUnk) improve translation performance by more than 2 BLEU points. This proves that the limitation of the copyable model, which forces it to align each unknown output word with an unknown input word, is considerable. In contrast, the positional models can align the unknown target words with any source word, and as a result, post-processing has a much stronger effect. The PosUnk model achieves better translation results than the PosAll model which suggests that it is easier to train the LSTM Figure 7 : Effect of depths -BLEU scores achieved by PosUnk models of various depths (3, 4, and 6) before and after the rare word translation. Notice that the PosUnk model is more useful on more accurate models. on shorter sequences.", "cite_spans": [], "ref_spans": [ { "start": 531, "end": 539, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Rare Word Models", "sec_num": "5.2" }, { "text": "Deep LSTM architecture -We compare PosUnk models trained with different number of layers (3, 4, and 6). We observe that the gain obtained by the PosUnk model increases in tandem with the overall accuracy of the model, which is consistent with the idea that larger models can point to the appropriate source word more accurately. Additionally, we observe that on average, each extra LSTM layer provides roughly 1.0 BLEU point improvement as demonstrated in Figure 7 . Perplexity and BLEU -Lastly, we find it interesting to observe a strong correlation between the perplexity (our training objective) and the translation quality as measured by BLEU. Figure 8 shows the performance of a 4-layer LSTM, in which we compute both perplexity and BLEU scores at different points during training. We find that on average, a reduction of 0.5 perplexity gives us roughly 1.0 BLEU point improvement. Sample translations -the table shows the source (src) and the translations of our best model before (trans) and after (+unk) unknown word translations. We also show the human translations (tgt) and italicize words that are involved in the unknown word translation process.", "cite_spans": [], "ref_spans": [ { "start": 456, "end": 464, "text": "Figure 7", "ref_id": null }, { "start": 648, "end": 656, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Other Effects", "sec_num": "5.3" }, { "text": "We present three sample translations of our best system (with 37.5 BLEU) in Table 3 . In our first example, the model translates all the unknown words correctly: 2600, orthop\u00e9diques, and cataracte. It is interesting to observe that the model can accurately predict an alignment of distances of 5 and 6 words. The second example highlights the fact that our model can translate long sentences reasonably well and that it was able to correctly translate the unknown word for JP-Morgan at the very far end of the source sentence. Lastly, our examples also reveal several penalties incurred by our model: (a) incorrect entries in the word dictionary, as with n\u00e9gociateur vs. 
trader in the second example, and (b) incorrect alignment prediction, such as when unkpos 3 is incorrectly aligned with the source word was and not with abandoning, which resulted in an incorrect translation in the third sentence.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Sample Translations", "sec_num": "5.4" }, { "text": "We have shown that a simple alignment-based technique can mitigate and even overcome one of the main weaknesses of current NMT systems, which is their inability to translate words that are not in their vocabulary. A key advantage of our technique is the fact that it is applicable to any NMT system and not only to the deep LSTM model of Sutskever et al. (2014) . A technique like ours is likely necessary if an NMT system is to achieve state-of-the-art performance on machine translation.", "cite_spans": [ { "start": 338, "end": 361, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We have demonstrated empirically that on the WMT'14 English-French translation task, our technique yields a consistent and substantial improvement of up to 2.8 BLEU points over various NMT systems of different architectures. Most importantly, with 37.5 BLEU points, we have established the first NMT system that outperformed the best MT system on a WMT'14 contest dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 11-19, Beijing, China, July 26-31, 2015. c 2015 Association for Computational Linguistics", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "When a source word has multiple translations, we use the translation with the highest probability. These translation probabilities are estimated from the unsupervised alignment links. When constructing the dictionary from these alignment links, we add a word pair to the dictionary only if its alignment count exceeds 100.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "When the French vocabulary has 40K words, there are on average 1.33 unknown words per sentence on the target side of the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://matrix.statmt.org/matrix 5 The tokenizer.perl and multi-bleu.pl scripts are used to tokenize and score translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the 40K-vocabulary ensemble, we combine 5 models with 4 layers and 3 models with 6 layers. For the 80Kvocabulary ensemble, we combine 3 models with 4 layers and 5 models with 6 layers. Two of the depth-6 models are regularized with dropout, similar toZaremba et al. 
(2015) with the dropout probability set to 0.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this section and in section 5.3, all models are trained on the unreversed sentences, and we use the following hyperparameters: we initialize the parameters uniformly in [-0.1, 0.1], the learning rate is 1, the maximal gradient norm is 1, with a source vocabulary of 90k words, and a target vocabulary of 40k (see Section 4.2 for more details). While these LSTMs do not achieve the best possible performance, it is still useful to analyze them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank members of the Google Brain team for thoughtful discussions and insights. The first author especially thanks Chris Manning and the Stanford NLP group for helpful comments on the early drafts of the paper. Lastly, we thank the annonymous reviewers for their valuable feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain adaptation via pseudo in-domain data selection", "authors": [ { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Phrasal: A statistical machine translation toolkit for exploring new model features", "authors": [ { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "ACL, Demonstration Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Cer, M. Galley, D. Jurafsky, and C. D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In ACL, Demonstration Session.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. 
Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Edinburgh's phrase-based machine translation systems for WMT-14", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2014, "venue": "WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In WMT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Weese", "suffix": "" }, { "first": "Hendra", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Ferhan", "middle": [], "last": "Ture", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "ACL, Demonstration Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Gan- itkevitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In ACL, Demonstration Session.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural turing machines", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" }, { "first": "G", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "I", "middle": [], "last": "Danihelka", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1410.5401" ] }, "num": null, "urls": [], "raw_text": "A. Graves, G. Wayne, and I. Danihelka. 2014. Neural turing machines. 
arXiv preprint arXiv:1410.5401.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generating sequences with recurrent neural networks", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1308.0850" ] }, "num": null, "urls": [], "raw_text": "A. Graves. 2013. Generating sequences with recurrent neural networks. In Arxiv preprint arXiv:1308.0850.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On using very large target vocabulary for neural machine translation", "authors": [ { "first": "S\u00e9bastien", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Memisevic", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. In ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Recurrent continuous translation models", "authors": [ { "first": "N", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Kalchbrenner and P. Blunsom. 2013. Recurrent continuous translation models. In EMNLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" } ], "year": 2007, "venue": "ACL, Demonstration Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. 
In ACL, Demonstration Session.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Alignment by agreement", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, B. Taskar, and D. Klein. 2006. Alignment by agreement. In NAACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On the difficulty of training recurrent neural networks", "authors": [ { "first": "R", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1211.5063" ] }, "num": null, "urls": [], "raw_text": "R. Pascanu, T. Mikolov, and Y. Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recurrent neural network regularization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. In ICLR.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Positional All Model -an example of the PosAll model. Each word is followed by the relative positional tokens p d or the null token p \u2205 .", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Positional Unknown Model -an example of the PosUnk model: only aligned unknown words are annotated with the unkpos d tokens.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "Perplexity vs. 
BLEU -we show the correlation by evaluating an LSTM model with 4 layers at various stages of training.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "", "text": "Sentences srcAn additional 2600 operations including orthopedic and cataract surgery will help clear a backlog . trans En outre , unkpos 1 op\u00e9rations suppl\u00e9mentaires , dont la chirurgie unkpos 5 et la unkpos 6 , permettront de r\u00e9sorber l' arri\u00e9r\u00e9 .+unk En outre , 2600 op\u00e9rations suppl\u00e9mentaires , dont la chirurgie orthop\u00e9diques et la cataracte , permettront de r\u00e9sorber l' arri\u00e9r\u00e9 . tgt 2600 op\u00e9rations suppl\u00e9mentaires , notamment dans le domaine de la chirurgie orthop\u00e9dique et de la cataracte , aideront\u00e0 rattraper le retard . src This trader , Richard Usher , left RBS in 2010 and is understand to have be given leave from his current position as European head of forex spot trading at JPMorgan . trans Ce unkpos 0 , Richard unkpos 0 , a quitt\u00e9 unkpos 1 en 2010 et a compris qu' il est autoris\u00e9\u00e0 quitter son poste actuel en tant que leader europ\u00e9en du march\u00e9 des points de vente au unkpos 5 . +unk Ce n\u00e9gociateur , Richard Usher , a quitt\u00e9 RBS en 2010 et a compris qu' il est autoris\u00e9\u00e0 quitter son poste actuel en tant que leader europ\u00e9en du march\u00e9 des points de vente au JPMorgan . tgt Ce trader , Richard Usher , a quitt\u00e9 RBS en 2010 et aurait\u00e9t\u00e9 mis suspendu de son poste de responsable europ\u00e9en du trading au comptant pour les devises chez JPMorgan src But concerns have grown after Mr Mazanga was quoted as saying Renamo was abandoning the 1992 peace accord . trans Mais les inqui\u00e9tudes se sont accrues apr\u00e8s que M. unkpos 3 a d\u00e9clar\u00e9 que la unkpos 3 unkpos 3 l' accord de paix de 1992 . +unk Mais les inqui\u00e9tudes se sont accrues apr\u00e8s que M. Mazanga a d\u00e9clar\u00e9 que la Renamo\u00e9tait l' accord de paix de 1992 . tgt Mais l' inqui\u00e9tude a grandi apr\u00e8s que M. Mazanga a d\u00e9clar\u00e9 que la Renamo abandonnait l' accord de paix de 1992 ." }, "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "
", "text": "" } } } }