{ "paper_id": "P18-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:40:05.608974Z" }, "title": "Unsupervised Neural Machine Translation with Weight Sharing", "authors": [ { "first": "Zhen", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "yangzhen2014@ia.ac.cn" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "" }, { "first": "Feng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "feng.wang@ia.ac.cn" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "xubo@ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.", "pdf_parse": { "paper_id": "P18-1005", "_pdf_hash": "", "abstract": [ { "text": "Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) . The NMT typically consists of two sub neural networks. 
The encoder network reads and encodes the source sentence into a context vector 1 (Feng Wang is the corresponding author of this paper), and the decoder network generates the target sentence iteratively based on the context vector. NMT can be studied in supervised and unsupervised learning settings. In the supervised setting, bilingual corpora are available for training the NMT model. In the unsupervised setting, we only have two independent monolingual corpora, one for each language, and there is no bilingual training example to provide alignment information for the two languages. Due to the lack of alignment information, unsupervised NMT is considered more challenging. However, this task is very promising, since monolingual corpora are usually easy to collect.", "cite_spans": [ { "start": 27, "end": 59, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF12" }, { "start": 60, "end": 83, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF21" }, { "start": 226, "end": 245, "text": "(Shen et al., 2015;", "ref_id": "BIBREF19" }, { "start": 246, "end": 267, "text": "Johnson et al., 2016;", "ref_id": "BIBREF11" }, { "start": 268, "end": 289, "text": "Gehring et al., 2017;", "ref_id": "BIBREF9" }, { "start": 290, "end": 311, "text": "Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to the same latent representation in a shared-latent space (Lample et al., 2017; Artetxe et al., 2017b) . Following this assumption, Lample et al. (2017) use a single encoder and a single decoder for both the source and target languages. The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs. And Artetxe et al. (2017b) utilize a shared encoder but two independent decoders. Despite their promising performance, these approaches share a common drawback, i.e., only one encoder is shared by the source and target languages. Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure. Since each language has its own characteristics, the source and target languages should be encoded and learned independently. Therefore, we conjecture that the shared encoder may be a factor limiting the potential translation performance.", "cite_spans": [ { "start": 69, "end": 91, "text": "(Artetxe et al., 2016;", "ref_id": "BIBREF1" }, { "start": 92, "end": 112, "text": "Zhang et al., 2017b;", "ref_id": "BIBREF31" }, { "start": 113, "end": 134, "text": "Conneau et al., 2017)", "ref_id": "BIBREF8" }, { "start": 316, "end": 338, "text": "Artetxe et al., 2017b)", "ref_id": "BIBREF3" }, { "start": 560, "end": 582, "text": "Artetxe et al. (2017b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to address this issue, we extend the encoder-shared model, i.e., the model with one shared encoder, by leveraging two independent encoders, one for each language. Similarly, two independent decoders are utilized. 
For each language, the encoder and its corresponding decoder form an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations. To map the latent representations from different languages to a shared-latent space, we propose the weight-sharing constraint on the two AEs. Specifically, we share the weights of the last few layers of the two encoders that are responsible for extracting high-level representations of the input sentences. Similarly, we share the weights of the first few layers of the two decoders. To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders. For cross-language translation, we utilize the back-translation following Lample et al. (2017) . Additionally, two different generative adversarial networks (GANs), namely the local GAN and the global GAN, are proposed to further improve the cross-language translation. We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator, which is simultaneously trained to distinguish the language of a given latent representation. We apply the global GAN to fine-tune the corresponding generator, i.e., the composition of the encoder and the decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 . In summary, we mainly make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language. To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct extensive experiments on English-German, English-French and Chinese-to-English translation tasks 1 (the code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT). Experimental results show that the proposed approach consistently achieves significant improvements.", "cite_spans": [ { "start": 38, "end": 39, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model. Experimental results reveal that the temporal order information within the self-attention layers of NMT deserves further investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several approaches have been proposed to train NMT models without direct parallel corpora. The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language. The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) . To improve the translation performance, Johnson et al. 
(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data. Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora only. and Artetxe et al. (2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings. utilizes a single encoder and a single decoder for both languages. The entire system is trained to reconstruct its perturbed input. For cross-lingual translation, they incorporate back-translation into the training procedure. Different from , Artetxe et al. (2017b) use two independent decoders with each for one language. The two works mentioned above both use a single shared encoder to guarantee the shared latent space. However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language. Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only. Figure 1 : The architecture of the proposed model. We implement the shared-latent space assumption using a weight sharing constraint where the connection of the last few layers in Enc s and Enc t are tied (illustrated with dashed lines) and the connection of the first few layers in Dec s and Dec t are tied. is the translation in reversed direction. D l is utilized to assess whether the hidden representation of the encoder is from the source or target language. D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively. Z represents the shared-latent space.", "cite_spans": [ { "start": 422, "end": 441, "text": "(Saha et al., 2016;", "ref_id": "BIBREF16" }, { "start": 442, "end": 461, "text": "Cheng et al., 2017)", "ref_id": "BIBREF6" }, { "start": 504, "end": 525, "text": "Johnson et al. (2016)", "ref_id": "BIBREF11" }, { "start": 887, "end": 909, "text": "Artetxe et al. (2017b)", "ref_id": "BIBREF3" }, { "start": 1263, "end": 1285, "text": "Artetxe et al. (2017b)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1751, "end": 1759, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 The Approach", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The model architecture, as illustrated in figure 1, is based on the AE and GAN. It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 . For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) . Specifically, the encoder is composed of a stack of four identical layers 2 . Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network. The decoder is also composed of four identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack. For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) . 
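To make the encoder description above more concrete, the following is a minimal NumPy sketch of a single encoder layer: multi-head scaled dot-product self-attention followed by a position-wise feed-forward network, each wrapped in a residual connection. Layer normalization, dropout and the decoder's extra encoder-decoder attention sub-layer are omitted, and the parameter names, the initialization and the inner feed-forward size are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, params, num_heads=8):
    """Multi-head scaled dot-product self-attention over x of shape (n, d_model)."""
    n, d_model = x.shape
    d_head = d_model // num_heads
    q, k, v = x @ params["w_q"], x @ params["w_k"], x @ params["w_v"]
    # Split into heads: (num_heads, n, d_head)
    split = lambda t: t.reshape(n, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (num_heads, n, n)
    out = softmax(scores) @ v                              # (num_heads, n, d_head)
    out = out.transpose(1, 0, 2).reshape(n, d_model)
    return out @ params["w_o"]

def feed_forward(x, params):
    """Position-wise feed-forward network: two linear maps with a ReLU in between."""
    return np.maximum(0.0, x @ params["w_1"] + params["b_1"]) @ params["w_2"] + params["b_2"]

def encoder_layer(x, params):
    """One encoder layer: self-attention and FFN, each with a residual connection."""
    x = x + multi_head_self_attention(x, params["attn"])
    x = x + feed_forward(x, params["ffn"])
    return x

def init_layer(d_model=512, d_ff=2048, rng=np.random.default_rng(0)):
    w = lambda *s: rng.normal(0.0, 0.02, size=s)
    return {"attn": {k: w(d_model, d_model) for k in ("w_q", "w_k", "w_v", "w_o")},
            "ffn": {"w_1": w(d_model, d_ff), "b_1": np.zeros(d_ff),
                    "w_2": w(d_ff, d_model), "b_2": np.zeros(d_model)}}

# Toy usage: encode a sequence of 10 "token" vectors with one layer.
layer = init_layer()
h = encoder_layer(np.ones((10, 512)) * 0.01, layer)
```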
We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on a convolutional neural network (CNN). There are several ways to interpret the roles of the sub-networks, which are summarised in Table 1. The proposed system has several striking components, which are critical either for training the system in an unsupervised manner or for improving the translation performance.", "cite_spans": [ { "start": 332, "end": 354, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 858, "end": 880, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "Table 1 : Interpretation of the roles for the sub-networks in the proposed system (columns: Networks, Roles).", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "{Enc s , Dec s }: AE for source language; {Enc t , Dec t }: AE for target language; {Enc s , Dec t }: translation source \u2192 target; {Enc t , Dec s }: translation target \u2192 source; {Enc s , D l }: 1st local GAN (GAN l1 ); {Enc t , D l }: 2nd local GAN (GAN l2 ); {Enc t , Dec s , D g1 }: 1st global GAN (GAN g1 ); {Enc s , Dec t , D g2 }: 2nd global GAN (GAN g2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "Directional self-attention Compared to recurrent neural networks, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost. Although the Transformer applies the positional encoding to the sequence before it is processed by the self-attention, how to model temporal order information within an attention mechanism is still an open question. Following (Shen et al., 2017) , we build the encoders in our model on the directional self-attention, which utilizes positional masks to encode temporal order information into the attention output. More concretely, two positional masks, namely the forward mask M f and the backward mask M b , are calculated as:", "cite_spans": [ { "start": 377, "end": 396, "text": "(Shen et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "M f ij = 0 if i < j, \u2212\u221e otherwise (1); M b ij = 0 if i > j, \u2212\u221e otherwise (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "With the forward mask M f , a later token only makes attention connections to earlier tokens in the sequence, and vice versa with the backward mask. Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in the forward direction. The output of this layer is taken as input by an upper self-attention network, which processes it in the reverse direction.", "cite_spans": [ { "start": 164, "end": 183, "text": "(Zhou et al., 2016;", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "Weight sharing Based on the shared-latent space assumption, we apply the weight-sharing constraint to relate the two AEs. Specifically, we share the weights of the last few layers of Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences. Similarly, we also share the first few layers of Dec s and Dec t , which are expected to decode the high-level representations that are vital for reconstructing the input sentences. 
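The two pieces of the encoder design just described, the directional masks of Eqs. (1)-(2) and the partial weight sharing, can be sketched as follows. This is a simplified NumPy illustration: a "layer" is reduced to a single parameter dictionary, the helper names are ours, and sharing is expressed by reusing the same parameter objects in both encoders so that updates from either language affect the shared weights; the sharing of the first few decoder layers is analogous.

```python
import numpy as np

NEG_INF = -1e9  # finite stand-in for the -infinity entries in Eqs. (1) and (2)

def directional_masks(n):
    """Forward/backward positional masks of Eqs. (1)-(2):
    M_f[i, j] = 0 if i < j else -inf, and M_b[i, j] = 0 if i > j else -inf.
    Added to the attention logits, each mask restricts attention to a single
    temporal direction."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m_f = np.where(i < j, 0.0, NEG_INF)
    m_b = np.where(i > j, 0.0, NEG_INF)
    return m_f, m_b

def new_layer(d_model=512, rng=np.random.default_rng(0)):
    """Stand-in for the parameters of one encoder layer."""
    return {"w": rng.normal(0.0, 0.02, size=(d_model, d_model))}

def build_encoders(num_layers=4, num_shared=1):
    """Enc_s and Enc_t with independent bottom layers and `num_shared` shared
    top layers (the weight-sharing constraint)."""
    shared_top = [new_layer() for _ in range(num_shared)]
    enc_s = [new_layer() for _ in range(num_layers - num_shared)] + shared_top
    enc_t = [new_layer() for _ in range(num_layers - num_shared)] + shared_top
    return enc_s, enc_t

enc_s, enc_t = build_encoders()
assert enc_s[-1] is enc_t[-1]      # the top layer is literally the same object
assert enc_s[0] is not enc_t[0]    # the lower layers stay language-specific
m_f, m_b = directional_masks(6)    # masks for a 6-token sentence
```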
Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders. In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure. The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "cite_spans": [ { "start": 487, "end": 507, "text": "(Cheng et al., 2016;", "ref_id": "BIBREF5" }, { "start": 508, "end": 526, "text": "Saha et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training. And the fixed embeddings are used as a reinforced encoding component in our encoder. Formally, given the input sequence embedding vectors E = {e 1 , . . . , e t } and the initial output sequence of the encoder stack H = {h 1 , . . . , h t }, we compute H r as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "H r = g H + (1 \u2212 g) E (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g = \u03c3(W 1 E + W 2 H + b)", "eq_num": "(4)" } ], "section": "Networks", "sec_num": null }, { "text": "where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders. The motivation behind is twofold. Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space. Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "cite_spans": [ { "start": 306, "end": 326, "text": "(Xiong et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Networks", "sec_num": null }, { "text": "Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively. In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language. Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved. To address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) . To this end, we shuffle the input sentences randomly. 
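As a concrete reading of Eqs. (3) and (4) above, the embedding-reinforced output can be sketched in a few lines of NumPy. The shapes, the orientation of the weight matrices and the toy values are illustrative assumptions, not details from the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reinforce_with_embeddings(H, E, W1, W2, b):
    """Gated combination of Eqs. (3)-(4): H_r = g * H + (1 - g) * E with
    g = sigmoid(E @ W1 + H @ W2 + b). H is the encoder-stack output and E the
    fixed cross-lingual embeddings, both of shape (seq_len, d_model); W1, W2
    and b are trainable and shared by the two encoders."""
    g = sigmoid(E @ W1 + H @ W2 + b)
    return g * H + (1.0 - g) * E

# Toy usage with illustrative shapes (d_model = 512 as in the paper).
rng = np.random.default_rng(0)
H = rng.normal(size=(7, 512))   # encoder output for a 7-token sentence
E = rng.normal(size=(7, 512))   # fixed cross-lingual word embeddings
W1, W2 = rng.normal(size=(512, 512)), rng.normal(size=(512, 512))
b = np.zeros(512)
H_r = reinforce_with_embeddings(H, E, W1, W2, b)
```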
Specifically, we apply a random permutation \u03b5 to the input sentence, verifying the condition:", "cite_spans": [ { "start": 667, "end": 689, "text": "(Vincent et al., 2008)", "ref_id": "BIBREF24" }, { "start": 732, "end": 751, "text": "(Hill et al., 2016;", "ref_id": "BIBREF10" }, { "start": 752, "end": 774, "text": "Artetxe et al., 2017b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "|\u03b5(i) \u2212 i| \u2264 min(k([ steps s ] + 1), n), \u2200i \u2208 {1, n}", "eq_num": "(5)" } ], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand. This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order. In practice, we set k = 2 and s = 100000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language. For the cross language training, we utilize the back-translation approach for our unsupervised training procedure. Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) . In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 . By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "cite_spans": [ { "start": 497, "end": 521, "text": "(Sennrich et al., 2015a;", "ref_id": "BIBREF17" }, { "start": 522, "end": 543, "text": "Zhang and Zong, 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code. To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences. The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence. 
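Returning briefly to the denoising step, the constraint of Eq. (5) can be met with a simple length- and step-dependent local shuffle. The sketch below uses k = 2 and s = 100000 as stated in the text; sampling the permutation by sorting index-plus-uniform-noise keys is our own choice (the paper only specifies the constraint the permutation must satisfy), and the helper name add_noise is hypothetical.

```python
import numpy as np

def add_noise(tokens, steps, k=2, s=100000, rng=np.random.default_rng(0)):
    """Locally shuffle a sentence so that every token moves at most
    min(k * (steps // s + 1), n) positions, i.e. the constraint of Eq. (5)."""
    n = len(tokens)
    bound = min(k * (steps // s + 1), n)
    # Adding U[0, bound + 1) to each index and sorting displaces any token
    # by at most `bound` positions.
    keys = np.arange(n) + rng.uniform(0.0, bound + 1.0, size=n)
    perm = np.argsort(keys)
    return [tokens[i] for i in perm]

# At step 0 with k = 2, every token stays within 2 positions of where it started.
print(add_noise("the cat sat on the mat".split(), steps=0))
```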
The local discriminator is trained to predict the language by minimizing the following crossentropy loss:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "L D l (\u03b8 D l ) = \u2212 E x\u2208xs [log p(f = s|Enc s (x))] \u2212 E x\u2208xt [log p(f = t|Enc t (x))] (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "where \u03b8 D l represents the parameters of the local discriminator and f \u2208 {s, t}. The encoders are trained to fool the local discriminator:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L Encs (\u03b8 Encs ) = \u2212 E x\u2208xs [log p(f = t|Enc s (x))] (7) L Enct (\u03b8 Enct ) = \u2212 E x\u2208xt [log p(f = s|Enc t (x))]", "eq_num": "(8)" } ], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "where \u03b8 Encs and \u03b8 Enct are the parameters of the two encoders. Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus. Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders. The proposed model has two global GANs: GAN g1 and GAN g2 . In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t . The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true target-language sentence or the generated sentence. The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate. During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s . Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment. We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ?? and ??).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "There are two stages in the proposed unsupervised training. In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set. Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages. In the second stage, we fine tune the proposed model with the global GANs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Training", "sec_num": "3.2" }, { "text": "We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 . 
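Before moving on to the experiments, the adversarial objectives of Eqs. (6)-(8) can be summarised in a short NumPy sketch. It operates directly on the local discriminator's predicted probabilities for a batch; producing those probabilities from H_r with the two-hidden-layer MLP described earlier, and the policy-gradient training of the global GANs, are omitted. The function and variable names are ours, not the paper's.

```python
import numpy as np

def local_gan_losses(p_src_is_src, p_tgt_is_tgt, eps=1e-12):
    """Local GAN objectives of Eqs. (6)-(8). Inputs are the local discriminator's
    probabilities that encodings of source (resp. target) sentences come from the
    source (resp. target) language, one value per sentence in the batch."""
    p_src_is_src = np.asarray(p_src_is_src, dtype=float)
    p_tgt_is_tgt = np.asarray(p_tgt_is_tgt, dtype=float)
    # Eq. (6): the discriminator tries to label every encoding correctly.
    loss_d = -np.mean(np.log(p_src_is_src + eps)) - np.mean(np.log(p_tgt_is_tgt + eps))
    # Eqs. (7)-(8): each encoder tries to make its encodings look like the
    # other language to the discriminator.
    loss_enc_s = -np.mean(np.log(1.0 - p_src_is_src + eps))
    loss_enc_t = -np.mean(np.log(1.0 - p_tgt_is_tgt + eps))
    return loss_d, loss_enc_s, loss_enc_t

# Toy batch of three source and three target sentence encodings.
print(local_gan_losses([0.9, 0.7, 0.6], [0.8, 0.55, 0.95]))
```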
We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the WMT 2014 and WMT 2016 shared tasks respectively. For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "cite_spans": [ { "start": 299, "end": 316, "text": "(Tu et al., 2017;", "ref_id": "BIBREF22" }, { "start": 317, "end": 337, "text": "Zhang et al., 2017a)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences. To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set. Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens. We report results on newstest2014.", "cite_spans": [ { "start": 492, "end": 516, "text": "(Sennrich et al., 2015b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each. The two languages share a vocabulary of about 32000 tokens. We report results on newstest2016.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 . Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively. In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our training procedure (see Section 3.2). Both the Chinese and English sentences are encoded with byte-pair encoding. We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens. The results are reported on NIST 02.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) . 
We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these embeddings to a shared-latent space 8 .", "cite_spans": [ { "start": 202, "end": 224, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF14" }, { "start": 295, "end": 318, "text": "(Artetxe et al., 2017a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "Following the base model in (Vaswani et al., 2017) , we set the word-embedding dimension to 512, the dropout rate to 0.1 and the number of attention heads to 8. We use beam search with a beam size of 4 and length penalty \u03b1 = 0.6. The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "cite_spans": [ { "start": 28, "end": 50, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 254, "end": 274, "text": "(Abadi et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model Hyper-parameters and Evaluation", "sec_num": "4.2" }, { "text": "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which comprises 3000 source and target sentences extracted randomly from the monolingual training corpora. Following Lample et al. (2017) , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language. The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process. The performance is finally averaged over the two directions, i.e., from source to target and from target to source. BLEU (Papineni et al., 2002) is utilized as the evaluation metric. For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance. For English-German and English-French, we evaluate the translation performance with the script multi-bleu.perl 9 .", "cite_spans": [ { "start": 659, "end": 682, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Model Hyper-parameters and Evaluation", "sec_num": "4.2" }, { "text": "Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary. Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Systems", "sec_num": "4.3" }, { "text": "Lample et al. (2017) The second baseline is a previous work that uses the same training and testing sets as this paper. Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short-term memory network (LSTM) and implements the decoder using a simple forward LSTM. They apply one single encoder and decoder for the source and target languages. [Table 2 reports results for en-de, de-en, en-fr, fr-en and zh-en; the numbers for this baseline are copied directly from their paper. We do not present the results of (Artetxe et al., 2017b) since we use different training sets.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Systems", "sec_num": "4.3" }, { "text": "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences. 
This model can be viewed as an upper bound for the proposed unsupervised model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Systems", "sec_num": "4.3" }, { "text": "We first investigate how the number of weight-sharing layers affects the translation performance. In this experiment, we vary the number of weight-sharing layers in the AEs from 0 to 4. Sharing one layer in the AEs means sharing one layer of the encoders and, at the same time, one layer of the decoders. The BLEU scores of the English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2. Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs. We find that the number of weight-sharing layers has a notable effect on the translation performance, and the best translation performance is achieved when only one layer is shared in our system. When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks. This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT, especially for translation tasks on distant language pairs. More concretely, for the related language pair, i.e., English-to-French, the encoder-shared model is 0.53 BLEU points worse than the best model, in which only one layer is shared. For the more distant language pair English-to-German, the encoder-shared model shows a more significant decline of 0.85 BLEU points. And for the most distant language pair, Chinese-to-English, the decline is as large as 1.66 BLEU points. We attribute this to the fact that the more distant the language pair is, the more their characteristics differ, and the shared encoder is weak in keeping the unique characteristics of each language. Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers to 0, results in poor translation performance too. This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space. In the rest of our experiments, we set the number of weight-sharing layers to 1. Table 2 shows the BLEU scores on the English-German, English-French and Chinese-to-English test sets. As can be seen, the proposed approach obtains significant improvements over the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English-to-French translation. This shows that the proposed model, trained only with monolingual data, effectively learns to use the context information and the internal structure of each language. [Table 3 : Ablation study on English-German, English-French and Chinese-to-English translation tasks (columns: en-de, de-en, en-fr, fr-en, zh-en). Without weight sharing means no layers are shared in the two AEs.]", "cite_spans": [], "ref_spans": [ { "start": 2122, "end": 2129, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 2567, "end": 2574, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Number of weight-sharing layers", "sec_num": "4.4.1" }, { "text": "Compared to the work of Lample et al. (2017) , our model also achieves up to +1.92 BLEU points improvement on the English-to-French translation task. We believe that unsupervised NMT is very promising. 
However, there is still large room for improvement compared to the supervised upper bound. The gap between the supervised and unsupervised models is as large as 12.3-25.5 BLEU points, depending on the language pair and translation direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation results", "sec_num": "4.4.2" }, { "text": "To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embedding-reinforced encoders, etc. Results are reported in table 3. We do not test the importance of the auto-encoding, back-translation and the pre-trained embeddings because they have been widely tested in (Lample et al., 2017; Artetxe et al., 2017b) . Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements. The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the shared-latent space. The embedding-reinforced encoder also brings some improvement on all of the translation tasks. When we remove the directional self-attention, we observe a decline of up to 0.3 BLEU points. This indicates that the temporal order information in the self-attention mechanism deserves further investigation. The GANs also significantly improve the translation performance of our system. Specifically, the global GANs achieve an improvement of up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain an improvement of up to +0.57 BLEU points on English-to-French translation. This reveals that the proposed model benefits a lot from the cross-domain loss defined by the GANs.", "cite_spans": [ { "start": 479, "end": 501, "text": "Artetxe et al., 2017b)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.3" }, { "text": "The models proposed recently for unsupervised NMT use a single encoder to map sentences from different languages to a shared-latent space. We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristics of each language. In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue. To enhance the cross-language translation performance, we also introduce the embedding-reinforced encoders, the local GAN and the global GAN into the proposed system. Additionally, the directional self-attention is introduced to model temporal order information for our system. We test the proposed model on English-German, English-French and Chinese-to-English translation tasks. The experimental results reveal that our approach achieves significant improvements and verify our conjecture that the shared encoder is really a bottleneck for improving unsupervised NMT. The ablation study shows that each component of our system contributes to the final translation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "Unsupervised NMT opens exciting opportunities for future research. However, there is still large room for improvement compared to supervised NMT. 
In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT. Besides, we decide to make more efforts to explore how to reinforce the temporal or-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since the quality of the translation shows little effect on the performance of the model(Sennrich et al., 2015a), we simply use greedy decoding for speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Thext isx Enc t \u2212Decs t in figure 1. We omit the superscript for simplicity.5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T107 https://github.com/artetxem/vecmap", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The configuration we used to run these open-source toolkits can be found in appendix ??9 https://github.com/mosessmt/mosesdecoder/blob/617e8c8/scripts/generic/multibleu.perl;mteval-v11b.pl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, and Beijing Engineering Research Center under Grant No. Z171100002217015. We would like to thank Xu Shuang for her preparing data used in this work. Additionally, we also want to thank Jiaming Xu, Suncong Zheng and Wenfu Wang for their invaluable discussions on this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Conference on Empirical Methods in Natural Language Processing. 
pages 2289-2294.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning bilingual word embeddings with (almost) no bilingual data", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2017, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with (almost) no bilingual data. In Meeting of the Asso- ciation for Computational Linguistics. pages 451- 462.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised neural machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural ma- chine translation .", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arX- iv:1409.0473 .", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural machine translation with pivot languages", "authors": [ { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.04928" ] }, "num": null, "urls": [], "raw_text": "Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with piv- ot languages. 
arXiv preprint arXiv:1611.04928 .", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Joint training for pivotbased neural machine translation", "authors": [ { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2017, "venue": "Twenty-Sixth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "3974--3980", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, Wei Xu, Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot- based neural machine translation. In Twenty-Sixth International Joint Conference on Artificial Intelli- gence. pages 3974-3980.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 .", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv", "middle": [], "last": "Jgou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2017. 
Word translation without parallel data .", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann N", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning .", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning distributed representations of sentences from unlabelled data", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. TACL .", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Le", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.04558" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558 .", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Recurrent continuous translation models", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "1700--1709", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recur- rent continuous translation models. 
EMNLP pages 1700-1709.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Ludovic Denoyer, and Mar- c'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only .", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. Association for Com- putational Linguistics pages 311-318.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A correlational encoder decoder architecture for pivot based sequence generation", "authors": [ { "first": "Amrita", "middle": [], "last": "Saha", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitesh", "suffix": "" }, { "first": "Sarath", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Janarthanan", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Rajendran", "suffix": "" }, { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amrita Saha, Mitesh M Khapra, Sarath Chandar, Ja- narthanan Rajendran, and Kyunghyun Cho. 2016. A correlational encoder decoder architecture for piv- ot based sequence generation. 
arXiv preprint arXiv:1606.04754.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. Computer Science.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Minimum risk training for neural machine translation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.02433" ] }, "num": null, "urls": [], "raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Disan: Directional self-attention network for rnn/cnn-free language understanding", "authors": [ { "first": "Tao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Shirui", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chengqi", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnn-free language understanding.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sequence to sequence learning with neural networks",
"authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems pages 3104-3112.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning to remember translation history with a continuous cache", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2017. Learning to remember translation history with a continuous cache.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Extracting and composing robust features with denoising autoencoders", "authors": [ { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "1096--1103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning. ACM, pages 1096-1103.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep neural machine translation with linear associative unit", "authors": [ { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017. Deep neural machine translation with linear associative unit. 
arXiv preprint arXiv:1705.00861.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-channel encoder for neural machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaoguang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.02109" ] }, "num": null, "urls": [], "raw_text": "Hao Xiong, Zhongjun He, Xiaoguang Hu, and Hua Wu. 2017. Multi-channel encoder for neural machine translation. arXiv preprint arXiv:1712.02109.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Improving neural machine translation with conditional sequence generative adversarial nets", "authors": [ { "first": "Zhen", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2017. Improving neural machine translation with conditional sequence generative adversarial nets.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Prior knowledge integration for neural machine translation using posterior regularization", "authors": [ { "first": "Jiacheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1514--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu, and Maosong Sun. 2017a. 
Prior knowledge integration for neural machine translation using posterior regularization. In Meeting of the Association for Computational Linguistics. pages 1514-1523.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Exploiting source-side monolingual data in neural machine translation", "authors": [ { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2016, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1535--1545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Conference on Empirical Methods in Natural Language Processing. pages 1535-1545.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Adversarial training for unsupervised bilingual lexicon induction", "authors": [ { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1959--1970", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Adversarial training for unsupervised bilingual lexicon induction. In Meeting of the Association for Computational Linguistics. pages 1959-1970.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Deep recurrent models with fast-forward connections for neural machine translation", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Xuguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.04199" ] }, "num": null, "urls": [], "raw_text": "Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "The effects of the weight-sharing layer number on English-to-German, English-to-French and Chinese-to-English translation tasks.", "type_str": "figure" }, "TABREF2": { "content": "
Supervised 24.07 26.99 30.50 30.21 40.02
Word-by-word 5.85 9.34 3.60 6.80 5.09
Lample et al. (2017) 9.64 13.33 15.05 14.31 -
", "num": null, "text": "The proposed approach 10.86 14.62 16.97 15.58 14.52", "html": null, "type_str": "table" }, "TABREF3": { "content": "", "num": null, "text": "The translation performance on English-German, English-French and Chinese-to-English test sets. The results of", "html": null, "type_str": "table" }, "TABREF4": { "content": "
Without Global GANs 10.34 14.05 16.19 15.21 14.09
Full model 10.86 14.62 16.97 15.58 14.52
", "num": null, "text": "Without weight sharing 10.23 13.84 16.02 14.82 13.75 Without embedding-reinforced encoder 10.45 14.17 16.55 15.27 14.10 Without directional self-attention 10.60 14.21 16.82 15.30 14.29 Without local GANs 10.51 14.35 16.40 15.07 14.12", "html": null, "type_str": "table" } } } }