{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:34.982169Z"
},
"title": "Improving Multilingual Neural Machine Translation For Low-Resource Languages: French, English -Vietnamese",
"authors": [
{
"first": "Thi-Vinh",
"middle": [],
"last": "Ngo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Thai Nguyen University",
"location": {}
},
"email": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Karlsruhe Institute of Technology",
"location": {}
},
"email": "thanh-le.ha@kit.edu"
},
{
"first": "Khac-Quy",
"middle": [],
"last": "Dinh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "JAIST",
"location": {
"country": "Japan"
}
},
"email": "nguyenml@jaist.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Prior works have demonstrated that a lowresource language pair can benefit from multilingual machine translation (MT) systems, which rely on many language pairs' joint training. This paper proposes two simple strategies to address the rare word issue in multilingual MT systems for two low-resource language pairs: French-Vietnamese and English-Vietnamese. The first strategy is about dynamical learning word similarity of tokens in the shared space among source languages while another one attempts to augment the translation ability of rare words through updating their embeddings during the training. Besides, we leverage monolingual data for multilingual MT systems to increase the amount of synthetic parallel corpora while dealing with the data sparsity problem. We have shown significant improvements of up to +1.62 and +2.54 BLEU points over the bilingual baseline systems for both language pairs and released our datasets for the research community.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Prior works have demonstrated that a lowresource language pair can benefit from multilingual machine translation (MT) systems, which rely on many language pairs' joint training. This paper proposes two simple strategies to address the rare word issue in multilingual MT systems for two low-resource language pairs: French-Vietnamese and English-Vietnamese. The first strategy is about dynamical learning word similarity of tokens in the shared space among source languages while another one attempts to augment the translation ability of rare words through updating their embeddings during the training. Besides, we leverage monolingual data for multilingual MT systems to increase the amount of synthetic parallel corpora while dealing with the data sparsity problem. We have shown significant improvements of up to +1.62 and +2.54 BLEU points over the bilingual baseline systems for both language pairs and released our datasets for the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) (Bahdanau et al., 2015) has achieved state of the art in various MT systems, including rich and low resource language pairs (Edunov et al., 2018; Gu et al., 2019; Ngo et al., 2019) . However, the quality of lowresource MT is quite unpretentious due to the lack of parallel data while it has achieved better results on systems of the available resource. Therefore, low-resource MT is one of the essential tasks investigated by many previous works (Ha et al., 2016; Lee et al., 2016; Sennrich and Zhang, 2019) .",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 157,
"end": 178,
"text": "(Edunov et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 179,
"end": 195,
"text": "Gu et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 196,
"end": 213,
"text": "Ngo et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 479,
"end": 496,
"text": "(Ha et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 497,
"end": 514,
"text": "Lee et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 515,
"end": 540,
"text": "Sennrich and Zhang, 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, some works present MT systems that have achieved remarkable results for low-resource language (Gu et al., 2019; Aharoni et al., 2019) . Inspired by these works, we collect data from the TED Talks domain, then attempt to build multilingual MT systems from French, English-Vietnamese. Experiments demonstrate that both language pairs: French-Vietnamese and English-Vietnamese have achieved significant performance when joining the training.",
"cite_spans": [
{
"start": 104,
"end": 121,
"text": "(Gu et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 122,
"end": 143,
"text": "Aharoni et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although multilingual MT can reduce the sparse data in the shared space by using word segmentation, however, rare words still exist, evenly they are increased more if languages have a significant disparity in term vocabulary. Previous works suggested some strategies to reduce rare words such as using translation units at sub-word and character levels or generating a universal representation at the word and sentence levels (Lee et al., 2016; Gu et al., 2019) . These help to downgrade the dissimilarity of tokens shared from various languages. However, these works require learning additional parameters in training, thus increasing the size of models.",
"cite_spans": [
{
"start": 426,
"end": 444,
"text": "(Lee et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 445,
"end": 461,
"text": "Gu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our paper presents two methods to augment the translation of rare words in the source space without modifying the architecture and model size of MT systems: (1) exploiting word similarity. This technique has been mentioned by previous works (Luong et al., 2015; Li et al., 2016; Trieu et al., 2016; Ngo et al., 2019) . They employ monolingual data or require supervised resources like a bilingual dictionary or WordNet, while we leverage relation from the multilingual space of MT systems. (2) Adding a scalar value to the rare word embedding in order to facilitate its translation in the training process.",
"cite_spans": [
{
"start": 241,
"end": 261,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 262,
"end": 278,
"text": "Li et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 279,
"end": 298,
"text": "Trieu et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 299,
"end": 316,
"text": "Ngo et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the fact that NMT tends to have bias in translating frequent words, so rare words (which have low frequency) often have less opportunity to be considered. Our ideal is inspired by the works of (Nguyen and Chiang, 2017; Ngo et al., 2019; Gu et al., 2019) . (Nguyen and Chiang, 2017) and (Ngo et al., 2019) proposed various solutions to urge for translation of rare words, including modification embedding in training. They only experimented with recurrent neural networks (RNNs) while our work uses the state-of-the-art transformer architecture. (Gu et al., 2019) transforms the word embedding of a token into the universal space, and they learn plus parameters while our method does not. We apply our strategies in our fine-tuning processes, and we show substantial improvements of the systems after some epochs only.",
"cite_spans": [
{
"start": 200,
"end": 225,
"text": "(Nguyen and Chiang, 2017;",
"ref_id": "BIBREF13"
},
{
"start": 226,
"end": 243,
"text": "Ngo et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 244,
"end": 260,
"text": "Gu et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 263,
"end": 288,
"text": "(Nguyen and Chiang, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 293,
"end": 311,
"text": "(Ngo et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 552,
"end": 569,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Monolingual data are widely used in NMT to augment data for low-resource NMT systems (Sennrich et al., 2015; Lample et al., 2018; Wu et al., 2019; Siddhant et al., 2020) . Back-translation (Sennrich et al., 2015) is known as the most popular technique in exploiting target-side monolingual data to enhance the translation systems while the self-learning method focuses on utilizing source-side monolingual data. Otherwise, the dual-learning strategy (Wu et al., 2019) also suggests using both source-and target-side monolingual data to tackle this problem. Our work investigates the selflearning method on the low-resource multilingual NMT systems specifically related to Vietnamese. Besides, monolingual data are also leveraged in unsupervised (Lample et al., 2018) or zero-shot translation (Lample et al., 2018) .",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "(Sennrich et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 109,
"end": 129,
"text": "Lample et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 130,
"end": 146,
"text": "Wu et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 147,
"end": 169,
"text": "Siddhant et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 189,
"end": 212,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 450,
"end": 467,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 745,
"end": 766,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 792,
"end": 813,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of our work are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We first attempt to build a multilingual system for two low-resource language pairs: French-Vietnamese and English-Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose two simple techniques to encourage the translation of rare words in multilingual MT to upgrade the systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We investigate the quality translation of the low-resource multilingual NMT systems when they are reinforced synthetic data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We release more datasets extracted from the TED Talks domain for the research purpose: French-Vietnamese and English-Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In section 2, we review the transformer architecture used for our experiments. The brief of multilingual translation is shown in section 3. Section 4 presents our methods to deal with rare words in multilingual translation scenarios. The exploitation of monolingual data for low-resource multilingual MT is discussed in section 5. Our results are described in section 6, and related work is shown in section 7. Finally, the paper ends with conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transformer architecture for machine translation is mentioned for the first time by (Vaswani et al., 2017) . This is based on the sequence to sequence framework (Sutskever et al., 2014) which includes an encoder to transform information of the source sentence X = (x 1 , x 2 , ..., x n ) into continuous representation and a decoder to generate the target sentence Y = (y 1 , y 2 , ..., y m ).",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 161,
"end": 185,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "Self-attention is an important mechanism in the transformer architecture. It enables the ability to specify the relevance of a word with the remaining words in the sentence through the equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "Self-Attn(Q, K, V ) = Softmax( QK T d )V (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "where K (key), Q (query), V (value) are the representations of the input sentence and d is the size of the input. The attention mechanism (Luong et al., 2015a) bridges between the source sentence in the encoder and the target sentence in the decoder. Furthermore, the feed-forward networks are used to normalize the outputs on both encoder and decoder.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(Luong et al., 2015a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
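{
"text": "A minimal PyTorch-style sketch of the scaled dot-product self-attention in Equation (1); the function name, the tensor shapes, and the use of PyTorch are illustrative assumptions rather than the paper's implementation:\n\nimport torch\n\ndef self_attention(Q, K, V):\n    # Q, K, V: (batch, seq_len, d) representations of the input sentence\n    d = Q.size(-1)\n    # relevance of each word to the remaining words, scaled by sqrt(d)\n    scores = torch.matmul(Q, K.transpose(-2, -1)) / d ** 0.5\n    weights = torch.softmax(scores, dim=-1)\n    return torch.matmul(weights, V)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},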
{
"text": "The MT system is trained to minimize the maximum likelihood of K parallel pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8) = 1 K k=K k=1 logp(Y k |X k ; \u03b8)",
"eq_num": "(2)"
}
],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "3 Multilingual NMT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "Multilingual NMT systems can translate between many language pairs, even in the zero-shot issue. Previous works investigate multilingual translation in many fashions: (1) Many to many (Ha et al., 2016; Aharoni et al., 2019) : from many sources to many target languages;",
"cite_spans": [
{
"start": 184,
"end": 201,
"text": "(Ha et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 202,
"end": 223,
"text": "Aharoni et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "(2) Many to one (Gu et al., 2019) : from many source languages to a target language;",
"cite_spans": [
{
"start": 16,
"end": 33,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "(3) One to many (Wang et al., 2018) : from one source language to many target languages. In cases (1) and (3), an artificial token is often added to the beginning of the source sentence to specify the predicted target language. Our MT systems are the same as the case (2), so we do not add any artificial token to the texts. In a multilingual NMT system from many to one with M language pairs and K sentence pairs for each one, the objective function uses maximum likelihood estimation on the whole parallel pairs X (m,k) , Y (m,k) m=1...M k=1..K as:",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 516,
"end": 521,
"text": "(m,k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "L(\u03b8) = 1 K m=M m=1 k=K k=1 logp(Y (m,k) |X (m,k) ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "(3) where K = m=M m=1 K m is the total number of sentences of the whole corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "The vocabulary of the source side is mixed from all source languages: (Gu et al., 2019) has shown that if the languages shared the same alphabet and had many similar words, such system will get many advantages from multilingual MT. In fact, different words from many languages can differ in form, but they may share the same subwords. This significantly reduces the number of rare words in the MT systems. Nevertheless, the rare word issue is still a challenge in NMT. We choose English and French are source languages in our experiment with the hope that they can share many tokens even though we do not have much data of those translation directions.",
"cite_spans": [
{
"start": 70,
"end": 87,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "V = m=M m=1 V m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based NMT",
"sec_num": "2"
},
{
"text": "We assume that a rare word or rare token (which has a low frequency in the training data) from one source language may be similar to another word in a shared multilingual space. Similar words can belong to several languages and they can be replaced by the others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "Our method replaces rare tokens with their similar tokens in shared space. The replacements are learned dynamically in the training NMT system. To avoid slowing down the training speed, we only compute similar tokens after each epoch. In the experiments, we attempt to replace rare tokens from French with similar tokens in English and French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "Our method is described as follows: Firstly, we extract the lists of all tokens from the English -{A} corpus, and the most k common words from the vocabulary of the source side of the French -{B}. We set k=15 thousand words in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "Secondly, we compute the similarity score between the embedding of a rare token t i , \u2200t i / \u2208 {A \u222a B} and each embedding of the tokens t j , \u2200t j \u2208 {A \u222a B} as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "score i = min(d j (e i , e j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "\u2022 e cos(e i ,e j ) ) 4where j = 1..M with M is the number of tokens of A \u222a B; d is the Euclidean distance between embedding e i of token t i and embedding e j of token t j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
{
"text": "The last, the token t i is replaced by its similar tokens. The scores are computed iteratively after each epoch during the training process. It may have more tokens similar to a rare token, so we experimentalize in the case of random selection a token from the similar tokens. To accrete the effectiveness of the method, we use a threshold to neglect similar pairs that have scores close to 0 or too large. In the experiments, we choose the scores in [2.4, 2.72] to warrant similar pairs alike in terms of distance as well as direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},
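{
"text": "A sketch, under the assumption of a PyTorch embedding matrix, of how the similarity score of Equation (4) and the [2.4, 2.72] threshold could be computed after each epoch; the function and variable names are hypothetical and do not correspond to the modified NMTGMinor code:\n\nimport torch\n\ndef most_similar_tokens(rare_emb, common_emb, low=2.4, high=2.72):\n    # rare_emb: (R, dim) embeddings of tokens outside {A U B}\n    # common_emb: (C, dim) embeddings of tokens in {A U B}\n    dist = torch.cdist(rare_emb, common_emb)  # Euclidean distance d(e_i, e_j)\n    cos = torch.nn.functional.cosine_similarity(\n        rare_emb.unsqueeze(1), common_emb.unsqueeze(0), dim=-1)\n    score = dist * torch.exp(cos)  # Eq. (4), before taking the minimum over j\n    best_score, best_idx = score.min(dim=1)\n    # keep only pairs whose score falls in the chosen range; -1 marks 'no replacement'\n    keep = (best_score >= low) & (best_score <= high)\n    return torch.where(keep, best_idx, torch.full_like(best_idx, -1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning multilingual word similarity",
"sec_num": "4.1"
},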
{
"text": "In this approach, we assume that the embedding e i of token t j , \u2200t i / \u2208 {A \u222a B} is represented by the approximate embedding vector as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating source embedding",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i = e i + d",
"eq_num": "(5)"
}
],
"section": "Updating source embedding",
"sec_num": "4.2"
},
{
"text": "where d is the difference between embedding e i and the average of the all embeddings e j of token t j , \u2200t j \u2208 {A \u222a B}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating source embedding",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d = e i \u2212 j=M j=1 e j M",
"eq_num": "(6)"
}
],
"section": "Updating source embedding",
"sec_num": "4.2"
},
{
"text": "where M is the number of tokens of {A \u222a B}. These embeddings are then updated during the training. The average of embeddings is only estimated after each epoch to avoid slowing down the training speed. We observe the improvements in both language pairs in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating source embedding",
"sec_num": "4.2"
},
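{
"text": "A short sketch, again assuming a PyTorch embedding matrix, of the update in Equations (5) and (6); emb, rare_ids, and common_ids are hypothetical names used only for illustration:\n\nimport torch\n\ndef update_rare_embeddings(emb, rare_ids, common_ids):\n    # emb: (V, dim) source embedding matrix\n    # rare_ids: indices of tokens outside {A U B}; common_ids: indices of tokens in {A U B}\n    with torch.no_grad():\n        mean_common = emb[common_ids].mean(dim=0)  # (1/M) * sum of e_j over {A U B} (Eq. 6)\n        d = emb[rare_ids] - mean_common            # offset d = e_i - mean(e_j)\n        emb[rare_ids] += d                          # e_i <- e_i + d (Eq. 5)\n    return emb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating source embedding",
"sec_num": "4.2"
},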
{
"text": "Similar to the idea suggested in , we leverage monolingual data from the source-side to generate synthetic bilingual data. Instead of using monolingual data from all source languages, we only attempt to exploit monolingual data of English. Firstly, we train the multilingual NMT system from English, French \u2192 Vietnamese based on bilingual data from the TED talks with the approaches mentioned in section 4. The best system is then used to translate English to Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting monolingual data for low-resource multilingual NMT",
"sec_num": "5"
},
{
"text": "Lastly, the synthetic parallel data are mixed with original bilingual data in the normal training scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting monolingual data for low-resource multilingual NMT",
"sec_num": "5"
},
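{
"text": "The self-learning pipeline above can be summarized by the following sketch; train_multilingual and translate are hypothetical helpers standing in for NMTGMinor's training and decoding commands, not actual APIs of the toolkit:\n\ndef build_system_with_pseudo_data(ted_bitext, english_mono):\n    # 1) train the English, French -> Vietnamese system on the real TED bitext (section 4 methods)\n    model = train_multilingual(ted_bitext)\n    # 2) forward-translate English monolingual sentences into Vietnamese\n    pseudo_bitext = [(src, translate(model, src)) for src in english_mono]\n    # 3) mix synthetic and original bilingual data and retrain\n    return train_multilingual(ted_bitext + pseudo_bitext)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting monolingual data for low-resource multilingual NMT",
"sec_num": "5"
},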
{
"text": "We extracted data from TED Talks domain 1 for two language pairs English-Vietnamese and French-Vietnamese. The details of those datasets are described in Table 1 . For the English-Vietnamese, we used standard datasets like tst2012 and tst2013 from (Cettolo et al., 2016) as dev and test sets for validation and evaluation. For French-Vietnamese, we separate a subset from collected data for the same purposes.",
"cite_spans": [
{
"start": 248,
"end": 270,
"text": "(Cettolo et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "6.1"
},
{
"text": "Training dev test English-Vietnamese 231K 1553 1268 French-Vietnamese 203K 1007 1049 To generate synthetic bilingual data, we sampled 1.2 millions English monolingual sentences from the European Parliament English-French corpus 2 . After inferring from the multilingual MT system, we obtained two sets of pseudo bilingual data: English -Vietnamese, French -Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "English and French texts were tokenized and truecased using Moses's scripts, and then they are applied to Sennrich's BPE (Sennrich et al., 2016) . 30000 operators are learned to generate BPE codes for both languages.",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "6.2"
},
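{
"text": "As a rough illustration of the BPE step (the Moses tokenization and truecasing are assumed to have been run already, and the file names are placeholders), the subword_nmt package can learn and apply the 30,000 merge operations as sketched below; this mirrors Sennrich's BPE but the exact API usage is an assumption, not the command sequence used in the paper:\n\nimport codecs\nfrom subword_nmt.learn_bpe import learn_bpe\nfrom subword_nmt.apply_bpe import BPE\n\n# learn 30,000 BPE merge operations on the concatenated, tokenized English+French text\ninfile = codecs.open(\"train.enfr.tok.tc\", encoding=\"utf-8\")\noutfile = codecs.open(\"bpe.codes\", \"w\", encoding=\"utf-8\")\nlearn_bpe(infile, outfile, num_symbols=30000)\noutfile.close()\ninfile.close()\n\n# apply the learned codes to a tokenized sentence\nbpe = BPE(codecs.open(\"bpe.codes\", encoding=\"utf-8\"))\nprint(bpe.process_line(\"the shared source vocabulary\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "6.2"
},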
{
"text": "For Vietnamese texts, we only did tokenization and true-casing using Moses's scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "6.2"
},
{
"text": "We extracted a list of all tokens in English (A) and another list of the 15K most frequency of tokens in French (B). All lists were then used for the mentioned strategies in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "6.2"
},
{
"text": "We implement our NMT systems using the framework NMTGMinor 3 . The same settings are used for all experiments. The system includes 4 layers for both encoder and decoder, and the embedding size is 512. For the systems that adapted monolingual data, we use 6 layers. Adam optimizer is set with the initial learning rate at 1.0 for baseline and the multilingual systems and 0.5 for the finetuned systems. The size of a mini-batch is 128, and the vocabulary size is set to be the top 50K most frequent tokens. Training and development sets of both language pairs are concatenated prior to the training of our multilingual systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Training",
"sec_num": "6.3"
},
{
"text": "We modified this framework to apply our ideals proposed in section 4. To speed up the training, we compute the similarity scores and find out similar tokens for rare tokens or the mean of all tokens in {A \u222a B} after each epoch. We replace rare tokens or update their embeddings in each batch. We do not use these techniques for the decoding process, so the system's performance is not affected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Training",
"sec_num": "6.3"
},
{
"text": "The baseline and multilingual systems are trained for 70 epochs. Our methods are then used to fine-tune the systems for 15 epochs. We choose the five best models to decode the test sets independently for residual systems despite the baseline systems. The beam size is 10, and we try different values of alpha: 0.2, 0.4, 0.8, 1.0. Other settings are the default settings of NMTGMinor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Training",
"sec_num": "6.3"
},
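{
"text": "For reference, the main hyperparameters stated above can be collected in one place; this is a descriptive summary in Python dictionary form with assumed key names, not a configuration file consumed by NMTGMinor:\n\nhyperparams = {\n    \"encoder_layers\": 4,        # 6 when monolingual (pseudo) data is added\n    \"decoder_layers\": 4,\n    \"embedding_size\": 512,\n    \"optimizer\": \"adam\",\n    \"initial_lr\": 1.0,           # 0.5 for the fine-tuned systems\n    \"batch_size\": 128,\n    \"vocab_size\": 50000,         # top 50K most frequent tokens\n    \"train_epochs\": 70,          # plus 15 fine-tuning epochs\n    \"beam_size\": 10,\n    \"alpha_values\": [0.2, 0.4, 0.8, 1.0],\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Training",
"sec_num": "6.3"
},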
{
"text": "We evaluate the quality of systems on two translation tasks: French to Vietnamese and English to Vietnamese, using on different approaches mentioned in previous sections. The multi-BLEU from Moses's scripts 4 is used. The results have shown in the Table 2. (1) Bilingual baseline systems. We train the systems based on separate bilingual data of each language pair for 70 epochs. The best model is used to decode the test data for comparison purposes in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 256,
"text": "Table 2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(2) Multilingual systems. We concatenate training and development sets in order to construct the new sets: French, English \u2192 Vietnamese, and then train the system using those data for the same number of epochs as for the baseline systems. We observe an improvement of +1.05 BLEU points on English \u2192 Vietnamese translation task and another one of +1.19 BLEU points on French \u2192 Vietnamese translation task compared to the baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(3) Multilingual fine-tuning systems. The multilingual system is fine-tuned from the baseline for further 15 epochs with an initial learning rate of 0.05. We see the improvements of +1.43 and Table 2 : The results of our MT systems are measured in BLEU. We evaluate the best model for the baseline systems and the average scores on the five best models for the multilingual and pseudo systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "+1.83 BLEU points on both translation tasks, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(4) Multilingual fine-tuning with similarity systems. The systems from (2) are fine-tuned with the strategy mentioned in section 4.1 using the modified framework. We obtained a bigger gain of +1.62 BLEU points on the English \u2192 Vietnamese translation task whilst the French \u2192 Vietnamese translation task has achieved a lower improvement than the systems in (3). We show that the English \u2192 Vietnamese translation task has more advantages when rare tokens from French are replaced by similar tokens in the multilingual space. In the future, we would attempt the inverse replacement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(5) Multilingual fine-tuning with updated embedding systems. We use the modified framework to fine-tune the systems in (2) with the method mentioned in section 4.2. The greater improvements can be found at +1.61 and +1.93 on both translation tasks compared to the systems which do not use our methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(6) Multilingual with mixing of pseudo bilingual data. We use 400K synthetic bilingual sentence pairs for each of the language pairs: English-Vietnamese and French-Vietnamese. We train the multilingual NMT system on a mix of pseudo and real bilingual data mentioned in section 5 for 50 epochs. And then it is fine-tuned on the actual parallel data for 20 epochs. We observed a bigger improvement of +2.54 BLEU points on the French \u2192 Vietnamese system while the English \u2192 Vietnamese system has achieved less improvement compared to previous systems. We speculate that the English \u2192 Vietnamese translation task may be affected by the French \u2192 Vietnamese pseudo bilingual data. In future work, we would leverage the data selection methods in order to equip better synthetic data for our systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "(7) Pseudo bilingual data translation. We train the French \u2192 Vietnamese NMT system relied on only 1.2 thousands pseudo bilingual data mentioned in section 5 for 26 epochs. We achieve 18.71 BLEU points on the averaged model from our five best models. Thus, we can generate synthetic parallel data for a low-resource language pair from another language pair with a bigger bilingual resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "Due to the unavailability of the parallel data for lowresource language pairs or zero-shot translation, previous works focus on the task to have more data such as leveraging multilingual translation (Ha et al., 2016 (Ha et al., , 2017 Wang et al., 2018; Gu et al., 2019; Aharoni et al., 2019) or using monolingual data with back-translation, self-learning (Sennrich et al., 2015; Wu et al., 2019) or mix-source (Ha et al., 2016) technique.",
"cite_spans": [
{
"start": 199,
"end": 215,
"text": "(Ha et al., 2016",
"ref_id": "BIBREF6"
},
{
"start": 216,
"end": 234,
"text": "(Ha et al., , 2017",
"ref_id": "BIBREF5"
},
{
"start": 235,
"end": 253,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 254,
"end": 270,
"text": "Gu et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 271,
"end": 292,
"text": "Aharoni et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 356,
"end": 379,
"text": "(Sennrich et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 380,
"end": 396,
"text": "Wu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 411,
"end": 428,
"text": "(Ha et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "For leveraging multilingual translation, (Ha et al., 2016) added language code and target forcing in order to learn the shared representations of the source words and specify the target words. (Wang et al., 2018 ) demonstrated a one-to-many multilingual MT with three different strategies which modify their architecture. (Gu et al., 2019 ) built many-to-one multilingual MT systems by adding a layer to transform the source embeddings and representation into a universal space to augment the translation of low resource language, which is similar to ours. (Aharoni et al., 2019) implemented a massive many-to-many multilingual system, employing many low-resource language pairs. All of the mentioned works have shown substantial improvements in low-resource translation, however, they are less correlative to our translation tasks.",
"cite_spans": [
{
"start": 41,
"end": 58,
"text": "(Ha et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 193,
"end": 211,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF21"
},
{
"start": 322,
"end": 338,
"text": "(Gu et al., 2019",
"ref_id": "BIBREF4"
},
{
"start": 557,
"end": 579,
"text": "(Aharoni et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Although multilingual MT equips a shared space with many advantages, rare word translation is still the issue that needs to be considered. The task of dealing with rare words has been mentioned in previous works. (Luong et al., 2015) copied words from source sentences by words from target sentences after the translation using a bilingual dictionary. (Li et al., 2016) and (Trieu et al., 2016) learned word similarity from monolingual data to improve their systems. Our approach is similar to these works, but we only learn similarity from the shared multilingual space of MT systems. (Ngo et al., 2019) addressed the rare word problem by using the synonyms from WordNet. (Nguyen and Chiang, 2017) and (Ngo et al., 2019) presented different solutions to solve rare word situation by transforming the embeddings during the training of their RNN-based architecture. Those solutions cannot be applied to the transformer architecture. In (Gu et al., 2019) , the embeddings of rare tokens and universal tokens are jointly learned through a plus parameter while we only add a scalar value to the embeddings.",
"cite_spans": [
{
"start": 213,
"end": 233,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 352,
"end": 369,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 374,
"end": 394,
"text": "(Trieu et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 586,
"end": 604,
"text": "(Ngo et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 673,
"end": 698,
"text": "(Nguyen and Chiang, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 703,
"end": 721,
"text": "(Ngo et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 935,
"end": 952,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Monolingual data is used to generate synthetic bilingual data in sparsity data issues. (Sennrich et al., 2015) proposed back-translation method that uses a backward model to get the source data from the monolingual target data. In contrast, shown the self-learning technique by employing a forward model to translate monolingual source data into the target data. (Wu et al., 2019) incorporated both mentioned techniques into their NMT systems. Monolingual data is also demonstrated its efficiency in unsupervised machine translation (Lample et al., 2018) or in zeroshot multilingual NMT (Siddhant et al., 2020; Ha et al., 2017) . In our work, we use the self-learning method to produce pseudo bilingual data, and it is then used to train our low-resource multilingual NMT systems.",
"cite_spans": [
{
"start": 87,
"end": 110,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 363,
"end": 380,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 533,
"end": 554,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 587,
"end": 610,
"text": "(Siddhant et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 611,
"end": 627,
"text": "Ha et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We have built multilingual MT systems for two lowresource language pairs: English-Vietnamese and French-Vietnamese, and proposed two approaches to tackle rare word translation. We show that our approaches bring significant improvements to our MT systems. We find that the pseudo bilingual can furthermore enhance a multilingual NMT system in case of French \u2192 Vietnamese translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In the future, we would like to use more language pairs in our systems and to combine proposed methods in order to evaluate the effectiveness of our MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "https://www.ted.com/ 2 https://www.statmt.org/europarl 3 https://github.com/quanpn90/NMTGMinor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/moses-smt/ mosesdecoder/tree/master/scripts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual neural machine translation",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. CoRR, abs/1903.00089.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The IWSLT 2016 Evaluation Campaign",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 13th International Workshop on Spoken Language Translation (IWSLT 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Cettolo, J Niehues, S St\u00fcker, L Bentivogli, R Cattoni, and M Federico. 2016. The IWSLT 2016 Evaluation Campaign. In Proceedings of the 13th International Workshop on Spoken Language Translation (IWSLT 2016), Seattle, WA, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Understanding back-translation at scale. CoRR",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. CoRR, abs/1808.09381.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved zero-shot neural machine translation via ignoring spurious correlations. CoRR, abs",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2019. Improved zero-shot neural ma- chine translation via ignoring spurious correlations. CoRR, abs/1906.01181.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Effective Strategies in Zero-Shot Neural Machine Translation",
"authors": [
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2017. Effective Strategies in Zero-Shot Neural Ma- chine Translation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Toward multilingual neural machine translation with universal encoder and decoder",
"authors": [
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine trans- lation with universal encoder and decoder. CoRR, abs/1611.04798.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised ma- chine translation using monolingual corpora only.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hof- mann. 2016. Fully character-level neural machine translation without explicit segmentation. CoRR, abs/1610.03017.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards zero unknown word in neural machine translation",
"authors": [
{
"first": "Xiaoqing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Towards zero unknown word in neural machine translation. In IJCAI.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Addressing the rare word problem in neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "11--19",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11-19, Beijing, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overcoming the rare word problem for low-resource language pairs in neural machine translation",
"authors": [
{
"first": "Thi-Vinh",
"middle": [],
"last": "Ngo",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "207--214",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5228"
]
},
"num": null,
"urls": [],
"raw_text": "Thi-Vinh Ngo, Thanh-Le Ha, Phuong-Thai Nguyen, and Le-Minh Nguyen. 2019. Overcoming the rare word problem for low-resource language pairs in neural machine translation. In Proceedings of the 6th Workshop on Asian Translation, pages 207-214, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving lexical choice in neural machine translation",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Toan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NAACL-HLT 2018",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Improving lexical choice in neural machine translation. Pro- ceedings of NAACL-HLT 2018, pages 334-343.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. CoRR, abs/1511.06709.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Association for Computa- tional Linguistics (ACL 2016).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Revisiting lowresource neural machine translation: A case study. CoRR, abs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 1905,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Biao Zhang. 2019. Revisiting low- resource neural machine translation: A case study. CoRR, abs/1905.11901.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Leveraging monolingual data with self-supervision for multilingual neural machine translation",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Mia",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sneha",
"middle": [],
"last": "Kudugunta",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Fi- rat, Mia Chen, Sneha Kudugunta, Naveen Arivazha- gan, and Yonghui Wu. 2020. Leveraging monolin- gual data with self-supervision for multilingual neu- ral machine translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dealing with out-of-vocabulary problem in sentence alignment using word similarity",
"authors": [
{
"first": "Hai-Long",
"middle": [],
"last": "Trieu",
"suffix": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers",
"volume": "",
"issue": "",
"pages": "259--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai-Long Trieu, Le-Minh Nguyen, and Phuong-Thai Nguyen. 2016. Dealing with out-of-vocabulary problem in sentence alignment using word similar- ity. In Proceedings of the 30th Pacific Asia Con- ference on Language, Information and Computation: Oral Papers, pages 259-266, Seoul, South Korea.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Three strategies to improve one-to-many multilingual translation",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2955--2960",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1326"
]
},
"num": null,
"urls": [],
"raw_text": "Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2018. Three strategies to im- prove one-to-many multilingual translation. pages 2955-2960.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Exploiting monolingual data at scale for neural machine translation",
"authors": [
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianhuang",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4207--4216",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1430"
]
},
"num": null,
"urls": [],
"raw_text": "Lijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jian- huang Lai, and Tie-Yan Liu. 2019. Exploiting mono- lingual data at scale for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4207- 4216, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Exploiting source-side monolingual data in neural machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1160"
]
},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1535-1545, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"text": "Multilingual + fine-tuning with similarity 31.93 (+0.19) 36.75 (+1.62) Multilingual + fine-tuning with updated embedding 32.11 (+0.37) 36.74 (+1.61)",
"content": "<table><tr><td>Datasets</td><td>Systems</td><td>dev</td><td>test</td></tr><tr><td/><td>Bilingual Baseline</td><td>31.74</td><td>35.13</td></tr><tr><td/><td>Multilingual</td><td colspan=\"2\">31.66 (-0.08) 36.18 (+1.05)</td></tr><tr><td>English \u2192 Vietnamese</td><td>Multilingual + fine-tuning</td><td colspan=\"2\">31.88 (+0.14) 36.56 (+1.43)</td></tr><tr><td/><td>Multilingual + mixing pseudo bilingual data</td><td>30.86 (-0.88)</td><td>35.09 (-0.04)</td></tr><tr><td/><td>Bilingual Baseline</td><td>23.07</td><td>23.03</td></tr><tr><td/><td>Multilingual</td><td colspan=\"2\">24.49 (+1.42) 24.22 (+1.19)</td></tr><tr><td>French \u2192 Vietnamese</td><td>Multilingual + fine-tuning</td><td colspan=\"2\">24.51 (+1.44) 24.86 (+1.83)</td></tr><tr><td/><td>Multilingual + fine-tuning with similarity</td><td colspan=\"2\">24.37 (+1.30) 24.70 (+1.63)</td></tr><tr><td/><td colspan=\"3\">Multilingual + fine-tuning with updated embedding 24.60 (+1.53) 24.96 (+1.93)</td></tr><tr><td/><td>Multilingual + mixing pseudo bilingual data</td><td colspan=\"2\">25.59 (+2.52) 25.57 (+2.54)</td></tr><tr><td/><td>Pseudo bilingual data translation</td><td>19.00</td><td>18.71</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}