|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:59:38.454217Z" |
|
}, |
|
"title": "Unsupervised Neural Machine Translation for English and Manipuri", |
|
"authors": [ |
|
{ |
|
"first": "Salam", |
|
"middle": [ |
|
"Michael" |
|
], |
|
"last": "Singh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Institute of Technology", |
|
"location": { |
|
"postCode": "788010", |
|
"settlement": "Silchar", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Thoudam", |
|
"middle": [ |
|
"Doren" |
|
], |
|
"last": "Singh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Institute of Technology", |
|
"location": { |
|
"postCode": "788010", |
|
"settlement": "Silchar", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Availability of bitext dataset has been a key challenge in the conventional machine translation system which requires surplus amount of parallel data. In this work, we devise an unsupervised neural machine translation (UNMT) system consisting of a transformer based shared encoder and language specific decoders using denoising autoencoder and backtranslation with an additional Manipuri side multiple test reference. We report our work on low resource setting for English (en)-Manipuri (mni) language pair and attain a BLEU score of 3.1 for en \u2192 mni and 2.7 for mni \u2192 en respectively. Subjective evaluation on translated output gives encouraging findings.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Availability of bitext dataset has been a key challenge in the conventional machine translation system which requires surplus amount of parallel data. In this work, we devise an unsupervised neural machine translation (UNMT) system consisting of a transformer based shared encoder and language specific decoders using denoising autoencoder and backtranslation with an additional Manipuri side multiple test reference. We report our work on low resource setting for English (en)-Manipuri (mni) language pair and attain a BLEU score of 3.1 for en \u2192 mni and 2.7 for mni \u2192 en respectively. Subjective evaluation on translated output gives encouraging findings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Machine Translation had been dominated by the statistical methods, notably the phrase-based (Koehn et al., 2003; Och, 2003) . But they suffered from a rigid structure since they have multiple modules (Brown et al., 1990 (Brown et al., , 1993 which are tuned independently. In other words, SMT lacked an end-to-end learning mechanism. For a quite long period, SMT dominated the MT systems, but with an RNN based sequence-to-sequence model ; marked the beginning of the NMT era. But, these primitive neural based MT systems choked when the input sentences get longer as the input sentence are squeezed into a fixed length vector. Fortunately, with the advent of attention Luong et al., 2015) , sub-word tokenization (Sennrich et al., 2016b) and transformers (Vaswani et al., 2017) , NMT outperformed SMT in various machine translation tasks. However, when the parallel corpus is scarce, the NMT fails to produce good translations, and performing much poorer than the phrase-based systems. Building parallel corpus is a costly task and specifically for the low resource languages where bi-text is nonexistent. But, monolingual data is easily available even for the low resource languages and some have utilised it to augment parallel data (Sennrich et al., 2016a ) with a little bi-text data or translation systems using monolingual data only (Lample et al., 2018a; Artetxe et al., 2018b; Ren et al., 2019) . Recent works using monolingual data only show a positive direction in machine translation tasks. Although, these systems do not outperform a strong supervised system, they could be treated as a strong baseline system which should be the lower bound for any supervised system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 112, |
|
"text": "(Koehn et al., 2003;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 123, |
|
"text": "Och, 2003)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 219, |
|
"text": "(Brown et al., 1990", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 241, |
|
"text": "(Brown et al., , 1993", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 670, |
|
"end": 689, |
|
"text": "Luong et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 738, |
|
"text": "(Sennrich et al., 2016b)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 778, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1236, |
|
"end": 1259, |
|
"text": "(Sennrich et al., 2016a", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1340, |
|
"end": 1362, |
|
"text": "(Lample et al., 2018a;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1363, |
|
"end": 1385, |
|
"text": "Artetxe et al., 2018b;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1386, |
|
"end": 1403, |
|
"text": "Ren et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The machine translation (MT) systems have become very effective in recent times, however with the condition of huge parallel data availability. On the other hand, the MT task for many low-resource language is yet to be addressed. Likewise, Manipuri is a low-resource Indian language belonging to Tibeto-Burman language family where readily available English-Manipuri parallel data is close to non-existent. As the manual parallel data acquisition is both challenging and resource intensive task, while the monolingual data is comparatively easier to acquire and it is hence intuitive to look upon the techniques which exploits the monolingual data to the fullest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Challenges", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Another challenge is that Manipuri language is highly agglutinitive and morphologically rich which causes linguistic diversity and variations in word forms thus penalizes the automatic n-gram matching metrics like BLEU. Lack of grammatical gender, agglutinative verb morphology, extensive suffix with more limited prefixation and Subject Object Verb (SOV) order are some of the linguistic features. Manipuri language uses Bengali 1 scripts and Meetei mayek 2 in written form. In this work, we will focus on the Bengali script.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Challenges", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "In order to tackle the above challenges i.e lack of parallel data and linguistic diversities, we make the following contributions: (1) We report a preliminary unsupervised MT task for English-Manipuri language pair using monolingual data only to tackle the parallel data scarcity and explore the effectiveness/non-effectiveness for the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "(2) We develop multiple references for the Manipuri side test data, specifically we built an additional test reference apart from the one extracted from the training corpus to tackle the linguistic diversity as it increases the n-gram overlapping probability. 3We find that a cross-lingual mapping of embeddings performs better than a pretrained cross-lingual language model as an initialization step for this distant language pair (English-Manipuri).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "The remaining of this paper is organized as follows. Section 2 explores the related work. Section 3 then describes the framework of our approach. The experimental settings are discussed in Section 4, while Section 5 discusses the obtained results and its analysis. Section 6 concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "To the best of our knowledge, we are the first to report the unsupervised neural machine translation for Manipuri language. However, there are reports of supervised Statistical based approach, notably by (Singh and Bandyopadhyay, 2010a; Singh, 2013) , where the authors made an imperative study over the effects of the morpho-syntactic information and dependency relation in a Statistical Machine Translation setting. In another work, Singh and Bandyopadhyay (2011) showed that the Phrase Based Statistical Machine Translation system improves by incorporating linguistic features such as, named entities and reduplicated multiword expressions. However, the MT task for Manipuri is still in its inception, considering it being a low resource language. Low resource has always been a hurdle in the MT task. Many, have mended their hands to overcome this bottleneck. Sennrich et al. (2016a) leveraged the NMT system by using monolingual data to create a synthetic parallel corpus using backtranslation. This synthetic data greatly improved the MT systems, however it consisted mostly noises which is necessary to be postprocessed, making the task not so feasible. Zoph et al. (2016) devised a transfer learning mechanism by sharing model parameters from a high resource language pair (parent model) to lower resource language pair (child model) which significantly improved the BLEU (Papineni et al., 2002) score. This kind of parent-child model has been used in other works (Nguyen and Chiang, 2017; Kocmi and Bojar, 2018) where they used a shared vocabulary of subword units. Kocmi and Bojar (2018) further showed that transfer learning can be simplified where the parent model is trained until convergence and switching the low resource language pair as the training data while keeping the training parameters unchanged. Parameter sharing has also been explored in previous works such as Firat et al. (2016) by using a single shared attention mechanism with multiple encoder-decoder to devise a multi-way multilingual NMT system. Further, Johnson et al. (2017) devised another multilingual approach through parameter sharing using a single shared encoder-decoder model enabling zero-shot translation, where the model learned translation between multiple languages and could even translate unseen language pairs. However, these models require some form of parallel data which has led the dynamics to shift towards exploiting the monolingual data, which is comparatively abundant than the bi-text data. Furthermore, a noteworthy attempt is reported by , where they used an auto-encoding task to ensure the translated sentence can be translated back to the original sentence using reinforcement learning technique. Similarly, Cheng et al. (2016) also used this auto-encoding task upon the monolingual data. Although, these models seemed promising, it still needed decent amount of parallel data for a warm-start. Machine translation task can be reduced to a deciphering task (Ravi and Knight, 2011; Pourdamghani and Knight, 2017) from a monolingual data using noisy channel model where the source language is treated as ciphertext generation, however these settings were mostly confined to short sentences and related language pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 236, |
|
"text": "(Singh and Bandyopadhyay, 2010a;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 249, |
|
"text": "Singh, 2013)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1161, |
|
"end": 1179, |
|
"text": "Zoph et al. (2016)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 1380, |
|
"end": 1403, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1498, |
|
"end": 1520, |
|
"text": "Kocmi and Bojar, 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1575, |
|
"end": 1597, |
|
"text": "Kocmi and Bojar (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1888, |
|
"end": 1907, |
|
"text": "Firat et al. (2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2039, |
|
"end": 2060, |
|
"text": "Johnson et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 2723, |
|
"end": 2742, |
|
"text": "Cheng et al. (2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2972, |
|
"end": 2995, |
|
"text": "(Ravi and Knight, 2011;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 2996, |
|
"end": 3026, |
|
"text": "Pourdamghani and Knight, 2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All these works induced a promising starting point towards exploiting the monolingual data, but these primitive models were unable to stand against a supervised setting with abundant parallel data. Fortunately, it was the concurrent work of Lample et al. (2018a) and Artetxe et al. (2018b) which lifted the unsupervised MT on par with a supervised setting. Their approach first learns a linear transformation of the word embeddings of the two languages in an unsupervised manner which are trained independently and map this linear transformation into a shared space using adversarial training (Conneau et al., 2017) or through self learning (Artetxe et al., 2017 (Artetxe et al., , 2018a . A shared encoder for both the languages is initialized using the resulting cross-lingual embeddings. The model is trained using denoising auto-encoding, backtranslation and adversarial training 3 iteratively giving rise to translation models of increasing quality. There are reports of unsupervised MT using PB-SMT(Phrase-based Statistical Machine Translation) where Lample et al. (2018b) used the backtranslated synthetic data to feed into the NMT system. Furthermore, there are reports of using the synthetic data generated from an unsupervised SMT to initialize an NMT system from scratch in an iterative way (Marie and Fujita, 2018; Ren et al., 2019) . Recently, a pretrained cross-lingual language model (Lample and Conneau, 2019; Song et al., 2019) is used to pretrain a shared encoder and finetune using iterative backtranslation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 262, |
|
"text": "Lample et al. (2018a)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 289, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 615, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 662, |
|
"text": "(Artetxe et al., 2017", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 663, |
|
"end": 687, |
|
"text": "(Artetxe et al., , 2018a", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1078, |
|
"text": "Lample et al. (2018b)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1302, |
|
"end": 1326, |
|
"text": "(Marie and Fujita, 2018;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1327, |
|
"end": 1344, |
|
"text": "Ren et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1399, |
|
"end": 1425, |
|
"text": "(Lample and Conneau, 2019;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1426, |
|
"end": 1444, |
|
"text": "Song et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we discuss the framework of our unsupervised neural machine translation system. Denoising Autoecoder: The goal of the unsupervised MT setup is to be able to reconstruct an original sentence using a decoder from an input sentence which is encoded using a shared encoder. The reconstruction is carried out such that the encoder should learn the latent representations (embeddings of both the languages) in a language independent manner. Meanwhile, the decoder should be able to transfer this latent representations into their corresponding languages. Since, the systems lacks any constraint, it fails to perform a meaningful reconstruction mechanism, as the system sticks in a trivial copying task. This blindly copying results into failure of capturing any real and useful structure from the data. Denoising autoencoder (Vincent et al., 2008) is used in addition with noise (random word swaps, ran-3 Adversarial training is adopted by (Lample et al., 2018a,b) Figure 1 : System Overview: The system first maps the monolingual vector embeddings into a common crosslingual embedding space. The shared encoder and the two decoders are transformer units. For each iterations, we first denoise the two languages in batches and backtranslate from L1 to L2 and L2 to L1. L1 is en when L2 is mni and vice-versa.", |
|
"cite_spans": [ |
|
{ |
|
"start": 836, |
|
"end": 858, |
|
"text": "(Vincent et al., 2008)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 975, |
|
"text": "(Lample et al., 2018a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 976, |
|
"end": 984, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Framework", |
|
"sec_num": "3" |
|
}, |
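To make the denoising step concrete, below is a minimal Python sketch (not the authors' code) of the noise model used in this family of UNMT systems: random word drops plus a length-bounded local shuffle, following Lample et al. (2018a). The function name and the drop/shuffle parameters are illustrative assumptions.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_k=3):
    """Corrupt a non-empty token list for denoising autoencoding.

    Sketch: each word is dropped with probability `drop_prob`, and a
    local shuffle keeps every surviving word within roughly `shuffle_k`
    positions of its original place.
    """
    # Random word drops (always keep at least the first token).
    kept = [t for t in tokens if random.random() > drop_prob] or [tokens[0]]
    # Local shuffle: sort by original index plus bounded random jitter.
    keys = [i + random.uniform(0, shuffle_k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]
```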
|
{ |
|
"text": "dom word drops) to address this issue by constraining the system to learn latent representations. Backtranslation: The denoising objective covers only a single language at one time which partially fulfills the final translation task. To achieve a full translation system, we need some sort of synthetic parallel data which is obtained through backtranslation (Sennrich et al., 2016a ) since our setting is constraint to monolingual data alone. The system translates from one language to another in the inference mode using greedy decoding and then optimises the discrepancy between the actual and the synthetic translation. The backtranslation is carried out batch by batch for each models in an iterative manner producing an improved synthetic parallel data after each iteration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 382, |
|
"text": "(Sennrich et al., 2016a", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Framework", |
|
"sec_num": "3" |
|
}, |
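As a rough sketch of one such backtranslation step (greedy decoding in inference mode, followed by a supervised update on the resulting synthetic pair), assuming hypothetical `translate` and `train_step` model interfaces that are not from the paper:

```python
def backtranslation_step(model_back, model_fwd, l1_batch):
    """One backtranslation step over a batch of monolingual L1 sentences.

    `model_fwd` translates L1 -> L2 and `model_back` translates L2 -> L1;
    both interfaces (`translate`, `train_step`) are illustrative.
    """
    # 1. Greedy-decode real L1 sentences into synthetic L2 (inference
    #    mode, no gradient updates).
    synthetic_l2 = model_fwd.translate(l1_batch, greedy=True)
    # 2. (synthetic_l2, l1_batch) acts as a synthetic parallel batch:
    #    train the L2 -> L1 model to recover the original L1 sentences.
    return model_back.train_step(src=synthetic_l2, tgt=l1_batch)
```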
|
{ |
|
"text": "Unsupervised Neural Machine Translation: The system first optimises the encoding loss objective of a noisy version of the source language (L1) through denoising autoencoder using a shared transformer based encoder and reconstruction using the (L1) decoder. Subsequently, this reconstructed version of L1 is backtranslated to the target language (L2) using the L2 decoder to create a synthetic parallel data and iteratively optimizes the objective to predict the original sentence from this synthetic data. This iterative process is executed alternatively for L1 and L2. A brief overview of our framework is shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 624, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Framework", |
|
"sec_num": "3" |
|
}, |
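Putting the two objectives together, the overall schedule alternates denoising and backtranslation between the two languages in every iteration, roughly as in the sketch below (reusing the `add_noise` and `backtranslation_step` helpers above; the `models` container and its `denoise`/`direction` methods are assumptions, not the paper's API):

```python
def train_unmt(models, mono_batches, num_iterations=290_000):
    """Alternating UNMT training schedule (illustrative sketch)."""
    for _ in range(num_iterations):
        for lang, other in (("en", "mni"), ("mni", "en")):
            batch = next(mono_batches[lang])  # monolingual sentences
            # Denoising: reconstruct each clean sentence from its noised
            # copy using the shared encoder and the decoder of `lang`.
            noised = [add_noise(sent) for sent in batch]
            models.denoise(lang, src=noised, tgt=batch)
            # Backtranslation: greedy-translate lang -> other, then train
            # the other -> lang direction on the synthetic pairs.
            backtranslation_step(models.direction(other, lang),
                                 models.direction(lang, other), batch)
```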
|
{ |
|
"text": "We perform the translation task for both the en \u2194 mni sides. We use the same data for both tasks and compare our model with XLM (Lample and Conneau, 2019) and Artetxe et al. (2018b) . First, we discuss the dataset description and its preprocessing followed by the baseline and the proposed framework.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 154, |
|
"text": "Conneau, 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 181, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our unsupervised setting exploits the comparable English and Manipuri monolingual data accompanied with parallel data for development and evaluation purpose. Both monolingual data (Singh and Bandyopadhyay, 2010b) are crawled 4 and comparable in news domain. For the validation and testing, we used the Technology Development for Indian Languages (TDIL) 5 dataset. We took 1000 sentences for the development set and 500 sentences for the test set which fall under different domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Description and Preprocessing", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The preprocessing consists of normalization, sentence splitting and tokenization. The mosesdecoder 6 toolkit scripts are used for the English side while IndicNLP 7 library is used for the Manipuri counterpart. Further, the English monolingual data consisted multiple instances of hyphen separated words instead of a single continuous word as illustrated in Table 1 since the corpus was scraped from columnar news format where words are broken into hyphen separated subwords in order to maintain page layout which we removed. Finally,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 364, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Preprocessing", |
|
"sec_num": "4.1.1" |
|
}, |
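A minimal sketch of the hyphen de-splitting described above (rejoining layout artifacts such as "Mani-pur" into "Manipur"); the regex is an assumption about the corpus format, not the authors' exact script, and genuinely hyphenated compounds would need a whitelist in practice:

```python
import re

def remove_layout_hyphens(line: str) -> str:
    """Rejoin words split by columnar layout, e.g. 'Mani-pur' -> 'Manipur'."""
    # Join any word-internal hyphen (optionally followed by whitespace
    # left over from a line break) with the following word part.
    return re.sub(r"(\w)-\s*(\w)", r"\1\2", line)

print(remove_layout_hyphens("Mani-pur Samaj-wadi irrespon-sible"))
# -> "Manipur Samajwadi irresponsible"
```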
|
{ |
|
"text": "HypRem Word Mani-pur Manipur Samaj-wadi Samajwadi irrespon-sible irrespossible a shared vocabulary was learned using fastBPE 8 from the monolingual data using 60,000 operations. The statistics of both the monolingual corpus and the parallel corpus after the preprocessing is given in If we consider the above multiple reference (Reference-1 and Reference-2) instances for an English Source (Source), the word crazy in the Source corresponds to \u09aa\u09be\u09ae\u09c8\u099c (pamjei: love) and \u0999\u09be\u0993\u09c8\u099c (ngou-jei: crazy) in Reference-1 and Reference-2 respectively. Thus, linguistic diversity is evident even in this short sentence and finally we hypothesise that the multiple reference will handle the linguistic diversity up to a certain degree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HypSep Word", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare our proposed model with the XLM (Lample and Conneau, 2019) and Artetxe et al. (2018b) . XLM uses a transformer based shared encoder and a shared decoder with a cross-lingual language model pretraining as the initialization step. While, Artetxe et al. (2018b) uses a GRU based shared encoder and language specific decoder with cross-lingual word embedding mapping as the initial step.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 69, |
|
"text": "(Lample and Conneau, 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 96, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We use the Lample and Conneau (2019) system as our first baseline (UNMT baseline-1 ). For both the cross-lingual language model pretraining and the translation finetuning, the XLM 9 toolkit is used. A Transformer based architecture is used (Vaswani et al., 2017) for the encoder and decoder with 6 layers, 8 multi-head attention units, 1024 hidden units, GELU activation with a dropout (Srivastava et al., 2014) rate of 0.1 accompanied with a positional encoding. In addition to this, Adam optimizer (Kingma and Ba, 2014) for the optimisation, a linear warmup (Vaswani et al., 2017) and varying learning rates from 10 \u22124 to 5.10 \u22124 is used. A stream of 256 tokens is used with a batch size of 32 instead of 64 in (Lample and Conneau, 2019) for the Masked Language Model (MLM) objective. Averaged perplexity over the two languages is used as the stopping criterion for the MLM pretraining. For the unsupervised MT task, all the hyper-parameters are same as that of the MLM with the addition of noise (word shuffle, word dropout, word blank) and 2000 tokens per batch. The stopping criterion is the average of the tokenized BLEU score 10 considering the unsupervised criterion for the two directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 262, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 582, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 713, |
|
"end": 739, |
|
"text": "(Lample and Conneau, 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline-1", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The second baseline (UNMT baseline-2 ) is based on the work done by Artetxe et al. (2018b) . For the sake of comparison, we use the same settings as described in the paper of Artetxe et al. (2018b) and the same cross-lingual embedding mapped vectors from our proposed model (UNMTproposed) in Section 4.3 for the intialization step. Furthermore, the system is a GRU based bi-directional encoder-decoder network with global attention (Luong et al., 2015) with 600 hidden units for each GRU cells and the embeddings of 300 dimensions. For optimization, Adam optimizer (Kingma and Ba, 2014) is used with a learning rate of 0.0002. Additionally, this setting incorporates a single shared encoder and two language specific decoders and we do not use any additional parallel data for parameter tuning purpose. The model is trained for 290,000 iterations with a batch size of 50 and implemented using the codebase 11 of Artetxe et al. (2018b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 90, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 197, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 452, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 934, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline-2", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Our proposed model (UNMT proposed ) operates in twofold. First, it learns a cross-lingual word embedding mapping and then followed by an iterative backtranslation. Cross-lingual mapping: First, we use the monolingual corpora to independently train the embeddings for en and mni separately using fast-Text (Bojanowski et al., 2017) with the skip-gram model having 10 negative samples, a context window of 10 words, 300 dimensional embedding vector, a sub-sampling of 10 \u22125 and 10 training iterations. After getting monolingual embedding for each languages, we map the fastText embeddings into a common embedding space using vecmap 12 (Artetxe et al., 2018a) without using any parallel data. MT Task: After getting the cross-lingual embeddings mapping, we perform the denoising and iterative backtranslation task. For this, we use a transformer based shared encoder and language specific decoders. The transformer (Vaswani et al., 2017 ) model has 5 encoder-decoder layers, 6 attention heads, 300 hidden units. We use GELU activation with a dropout (Srivastava et al., 2014) rate of 0.1 and optimized using Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. The training is conducted for 290,000 iterations. We implement our model using PyTorch 13 by extending the implementation of Artetxe et al. (2018b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 330, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 656, |
|
"text": "(Artetxe et al., 2018a)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 933, |
|
"text": "(Vaswani et al., 2017", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1047, |
|
"end": 1072, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1298, |
|
"end": 1320, |
|
"text": "Artetxe et al. (2018b)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Model Setup", |
|
"sec_num": "4.3" |
|
}, |
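The initialization stage can be sketched as follows, assuming the `fasttext` Python bindings and the vecmap command-line interface; file paths are placeholders, and exporting the trained vectors to word2vec text format for vecmap is left as a comment:

```python
import fasttext

# Monolingual skip-gram embeddings with the settings above: 300
# dimensions, window 10, 10 negative samples, subsampling 1e-5,
# 10 epochs. Corpus paths are placeholders.
for corpus, out in [("mono.en.txt", "en.bin"), ("mono.mni.txt", "mni.bin")]:
    model = fasttext.train_unsupervised(
        corpus, model="skipgram", dim=300, ws=10, neg=10, epoch=10, t=1e-5)
    model.save_model(out)

# After exporting each model's vectors to text format (en.vec, mni.vec),
# the two spaces are mapped into a shared space with vecmap in fully
# unsupervised mode, per the vecmap README:
#   python3 map_embeddings.py --unsupervised en.vec mni.vec \
#       en.mapped.vec mni.mapped.vec
```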
|
{ |
|
"text": "We measured the performance of our systems we used both sentence level and character level similarity between the reference and the hypothesis. Table 3 : BLEU score and the character n-gram F-score (ChrF) of the systems for en-mni and mni-en translation tasks using the multiple references (ref1 and ref2) for the Manipuri side.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 151, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics Used", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For the sentence level we used BLEU (Papineni et al., 2002) and ChrF (Popovi\u0107, 2015) for the character n-gram F-score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 59, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 69, |
|
"end": 84, |
|
"text": "(Popovi\u0107, 2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics Used", |
|
"sec_num": "4.4" |
|
}, |
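For instance, both corpus-level scores can be computed with the sacrebleu package (a sketch with toy strings; sacrebleu expects detokenized hypotheses and one list per reference stream):

```python
import sacrebleu

hyps = ["the water should be stored in clean containers"]
refs = [["water should be kept in clean containers"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)
print(f"BLEU = {bleu.score:.1f}, ChrF = {chrf.score:.1f}")
```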
|
{ |
|
"text": "In this section, we discuss the results and the performance of our proposed setting (UNMT proposed ) in comparison with the baselines (UNMT baseline-1 ) and (UNMT baseline-2 ). The reported BLEU score is calculated upon the de-tokenized text using sacrebleu 14 (Post, 2018) while ChrF is calculated using ChrF toolkit 15 for the en \u2194 mni directions. The scores of the systems is given in Table 3 with the inclusion of additional Manipuri side multiple test references (ref1 and ref2). We find that the two UNMT systems with cross-lingual embeddings (UNMT baseline-2 and UNMT proposed ) as the initialization step yields better BLEU and ChrF scores than the one with the cross-lingual language model pretraining (UNMT baseline-1 ) as the precursor to UNMT for this language pair. Furthermore, a shared decoder setting of the UNMT baseline-1 fails to mitigate a proper parameter sharing between this distant language pairs. On the contrary, the other two UNMT settings with distinct language decoders performs better. Additionally, our transformer based premise (UNMT proposed ) significantly improves over the GRU based stronger baseline (UNMT baseline-2 ) both in terms of BLEU and ChrF scores for the en \u2192 mni and mni \u2192 en directions respectively. The UNMT baseline-1 produced a BLEU score of \u2248 0 for almost every checkpoints so we present in Figure 2 the comparative score for the other two unsupervised settings over 290,000 iterations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 273, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 395, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1344, |
|
"end": 1352, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The machine translation system seldom generates a perfect paraphrase due to the possible valid linguistic variations. However, the n-gram based evaluation metrics like BLEU requires exact word overlaps thus penalizing a valid diverse word form. To cope up with this issue we use an additional Manipuri side multiple test references (ref1 and ref2). In Table 3 , we find that both UNMT baseline-2 and UNMT proposed improves significantly in terms of BLEU score after using the multiple references for en-mni direction. The UNMT baseline-2 improves by +0.9 and +1.3 cumulative BLEU score (ref1+ref2) over the separate scores for ref1 and ref2 respectively. While, the cumulative BLEU score of UNMT proposed improves by +1.1 and +1.4 over the ref1 and ref2 respectively. And finally our approach, UNMT proposed significantly improves by a BLEU score of +0.4 over UNMT baseline-2 with the inclusion of multiple reference.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 359, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of the Multiple Reference", |
|
"sec_num": "5.1" |
|
}, |
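With sacrebleu, the additional reference simply becomes a second reference stream aligned with the hypotheses; each extra stream increases the chance that some valid word form overlaps, which is the effect reported in Table 3 (toy strings below, not the actual test data):

```python
import sacrebleu

hyps = ["he loves the rainy season"]
ref1 = ["he is crazy about the rainy season"]  # first reference stream
ref2 = ["he loves the season of rain"]         # second reference stream

print(sacrebleu.corpus_bleu(hyps, [ref1]).score)        # single reference
print(sacrebleu.corpus_bleu(hyps, [ref1, ref2]).score)  # cumulative ref1+ref2
```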
|
{ |
|
"text": "We present a critical case study of the sample input and output of two randomly selected en test sentences w.r.t the reference \"ref1\" and their respective mni translations by our UNMT baseline-2 and UNMT proposed systems. We do not report the UNMT baseline-1 since it struggled to obtain even a proper word form. Although, we perform a bidirectional translation task, the analysis is focused on the en \u2192 mni part. The en test sentences are selected randomly, one being short and the other a longer one in order to see the effect of the length on translation quality. Furthermore, we select the translated outputs of UNMT baseline-2 and UNMT proposed with their respective best BLEU scoring checkpoint. Additionally, in Table 4 we provide the en transliteration of the mni sentences and their en translations. We assess the quality of our translations under the criterion such as subjectobject-verb 16 agreement, adequacy and fluency. Considering the first test sentence (Input English-1), it is observed that both the systems captured the similar word form of the subject \u09ae\u09bf\u09b2\u09b0\u09df\u09be\u0997\u09c0 \u0987\u09c7\u09a8\u09ab \u09a8 (malarial infection) of the reference while dropping the object \u098f\u09c7\u09a8\u09bf\u09ae\u09df\u09be (anaemia). Although, both the systems do not converge to the actual meaning of the reference sentence, they follow a perfect SOV word agreements thus making fluent translations, albeit with a relatively poor adequacy. Meanwhile, the systems performed comparatively good for a shorter length test sentence, but the systems struggles when the length increases as it is evident in the translations of the Input English-2 sentence. The two systems generated translations with a proper sentential form. Although, both the systems covered the key words such as \u0988\u09b6\u09c0\u0982 (eeshing : water), \u09a5\u09ae\u0997\u09a6\u09ac\u09bf\u09a8 (thamgadabni : should be stored), \u09a4 -\u09a4\u09a8\u09be\u09a8\u09ac\u09be (taru-taananba : clean), \u09a4 -\u09a4 \u09ac\u09be (taru-taruba : clean) and the conjunction \u0985\u09ae\u09b8\u09c1 \u0982 (amashung : and), but failed to generate a corresponding word for \u09aa\u09be \u09b6\u09c0\u0982 (paatrashing : containers). Rather the systems translated extraneous yet related words like \u0999\u09be \u09ab\u09be\u09a8\u09ac\u09be (ngaa faanaba : fishing) and \u0988\u09b6\u09c0\u0982 \u09a5 \u09ac\u09be (eeshing thaknaba : drinking water). The UNMT baseline-2 generates an absurd word \u09aa\u09c7 \u099c (Pertraize) which is highlighted with an underline. This word is highly likely to have been generated by the BPE subwords.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 719, |
|
"end": 726, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sample Input-Output and Qualitative Analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "While we demonstrate here the translation of two samples, but for the test set as a whole we observe that the performance of these two systems are synonymous for a shorter length test sentence. Additionally, we observe that when the sentence length increases our proposed system with transformer produces a more fluent translation and a relatively adequate one (by generating the related words) than the GRU based baseline (UNMT baseline-2 ) as per human evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sample Input-Output and Qualitative Analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this work, we report an unsupervised neural machine translation system on low resource setting for English (en) -Manipuri (mni) language pair using a transformer based shared encoder, language specific decoders and monolingual data only. We observe an improvement in BLEU score (+0.4 for en \u2192 mni and +0.3 for mni \u2192 en) over the stronger baseline (UNMT baseline-2 ) by incorporating a transformer based shared encoder and independent decoders along with a multiple reference scenario. Similarly, the ChrF scores of UNMT proposed surpassed both the baselines by a large margin. Moreover, it is found that the cross lingual embedding mappings is more effective than a cross lingual language model pretraining for this language pair. One of the reason being that the XLM model objective pretrains the encoder and the decoders separately and the decoders are shared between the two languages. This shared decoder may be useful for a similar language pair but for the distant pairs it fails to capture the cross-lingual representations. Besides, the automatic scoring mechanism BLEU fails to mitigate and capture the linguistic inflections of the morphologically richer Manipuri language which is tackled by using multiple references. Mean-while, the translation quality is reasonably fluent and adequate considering the facts that the language pairs are relatively unrelated and the use of a single test reference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, our unsupervised premise made a descent performance considering the relatively smaller monolingual data used hence this work highlights the potential of the unsupervised MT for this distant language pairs. In our future work, we plan to extend this preliminary unsupervised setting by incorporating linguistic features and devise an improved initialization step along with a better use of the synthetic data suitable for this language pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://unicode.org/charts/PDF/U0980.pdf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://unicode.org/charts/PDF/UABC0. pdf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/facebookresearch/ XLM 10 https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/artetxem/undreamt 12 https://github.com/artetxem/vecmap 13 https://pytorch.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/mjpost/sacrebleu 15 https://github.com/m-popovic/chrF", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Manipuri language follows subject-object-verb (SOV) word order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning bilingual word embeddings with (almost) no bilingual data", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "451--462", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1042" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "789--798", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1073" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 789-798, Melbourne, Australia. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Sixth International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In Proceedings of the Sixth Inter- national Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.0473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A statistical approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Cocke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fredrick", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roossin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The math- ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263- 311.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Semisupervised learning for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1965--1974", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1185" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1965-1974, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1179" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. pages 1724- 1734.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.04087" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", |
|
"authors": [ |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "866--875", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine trans- lation with a shared attention mechanism. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Dual learning for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingce", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nenghai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "820--828", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Advances in neural information processing systems, pages 820-828.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", |
|
"authors": [ |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Thorat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Vi\u00e9gas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "339--351", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00065" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Trivial transfer learning for low-resource neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kocmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "244--252", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6325" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "127--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 127-133.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Crosslingual language model pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems (NeurIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Unsupervised machine translation using monolingual corpora only", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Represen- tations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Phrase-based & neural unsupervised machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5039--5049", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1549" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1166" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Unsupervised neural machine translation initialized by unsupervised statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Marie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.12703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Marie and Atsushi Fujita. 2018. Unsuper- vised neural machine translation initialized by un- supervised statistical machine translation. arXiv preprint arXiv:1810.12703.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Transfer learning across low-resource, related languages for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Toan", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "296--301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Trans- fer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 296-301, Taipei, Taiwan. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1075096.1075117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 160-167, Sapporo, Japan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "chrF: character n-gram F-score for automatic MT evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--395", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W15-3049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Deciphering related languages", |
|
"authors": [ |
|
{ |
|
"first": "Nima", |
|
"middle": [], |
|
"last": "Pourdamghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2513--2518", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1266" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nima Pourdamghani and Kevin Knight. 2017. Deci- phering related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2513-2518, Copen- hagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Deciphering foreign language", |
|
"authors": [ |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujith Ravi and Kevin Knight. 2011. Deciphering for- eign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies, pages 12- 21, Portland, Oregon, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Unsupervised neural machine translation with smt as posterior regularization", |
|
"authors": [ |
|
{ |
|
"first": "Shuo", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhirui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "241--248", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 241-248.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "86--96", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Taste of two different flavours: Which Manipuri script works better for English-Manipuri language pair SMT systems?", |
|
"authors": [ |
|
{ |
|
"first": "Thoudam", |
|
"middle": [ |
|
"Doren" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thoudam Doren Singh. 2013. Taste of two differ- ent flavours: Which Manipuri script works better for English-Manipuri language pair SMT systems? In Proceedings of the Seventh Workshop on Syn- tax, Semantics and Structure in Statistical Transla- tion, pages 11-18, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Manipuri-English bidirectional statistical machine translation systems using morphology and dependency relations", |
|
"authors": [ |
|
{ |
|
"first": "Thoudam", |
|
"middle": [ |
|
"Doren" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sivaji", |
|
"middle": [], |
|
"last": "Bandyopadhyay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thoudam Doren Singh and Sivaji Bandyopadhyay. 2010a. Manipuri-English bidirectional statistical machine translation systems using morphology and dependency relations. In Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation, pages 83-91, Beijing, China. Coling 2010 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Web based Manipuri corpus for multiword NER and reduplicated MWEs identification using SVM", |
|
"authors": [ |
|
{ |
|
"first": "Thoudam", |
|
"middle": [ |
|
"Doren" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sivaji", |
|
"middle": [], |
|
"last": "Bandyopadhyay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 1st Workshop on South and Southeast Asian Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thoudam Doren Singh and Sivaji Bandyopadhyay. 2010b. Web based Manipuri corpus for multiword NER and reduplicated MWEs identification using SVM. In Proceedings of the 1st Workshop on South and Southeast Asian Natural Language Processing, pages 35-42, Beijing, China. Coling 2010 Organiz- ing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Integration of reduplicated multiword expressions and named entities in a phrase based statistical machine translation system", |
|
"authors": [ |
|
{ |
|
"first": "Thoudam", |
|
"middle": [ |
|
"Doren" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sivaji", |
|
"middle": [], |
|
"last": "Bandyopadhyay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1304--1312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thoudam Doren Singh and Sivaji Bandyopadhyay. 2011. Integration of reduplicated multiword expres- sions and named entities in a phrase based statisti- cal machine translation system. In Proceedings of 5th International Joint Conference on Natural Lan- guage Processing, pages 1304-1312, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Mass: Masked sequence to sequence pre-training for language generation", |
|
"authors": [ |
|
{ |
|
"first": "Kaitao", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5926--5936", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. Mass: Masked sequence to se- quence pre-training for language generation. In In- ternational Conference on Machine Learning, pages 5926-5936.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems 27", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27, pages 3104-3112. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Extracting and composing robust features with denoising autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1096--1103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- coders. In Proceedings of the 25th international con- ference on Machine learning, pages 1096-1103.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Transfer learning for low-resource neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Barret", |
|
"middle": [], |
|
"last": "Zoph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1568--1575", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1163" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td colspan=\"2\">Corpus</td><td>Sentences</td><td>Tokens</td></tr><tr><td>Mono</td><td>en mni</td><td>378,693 132,071</td><td>10,172,299 3,509,945</td></tr><tr><td>Dev</td><td>en mni</td><td>1,000 1,000</td><td>29,801 31,109</td></tr><tr><td>Test</td><td>en mni</td><td>500 500</td><td>13,131 13,575</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td colspan=\"2\">4.1.2 Manipuri Side Multiple Reference</td></tr><tr><td colspan=\"2\">In addition to the original parallel data for the Ma-</td></tr><tr><td colspan=\"2\">nipuri side test reference, we also include another</td></tr><tr><td colspan=\"2\">test reference to handle linguistic diversity. Any</td></tr><tr><td colspan=\"2\">manual translation is bound to have variations with</td></tr><tr><td colspan=\"2\">the translation of other translator at lexical, seman-</td></tr><tr><td colspan=\"2\">tic and syntactic level. And, considering the fact</td></tr><tr><td colspan=\"2\">that most machine translation systems are evalu-</td></tr><tr><td colspan=\"2\">ated via one of the string matching methods which</td></tr><tr><td colspan=\"2\">penalizes the paraphrasing. Hence to maximize</td></tr><tr><td colspan=\"2\">the string overlapping, we include an additional</td></tr><tr><td colspan=\"2\">Manipuri side test reference.</td></tr><tr><td>Source:</td><td>I am crazy about Thai, Mughlai</td></tr><tr><td/><td>and Bengali food.</td></tr><tr><td colspan=\"2\">Reference-1: Transliteration: ai-na thai mughlai ama-sung</td></tr><tr><td/><td>bengali chinjak yaamnaa pam-</td></tr><tr><td/><td>jei.</td></tr><tr><td>English-</td><td>I extremely love Thai, Mughlai</td></tr><tr><td>Translation:</td><td>and Bengali food.</td></tr><tr><td colspan=\"2\">Reference-2: \u0990 \u09a5\u09be\u0987 \u09ae\u09c1 \u0998\u09b2\u09be\u0987 \u0985\u09ae\u09bf\u09a6 \u09ac \u09bf\u09b2 \u09bf\u099a \u09be\u0995\u09bf\u09b6\u0982\u09a6\u09be \u09df\u09be \u09be</td></tr><tr><td/><td>\u0999\u09be\u0993\u09c8\u099c\u0964</td></tr><tr><td colspan=\"2\">Transliteration: ai thai mughlai amadi bengali</td></tr><tr><td/><td>chinjak-shing-da yaamnaa ngou-</td></tr><tr><td/><td>jei.</td></tr><tr><td>English-</td><td>I am very crazy about Thai,</td></tr><tr><td>Translation:</td><td>Mughlai and Bengali food.</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Statistics of the Monolingual and the Development Corpora for the English (en) and Manipuri (mni). \u0990\u09a8\u09be \u09a5\u09be\u0987 \u09ae\u09c1 \u0998\u09b2\u09be\u0987 \u0985\u09ae\u09b8\u09c1 \u0982 \u09ac \u09bf\u09b2 \u09bf\u099a \u09be\u0995 \u09df\u09be \u09be \u09aa\u09be\u09ae\u09c8\u099c\u0964 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ." |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td/><td>1.75 2.00</td><td colspan=\"2\">BLEU vs Iterations for en-mni UNMTbaseline-2 UNMTproposed</td><td/><td/><td>2.5</td><td>BLEU vs Iterations for mni-en UNMTbaseline-2 UNMTproposed</td></tr><tr><td/><td>1.50</td><td/><td/><td/><td/><td>2.0</td></tr><tr><td>BLEU Score</td><td>0.75 1.00 1.25</td><td/><td/><td/><td>BLEU Score</td><td>1.0 1.5</td></tr><tr><td/><td>0.50</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0.25</td><td/><td/><td/><td/><td>0.5</td></tr><tr><td/><td>0</td><td colspan=\"3\">50000 100000 150000 200000 250000 300000 Iterations</td><td/><td>0</td><td>50000 100000 150000 200000 250000 300000 Iterations</td></tr><tr><td colspan=\"3\">Figure 2: Systems</td><td>Directions</td><td colspan=\"4\">BLEU ref1 ref2 ref1+ref2 ref1</td><td>ChrF ref2</td></tr><tr><td/><td/><td>UNMT baseline-1</td><td>en-mni mni-en</td><td>\u2248 0 \u2248 0</td><td colspan=\"2\">\u2248 0 \u2248 0</td><td>\u2248 0 \u2248 0</td><td>13.5531 13.0772 14.0372 -</td></tr><tr><td/><td/><td>UNMT baseline-2</td><td>en-mni mni-en</td><td>1.8 2.4</td><td>1.4 -</td><td/><td>2.7 -</td><td>19.51 22.27</td><td>19.30 -</td></tr><tr><td/><td/><td>UNMT proposed</td><td>en-mni mni-en</td><td>2.0 2.7</td><td>1.7 -</td><td/><td>3.1 -</td><td>21.21 24.89</td><td>20.73 -</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "BLEU vs Iterations graph: The left side graph shows the BLEU vs Iterations for the en \u2192 mni direction considering the reference \"ref1\" only. Whereas, the right side is for the mni \u2192 en direction. The blue line represents the UNMT baseline-2 model while the orange line is for the UNMT proposed model." |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Sample Input and Output for the two test sentences for the reference \"ref1\"." |
|
} |
|
} |
|
} |
|
} |