|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:38.660220Z" |
|
}, |
|
"title": "POSTECH Submission on Duolingo Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Junsu", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hongseok", |
|
"middle": [], |
|
"last": "Kwon", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "hkwon@postech.ac.kr" |
|
}, |
|
{ |
|
"first": "Jong-Hyeok", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Pohang University of Science and Technology (POSTECH)", |
|
"location": { |
|
"country": "Republic of Korea" |
|
} |
|
}, |
|
"email": "jhlee@postech.ac.kr" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes POSTECH's submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Langauge Education (STAPLE) for the English-Korean language pair. In this paper, we propose a transfer learning based simultaneous translation model by extending BART. We pretrained BART with Korean Wikipedia and a Korean news dataset, and fine-tuned it with an additional web-crawled parallel corpus and the 2020 Duolingo official training dataset. In our experiments on the 2020 Duolingo test dataset, our submission achieves 0.312 in weighted macro F1 score, and ranks second among the submitted En-Ko systems.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes POSTECH's submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Langauge Education (STAPLE) for the English-Korean language pair. In this paper, we propose a transfer learning based simultaneous translation model by extending BART. We pretrained BART with Korean Wikipedia and a Korean news dataset, and fine-tuned it with an additional web-crawled parallel corpus and the 2020 Duolingo official training dataset. In our experiments on the 2020 Duolingo test dataset, our submission achieves 0.312 in weighted macro F1 score, and ranks second among the submitted En-Ko systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Simultaneous Translation And Paraphrase for Language Education (STAPLE) is the task of automatically producing multiple translations from a single source sentence (Mayhew et al., 2020) . Because STAPLE can be regarded as a mixture of the machine translation (MT) and paraphrasing problem, MT and paraphrasing techniques play an important role in this task. Unlike in a typical MT task, systems are demanded to generate high-coverage sets on a sentence-level, as opposed to word-level. Subsequently, systems require a deeper linguistic understanding of the target language to generate accurate target sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 184, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent NLP studies have alleviated this problem by transfer learning (Ventura and Warnick, 2007) from pre-trained language models. Radford et al. (2018) proposed a generative pre-trained language model (GPT), which trains a Transformer decoder with large-scale monolingual data, to achieve significantly improved performance in nine out of the twelve datasets. Despite these improvements, GPT shows a limited ability to model bidirectional context due to using the classical generative model-ing approach. On the other hand, Devlin et al. (2018) proposed bidirectional encoder representations from Transformers (BERT), trained for the reconstruction of natural language from sentences containing masked tokens, in order to obtain deeper representations for natural language. By training on an enormous amount of training data, they achieved state-of-the-art results on eleven NLP tasks. To take advantage of both pre-trained generative models and pre-trained bidirectional encoders, Lewis et al. (2019) introduced a denoising autoencoder for pre-training sequence-to-sequence models called BART. BART aims to learn linguistic knowledge in the process of first corrupting the text using various noise functions and then restoring it, and showed state-of-the-art performance in various tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 96, |
|
"text": "(Ventura and Warnick, 2007)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 152, |
|
"text": "Radford et al. (2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 545, |
|
"text": "Devlin et al. (2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 983, |
|
"end": 1002, |
|
"text": "Lewis et al. (2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given this background, we expected that using a transfer-learning-based approach could resolve two difficulties of the En-Ko track of STAPLE: data insufficiency and multiple sentence generation. Unlike recent MT models which used over 4.5 million sentence pair for training data, the STAPLE official dataset includes only 2500 En-Ko source sentences. With such small data, we predicted that recent NMT models would not be able to learn translation knowledge effectively. Also, we speculated that paraphrasing requires a deep understanding of the language. Based on this prediction, a welltrained language model and a generative model for target language were needed to achieve this task's objectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With these considerations, we concluded that BART, a sequence-to-sequence generative model pre-trained on a large amount of data, is most suitable for STAPLE and thus propose a transferlearning-based simultaneous translation model by extending BART. Our model added a randomly initialized source-side encoder in place of the embedding layer of BART pre-trained by Korean monolingual data and predicts translation weights with an additional feed-forward network using hidden vectors generated by the pre-trained decoder. The remainder of the paper is organized as follows: Section 2 describes our proposed method. Section 3 summarizes the experimental procedure and results, and Section 4 gives the conclusion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We adopt BART to the STAPLE problem, which takes source sentence to generate multiple target sentences. Our model consists of a pre-trained autoencoder with the source-side encoder that proposed in Lewis et al. (2019) and a feed-forward network to predict translation weights ( Figure 1 ). In the following subsections, we describe our methods in detail.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 217, |
|
"text": "Lewis et al. (2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We used BART as our pre-trained autoencoder structure. As was with BART, our autoencoder structure learns linguistic information of the target language by denoising various types of document corruptions. Among the five document corruption types proposed by BART, we applied Text Infilling and Sentence Permutation because they yielded the best results on Lewis et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 374, |
|
"text": "Lewis et al. (2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained autoencoder (BART)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Pre-trained BART is a monolingual model, so the proposed model needs an additional encoder to function as translation model. After pre-training BART, we removed the embedding layer of the pretrained encoder and added a randomly-initialized encoder instead (Lewis et al., 2019) . In order to prevent corruption from the high loss in the randomly- initialized encoder during initial training, we freeze all pre-trained BART weights during the first finetuning step except for the self-attention input projection matrix of BART's first encoder layer. In the second step, we train all model parameters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 276, |
|
"text": "(Lewis et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-side Encoder", |
|
"sec_num": "2.2" |
|
}, |
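
{

"text": "A minimal PyTorch sketch of this scheme is given below. It is ours, not the authors' released code: it assumes a fairseq-style BART object whose encoder layers expose q/k/v projection modules, and the names SourceSideEncoder and freeze_for_first_stage are illustrative.\n\nimport torch.nn as nn\n\nclass SourceSideEncoder(nn.Module):\n    # A small Transformer encoder that maps source tokens to vectors of the\n    # same width as BART's hidden states, replacing the original embedding layer.\n    def __init__(self, src_vocab_size, d_model=768, n_layers=2, n_heads=12):\n        super().__init__()\n        self.embed = nn.Embedding(src_vocab_size, d_model)\n        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)\n        self.layers = nn.TransformerEncoder(layer, n_layers)\n\n    def forward(self, src_tokens):\n        # (batch, src_len) -> (batch, src_len, d_model)\n        return self.layers(self.embed(src_tokens))\n\ndef freeze_for_first_stage(bart, source_encoder):\n    # First fine-tuning step: train only the new source-side encoder plus the\n    # self-attention input projections of BART's first encoder layer.\n    for p in bart.parameters():\n        p.requires_grad = False\n    for p in source_encoder.parameters():\n        p.requires_grad = True\n    first_layer = bart.encoder.layers[0]\n    for proj in (first_layer.self_attn.q_proj, first_layer.self_attn.k_proj, first_layer.self_attn.v_proj):\n        for p in proj.parameters():\n            p.requires_grad = True\n\nIn the second step, all parameters are simply unfrozen (requires_grad set to True everywhere) and training continues with a smaller learning rate.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Source-side Encoder",

"sec_num": "2.2"

},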
|
{ |
|
"text": "We added a feed-forward network to predict a translation weight on each generated sentence. The sum of hidden vectors which generated on the decoder is passed as the input of the feed-forward network. The output of the feed-forward network passed through a sigmoid layer becomes the final translation weight. During the generation step, the sentences with the high weights are selected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feed-forward network for translation weight training", |
|
"sec_num": "2.3" |
|
}, |
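
{

"text": "A minimal sketch of this weight head (our illustration, not the authors' code; the class name TranslationWeightHead and the hidden size are assumptions):\n\nimport torch\nimport torch.nn as nn\n\nclass TranslationWeightHead(nn.Module):\n    def __init__(self, d_model=768, d_hidden=512):\n        super().__init__()\n        self.ffn = nn.Sequential(\n            nn.Linear(d_model, d_hidden),\n            nn.ReLU(),\n            nn.Linear(d_hidden, 1),\n        )\n\n    def forward(self, decoder_hidden):\n        # decoder_hidden: (batch, tgt_len, d_model) for one generated sentence.\n        pooled = decoder_hidden.sum(dim=1)                   # (batch, d_model)\n        return torch.sigmoid(self.ffn(pooled)).squeeze(-1)   # weights in (0, 1)\n\nThe head is trained against the official per-sentence weights while the rest of the model is kept frozen, as described in the weight-training step of Section 3.2.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feed-forward network for translation weight training",

"sec_num": "2.3"

},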
|
{ |
|
"text": "3 Experiments", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feed-forward network for translation weight training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Pre-training. For pre-training, we use text crawled from the Korean Wikipedia (5.8M words) and Korean online news sites (447M words). When crawling, we extracted only text passages and ignored headers, lists, and tables. To reduce training time, we filtered out any samples that exceed 100 tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Fine-tuning. For fine-tuning, we used the STA-PLE official training data (Duolingo, 2020) (700K sentences), setting aside 100 sentences each for the development set and test set. In addition, we adopted the web crawling parallel corpus (2M sentences) as additional training and development data for the source-side encoder. As with the pretraining corpus, we filtered out any training or development samples longer than 100 tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.1" |
|
}, |
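
{

"text": "The length filtering applied to both corpora can be sketched as follows; tokenize is a placeholder for the subword tokenizer of Section 3.2, and the function names are ours.\n\ndef filter_monolingual(sentences, tokenize, max_tokens=100):\n    # Keep only sentences whose tokenized length is at most max_tokens.\n    return [s for s in sentences if len(tokenize(s)) <= max_tokens]\n\ndef filter_parallel(pairs, tokenize, max_tokens=100):\n    # For parallel data, both the source and the target side must pass the check.\n    return [\n        (src, tgt)\n        for src, tgt in pairs\n        if len(tokenize(src)) <= max_tokens and len(tokenize(tgt)) <= max_tokens\n    ]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "3.1"

},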
|
{ |
|
"text": "Settings. We modified the Fairseq (Ott et al., 2019) implementation of BART to build our model. Most hyperparameters of BART pre-training such as dropout ratio, hidden size, and etc. were copied from the base model described in Lewis et al. (2019). For the document corruption scheme, we used the pre-training options of Lewis et al. (2019) : Text Infilling and Sentence Shuffling. We set warmup learning steps to 10K out of 250K total steps. For data preprocessing, we applied the sentencepiece (Kudo and Richardson, 2018) implementation of byte-pair encoding (Sennrich et al., 2016) with a 32k vocabulary on each language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 52, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 340, |
|
"text": "Lewis et al. (2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 523, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 584, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
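
{

"text": "A sketch of this preprocessing step, assuming the Python API of the sentencepiece package; the file paths and model prefix are placeholders, not the actual corpus files used for the submission.\n\nimport sentencepiece as spm\n\n# Train a 32k byte-pair-encoding model on the Korean side (one sentence per line);\n# the English side is handled the same way with its own corpus and vocabulary.\nspm.SentencePieceTrainer.train(\n    input='corpus.ko.txt',\n    model_prefix='spm_ko_bpe',\n    vocab_size=32000,\n    model_type='bpe',\n)\n\nsp = spm.SentencePieceProcessor(model_file='spm_ko_bpe.model')\npieces = sp.encode('\uc81c \uc774\ub984\uc740 \ub9c8\ud06c\uc608\uc694.', out_type=str)  # list of subword strings",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Details",

"sec_num": "3.2"

},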
|
{ |
|
"text": "Pre-training. We trained target-side BART using Text Infilling and Sentence Shuffling as described in \u00a72.1. We replaced 30% of tokens with single [MASK] symbols with span length distribution (\u03bb = 3) on Text Infilling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
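
{

"text": "A simplified sketch of these two corruptions (our illustration; the actual fairseq noising differs in details, e.g. it also allows zero-length spans and operates on token indices rather than strings):\n\nimport random\nimport numpy as np\n\nMASK = '<mask>'\n\ndef shuffle_sentences(sentences):\n    # Sentence Shuffling: permute the sentences of a document in random order.\n    shuffled = list(sentences)\n    random.shuffle(shuffled)\n    return shuffled\n\ndef text_infilling(tokens, mask_ratio=0.3, lam=3.0):\n    # Text Infilling: replace sampled spans with a single mask token until\n    # roughly mask_ratio of the tokens have been masked; span lengths are\n    # drawn from a Poisson distribution with mean lam.\n    tokens = list(tokens)\n    budget = int(len(tokens) * mask_ratio)\n    while budget > 0 and len(tokens) > 1:\n        span = min(max(int(np.random.poisson(lam)), 1), budget)\n        start = random.randrange(0, len(tokens) - span + 1)\n        tokens[start:start + span] = [MASK]\n        budget -= span\n    return tokens",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Details",

"sec_num": "3.2"

},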
|
{ |
|
"text": "Fine-tuning. We divided fine-tuning step into four steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. Pre-train source-side encoder After pretraining, we detached the embedding layer of BART encoder and attached a randomly initialized encoder as described in \u00a72.2. We used only our web crawling parallel corpus for this step. During this step, we freeze the pretrained model except the first encoder layer's projection weights to prevent the pre-trained weights being affected by the high loss while the encoder learns the source-side representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. Fine-tuning on MT After pre-training the source-side encoder, we trained entire model on the same training data with a smaller learning rate. Because the size of the parallel data used for fine-tuning is much smaller than that of monolingual data used for pre-training, we expected pre-trained BART to generate the correct sentences even if the source-side encoder produced an incorrect expression.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. Fine-tuning on paraphrasing After training on an additional parallel corpus, we trained the entire model on the official parallel corpus to reach the paraphrasing goal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "After learning all sentence representations, we trained a feed-forward network for translation weight prediction on the official target language weights. In order to train translation weights without corrupting the sentence generation model, we freeze all parts of the model excluding the feed forward network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight training", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Experiment variations. We conduct multiple experiments on test set divided from official training set to determine the best generation strategy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight training", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 Beam search with different beam size. We selected all generated sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight training", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 Diverse beam search with different beam size and group size. We used the implementation of Vijayakumar et al. (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 118, |
|
"text": "Vijayakumar et al. (2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight training", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 Beam search w/ weight with same beam size but different size of sentences selected by highest translation weight. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight training", |
|
"sec_num": "4." |
|
}, |
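
{

"text": "A minimal sketch of the third strategy (names are ours): candidates come from ordinary beam search, each candidate is scored by the translation-weight head of Section 2.3, and only the top-k candidates by predicted weight are kept.\n\ndef select_by_weight(candidates, weight_fn, k):\n    # candidates: generated target sentences for one source sentence\n    # weight_fn:  callable mapping a sentence to its predicted weight in (0, 1)\n    scored = [(weight_fn(sentence), sentence) for sentence in candidates]\n    scored.sort(key=lambda pair: pair[0], reverse=True)\n    return [sentence for _, sentence in scored[:k]]\n\nFor the plain beam-search strategy all candidates are kept (k equals the beam size), and for diverse beam search the candidate list comes from grouped decoding instead.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weight training",

"sec_num": "4."

},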
|
{ |
|
"text": "We trained the model as described in \u00a73.2 using various generation strategies. For evaluation, we used weighted macro F1 scores on our test set extracted from the 2020 Duolingo official dataset. Table 2 shows the scores of each generation strategy. In the case of beam size, results showed the highest weighted macro F1 score when the beam size was 75. We speculate this to be because of the trade-off between weighted recall and precision. Using diverse beam search with beam size 100 and beam search with translation weight showed ineffective results. We initially expected to attain a higher precision with similar weighted recall if the translation weights were predicted accurately, but it seems our feed-forward network was not able to learn the distribution of translation weights properly. Also, we had expected diverse beam decoding to help generate more diverse sentences, but it had an adverse effect on overall performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 202, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Submission results. The submission results on the official test set are reported in Table 3 . We selected the decoding option obtained by applying beam search with beam size 75, Nbest 75 which showed the highest weighted macro F1 score in Table 2 as our final submission. Our submission achieves an improvement of +0.263 in weighted macro F1 score compared to the baseline. As a result, our system ranks second out of the four systems submitted this year.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 91, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 246, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this paper, we present POSTECH's submissions to the 2020 Duolingo shared task. We propose a transfer-learning based simultaneous translation model by extending BART. The proposed model is first pre-trained by reconstructing large corrupted text using text infilling and sentence shuffling, and then fine-tuned with an additional parallel corpus and the official training dataset with a newly added randomly initialized encoder in place of the embedding layer. It has an additional feed-forward network to predict translation weight trained on the official dataset. Finally, our model outperforms the baseline by a large margin and ranks second out of the submitted systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Data for the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE)", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Duolingo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.7910/DVN/38OJR6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duolingo. 2020. Data for the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{

"first": "Naman",

"middle": [],

"last": "Goyal",

"suffix": ""

},

{

"first": "Marjan",

"middle": [],

"last": "Ghazvininejad",

"suffix": ""

},

{

"first": "Abdelrahman",

"middle": [],

"last": "Mohamed",

"suffix": ""

},
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ves", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.13461" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Simultaneous translation and paraphrase for language education", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Bicknell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Brust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Mcdowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Mayhew, K. Bicknell, C. Brust, B. McDowell, W. Monroe, and B. Settles. 2020. Simultaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Gen- eration and Translation (WNGT). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A theoretical foundation for inductive transfer", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Ventura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Warnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Brigham Young University, College of Physical and Mathematical Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Ventura and Sean Warnick. 2007. A theoretical foundation for inductive transfer. Brigham Young University, College of Physical and Mathematical Sciences.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Diverse beam search: Decoding diverse solutions from neural sequence models", |
|
"authors": [ |
|
{

"first": "Ashwin",

"middle": [
"K"
],

"last": "Vijayakumar",

"suffix": ""

},

{

"first": "Michael",

"middle": [],

"last": "Cogswell",

"suffix": ""

},

{

"first": "Ramprasath",

"middle": [
"R"
],

"last": "Selvaraju",

"suffix": ""

},

{

"first": "Qing",

"middle": [],

"last": "Sun",

"suffix": ""

},

{

"first": "Stefan",

"middle": [],

"last": "Lee",

"suffix": ""

},

{

"first": "David",

"middle": [],

"last": "Crandall",

"suffix": ""

},

{

"first": "Dhruv",

"middle": [],

"last": "Batra",

"suffix": ""

}
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ram- prasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural se- quence models.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The overall architecture of the proposed model. The input vectors of feed-forward network are the sum of the pre-trained decoder's hidden vectors.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "Dataset statistics -number of target sentence and word.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Results of training variants -each separated section corresponds to a different generation strategy (Beam search, Diverse beam search and Beam search with weight). Diverse is the number of group for diverse beam search and Nbest (weight) is the number of sentences selected by highest translation weight. The bold values indicate the best result in the metrics for each architecture.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Submission results -the official results of 2020 Duolingo shared task in En-Ko language pair. The bold values indicate the best result in the metrics for the each architecture.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |