|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:41.402971Z" |
|
}, |
|
"title": "Training and Inference Methods for High-Coverage Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"settlement": "Pittsburgh PA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"settlement": "Pittsburgh PA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Mayuranath", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "U.S.A. {myang2", |
|
"location": { |
|
"postCode": "yixinl2" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we introduce a system built for the Duolingo Simultaneous Translation And Paraphrase for Language Education (STA-PLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020). We participated in the English-to-Japanese track with a Transformer model pretrained on the JParaCrawl corpus and finetuned in two steps on the JESC corpus and then the (smaller) Duolingo training corpus. First, during training, we find it is essential to deliberately expose the model to higher-quality translations more often during training for optimal translation performance. For inference, encouraging a small amount of diversity with Diverse Beam Search to improve translation coverage yielded marginal improvement over regular Beam Search. Finally, using an auxiliary filtering model to filter out unlikely candidates from Beam Search improves performance further. We achieve a weighted F1 score of 27.56% on our own test set, outperforming the STAPLE AWS translations baseline score of 4.31%.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we introduce a system built for the Duolingo Simultaneous Translation And Paraphrase for Language Education (STA-PLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020). We participated in the English-to-Japanese track with a Transformer model pretrained on the JParaCrawl corpus and finetuned in two steps on the JESC corpus and then the (smaller) Duolingo training corpus. First, during training, we find it is essential to deliberately expose the model to higher-quality translations more often during training for optimal translation performance. For inference, encouraging a small amount of diversity with Diverse Beam Search to improve translation coverage yielded marginal improvement over regular Beam Search. Finally, using an auxiliary filtering model to filter out unlikely candidates from Beam Search improves performance further. We achieve a weighted F1 score of 27.56% on our own test set, outperforming the STAPLE AWS translations baseline score of 4.31%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Currently, state of the art machine translation systems generally produce a single output translation. However, human evaluators of translation tasks will often accept multiple translations as correct. We introduce a neural machine translation (NMT) system that generates high-coverage translation sets for a single given prompt in the source language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our system was prepared for the English-to-Japanese track 1 of the Duolingo Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task (Mayhew et al., 2020) at the 4th Workshop on Neural Generation and Translation (WNGT 2020). The shared task datasets consist of English prompts and a weighted set of target language translations for each prompt. The task requires systems to produce translation sets for given English prompts that are evaluated on weighted F1 score, defined in Appendix A. We have made our code publicly available. 2 We experimented with models trained and finetuned on the provided Duolingo English-Japanese prompt-translation data (Mayhew et al., 2020) , the JParaCrawl web-crawled corpus (Morishita et al., 2019) , as well as the Japanese-English Subtitle Corpus (JESC) (Pryzant et al., 2018) . The sizes of each dataset are summarized in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 181, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 559, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 697, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 758, |
|
"text": "(Morishita et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 816, |
|
"end": 838, |
|
"text": "(Pryzant et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 885, |
|
"end": 892, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our system uses a Transformer-based (Vaswani et al., 2017) NMT model and we began with weights pretrained on the large JParaCrawl corpus (Morishita et al., 2019) . Section 4 describes in detail how the model was pretrained. Our system's NMT model was then obtained by fine-tuning first on the Japanese-English Subtitle Corpus (JESC) (Pryzant et al., 2018) before further fine-tuning on the Duolingo training set (Mayhew et al., 2020) . We outline these datasets in more detail in Section 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 58, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 161, |
|
"text": "(Morishita et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 355, |
|
"text": "(Pryzant et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 433, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given the small size of the Duolingo data, this multi-step fine-tuning helped the model generalize and outperformed single-step fine-tuning and no fine-tuning. High-coverage translation bitext data is not easy to mine or create, so we expect that in other settings, the size of such available training data will also be small. Therefore, it is very likely that adopting a multi-step fine-tuning method may be advantageous more generally. The fine-tuning procedure is described in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Outputting the entire beam of candidates from 150-width Beam Search, scored on per token log likelihood, this two-step fine-tuned system produced the translations that we submitted to the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "English Sentences Japanese Sentences JParaCrawl 8,763,995 8,763,995 JESC 2,801,388 2,801,388 Duolingo 2,500 855,940 Table 1 : Number of sentence-pairs in the datasets (Duolingo pairs have a one-to-many correspondence)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 135, |
|
"text": "Japanese Sentences JParaCrawl 8,763,995 8,763,995 JESC 2,801,388 2,801,388 Duolingo 2,500 855,940 Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "shared task leaderboard. It achieved 25.69 % weighted F1 score on the shared task blind development set and 26.0% on the blind test set. After the leaderboard closed, we conducted further experiments and discovered several notable optimizations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The most effective optimization was using the ground truth weights that indicate variations in translation quality during training. We find that it is essential to deliberately expose the model to higher-quality translations more often during training. Otherwise, overexposure to low-quality translations harms the model's translation performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Secondly, Diverse Beam Search with a very small penalty outperformed Beam Search. However, too much diversity begins to introduce minor semantic shifts that deviate from correct translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also explored introducing an auxiliary filtering model for post-processing candidates. Our proposed filtering model is able to refine the candidates generated by the NMT model, which improved the system's performance with respect to the weighted F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We share our results in Section 7. Our best result was a weighted F1 score of 27.56% on our own test set of 200 prompts randomly selected from the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Duolingo provided training, development and test sets (Mayhew et al., 2020) . However, the development and test datasets were 'blind' and did not contain ground truth translations, so we did not use these for training or development.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Duolingo High-coverage Translations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The training set consists of 2,500 English prompts, each of which are paired with a variable number of Japanese translations (Table 1) . Duolingo provides weights for each translation, which can be interpreted as a quality score. For our experiments, we randomly split the the 2,500 prompts into 2,100, 200 and 200-prompt training, development and test sets respectively. For the shared task submission, we retrained a model over all 2,500 prompts with our best hyperparameters.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 134, |
|
"text": "(Table 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Duolingo High-coverage Translations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As our base model, we use a model pre-trained on the JParaCrawl corpus (Morishita et al., 2019) . This corpus contains over 8.7 million sentence pairs which were crawled from the web and then automatically aligned, similar to European corpora in the ParaCrawl project 3 . Though noisy due to an imperfect alignment method, this is currently the largest publicly-available English-Japanese bitext corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 95, |
|
"text": "(Morishita et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JParaCrawl", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The Japanese-English Subtitle Corpus (JESC) (Pryzant et al., 2018) , is a large parallel training corpus that contains 2.8 million pairs of TV and movie subtitles. With an average length of 8, the corpus mostly consists of short sentences, which is similar to the data present in the Duolingo training corpus. Even though JESC contains some noise, it captures sufficient information that is useful for downstream NMT tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 66, |
|
"text": "(Pryzant et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese-English Subtitle Corpus", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Machine Translation Machine translation (MT) involves finding a target sentence y = y 1 , ...y m with the maximum probability conditioned on a source sentence x = x 1 , ...x n , i.e argmax y P (y|x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are various neural approaches to tackle machine translation. These include utilizing recurrent neural networks (Cho et al., 2014b) , convolutional neural networks (Kalchbrenner et al., 2016) , attention-based models (Luong et al., 2014; Bahdanau et al., 2015) and transformer networks (Vaswani et al., 2017) . Sequence to sequence models deal with the task of mapping an input sequence to an output sequence. These were first introduced by Sutskever et al. (2014) and typically use an RNN 121 based encoder-decoder architecture, where the encoder outputs a fixed length representation of the input which is fed into the decoder to get a target translation. RNN and LSTM based approaches struggle to handle long sequences and long-range dependencies since the encoder network is tasked with encoding all relevant information in a fixedlength hidden state vector. Bahdanau et al. (2015) overcome this by utilizing attention, an alignment model that can attend to important parts of the input during translation. Luong et al. (2014) used the attention mechanism to great effect, observing gains of 5.0 BLEU over non-attention based techniques for NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "(Cho et al., 2014b)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 196, |
|
"text": "(Kalchbrenner et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 242, |
|
"text": "(Luong et al., 2014;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 265, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 313, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 890, |
|
"text": "Bahdanau et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1035, |
|
"text": "Luong et al. (2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The Transformer Architecture For our experiments, we used the the Transformer architecture proposed by Vaswani et al. (2017) . It is a selfattention based model that produces superior results for machine translation tasks compared to CNN and LSTM based models. By stacking multiple layers of multi-head self-attention blocks, they demonstrate that the attention mechanism by itself is very powerful for sequence encoding and decoding. Recently, Transformer-based models that are pre-trained on large-scale datasets have produced superior performance on various Natural Language Processing (NLP) tasks (Rajpurkar et al., 2016; Talmor and Berant, 2019; Mayhew et al., 2019) . In Section 4 we further describe the transformer architecture and our pretraining procedure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 124, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 601, |
|
"end": 625, |
|
"text": "(Rajpurkar et al., 2016;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 650, |
|
"text": "Talmor and Berant, 2019;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 671, |
|
"text": "Mayhew et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Domain Adaptation Domain adaptation involves making use of out-of-domain data in situations where high quality in-domain data are scarce. This fine tuning approach has been shown to be effective for NMT (Luong and Manning, 2015; Sennrich et al., 2015; Freitag and Al-Onaizan, 2016) . Morishita et al. (2019) show that pre-training with JParaCrawl vastly improves in-domain performance for English-Japanese translations. We make use of these ideas in our multi-step fine-tuning experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 228, |
|
"text": "(Luong and Manning, 2015;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 251, |
|
"text": "Sennrich et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 281, |
|
"text": "Freitag and Al-Onaizan, 2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 307, |
|
"text": "Morishita et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Inference with Beam Search Beam Search is an approximate search algorithm used for finding high likelihood sequences from sequential decoders. At every time step, the top k outputs are traversed and the rest are discarded. A common issue with beam search is that it generates similar outputs that only differ by a few words or minor morphological variations (Li and Jurafsky, 2016) . Vijayakumar et al. (2016) propose Diverse Beam Search, a method that reduces redundancy during decoding in NMT models to generate a wider range of candidate outputs. This is achieved by splitting the beam width into evenly-sized groups and adding a penalty term for the presence of similar candidates across groups. The authors find most success with the Hamming Diversity penalty term, which penalizes the selection of tokens used in previous groups proportionally to the number of times it was selected before. We detail our experiments using both search strategies in Section 6.", |
|
"cite_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 381, |
|
"text": "(Li and Jurafsky, 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 409, |
|
"text": "Vijayakumar et al. (2016)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Post-processing in NLP For tasks that require sets of outputs rather than single outputs, postprocessing or reranking methods are often used as a downstream step after a model generates an initial set. They have proven to be useful techniques for various NLP tasks, such as Question Answering (Kratzwald et al., 2019) , Named Entity Recognition (Yang et al., 2017) and Neural Summarization (Cao et al., 2018) . The basic methodology is to first generate an initial candidate set and rerank or prune these candidates to generate the final set. This set up reduces reliance on generators by introducing an auxiliary discriminator to refine the outputs of the generator. Section 6 describes our experiments with pruning or filtering Beam Search candidates during decoding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 317, |
|
"text": "(Kratzwald et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 364, |
|
"text": "(Yang et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 408, |
|
"text": "(Cao et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As our base model, we used a model pretrained by Morishita et al. (2019) on the JParaCrawl data using the fairseq framework (Ott et al., 2019) . Morishita et al. (2019) preprocessed the JParaCrawl English and Japanese text using sentencepiece (Kudo and Richardson, 2018) to obtain 32,000-token vocabularies on both the English and Japanese sides. Architecture The pretrained model follows the Transformer 'base' architecture (Vaswani et al., 2017) , with a dropout probability of 0.3 (Srivastava et al., 2014).", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "Morishita et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 142, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 168, |
|
"text": "Morishita et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 270, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 447, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretrained Base Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Transformer is a multi-layer self-attention model. Both its encoder and its decoder contain multiple similar sub-modules which include a multi-head attention layer (MultiHead) and a position-wise feed-forward network (FFN).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preprocessing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Attention(Q, K, V ) = softmax( QK T \u221a d k )V (1) head i = Attention(QW Q i , KW K i , V W V i ) (2) MultiHead(Q, K, V ) = Concat(head 1 , ..., head h )W O (3) FFN(x) = max(0, xW 1 + b 1 )W 2 + b 2", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Data Preprocessing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, Q, K, V are the matrix representation of the query, key, and value separately. W and b denote the weights and biases of the linear layers. d k denotes the dimension of the key matrix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preprocessing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Optimizer The pretrained model was trained using the Adam optimizer (Kingma and Ba, 2014) with the hyperparameters \u03b2 1 = 0.9, \u03b2 2 = 0.98, \u03b1 = 10 \u22123 and = 10 \u22129 . The loss function used was cross entropy loss with ls = 0.1 loss smoothing (Szegedy et al., 2016) . To improve update stability, gradients were clipped to a maximum norm of 1.0 (Pascanu et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 259, |
|
"text": "(Szegedy et al., 2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 361, |
|
"text": "(Pascanu et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preprocessing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Learning rate scheduling The learning rate schedule adopted for the pretrained model was the so-called 'Noam' schedule (Vaswani et al., 2017) . This schedule linearly increases the learning rate for 4000 'warm-up' steps from a starting learning rate of 10 \u22127 to the target learning rate of 10 \u22123 , then decreases it from that point proportionally to the inverse square root of the step number.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 141, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preprocessing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Apart from the NMT model, we additionally introduce a neural filtering model to post-process the NMT model's candidates. Instead of designing a model that will assign a real-value score to each of the candidates, we simplify the task by formulating it as a binary classification problem. Namely, the filtering model is trained to classify a given candidate sentence as a valid sample (in the goldstandard list) or an invalid sample. The intuition is that the gold-standard candidate list contains a small number of high-quality sentences (with larger weights) and a large number of lower-quality sentences. Thus it is more important to distinguish the hits from misses than high-quality hits from low-quality hits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To construct the dataset for the filtering model, we augmented the Duolinguo dataset with the results of NMT model. Specifically, we labeled those result sentences that appear in the gold-standard list as True and labeled others as False.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As for the model architecture, we encode the source sentence and the candidate sentence separately with a one-layer bidirectional LSTM model. The encoding is the concatenation of the hidden vectors in both directions after complete traversal of the sequence, along with a (learned) positional embedding vector. This embedding encodes the position of the candidate sentence in the candidate list generated by the NMT model, which is sorted by descending score order. 4 Lastly, we use a multilayer perception (MLP) to classify the concatenated vector.", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 467, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "v s = LSTM s (s) (5) v c i = LSTM c (c i ) (6) p i = MLP(Dropout([v s : v c i : v i ]))", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Filtering Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here, s denotes the source sentence, c i denoted the i-th candidate, v i denotes the positional encoding, and p i denotes the predicted likelihood. The filtering model is optimized with binary cross-entropy loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We experiment with several different fine-tuning scenarios, each time evaluating the models using the Weighted F1 metric on our 200-prompt Duolingo test set. First as a baseline, we directly evaluate the JParaCrawl pretrained model without fine-tuning. Then we evaluate the performance of models fine-tuned on either JESC or on all English-Japanese pairs in our 2,100-prompt Duolingo training set. 5 Finally, we experiment with first finetuning on the JESC data and then on the Duolingo training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-step Fine-tuning", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Before training, we preprocessed the JESC and Duolingo data using the same 32,000-token English and Japanese sentencepiece models as Morishita et al. (2019) used on the JParaCrawl data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 156, |
|
"text": "Morishita et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-step Fine-tuning", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Training procedure We adopted the same optimizer settings as they used for the pretrained model, described in Section 4. Using mini-batches of up to 5,000 tokens, we made an update step every 16 mini-batches with mixed precision computation for increased training speed (Micikevicius et al., 2018) . While the pretrained model was trained for 24,000 steps, each time we fine-tuned the model, we did so for 2,000 steps, continuing the inverse square root learning rate schedule from the pretraining. We saved the model parameters every 100 steps and for each fine-tuning experiment, we averaged the last eight parameter checkpoints to obtain our final model weights. For the model with two-step finetuning, we use the averaged checkpoint from the JESC fine-tuning experiment as the starting point for further fine-tuning on the Duolingo dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 297, |
|
"text": "(Micikevicius et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-step Fine-tuning", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For producing multiple translations for each prompt, we output the entire beam width of candidates from the Beam Search or Diverse Beam Search (Vijayakumar et al., 2016) algorithms. Our motivation for experimenting with using Diverse Beam Search is to improve the coverage of our translation sets. In all our experiments, we capped the generated sequence length at 200 tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Strategies", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Beam Search scoring Beam Search using sequence log likelihood (or likelihood) as scores results in a well-known length bias towards shorter sequences, with worsening bias for wider beams (Murray and Chiang, 2018) . To address this, we scored beam candidates based on the mean log likelihood per token (Cho et al., 2014a) . Further work could involve the use of more complex adjustments for length bias and including a coverage penalty over the source prompt (Wu et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 212, |
|
"text": "(Murray and Chiang, 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 320, |
|
"text": "(Cho et al., 2014a)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 475, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Strategies", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Aligning data distributions The ground truth weights of the Duolingo reference translations invariably follow skewed distributions, with long tails of low weight translations (Figure 1) . Consequently, one drawback of training with all English-Japanese pairs in the Duolingo data is that each pair is essentially provided to the model with equal weight. In other words, the distribution over reference translations at training time is uniform, whereas the distribution when evaluating weighted F1 score is skewed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 185, |
|
"text": "(Figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "To address this, we sampled the training data such that the model was trained on prompts with equal probability but for each prompt, reference translations were sampled according to the distribution given by the ground truth weights. In effect, this aligns the distribution over reference translations during training time and evaluation time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Loss smoothing to improve coverage Aside from helping NMT models generalize, M\u00fcller et al. (2019) show that use of loss smoothing also better calibrates NMT models, preventing them from becoming over-confident. To encourage our NMT model to produce high-coverage translations, we hypothesize that increasing loss smoothing to decrease the model's confidence will improve its performance in producing a wider variety of correct translation candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 97, |
|
"text": "M\u00fcller et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Since our filtering model is trained with the results of the NMT model, we trained two filtering models with two different decoding strategies of the NMT model, namely, Regular and Diverse Beam Search with beam widths set such that approximately 100 unique candidates are output for each prompt. The NMT model is trained with the best hyper-parameters we found with the weighted sampling technique. We use the same train/dev/test splits as the NMT model and select the checkpoint with the best classification accuracy on the development set. We used the Adam optimizer with initial learning rate 0.0001 and halved the learning rate when the validation accuracy plateaued for 2 epochs. The word embedding dimension, positional embedding dimension, the hidden dimension of the LSTM and MLP are all set to 128. The dropout rate was 0.2. The post-processing procedure involved pruning all candidates with predicted likelihood less than 0.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering Model", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "We conducted our experiments sequentially and generally used the best results so far as a baseline for subsequent experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our best performing model was the one trained using multi-step fine-tuning, as shown in Table 2 . The performance of this model was superior to the other fine-tuning settings on every metric, suggesting this result was not simply a matter of imbalance between precision and recall. This result provides strong evidence that the first fine-tuning step on the JESC data helped the model generalize to the Duolingo test set. In contrast, the model only fine-tuned on the Duolingo training set may not have generalized as well due to the training set's small size.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-step Fine-tuning Results", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "In order to balance precision and (weighted) recall appropriately to maximize the weighted F1 metric, we experimented with tuning the number of Beam Search candidates to output and found that 100 was optimal ( Table 3) . Note that the number of unique candidates returned can be fewer than the beam width as Beam Search searches over sequences of subword tokens and sometimes detokenization results in duplicates.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 218, |
|
"text": "Table 3)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-step Fine-tuning Results", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Our experiments with Diverse Beam Search show that using 3 beam groups with a very low Hamming diversity penalty can result in marginal performance improvement (Table 4 ). The algorithm evenly divides the total beam width between the groups and although the algorithm penalizes duplicate sequences, high scoring candidates are still often duplicated across groups. As such, we varied the total beam widths so that the mean number of unique candidates per prompt were approximately 100. 6 We conclude that encouraging a small amount diversity can allow the model to capture a wider range of variations without sacrificing too much precision.", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 487, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Diverse Beam Search Results", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We found that performance deteriorates when increasing the diversity penalty or the number of groups further. These results suggest that standard beam search by itself is relatively good at producing high-coverage translations and that acceptable variations of translations are rather homogeneous rather than diverse. To illustrate, Table 5 contains some examples of error candidates produced by Diverse Beam Search. Even though they would backtrackslate to the English prompt correctly, they nevertheless introduce a minor semantic variation that makes them unacceptable translations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 340, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Diverse Beam Search Results", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Sampling training data according to the ground truth weights meaningfully improves performance, as shown in Table 6 . Our previous best weighted F1 score using Diverse Beam Search was 26.29%, and this improved to 27.21%. Moreover, evaluating the model on the standard machine translation metric of BLEU-4 score between the single best candidates and the single best ground truth translations, we observe a remarkable increase in BLEU score if weighted sampling is used during train-ing. From this result, we conclude that unweighted sampling of training data overexposes the model to poorer translations, which significantly reduces the model's effectiveness as a general-purpose NMT model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 115, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Results", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "As for loss smoothing, contrary to our hypothesis, increasing the loss smoothing rate was detrimental. and, in fact, decreasing the rate from 0.1 to 0.05 even improved the weighted F1 score slightly from 27.21% to 27.43%. This suggests that the effect of loss smoothing on the high-coverage translation task is not necessarily different to the usual machine translation task. Table 7 shows the results of the filtering algorithm. The filtering model can improve the weighted F1 score with both the diverse beam search and regular beam search, especially with the regular beam search. This improvement results from a larger gain in precision from filtering than the loss in recall. One thing to note is that our filtering model suffers from over-fitting. For example, with Regular Beam Search, our filtering model improves the weighted F1 score by 0.43% on the test set (Table 7). However, using the same technique on the training set results in an improvement of 6.25%. 7 This may result from the limited size of Duolinguo dataset, and the fact that over-fitting introduced by the NMT model would be amplified since the filtering model is trained on the results of the NMT model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 970, |
|
"end": 971, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 383, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Results", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "Our machine translation system produces highcoverage sets of target language translations from single source language prompts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We used multi-step fine-tuning to train a robust NMT model. This involved first training or finetuning a model on a large bitext dataset, then finetuning on the bitext dataset with high coverage sets of target language translations, which is likely to be small. In our experiments, we find that fine-tuning a pretrained model first on a corpus similar to our intended domain and then fine-tuning further on our smaller in-domain dataset produced the best results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "During training, we find that if the ground truth translations come with weights that indicate variations in their quality / likelihood, it is essential to expose the model to higher-quality translations more often during training. One way to do this is to to sample the training data with probabilities commensurate to the ground truth weights. Doing so will prevent overexposure to low-quality translations that ultimately harm the model's translation performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "For decoding, we find that Beam Search scored on per token log likelihood finds very good translation candidates on its own. Nevertheless, instead using Diverse Beam Search with a very small penalty improves coverage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We observed a further performance boost from post-processing the translation candidates. This was achieved by training an auxiliary filtering model on the results of the NMT model to prune unlikely candidates as a final step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "One idea for future work is to directly optimize the weighted F1 score during training using reinforcement learning. As the weighted F1 score is not a differentiable function, it is impossible to train directly on this metric using maximum likelihood estimation. Instead, one may use policy gradients under a reinforcement learning paradigm to do so.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "To evaluate the result, the weighted macro F 1 (equation 8) with respect to the accepted translations is the metric of interest. This is the average weighted F 1 score (equation 12) over all prompts s in the corpus, where weighted F 1 is calculated with (unweighted) precision and weighted recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Weighted Macro F 1 = s\u2208S Weighted F 1 (s) |S| (8) Calculating the weighted recall requires the use of weights included in the dataset. These weights are associated with each human-curated acceptable translation, which represent the likelihood that an English learner would respond with that translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each prompt s, the weighted true positives (WTP) and weighted false negatives (WFN) are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "WTP s = t\u2208TPs weight(t)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WFN s = t\u2208FNs weight(t)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With these, the weighted recall for each s can be calculated as follows", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Weighted Recall(s) = WTP s WTP s + WFN s (11) Precision is calculated in the usual way, so the weighted F 1 score, Weighted F 1 (s), for a particular input s is given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "2\u2022 Precision(s) \u2022 WeightedRecall(s) Precision(s) + WeightedRecall(s)", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There were five tracks of target languages in total, the others being Hungarian, Korean, Portuguese and Vietnamese.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our code can be found at https://github.com/ michaelzyang/high-coverage-translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://paracrawl.eu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiments, we scored candidates using per token log likelihood (see Section 6.2 for further details.)5 The given English and Japanese sentences are unbalanced as there are multiple reference Japanese translations per English prompt. We balanced the training data by repeating the corresponding prompts over all reference translations to create English-Japanese pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This duplication makes the number of outputs from Diverse Beam Search more variable. Our result with beam width 225 outputted 62-182 unique results per prompt with a standard deviation of 19.6, compared to 72-100 unique results with standard deviation of 4.0 from 100-width Regular Beam Search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "On the training set, the filtering algorithm improves the weighted F1 score from 56.78% to 63.03%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Retrieve, rerank and rewrite: Soft template based neural summarization", |
|
"authors": [ |
|
{ |
|
"first": "Ziqiang", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "152--161", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1015" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 152-161, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A systematic comparison of smoothing techniques for sentencelevel bleu", |
|
"authors": [ |
|
{ |
|
"first": "Boxing", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "WMT@ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence- level bleu. In WMT@ACL.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "On the properties of neural machine translation: Encoder-decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-4012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the proper- ties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Aglar G\u00fcl\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representa- tions using RNN encoder-decoder for statistical ma- chine translation. CoRR, abs/1406.1078.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Fast domain adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR, abs/1612.06897.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Espeholt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Simonyan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR, abs/1610.10099.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Rankqa: Neural question answering with answer re-ranking", |
|
"authors": [ |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Kratzwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Eigenmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Feuerriegel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6076--6085", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernhard Kratzwald, Anna Eigenmann, and Stefan Feuerriegel. 2019. Rankqa: Neural question answer- ing with answer re-ranking. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 6076-6085.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Mutual information and diverse decoding improve neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li and Dan Jurafsky. 2016. Mutual information and diverse decoding improve neural machine trans- lation. CoRR, abs/1601.00372.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Stanford neural machine translation systems for spoken language domain", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spo- ken language domain. In International Workshop on Spoken Language Translation, Da Nang, Vietnam.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Addressing the rare word problem in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Zaremba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1410.8206" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Simultaneous translation and paraphrase for language education", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Bicknell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Brust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Mcdowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Mayhew, K. Bicknell, C. Brust, B. McDowell, W. Monroe, and B. Settles. 2020. Simultaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Gen- eration and Translation (WNGT). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Robust named entity recognition with truecasing pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Mayhew, Nitish Gupta, and Dan Roth. 2019. Robust named entity recognition with truecasing pre- training.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Mixed precision training", |
|
"authors": [ |
|
{ |
|
"first": "Paulius", |
|
"middle": [], |
|
"last": "Micikevicius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharan", |
|
"middle": [], |
|
"last": "Narang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonah", |
|
"middle": [], |
|
"last": "Alben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Diamos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erich", |
|
"middle": [], |
|
"last": "Elsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Ginsburg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Houston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksii", |
|
"middle": [], |
|
"last": "Kuchaiev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Venkatesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed preci- sion training. In International Conference on Learn- ing Representations.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "JParaCrawl: A large scale web-based japanese-english parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Morishita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.10668" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Morishita, Jun Suzuki, and Masaaki Na- gata. 2019. JParaCrawl: A large scale web-based japanese-english parallel corpus. arXiv preprint arXiv:1911.10668.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "When does label smoothing help?", |
|
"authors": [ |
|
{ |
|
"first": "Rafael", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Kornblith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "4694--4703", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey E Hin- ton. 2019. When does label smoothing help? In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Garnett, editors, Ad- vances in Neural Information Processing Systems 32, pages 4694-4703. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Correcting length bias in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "212--223", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6322" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Murray and David Chiang. 2018. Correct- ing length bias in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 212-223, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "On the difficulty of training recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 30th International Conference on International Conference on Machine Learning", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Ben- gio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th In- ternational Conference on International Conference on Machine Learning -Volume 28, ICML'13, page III-1310-III-1318. JMLR.org.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC)", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pryzant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Britz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Pryzant, Y. Chung, D. Jurafsky, and D. Britz. 2018. JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Squad: 100, 000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ ques- tions for machine comprehension of text. CoRR, abs/1606.05250.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi- nov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15:1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Rethinking the inception architecture for computer vision", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Vanhoucke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Ioffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Shlens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zbigniew", |
|
"middle": [], |
|
"last": "Wojna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In The IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR).", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Multiqa: An empirical investigation of generalization and transfer in reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Talmor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alon Talmor and Jonathan Berant. 2019. Multiqa: An empirical investigation of generalization and transfer in reading comprehension. CoRR, abs/1905.13453.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Diverse beam search: Decoding diverse solutions from neural sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Ashwin", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Vijayakumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Cogswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramprasaath", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Selvaraju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Crandall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashwin K. Vijayakumar, Michael Cogswell, Ram- prasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural se- quence models. CoRR, abs/1610.02424.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cliff", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Riesa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Rudnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Oriol Vinyals", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Neural reranking for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "784--792", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.26615/978-954-452-049-6_101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural reranking for named entity recognition. In Pro- ceedings of the International Conference Recent Ad- vances in Natural Language Processing, RANLP 2017, pages 784-792, Varna, Bulgaria. INCOMA Ltd.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Typical distribution of ground truth weights", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Results of different fine-tuning methods. Metrics evaluated on Beam Search beams of width 100 on our 200-prompt test set.", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"6\">Beam Mean # Cands Precision Recall Weighted Recall Weighted F1</td></tr><tr><td>50</td><td>49.6</td><td>46.27%</td><td>7.04%</td><td>21.29%</td><td>24.75%</td></tr><tr><td>100</td><td>98.6</td><td colspan=\"2\">35.98% 10.89%</td><td>27.85%</td><td>25.92%</td></tr><tr><td>150</td><td>147.1</td><td colspan=\"2\">29.98% 13.54%</td><td>31.33%</td><td>24.98%</td></tr><tr><td>200</td><td>195.4</td><td colspan=\"2\">26.01% 15.61%</td><td>33.92%</td><td>23.98%</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Results of tuning Beam (beam width). Mean # Cands refers to the mean number of unique candidates remaining after detokenizing subword tokens back into raw text and then removing duplicates. Metrics evaluated on our 200-prompt test set.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Results of Diverse Beam Search on the test set. Beam refers to the beam width. Groups refers to the number of Diverse Groups (use of 1 group is equivalent to regular Beam Search). Penalty refers to the Hamming Diversity penalty in the Diverse Beam Search algorithm. Mean # Cands refers to the mean number of unique candidates remaining after detokenizing subword tokens back into raw text and then removing duplicates. Metrics evaluated on our 200-prompt test set.", |
|
"num": null, |
|
"content": "<table><tr><td>Prompt</td><td>my parents have money</td></tr><tr><td>Incorrect</td><td>\u50d5\u306e\u4e21\u89aa\u306f\u304a\u91d1\u3092\u6301\u3063\u3066\u308b</td></tr><tr><td>Diverse</td><td>\u50d5\u306e\u4e21\u89aa\u306f\u304a\u91d1\u3092\u6301\u3063\u3066\u307e\u3059</td></tr><tr><td colspan=\"2\">Candidates \u50d5\u306e\u4e21\u89aa\u306b\u306f\u91d1\u304c\u3042\u308a\u307e\u3059</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Example incorrect candidates from Diverse Beam Search with 3 groups and 0.1 Hamming Diversity penalty. While the candidates would correctly back-translate to 'my parents have money', the first character of each candidate sentence indicates that the speaker / subject must be male (a restriction that is absent in the prompt).", |
|
"num": null, |
|
"content": "<table><tr><td>Sampling</td><td colspan=\"5\">Smoothing 1-best BLEU Precision Recall Weighted Recall Weighted F1</td></tr><tr><td>Weighted</td><td>0</td><td>43.2</td><td>36.28% 10.95%</td><td>27.88%</td><td>26.88%</td></tr><tr><td>Weighted</td><td>0.05</td><td>42.5</td><td>37.41% 11.30%</td><td>28.31%</td><td>27.43%</td></tr><tr><td>Weighted</td><td>0.10</td><td>43.2</td><td>37.00% 11.27%</td><td>28.14%</td><td>27.21%</td></tr><tr><td>Unweighted</td><td>0.10</td><td>27.0</td><td>35.72% 10.88%</td><td>27.18%</td><td>26.29%</td></tr><tr><td>Weighted</td><td>0.15</td><td>41.8</td><td>36.84% 11.07%</td><td>28.01%</td><td>27.06%</td></tr><tr><td>Weighted</td><td>0.20</td><td>42.3</td><td>36.86% 11.09%</td><td>27.96%</td><td>27.04%</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Results of weighted sampling of input translation pairs and different loss smoothing rates on the test set. 1-best BLEU refers to corpus BLEU-4 score between the single highest-scoring Diverse Beam Search candidate and the single highest weighted reference translation for each prompt, smoothed with the NIST method(Chen and Cherry, 2014). The other metrics were evaluated over Diverse Beam Search with 225-width beams split across 3 groups and Hamming diversity penalty of 0.01. Metrics evaluated on our 200-prompt test set.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "Results of filtering methods on our 200-prompt test set. Candidates were generated by the NMT models fine-tuned on JESC then Duolingo data with weighted sampling technique. Regular Beam Search used beam width 100 and Diverse Beam Search used beam width 225 over 3 groups with Hamming diversity penalty of 0.01 to yield approximately 100 candidates per prompt after deduplication. Candidates that have likelihoods greater than 0.5 assigned by the filtering model are selected as the results.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |