{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:52.089050Z"
},
"title": "Boosting Neural Machine Translation from Finnish to Northern S\u00e1mi with Rule-Based Backtranslation",
"authors": [
{
"first": "Mikko",
"middle": [],
"last": "Aulamo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humanities University of Helsinki",
"location": {
"settlement": "Helsinki",
"country": "Finland"
}
},
"email": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humanities University of Helsinki",
"location": {
"settlement": "Helsinki",
"country": "Finland"
}
},
"email": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humanities University of Helsinki",
"location": {
"settlement": "Helsinki",
"country": "Finland"
}
},
"email": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humanities University of Helsinki",
"location": {
"settlement": "Helsinki",
"country": "Finland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We consider a low-resource translation task from Finnish into Northern S\u00e1mi. Collecting all available parallel data between the languages, we obtain around 30,000 sentence pairs. However, there exists a significantly larger monolingual Northern S\u00e1mi corpus, as well as a rulebased machine translation (RBMT) system between the languages. To make the best use of the monolingual data in a neural machine translation (NMT) system, we use the backtranslation approach to create synthetic parallel data from it using both NMT and RBMT systems. Evaluating the results on an in-domain test set and a small out-of-domain set, we find that the RBMT backtranslation outperforms NMT backtranslation clearly for the out-of-domain test set, but also slightly for the in-domain data, for which the NMT backtranslation model provided clearly better BLEU scores than the RBMT. In addition, combining both backtranslated data sets improves the RBMT approach only for the in-domain test set. This suggests that the RBMT system provides general-domain knowledge that cannot be found from the relative small parallel training data.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We consider a low-resource translation task from Finnish into Northern S\u00e1mi. Collecting all available parallel data between the languages, we obtain around 30,000 sentence pairs. However, there exists a significantly larger monolingual Northern S\u00e1mi corpus, as well as a rulebased machine translation (RBMT) system between the languages. To make the best use of the monolingual data in a neural machine translation (NMT) system, we use the backtranslation approach to create synthetic parallel data from it using both NMT and RBMT systems. Evaluating the results on an in-domain test set and a small out-of-domain set, we find that the RBMT backtranslation outperforms NMT backtranslation clearly for the out-of-domain test set, but also slightly for the in-domain data, for which the NMT backtranslation model provided clearly better BLEU scores than the RBMT. In addition, combining both backtranslated data sets improves the RBMT approach only for the in-domain test set. This suggests that the RBMT system provides general-domain knowledge that cannot be found from the relative small parallel training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine translation from and to minority languages is challenging because large parallel corpora are typically hard to obtain. Two strategies have proven most successful to eliminate this bottleneck: using rule-based machine translation (RBMT) systems that do not rely on large data, or training data-driven translation systems with automatically created synthetic data, e.g. backtranslation (Sennrich et al., 2016) . In this paper, we com-bine both strategies in the context of neural machine translation (NMT) from Finnish to Northern S\u00e1mi. In particular, we investigate the impact of RBMT in data augmentation in comparison to standard NMT-based backtranslation.",
"cite_spans": [
{
"start": 392,
"end": 415,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Northern S\u00e1mi is a Uralic minority language spoken in Norway, Sweden and Finland. Historically, most of the work on machine translation from and to S\u00e1mi languages is based on RBMT (Trosterud and Unhammer, 2012; Antonsen et al., 2017; Pirinen et al., 2017) . Data-driven approaches such as NMT are generally more competitive, but require large amounts of training data in the form of parallel translated sentences. For minority languages, finding parallel data sets is usually more difficult than collecting monolingual data, which is also the case for Northern S\u00e1mi.",
"cite_spans": [
{
"start": 180,
"end": 210,
"text": "(Trosterud and Unhammer, 2012;",
"ref_id": "BIBREF17"
},
{
"start": 211,
"end": 233,
"text": "Antonsen et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 234,
"end": 255,
"text": "Pirinen et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common way of leveraging monolingual data for NMT is the above mentioned backtranslation strategy, a method where monolingual data of the target language is translated automatically to the source language to create additional parallel training data. In this work, we use two reverse translation models to produce the backtranslations: a neural model trained only on the available parallel data and a rule-based approach. The latter is a system developed for the translation from Northern S\u00e1mi to Finnish (Pirinen et al., 2017) within the Apertium framework (Forcada et al., 2011) . We also combine both methods to further augment the data. Our experiments demonstrate the positive effects of both strategies and the possibility of obtaining complementary information from different backtranslation engines.",
"cite_spans": [
{
"start": 506,
"end": 528,
"text": "(Pirinen et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 559,
"end": 581,
"text": "(Forcada et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
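To make the backtranslation setup concrete, the following minimal sketch (our illustration, not code from the paper) shows how synthetic parallel data is assembled from monolingual target-language text; `translate_sme_to_fin` is a hypothetical stand-in for either the reverse NMT model or the Apertium RBMT system.

```python
from typing import Callable, Iterable, List, Tuple

def build_backtranslated_pairs(
    mono_sme: Iterable[str],
    translate_sme_to_fin: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Create synthetic (Finnish, Northern Sami) training pairs.

    The monolingual Northern Sami sentences become the target side,
    so the target side of the synthetic data stays human-written;
    the automatic Finnish translations form the source side.
    """
    return [(translate_sme_to_fin(sme), sme) for sme in mono_sme]

if __name__ == "__main__":
    # Placeholder translator, for illustration only.
    fake_translator = lambda sme: "<fi> " + sme
    mono = ["Buorre beaivi!", "Mun lean Helssegis."]
    for fin, sme in build_backtranslated_pairs(mono, fake_translator):
        print(fin, "\t", sme)
```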
{
"text": "Using backtranslations from different sources as training data has been shown to be beneficial for improving machine translation quality. In addition to proposing training data augmentation methods that do not require reverse translation systems, Burlot and Yvon (2018) compare the effects of using statistical machine translation (SMT) and NMT based backtranslations for English\u2192French and English\u2192German translations. They show that both types of backtranslations improve translation quality, NMT slightly more than SMT. Poncelas et al. (2019) also produce backtranslations with SMT and NMT. They show that the translation quality of a German\u2192English NMT system is improved when including either type of backtranslations in the training data. The greatest improvement is observed when both types of backtranslations are used.",
"cite_spans": [
{
"start": 247,
"end": 269,
"text": "Burlot and Yvon (2018)",
"ref_id": "BIBREF2"
},
{
"start": 523,
"end": 545,
"text": "Poncelas et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Augmenting training data with RBMT backtranslations has also proven to be useful for boosting translation quality. Dowling et al. (2019) use RBMT backtranslations to improve statistical machine translation performance for Scottish Gaelic\u2192English translations. The authors show that backtranslations can be beneficial even in cases where the translation quality of the MT system used to produce the backtranslations is low. Soto et al. (2019) study the performance of NMT systems trained with augmented training data backtranslated using RBMT, SMT and NMT. They experiment with Basque\u2192Spanish translations and show that the translation performance improves when using each type of augmented training data individually. Soto et al. (2020) also analyze the effects of using augmented training data backtranslated with the three different paradigms. They focus on two language pairs: a low-resource language pair, Basque\u2192Spanish, and a high-resource language pair, German\u2192English. In addition to showing similar results as Soto et al. (2019) , they show further improvement in translation performance when all types of augmented training data are combined.",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "Dowling et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 423,
"end": 441,
"text": "Soto et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 718,
"end": 736,
"text": "Soto et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 1019,
"end": 1037,
"text": "Soto et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The UiT freecorpus 1 contains a Finnish -Northern S\u00e1mi (fin-sme) parallel corpus with 110k sentence pairs and a distinct set of 868k monolingual Northern S\u00e1mi sentences. The UiT corpora are collected from multiple sources and cover various domains. Both the parallel and the monolingual corpora contain considerable amounts of duplicate lines. In this section, we describe our data cleaning and filtering efforts and the data split. For ad-ditional evaluation, we collected a small test set consisting of translated YLE news articles 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Data filtering and cleaning is carried out with the OpusFilter toolbox (Aulamo et al., 2020) . Our OpusFilter configuration files are available online 3 , which helps to replicate the data preprocessing steps. First, we remove duplicate lines from the parallel corpus. This process removes 67.7% of the sentence pairs, leaving us with 35,426 unique sentence pairs. The remaining data set is then cleaned with a set of filters from OpusFilter. Similar filtering setups have been confirmed to improve translation quality (V\u00e1zquez et al., 2019; Aulamo et al., 2020) . In particular, we remove sentence pairs that satisfy one of the following conditions:",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Aulamo et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 519,
"end": 541,
"text": "(V\u00e1zquez et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 542,
"end": 562,
"text": "Aulamo et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 One or both of the sentences are empty or longer than 100 words,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The ratio of the sentence lengths in words is greater than 3,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentence pair contains words longer than 40 characters,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentence pair contains HTML elements,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentences have dissimilar numerals based on the \"Non-zero numerals score\" (V\u00e1zquez et al., 2019) ,",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(V\u00e1zquez et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentences have dissimilar punctuation based on the \"Terminal punctuation score\" (V\u00e1zquez et al., 2019 ),",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(V\u00e1zquez et al., 2019",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentence pair contains characters outside of the Latin script,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 The sentences are not recognized to be their correct language by the langid.py language identifier (Lui and Baldwin, 2012) .",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Lui and Baldwin, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
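The filter conditions above can be approximated in a few lines of plain Python. The sketch below is a simplified re-implementation for illustration, not the OpusFilter API, and it omits the two similarity scores of V\u00e1zquez et al. (2019); the langid.py call is real, and its default model is assumed to cover Finnish ("fi") and Northern S\u00e1mi ("se").

```python
import re
import langid  # langid.py (Lui and Baldwin, 2012)

MAX_WORDS, MAX_RATIO, MAX_WORD_LEN = 100, 3.0, 40
HTML_RE = re.compile(r"</?[a-zA-Z][^>]*>|&[a-z]+;")

def latin_only(text: str) -> bool:
    # Rough check: allow Basic Latin plus the Latin extension blocks,
    # which cover Finnish and Northern Sami letters.
    return all(ord(ch) <= 0x024F or 0x1E00 <= ord(ch) <= 0x1EFF
               for ch in text)

def keep_pair(fin: str, sme: str) -> bool:
    """Return True if a sentence pair passes the (simplified) filters."""
    fin_w, sme_w = fin.split(), sme.split()
    if not fin_w or not sme_w or max(len(fin_w), len(sme_w)) > MAX_WORDS:
        return False  # empty or longer than 100 words
    if max(len(fin_w), len(sme_w)) / min(len(fin_w), len(sme_w)) > MAX_RATIO:
        return False  # word-count ratio above 3
    if any(len(w) > MAX_WORD_LEN for w in fin_w + sme_w):
        return False  # a word longer than 40 characters
    if HTML_RE.search(fin) or HTML_RE.search(sme):
        return False  # HTML elements
    if not (latin_only(fin) and latin_only(sme)):
        return False  # characters outside the Latin script
    if langid.classify(fin)[0] != "fi" or langid.classify(sme)[0] != "se":
        return False  # wrong language detected
    return True
```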
{
"text": "After filtering, 29,106 clean sentence pairs remain in the parallel data set. From this clean set, 2000 pairs are randomly selected to form a validation set and another 2000 pairs to form a test set, leaving 25,106 pairs for training. Note that all subsets are disjoint due to the initial deduplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
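One simple way to realize the described split, sketched here under the assumption of a fixed random seed (the paper does not state one):

```python
import random

def split_corpus(pairs, valid_size=2000, test_size=2000, seed=1):
    """Shuffle unique sentence pairs and carve out validation and test
    sets; because the pairs were deduplicated first, the three subsets
    are guaranteed to be disjoint."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    valid = pairs[:valid_size]
    test = pairs[valid_size:valid_size + test_size]
    train = pairs[valid_size + test_size:]
    return train, valid, test
```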
{
"text": "The additional test set consists of two news articles describing S\u00e1mi culture in Finland available in both Finnish and Northern S\u00e1mi on YLE News. It was extracted from the web and manually aligned to create a clean reference set. This test set is, however, small (151 sentence pairs) and may not produce completely reliable evaluation scores, but it should still provide additional insights about the quality of the translation models and their ability to generalize to new domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The monolingual Northern S\u00e1mi data is processed in a similar way as the parallel data above. Duplicate removal discards 35.6% of the total of 867,677 sentences, leaving 559,074 sentences in the data set. For corpus cleaning, we use all filters of those cited above that are applicable to monolingual data, i.e. the sentence length filter, the word length filter, the HTML element filter, the Latin script filter, and the language identification filter. The resulting clean monolingual corpus contains 462,803 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "In this section, we compare a baseline fin-sme NMT model trained only with the available parallel data to NMT models trained with additional backtranslated data. The backtranslations are produced by translating the clean monolingual Northern S\u00e1mi data to Finnish either with a NMT system trained on the parallel data in the reverse direction (sme-fin), or with the sme-fin RBMT system. This yields three additional synthetic training sets that augment the original parallel training data: one with the NMT backtranslations, one with RBMT translations, and one with both types of backtranslations. Each of them is then used to train a separate NMT model that we can compare to the baseline model, which is trained on the original parallel data only. Note that we do not use any data sampling or weighting scheme to balance original and augmented training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
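The four training configurations amount to plain concatenation with no sampling or weighting, as stated above; a minimal sketch with illustrative variable names:

```python
def training_sets(parallel, nmt_bt, rbmt_bt):
    """Assemble the baseline and the three augmented training data
    configurations by simple list concatenation."""
    return {
        "baseline": parallel,
        "+nmt-bt": parallel + nmt_bt,
        "+rbmt-bt": parallel + rbmt_bt,
        "+nmt-bt+rbmt-bt": parallel + nmt_bt + rbmt_bt,
    }
```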
{
"text": "All NMT models in our experiments are trained with MarianNMT (Junczys-Dowmunt et al., 2018) version 1.8.33. The backtranslation model is based on a RNN architecture with GRU cells (Cho et al., 2014) and attention. In our experiments, the RNN architecture slightly outperformed Transformers in the out-of-domain test set for this translation direction. All models using additional backtranslated training sets are trained with both RNNs and Transformers. All RNN models have the same architecture as the backtranslation model. For Transformers, we use the example hyperparameters from MarianNMT 4 which replicate the setup 4 https://github.com/marian-nmt/ marian-examples/tree/master/transformer UiT YLE NMT 19.4 4.5 RBMT 12.3 10.0 from Vaswani et al. (2017) . For subword segmentation, we use the SentencePiece tokenizer (Kudo and Richardson, 2018) with vocabulary size 8000, which has been shown to produce the best results with the data set sizes that we are dealing with (Gowda and May, 2020; Gr\u00f6nroos et al., 2021) . We train the models until the cross-entropy of the validation set does not improve for 10 consecutive validation steps.",
"cite_spans": [
{
"start": 180,
"end": 198,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 736,
"end": 757,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 821,
"end": 848,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 974,
"end": 995,
"text": "(Gowda and May, 2020;",
"ref_id": "BIBREF6"
},
{
"start": 996,
"end": 1018,
"text": "Gr\u00f6nroos et al., 2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
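As an illustration of the subword setup, here is a minimal SentencePiece training call with the vocabulary size used in the paper; the file names are hypothetical, and the paper does not state whether the vocabulary is shared between the two languages.

```python
import sentencepiece as spm

# Train a subword model with an 8000-token vocabulary
# (hypothetical input path; one sentence per line).
spm.SentencePieceTrainer.train(
    input="train.fi-se.txt",
    model_prefix="sp_fi_se_8k",
    vocab_size=8000,
)

sp = spm.SentencePieceProcessor(model_file="sp_fi_se_8k.model")
print(sp.encode("Buorre beaivi!", out_type=str))
```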
{
"text": "For the RBMT backtranslations, we use Apertium with the sme-fin model by Pirinen et al. (2017) .",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "Pirinen et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
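Apertium is typically invoked as a command-line pipeline; a small wrapper like the following can produce the backtranslations, assuming the apertium tool and the sme-fin language pair are installed locally (this wrapper is our illustration, not code from the paper).

```python
import subprocess

def apertium_sme_to_fin(sentences):
    """Translate Northern Sami sentences to Finnish with the locally
    installed Apertium sme-fin pair, one sentence per line."""
    result = subprocess.run(
        ["apertium", "sme-fin"],
        input="\n".join(sentences),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.splitlines()
```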
{
"text": "This system implements a shallow transfer-based translation engine consisting of modules for morphological analysis, disambiguation and generation, modules for lexical translation based on context rules, and a module for syntactic transformation operations. Table 1 shows the quality of the sme-fin translation models used for backtranslations in BLEU points (Papineni et al., 2002) . The NMT model performs much better with UiT test data than with the YLE test data, which shows that the NMT system is strongly adapted to the UiT data, while the RBMT system has similar performance with both test sets.",
"cite_spans": [
{
"start": 359,
"end": 382,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
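The paper reports BLEU (Papineni et al., 2002) but does not name the scoring tool; as one concrete possibility, the sacrebleu Python package computes a corpus-level score like this (hypothetical toy inputs):

```python
import sacrebleu

# System outputs and references, one sentence per entry.
hypotheses = ["t\u00e4m\u00e4 on testi .", "toinen lause ."]
references = [["t\u00e4m\u00e4 on koe .", "toinen lause ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```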
{
"text": "All the 462,803 sentences of the cleaned monolingual data are translated with the sme-fin NMT and RBMT models. As the quality of the source side of the backtranslations is not as important as the quality of the target side (Sennrich et al., 2016) , we keep an unfiltered version of both backtranslation data sets. To see the effect of filtering the augmented data set, we apply OpusFilter with a reduced set of filters (recall that the monolingual Northern S\u00e1mi data has already been processed): sentence length filter, length ratio filter, word length filter, HTML element filter, nonzero numeral filter and terminal punctuation filter. After filtering and an additional deduplication step, the NMT-produced backtranslations amount to 415,313 sentence pairs and the RBMT-Transformer RNN Training data UiT YLE UiT YLE Baseline 25,106 18.9 4.3 18.5 5.1 + NMT-all-bt 470,085 32.9 9.2 23.0 8.4 + RBMT-all-bt 487,862 37.0 14.4 26.4 11.0 + NMT-all-bt + RBMT-all-bt 932,790 38.8 10.9 26.3 9.6 + NMT-clean-bt 422,596 34.0 9.8 25.0 8.8 + RBMT-clean-bt 378,567 36.3 15.5 25.6 10.9 + NMT-clean-bt + RBMT-clean-bt 776,006 38.9 11.3 28.2 10.7 + NMT-clean-bt + RBMT-all-bt 885,301 40.1 10.8 29.9 9.9 Table 2 : Training data sizes (sentence pairs) and results (in BLEU points) for the fin-sme translation models with two different architectures (Transformer and RNN) using original parallel data (Baseline), augmented data sets with unfiltered and filtered backtranslations (all-bt and clean-bt, resp.) evaluated on the UiT test set and the YLE test set.",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 1187,
"end": 1194,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Backtranslations",
"sec_num": "4.1"
},
{
"text": "produced ones to 353,465 sentence pairs. After concatenation with the parallel data and removal of duplicates in this concatenated set, we are left with 422,596 and 378,567 sentence pairs respectively. Furthermore, another training set is created by merging both the NMT and RBMT backtranslations with the parallel data; this set contains 776,006 sentence pairs. The first column of Table 2 shows the training data sizes of the different configurations.",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Backtranslations",
"sec_num": "4.1"
},
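The merging step can be sketched as a concatenation of the parallel data with one or more synthetic sets, followed by dropping exact duplicate pairs; this is a simplified illustration of the procedure, not the exact pipeline used in the paper.

```python
def merge_and_deduplicate(parallel, *synthetic_sets):
    """Concatenate original and synthetic pairs, then remove exact
    duplicates while preserving the original order."""
    seen = set()
    merged = []
    for pair in parallel + [p for s in synthetic_sets for p in s]:
        if pair not in seen:
            seen.add(pair)
            merged.append(pair)
    return merged
```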
{
"text": "The upper part of Table 2 shows the BLEU scores of the translation models trained with the original parallel data set (baseline) and the unfiltered augmented data sets. Similarly to the reverse model, the baseline fin-sme models are well adapted to the UiT test set and do not perform as well with the YLE test set. Adding the NMT backtranslations to the training data gives a significant improvement with respect to BLEU scores: using Transformers on the UiT set, the score raises by 14 points (74% relative), and on the YLE set, the score goes up by 4.9 points (114%). The RBMT backtranslations give an even larger boost on the UiT set than the NMT translations (18.1 points, 96%) and especially on the YLE data (10.1 points, 235%). Using RNNs, the scores are lower overall, but they do show similar improvements with the same training sets as Transformers. The significant boost from RBMT backtranslations is quite remarkable considering that Apertium does not seem to perform very well on the reverse translation direction on UiT data. This result stresses once more that the effect of backtransla-tion is to a larger extent due to improved target language coverage than to the quality of the translations. Instead, the additional, less domain-specific knowledge encoded in the RBMT model seems to lead to the additional push even in the UiT domain and it certainly carries over to the out-of-domain data represented by the YLE news data.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
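The relative improvements quoted above are plain ratios over the baseline BLEU score; a two-line check (illustrative only) reproduces the reported percentages:

```python
def relative_gain(baseline: float, system: float) -> float:
    """Relative BLEU improvement over the baseline, in percent."""
    return 100.0 * (system - baseline) / baseline

print(round(relative_gain(18.9, 32.9)))  # 74  (+NMT-all-bt, Transformer, UiT)
print(round(relative_gain(4.3, 14.4)))   # 235 (+RBMT-all-bt, Transformer, YLE)
```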
{
"text": "The simple combination of both types of backtranslations only provides a modest additional boost on the UiT test set. The out-of-domain performance drops substantially compared to using RBMT-based backtranslations alone. Adding NMT-based translations seem to hurt the model in this regard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Next, we study the effect of filtering the backtranslations before training the augmented NMT models. Table 2 also shows the results of this approach. We can see that the models benefit from filtering the NMT backtranslations, especially on the UiT domain, whereas the RBMTbased augmentation model performance decreases on the UiT test set. The RBMT-based Transformer model gains an improvement on the YLE set, but the same score with the RNN model decreases slightly. The combination of both backtranslation augmentations leads to a boost in translation quality over the unfiltered backtranslation training set, which suggests that a careful data selection can be important when using data augmentation techniques. The performance on the YLE data is still lower than the RBMT-based data augmentation alone, which could indicate that the RBMT backtranslations are able to carry over out-of-domain information, but this result needs to be taken with a grain of salt as the test set is very small.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Finally, we also train a models that combine filtered NMT backtranslations with unfiltered RBMT backtranslations (last row in Table 2 ). These models reach the overall highest BLEU scores on the UiT test set, 40.1 with Transformer, but on the YLE test set the performance is lower than with other models, which is a bit surprising but may also depend on random variation and on the small size of the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this work, we confirm that the addition of backtranslations produced with multiple paradigms, including RBMT, improves the quality of NMT models. Additionally, the translation performance can be further improved by removing noisy sentence pairs from the NMT backtranslations. We show that these methods are beneficial in a real-world low-resource setting with the Finnish\u2192Northern S\u00e1mi translation pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we plan to extend our work in various ways including more careful data selection and filtering, the use of subword regularization, domain labeling, improved sampling strategies and further data augmentation techniques such as pivot-based translations and transfer learning using multilingual NMT models. Furthermore, we would like to optimize hyper-parameters such as vocabulary size, network architectures and training parameters to maximize the translation performance in low-resource scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://giellatekno.uit.no/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://yle.fi/uutiset/osasto/sapmi/ 3 https://github.com/Helsinki-NLP/ Sami-MT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research presented in this paper was supported by the European Language Grid project through its open call for pilot projects. The European Language Grid project has received funding from the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement \u2116 825627(ELG). This work was also supported by the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement \u2116 771113. The authors wish to acknowledge CSC -IT Center for Science, Finland, for computational resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Machine translation with North Saami as a pivot language",
"authors": [
{
"first": "Lene",
"middle": [],
"last": "Antonsen",
"suffix": ""
},
{
"first": "Ciprian",
"middle": [],
"last": "Gerstenberger",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Kappfjell",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"Nyst\u00f8"
],
"last": "Rahka",
"suffix": ""
},
{
"first": "Marja-Liisa",
"middle": [],
"last": "Olthuis",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "123--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lene Antonsen, Ciprian Gerstenberger, Maja Kappf- jell, Sandra Nyst\u00f8 Rahka, Marja-Liisa Olthuis, Trond Trosterud, and Francis Tyers. 2017. Machine translation with North Saami as a pivot language. In Proceedings of the 21st Nordic Conference on Com- putational Linguistics, pages 123-131.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "OpusFilter: A configurable parallel corpus filtering toolbox",
"authors": [
{
"first": "Mikko",
"middle": [],
"last": "Aulamo",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "150--156",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.20"
]
},
"num": null,
"urls": [],
"raw_text": "Mikko Aulamo, Sami Virpioja, and J\u00f6rg Tiedemann. 2020. OpusFilter: A configurable parallel corpus filtering toolbox. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 150-156, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using monolingual data in neural machine translation: a systematic study",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Burlot",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "144--155",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6315"
]
},
"num": null,
"urls": [],
"raw_text": "Franck Burlot and Fran\u00e7ois Yvon. 2018. Using mono- lingual data in neural machine translation: a system- atic study. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 144-155, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Leveraging backtranslation to improve machine translation for Gaelic languages",
"authors": [
{
"first": "Meghan",
"middle": [],
"last": "Dowling",
"suffix": ""
},
{
"first": "Teresa",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Celtic Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "58--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meghan Dowling, Teresa Lynn, and Andy Way. 2019. Leveraging backtranslation to improve machine translation for Gaelic languages. In Proceedings of the Celtic Language Technology Workshop, pages 58-62.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Apertium: a free/open-source platform for rulebased machine translation. Machine translation",
"authors": [
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
},
{
"first": "Mireia",
"middle": [],
"last": "Ginest\u00ed-Rosell",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Nordfalk",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "O'Regan",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Ortiz-Rojas",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Antonio"
],
"last": "P\u00e9rez-Ortiz",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "S\u00e1nchez-Mart\u00ednez",
"suffix": ""
},
{
"first": "Gema",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "25",
"issue": "",
"pages": "127--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel L Forcada, Mireia Ginest\u00ed-Rosell, Jacob Nord- falk, Jim O'Regan, Sergio Ortiz-Rojas, Juan An- tonio P\u00e9rez-Ortiz, Felipe S\u00e1nchez-Mart\u00ednez, Gema Ram\u00edrez-S\u00e1nchez, and Francis M Tyers. 2011. Apertium: a free/open-source platform for rule- based machine translation. Machine translation, 25(2):127-144.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finding the optimal vocabulary size for neural machine translation",
"authors": [
{
"first": "Thamme",
"middle": [],
"last": "Gowda",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3955--3964",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.352"
]
},
"num": null,
"urls": [],
"raw_text": "Thamme Gowda and Jonathan May. 2020. Finding the optimal vocabulary size for neural machine transla- tion. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 3955-3964, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transfer learning and subword sampling for asymmetric-resource one-to-many neural translation",
"authors": [
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2021,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stig-Arne Gr\u00f6nroos, Sami Virpioja, and Mikko Ku- rimo. 2021. Transfer learning and subword sam- pling for asymmetric-resource one-to-many neural translation. Machine Translation, pages 1-36.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Marian: Fast neural machine translation in C++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Alham",
"middle": [],
"last": "Fikri Aji",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116- 121, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentence- Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "2012. langid.py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "North-S\u00e1mi to Finnish rule-based machine translation system",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Pirinen",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Unhammer",
"suffix": ""
},
{
"first": "Tiina",
"middle": [],
"last": "Puolakainen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "115--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Pirinen, Francis M. Tyers, Trond Trosterud, Ryan Johnson, Kevin Unhammer, and Tiina Puo- lakainen. 2017. North-S\u00e1mi to Finnish rule-based machine translation system. In Proceedings of the 21st Nordic Conference on Computational Linguis- tics, pages 115-122, Gothenburg, Sweden. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining SMT and NMT back-translated data for efficient NMT",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Gideon Maillette De Buy",
"middle": [],
"last": "Wenniger",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2019,
"venue": "12th International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "922--931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Maja Popovi\u0107, Dimitar Shterionov, Gideon Maillette De Buy Wenniger, and Andy Way. 2019. Combining SMT and NMT back-translated data for efficient NMT. In 12th International Con- ference on Recent Advances in Natural Language Processing, RANLP 2019, pages 922-931. Incoma Ltd., Shoumen, Bulgaria.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Leveraging SNOMED CT terms and relations for machine translation of clinical texts from Basque to Spanish",
"authors": [
{
"first": "Xabier",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Olatz",
"middle": [],
"last": "Perez-De-Vi\u00f1aspre",
"suffix": ""
},
{
"first": "Maite",
"middle": [],
"last": "Oronoz",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Multilingualism at the Intersection of Knowledge Bases and Machine Translation",
"volume": "",
"issue": "",
"pages": "8--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xabier Soto, Olatz Perez-De-Vi\u00f1aspre, Maite Oronoz, and Gorka Labaka. 2019. Leveraging SNOMED CT terms and relations for machine translation of clini- cal texts from Basque to Spanish. In Proceedings of the Second Workshop on Multilingualism at the In- tersection of Knowledge Bases and Machine Trans- lation, pages 8-18, Dublin, Ireland. European Asso- ciation for Machine Translation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Selecting backtranslated data from multiple sources for improved neural machine translation",
"authors": [
{
"first": "Xabier",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3898--3908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xabier Soto, Dimitar Shterionov, Alberto Poncelas, and Andy Way. 2020. Selecting backtranslated data from multiple sources for improved neural machine translation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3898-3908. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating North S\u00e1mi to Norwegian assimilation RBMT",
"authors": [
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"Brubeck"
],
"last": "Unhammer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Third International Workshop on Free/Open-Source Rule-Based Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trond Trosterud and Kevin Brubeck Unhammer. 2012. Evaluating North S\u00e1mi to Norwegian assimilation RBMT. In Proceedings of the Third International Workshop on Free/Open-Source Rule-Based Ma- chine Translation (FreeRBMT 2012).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The University of Helsinki submission to the WMT19 parallel corpus filtering task",
"authors": [
{
"first": "Ra\u00fal",
"middle": [],
"last": "V\u00e1zquez",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "Sulubacak",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "294--300",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5441"
]
},
"num": null,
"urls": [],
"raw_text": "Ra\u00fal V\u00e1zquez, Umut Sulubacak, and J\u00f6rg Tiedemann. 2019. The University of Helsinki submission to the WMT19 parallel corpus filtering task. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 3: Shared Task Papers, Day 2), pages 294-300, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": "Reverse translation model (sme-fin) quality in BLEU points evaluated with the UiT test set and the YLE test set.",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"text": "Training data sizes (sentence pairs) and results (in BLEU points) for the fin-sme translation models with two different architectures (Transformer and RNN) using original parallel data (Baseline) and augmented data sets with unfiltered and filtered backtranslations (all-bt and clean-bt, respectively), evaluated on the UiT test set and the YLE test set.",
"content": "<table><tr><td>Training data</td><td>Size</td><td>Transformer UiT</td><td>Transformer YLE</td><td>RNN UiT</td><td>RNN YLE</td></tr><tr><td>Baseline</td><td>25,106</td><td>18.9</td><td>4.3</td><td>18.5</td><td>5.1</td></tr><tr><td>+ NMT-all-bt</td><td>470,085</td><td>32.9</td><td>9.2</td><td>23.0</td><td>8.4</td></tr><tr><td>+ RBMT-all-bt</td><td>487,862</td><td>37.0</td><td>14.4</td><td>26.4</td><td>11.0</td></tr><tr><td>+ NMT-all-bt + RBMT-all-bt</td><td>932,790</td><td>38.8</td><td>10.9</td><td>26.3</td><td>9.6</td></tr><tr><td>+ NMT-clean-bt</td><td>422,596</td><td>34.0</td><td>9.8</td><td>25.0</td><td>8.8</td></tr><tr><td>+ RBMT-clean-bt</td><td>378,567</td><td>36.3</td><td>15.5</td><td>25.6</td><td>10.9</td></tr><tr><td>+ NMT-clean-bt + RBMT-clean-bt</td><td>776,006</td><td>38.9</td><td>11.3</td><td>28.2</td><td>10.7</td></tr><tr><td>+ NMT-clean-bt + RBMT-all-bt</td><td>885,301</td><td>40.1</td><td>10.8</td><td>29.9</td><td>9.9</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}