{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:42.801681Z"
},
"title": "Neural Machine Translation for Extremely Low-Resource African Languages: A Case Study on Bambara",
"authors": [
{
"first": "Allahsera",
"middle": [
"Auguste"
],
"last": "Tapo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Bakary",
"middle": [],
"last": "Coulibaly",
"suffix": "",
"affiliation": {
"laboratory": "Centre National Collaboratif de l'Education en Robotique et en Intelligence Artificielle",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Diarra",
"suffix": "",
"affiliation": {
"laboratory": "Centre National Collaboratif de l'Education en Robotique et en Intelligence Artificielle",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Homan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Luger",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Nagashima",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leventhal",
"suffix": "",
"affiliation": {
"laboratory": "Centre National Collaboratif de l'Education en Robotique et en Intelligence Artificielle",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Low-resource languages present unique challenges to (neural) machine translation. We discuss the case of Bambara, a Mande language for which training data is scarce and requires significant amounts of pre-processing. More than the linguistic situation of Bambara itself, the socio-cultural context within which Bambara speakers live poses challenges for automated processing of this language. In this paper, we present the first parallel data set for machine translation of Bambara into and from English and French and the first benchmark results on machine translation to and from Bambara. We discuss challenges in working with low-resource languages and propose strategies to cope with data scarcity in low-resource machine translation (MT).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Low-resource languages present unique challenges to (neural) machine translation. We discuss the case of Bambara, a Mande language for which training data is scarce and requires significant amounts of pre-processing. More than the linguistic situation of Bambara itself, the socio-cultural context within which Bambara speakers live poses challenges for automated processing of this language. In this paper, we present the first parallel data set for machine translation of Bambara into and from English and French and the first benchmark results on machine translation to and from Bambara. We discuss challenges in working with low-resource languages and propose strategies to cope with data scarcity in low-resource machine translation (MT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Underresourced languages, from a natural language processing (NLP) perspective, are those lacking the resources (large volumes of parallel bitexts) needed to support state-of-the-art performance on NLP problems like machine translation, automated speech recognition, or named entity recognition. Yet the vast majority of the world's languages-representing billions of native speakers worldwide-are underresourced. And the lack of available training data in such languages usually reflects a broader paucity of electronic information resources accessible to their speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For instance, there are over six million Wikipedia articles in English but fewer than sixty thousand in Swahili and fewer than seven hundred in Bambara, the vehicular and most widely-spoken native language of Mali that is the subject of this paper. 1 Consequently, only 53% of the worlds population have access to \"encyclopedic knowledge\" in their primary language, according to a 2014 study by Facebook. 2 MT technologies could help bridge this gap, and there is enormous interest in such applications, ironically enough, from speakers of the languages on which MT has thus far had the least success. There is also great potential for humanitarian response applications (\u00d6ktem et al., 2020) .",
"cite_spans": [
{
"start": 249,
"end": 250,
"text": "1",
"ref_id": null
},
{
"start": 405,
"end": 406,
"text": "2",
"ref_id": null
},
{
"start": 671,
"end": 691,
"text": "(\u00d6ktem et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fueled by data, advances in hardware technology, and deep neural models, machine translation (NMT) has advanced rapidly over the last ten years. Researchers are beginning to investigate the effectiveness of (NMT) low-resource languages, as in recent WMT 2019 and WMT 2020 tasks (Barrault et al., 2019) , and in underresourced African languages. Most prominently, the Masakhane (\u2200 et al., 2020) community 3 , a grassroots initiative, has developed open-source NMT models for over 30 African languages on the base of the JW300 corpus (Agi\u0107 and Vuli\u0107, 2019) , a parallel corpus of religious texts.",
"cite_spans": [
{
"start": 278,
"end": 301,
"text": "(Barrault et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 532,
"end": 554,
"text": "(Agi\u0107 and Vuli\u0107, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since African languages cover a wide spectrum of linguistic phenomena and language families (Heine and Nurse, 2000) , individual development of translations and resources for selected languages or language families are vital to drive the overall progress. Just within the last year, a number of dedicated studies have significantly improved the state of African NMT: van Biljon et al. (2020) analyzed the depth of Transformers specifically for low-resource translation of South-African languages, based on prior studies by Martinus and Abbott (2019) on the Autshumato corpus (Groenewald and du Plooy, 2010) . Dossou and Emezue (2020) developed an MT model and compiled resources for translations between Fon and French, Akinfaderin (2020) modeled translations between English and Hausa, Orife (2020) for four languages of the Edoid language family, and Ahia and Ogueji (2020) investigated supervised vs. unsupervised NMT for Nigerian Pidgin.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Heine and Nurse, 2000)",
"ref_id": "BIBREF16"
},
{
"start": 362,
"end": 391,
"text": "NMT: van Biljon et al. (2020)",
"ref_id": null
},
{
"start": 523,
"end": 549,
"text": "Martinus and Abbott (2019)",
"ref_id": "BIBREF24"
},
{
"start": 594,
"end": 606,
"text": "Plooy, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the first parallel data set for machine translation of Bambara into and from English and French and the first benchmark results on machine translation to and from Bambara. We discuss challenges in working with low-resource languages and propose strategies to cope with data scarcity in low-resource MT. We discuss the sociocultural context of Bambara translation and its implications for model and data development. Finally, we analyze our best-performing neural models with a small-scale human evaluation study and give recommendations for future development. We find that the translation quality on our in-domain data set is acceptable, which gives hope for other languages that have previously fallen under the radar of MT development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We released our models and data upon publication 4 . Our evaluation setup may serve as benchmark for an extremely challenging translation task.",
"cite_spans": [
{
"start": 49,
"end": 50,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bambara is the first language of five million people and the second language of approximately ten million more. Most of its speakers are members of Bambara ethnic groups, who live throughout the African continent. Approximately 30-40 million people speak some language in the Mande family of languages, to which Bambara belongs (Lewis et al., 2014) .",
"cite_spans": [
{
"start": 328,
"end": 348,
"text": "(Lewis et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Bambara Language",
"sec_num": "2"
},
{
"text": "Bambara is a tonal language with a rich morphology. Over the years, several competing writing systems have developed, however, as an historically predominately oral language, a majority of Bambara speakers have never been taught to read or write the standard form of the language. Many are incapable of reading or writing the language at all. The standardization of words and the coinage of new ones are still works in progress; this poses challenges to automated text processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bambara Language",
"sec_num": "2"
},
{
"text": "During Muslim expansion and French colonization, Arabic and French mixed with local languages, resulting in a lingua franca, e.g., Urban Bambara. Most of the existing Bambara resources are cultural (folk stories or news/topical) or come from social media or text messages, and these are a written in a melange of French, Bambara and Arabic. Consequently, corpora based on common Bambara usage must account for the code switching found in these mixtures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bambara Language",
"sec_num": "2"
},
{
"text": "Most of these characteristics are shared with related languages, e.g., a subset of the Mande family of languages, where many languages are mutually intelligible. Thus, our hope is that our approach will be transferable to the other twelve official local languages of Mali, or to other African languages with a comparable socio-cultural and linguistic embedding, for example Wolof (non-Mande), which is comparable in terms of number of speakers, borrowings from Arabic and French influence, and oral traditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bambara Language",
"sec_num": "2"
},
{
"text": "The next section will provide more details on digital resources and describe the process of exploring and collecting data and choosing parallel corpora for the training of the NMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bambara Language",
"sec_num": "2"
},
{
"text": "We discovered that there has been no prior development of automatic translation of Bambara, despite a relatively large volume of research on the language (Culy, 1985; Aplonova and Tyers, 2017; Aplonova, 2018) . As a pilot study for assessing the potential for automatic translation of Bambara, crowdsourced a small set of written or oral translations from French to Bambara. Additional work was carried out exploring novel crowdsourcing strategies for data collection in Mali Luger et al. (2020) .",
"cite_spans": [
{
"start": 154,
"end": 166,
"text": "(Culy, 1985;",
"ref_id": "BIBREF11"
},
{
"start": 167,
"end": 192,
"text": "Aplonova and Tyers, 2017;",
"ref_id": "BIBREF6"
},
{
"start": 193,
"end": 208,
"text": "Aplonova, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 471,
"end": 495,
"text": "Mali Luger et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "The Corpus Bambara de R\u00e9f\u00e9rence (Vydrin et al., 2011) is the largest collection of electronic texts in Bambara. It includes scanned and textbased electronic formats. A number of parallel texts based on this data exist. For example, Vydrin (2018) analyzed Bambara's separable adjectives using this data.",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Vydrin et al., 2011)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "To survey the known available sources of parallel texts with Bambara, we consulted with a number of authorities on Bambara, including the Academie Malienne des Langues (AMALAN) in Mali and the Institut National des Langues et Civilisations Orientales (INALCO) in France, as well as a number of individual linguists and machine translation experts throughout the world. These two organisations play key roles in the definition and the promotion of a standard form of written Bambara through the collection and annotation of corpora, the publishing of dictionaries, and, in formulating recommendations for language policy in Mali.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "Our efforts uncovered several sources of parallel texts between Bambara and French and/or English that are listed in Table 5 in Appendix A. The table provides a rating of each of the identified resources, and the rationale why they were in-or excluded from our translation study. Ultimately, most of these resources proved either of very little or no practical use as sources of training data. Many did not actually contain aligned texts and some not even suitable monolingual text.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "A systematic problem was lack of adherence to the standardized Bambara orthography, due to it being a predominately oral language. This is also one of the reasons why our search for parallel data on the web generally did not yield many findscommonly being used in written form, Bambara is used even less on the web. For example, the Bambara Wikipedia contains currently 667 articles (compared to 6M for English), of which a large percentage are only stubs. Of the small number of full articles, most do not consistently employ the standard orthography of Bambara. A selection of those, however, was prepared to be used as monolingual data for MT data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "Most African NMT studies have been based on the JW300 corpus (Agi\u0107 and Vuli\u0107, 2019), e.g. most of the Masakhane benchmarks (\u2200 et al., 2020). JW300 only contains less than 200 sentences of Dyula, a closely related language to Bambara that it is mutually intelligible with. It might be useful for future cross-lingual studies Goyal et al., 2020) , but in order to avoid interference between languages, we focus on Bambara data exclusively in this first study.",
"cite_spans": [
{
"start": 324,
"end": 343,
"text": "Goyal et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "The most promising for our NMT approach was a dictionary data set from SIL Mali 5 with examples of sentences used to demonstrate word usage in Spanish, French, English, and Bambara; and a tri-lingual health guide titled \"Where there is no doctor. 6 \" Detailed corpus statistics are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Bambara Corpora",
"sec_num": "3.1"
},
{
"text": "The part of the dictionary that we are focusing on in this study, are the dictionary entries that consist of examples of Bambara expressions followed by their translations in French and in English. Most of these are single sentences, so there is sentence-tosentence alignment in the majority of cases. However, there remains a sufficient number of exceptions to render automated pairing impossible. Part of the problem lies in the unique linguistic and cultural elements of the bambaraphone environment; it is often not possible to meaningfully translate an expression in Bambara without giving an explanation of the context. The medical health guide is aligned by chapters, each of which is roughly aligned by paragraphs. But at the paragraph level there are too many exceptions for automated pairing to be feasible. Many of the bambaraphone-specific problems found in the dictionary dataset are present at the sentence level as well, particularly in explanations of concepts that can be succinctly expressed in English or French but for which Bambara lacks terminology and the bambaraphone environment lacks an equivalent physical or cultural context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment",
"sec_num": "3.2"
},
{
"text": "Both datasets therefore required manual alignment by individuals fluent in written Bambara and either French or English. The annotators need to be able to exercise expert-level judgment on linguistic and, occasionally, medical questions. Access to such human resources was a major factor limiting the quantity of data we were able to align. Because of this, and since the dictionary data was more closely aligned at the sentence level and did not require as much domain knowledge as the medical dataset, we have thus far only used the dictionary dataset in our machine learning experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment",
"sec_num": "3.2"
},
{
"text": "In order to facilitate this alignment, we imple- Figure 1 : The custom aligner we developed to manually align the dictionary data set. The controls are as follows: for each language, \">\" goes to the next item and \"<\" goes to the previous item; for all languages, \">>>\" goes to the next items and \"<<<\" goes to the previous items; \"Aligned B-F-E\" saves to memory the alignment of all 3 languages; \"Aligned B-F\" saves to memory the alignment of Bambara and French items; \"Aligned B-E\" saves to memory the alignment of Bambara and English items; \"Save\" saves to a new file; \"Continue Saving\" continues saving the file created.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Alignment",
"sec_num": "3.2"
},
{
"text": "mented an alignment interface, as shown in Figure 1 . It allows annotators to manually align sentences and to save those sentence pairs that another annotator considered properly aligned. In separate tasks, four annotators with a secondary school level understanding of Bambara performed alignment on French-Bambara and English-Bambara sentence pairs using the tool.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 52,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Alignment",
"sec_num": "3.2"
},
{
"text": "Before we could align these sentences, we needed to clean the retrieved dictionary entries. Below we give examples of cases we had to handle manually, going through the entire corpus line by line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
{
"text": "1. Only one language is represented: Discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
{
"text": "2. Ambiguous pronouns in Bambara: 7 Before: fr: \"Il/elle est n\u00e9 \u00e0 Bamako en 1938.\" bam: \"A bangera Bamak\u00aa san 1938.\" 3. Additional explanations in the other languages while those are absent in Bambara: Before: fr: \"Un doigt ne peut pas prendre un caillou (C'est important d'aider les uns les autres).\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
{
"text": "bam: \"Bolok\u00aani kelen t\u00a2 se ka b\u00a2l\u00a2 ta.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
{
"text": "7 One can imagine that translating these into French or English is difficult since there is no indicator of the correct choice. (Johnson, 2018) After: fr: \"Un doigt ne peut pas prendre un caillou.\" bam: \"Bolok\u00aani kelen t\u00a2 se ka b\u00a2l\u00a2 ta. \"",
"cite_spans": [
{
"start": 128,
"end": 143,
"text": "(Johnson, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
{
"text": "Before: fr: \"Proverbe: Une longue absence vaut mieux qu'un communiqu\u00e9 (d'un d\u00e9c\u00e8s).\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proverbs:",
"sec_num": "4."
},
{
"text": "bam: \"Fama ka sa ni k\u00aamunike ye.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proverbs:",
"sec_num": "4."
},
{
"text": "After: fr: \"Une longue absence vaut mieux qu'un communiqu\u00e9.\" bam: \"Fama ka sa ni k\u00aamunike ye.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proverbs:",
"sec_num": "4."
},
{
"text": "Data preparation, including alignment, proved to be about 60% of the overall time spent in person-hours on the experiment and required onthe-ground organisation and recruitment of skilled volunteers in Mali.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proverbs:",
"sec_num": "4."
},
{
"text": "The final data set contains 2,146 parallel sentences of Bambara-French and 2,158 parallel sentences of Bambara-English-a very small data set for NMT compared to the massive state-of-the-art models that are trained on billions of sentences (Arivazhagan et al., 2019). We split the data randomly into training, validation, and test sets of 75%, 12.5% and 12.5% respectively. The training set is composed of 1611 sentences, the validation set of 268 sentences, the test set of 267 sentences for Bambara-French. The training set is composed of 1620 sentences, the validation set of 270 sentences, the test set of 268 sentences for Bambara-French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "3.4"
},
{
"text": "In addition to the translations, we obtained a dataset of 488 monolingual Bambara sentences, sampled from all articles in the Bambara Wikipedia and covering a range of topics, but with preponderance of articles related to Mali. We used this monolingual dataset for experiments in data augmentation through back-translation, described in Section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Data",
"sec_num": "3.5"
},
{
"text": "Our NMT is a transformer (Vaswani et al., 2017) of appropriate size for a relatively smaller training dataset (van Biljon et al., 2020) . It has six layers with four attention heads for encoder and decoder, the transformer layer has a size of 1024, and the hidden layer size 256, the embeddings have 256 units. Embeddings and vocabularies are not shared across languages, but the softmax layer weights are tied to the output embedding weights. The model is implemented with the Joey NMT framework (Kreutzer et al., 2019) based on PyTorch (Paszke et al., 2019) .",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 110,
"end": 135,
"text": "(van Biljon et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 497,
"end": 520,
"text": "(Kreutzer et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 538,
"end": 559,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.1"
},
{
"text": "Training runs for 120 epochs in batches of 1024 tokens each. The ADAM optimizer (Kingma and Ba, 2014) is used with a constant learning rate of 0.0004 to update model weights. This setting was found to be best to tune for highest BLEU, compared to decaying or warmup-cooldown learning rate scheduling. For regularization, we experimented with dropout and label smoothing (Szegedy et al., 2016) . The best values were 0.1 for dropout and 0.2 for label smoothing across the board. For inference, beam search with width of 5 is used. The remaining hyperparameters are documented in the Joey NMT configuration files that we will provide with the code.",
"cite_spans": [
{
"start": 370,
"end": 392,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.1"
},
{
"text": "There is no standard tokenizer for Bambara. Therefore, we simply apply whitespace tokenization for word-based NMT models and compute BLEU with \"international\" tokenization. 8 Of the 542 distinct word types in the Bambara dev set, 166 are not contained in the vocabulary (seen during training), 174 of 590 (29.5%) for the test split. For the French portion it is 243 of 713 (34.1%) for the dev split and 274 of 756 (36.2%) for the test split. Because of this large proportion of unknown words, we segment the data for both language pairs into subword units (byte pair encodings, BPE) (500 or 1000, separately) using subword-nmt 9 (Sennrich et al., 2016) , and apply BPE dropout to the training sets of both languages (Provilkov et al., 2019) . We also experiment with character-level translation for French.",
"cite_spans": [
{
"start": 173,
"end": 174,
"text": "8",
"ref_id": "BIBREF2"
},
{
"start": 629,
"end": 652,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 716,
"end": 740,
"text": "(Provilkov et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "4.2"
},
{
"text": "Segmentation. We evaluate the models' translations against reference translations on our heldout sets with corpus BLEU (Papineni et al., 2002) and ChrF (Popovi\u0107, 2015) computed with Sacre-BLEU (Post, 2018) . 10 Tables 2 and 3 show the results for French and English translations respectively. We find that word-and character-level modeling performs sub par compared to subword-level segmentation, which is in line with previous work on low-resource MT. The word-based model cannot resolve out-of-vocabulary words, and the characterlevel model struggled with word composition. With BPE, smaller subwords seem to perform slightly better than larger ones. BPE dropout (Provilkov et al., 2019) , which was previously reported to be helpful for low-resource MT (Richburg et al., 2020) , did not increase the quality of the results. We observe a trend towards higher scores for translations into Bambara than in the reverse direction, but this cross-lingual comparison has to be taken with a grain of salt, since it is influenced by source and target complexity (Bugliarello et al., 2020) . Ambiguities on the Bambara side, such as the gender of pronouns illustrated in the example in Section 3.2, might make translation into English and French particularly difficult.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
},
{
"start": 152,
"end": 167,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF28"
},
{
"start": 193,
"end": 205,
"text": "(Post, 2018)",
"ref_id": "BIBREF29"
},
{
"start": 208,
"end": 210,
"text": "10",
"ref_id": null
},
{
"start": 665,
"end": 689,
"text": "(Provilkov et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 756,
"end": 779,
"text": "(Richburg et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 1056,
"end": 1082,
"text": "(Bugliarello et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 211,
"end": 225,
"text": "Tables 2 and 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "5.1"
},
{
"text": "Back-translation. In addition, we experimented with back-translated Wikipedia data: fine-tuning the original model on a combination of the backtranslated and original data, or training it from scratch on a combination of both, as e.g. in (Przystupa and Abdul-Mageed, 2019). However, this did not yield improvements over the original BPE model. 11 We speculate that the mismatch between domains hindered improvement. Indeed, we discovered that when we selected only short sentences from the Wikipedia data, we observed slightly better results, but they still did not outperform the baseline. This highlights the importance of general domain evaluation sets for future work, so that the effectiveness of leveraging additional out-ofdomain data can be measured.",
"cite_spans": [
{
"start": 344,
"end": 346,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "5.1"
},
{
"text": "Multilingual modeling. Another promising approach for the improvement of extremely lowresourced languages is multilingual modeling (Johnson et al., 2017) . In our case, we combined the tasks of translating from English and French into Bambara to strengthen the Bambara decoding abilities of the translation model, by concatenating the training data and learning joint BPE models. The training data is filtered so that it does not contain sentences from the evaluation sets of the respective other language. However, we do not find improvements over the bilingual model. 12 We would have expected improvements on translating",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 570,
"end": 572,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "5.1"
},
{
"text": "into Bambara because of larger variation on the source side. However, one reason for not seeing this improvement might be that the sentences are relatively short, and fluency is not as much of an issue as in larger scale studies from previous works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "5.1"
},
{
"text": "Two native Bambara speakers from Mali, coauthors of this paper, with both college-level French and English reading skills, evaluated a random sample of 21 of the test set translations from Bambara into French and 21 different test set translations from Bambara into English produced by the highest scoring models. Both native speakers received their basic education in Bambara and can read and write the language with fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "Evaluation Schema. For translations that had only a limited correspondence to the source text, the evaluators were given a number of questions specific to the quality of the translations. For translations that rose to the level of being qualified as conveying most of the sense of the source text we asked for two numerical ratings: First, whether \"most people\" would be able to understand what the meaning of the source sentence from its translation. This was intended to cover translations where the technical accuracy might be low, as a BLEU score might measure, but that substantially conveyed the meaning of the source text. Second, whether the translation was a \"good translation,\" meaning that exact word choices and structure hewed closely to the style and meaning of the source text. We chose to use relatively inexact terminology to describe the ranking criteria as we felt that, as non-professional translators, our evaluators would have difficulty using more technical guidance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "Quantitative results. The evaluation schema and the results obtained are presented in Table 4. The proportion of adequate words appears similar for English and French (rows 1 and 3). However, more English translations are judged adequate (rows 2 and 4). The overall percentage of translations that might be called useful (row 7), in that they convey at least the gist of the source sentence, is low at 36%, similar to the results obtained by automated methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "Examples. The following translation excited our admiration because the sentence is relatively complex and the translation is flawless:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "\"Faranna tilebinyanfan tun tilalen b\u025b Angil\u025bw ni Tubabuw c\u025b.\", which gets translated to \"L'Afrique de l'Ouest a \u00e9t\u00e9 divis\u00e9e entre les Anglais et les Fran\u00e7ais.\" (\"West Africa was divided between the English and the French.\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "We also observe that the MT system often translated verb tense correctly, perhaps helped by the fact that verb tenses are extremely simple in Bambara. Some of the sentences that did not qualify as adequate translations were nonetheless instructive and demonstrated specific pattern-recognition capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "For another example, with the Bambara source \"I b\u025b gojogojo wa?\", the model translates \"Have you ever eaten you is your wife?\". The word \"gojogojo\" is Bambara slang, used mainly by youth, that playfully employs reduplication, onomatopoeia, and inspiration from a foreign language. While the output is a nonsense sentence, the translation seems to carry some of the playfulness of the source. We also notice that it picks up the subject and uses interrogative word order. In Bambara, the word order is the same as in a declarative sentence (\"You\" - \"are\" - \"athletic\"); the sentence is made interrogative by the interrogative marker \"wa\". Looking at the following example, with the Bambara source \"Araba, \u0254kut\u0254burukalo tile 8 san 2003\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "and the translation \"Apr\u00e8s la mort de 25 ans.\", the failure to translate the time expression is surprising, because we would have expected the system to have been trained on this pattern: the source says Wednesday, the 8th day of the month of October, in the year 2003. The translation (\"After the death of 25 years\") is not close. Still, there are markers of time in both the source and the translated sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "Finally, for the Bambara source \"A nalolen don i n'a f\u0254 suruku.\", the translation says \"Il met les mains dans ses poches.\" (\"He puts his hands in his pockets.\"), even though the correct translation would be \"He is as crude as a hyena\" (the word for crude does not translate exactly into English). While the translation seems to have nothing to do with the source, it has the right subject, seems a reasonable guess for someone who did not know the key words, and gives a bit of the spirit of the source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.2"
},
{
"text": "Our study constitutes the first attempt at modeling automatic translation for Bambara, an extremely low-resource language. We identified challenges for future work, such as the development of alignment tools for small-scale datasets and the need for a general-domain evaluation set. Since Bambara is primarily a spoken language, and the lack of standardization in writing complicates the creation of clean reference sets and consistent evaluation, the current limitation to written input might furthermore be addressed by integrating spoken resources through speech recognition or speech translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "English French All",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": null
},
{
"text": "(1) Percentage of words in the translated sentence that are related to the subject of the source. 43% 51% 47% (2) Translated sentences containing words related to the subject of the source. 71% 52% 62%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": null
},
{
"text": "(3) Percentage of words that are plausible direct translations of words in the source. 37% 40% 39% (4) Translated sentences containing words that are plausible direct translations of words in the source. 71% 52% 62%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": null
},
{
"text": "(5) Translated sentences that say something intelligible whether or not related to the source. 71% 66% 69% (6) Translated sentences that convey most of the information in the source. 33% 38% 36% (7) Translated sentences where most people would be able to get, at a minimum, the gist of the source. 33% 38% 36%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": null
},
{
"text": "meta.wikimedia.org/wiki/List_of_Wikipedias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "fbnewsroomus.files.wordpress.com/2015/02/state-of-connectivity1.pdf 3 masakhane.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/israaar/mt_ bambara_data_models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.sil-mali.org/en/content/introducing-sil-mali 6 https://gafe.dokotoro.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Tokenizing on punctuation and special symbols, except when surrounding a digit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/rsennrich/subword-nmt 10 BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.4.9, chrF2+case.mixed+numchars.6+numrefs.1+space.False+version.1.4.9 11 Not included in tables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Initially, with cross-lingual overlap between the training data of one language and the evaluation data of the other, we found large improvements, which shows that the model can transfer from English to French sources very well (and vice versa).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Table 5 : Datasets for Bambara, including: a URL, linking to where we found the source; a description of what it is about (as we understood it); a rating from 1 to 5, where 1 means \"related to our topic with no usable data\" and 5 means \"related to our topic with usable data\"; pros, describing its pluses; and cons, describing its minuses.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "JW300: A widecoverage parallel corpus for low-resource languages",
"authors": [
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1310"
]
},
"num": null,
"urls": [],
"raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards supervised and unsupervised neural machine translation baselines for nigerian pidgin",
"authors": [
{
"first": "Orevaoghene",
"middle": [],
"last": "Ahia",
"suffix": ""
},
{
"first": "Kelechi",
"middle": [],
"last": "Ogueji",
"suffix": ""
}
],
"year": 2020,
"venue": "Workshop at the 8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orevaoghene Ahia and Kelechi Ogueji. 2020. Towards supervised and unsupervised neural machine trans- lation baselines for nigerian pidgin. \"AfricaNLP\" Workshop at the 8th International Conference on Learning Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rating of translation quality for sentences of (7) with \"understandable by most people\" as the criterion",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rating of translation quality for sentences of (7) with \"understandable by most people\" as the criterion.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluation results for translations of the test set from Bambara into English and French",
"authors": [],
"year": null,
"venue": "Table",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 4: Evaluation results for translations of the test set from Bambara into English and French. Ratings in the last two lines are on a scale from 1 (poor quality) to 5 (good quality).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards English-Hausa neural machine translation",
"authors": [],
"year": null,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "144--147",
"other_ids": {
"DOI": [
"10.18653/v1/2020.winlp-1.38"
]
},
"num": null,
"urls": [],
"raw_text": "Adewale Akinfaderin. 2020. HausaMT v1.0: Towards English-Hausa neural machine translation. In Pro- ceedings of the The Fourth Widening Natural Lan- guage Processing Workshop, pages 144-147, Seat- tle, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Development of a bambara treebank",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Aplonova",
"suffix": ""
}
],
"year": 2018,
"venue": "ARANEA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Aplonova. 2018. Development of a bambara treebank. ARANEA 2018, page 7.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards a dependency-annotated treebank for bambara",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Aplonova",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "138--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Aplonova and Francis Tyers. 2017. Towards a dependency-annotated treebank for bambara. In Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 138-145.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Massively multilingual neural machine translation in the wild: Findings and challenges",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Lepikhin",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"Xu"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and chal- lenges.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2019 conference on machine translation (wmt19)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "1--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R Costa-Juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, et al. 2019. Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On optimal transformer depth for lowresource language translation",
"authors": [
{
"first": "Arnu",
"middle": [],
"last": "Elan Van Biljon",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Pretorius",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kreutzer",
"suffix": ""
}
],
"year": 2020,
"venue": "AfricaNLP\" Workshop at the 8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elan van Biljon, Arnu Pretorius, and Julia Kreutzer. 2020. On optimal transformer depth for low- resource language translation. \"AfricaNLP\" Work- shop at the 8th International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "It's easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information",
"authors": [
{
"first": "Emanuele",
"middle": [],
"last": "Bugliarello",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1640--1649",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.149"
]
},
"num": null,
"urls": [],
"raw_text": "Emanuele Bugliarello, Sabrina J. Mielke, Anto- nios Anastasopoulos, Ryan Cotterell, and Naoaki Okazaki. 2020. It's easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 1640-1649, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The complexity of the vocabulary of bambara",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Culy",
"suffix": ""
}
],
"year": 1985,
"venue": "Linguistics and philosophy",
"volume": "8",
"issue": "3",
"pages": "345--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Culy. 1985. The complexity of the vo- cabulary of bambara. Linguistics and philosophy, 8(3):345-351.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ffr v1.0: Fon-french neural machine translation",
"authors": [
{
"first": "F",
"middle": [
"P"
],
"last": "Bonaventure",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"C"
],
"last": "Dossou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Emezue",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonaventure F. P. Dossou and Chris C. Emezue. 2020. Ffr v1.0: Fon-french neural machine translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient neural machine translation for lowresource languages via exploiting related languages",
"authors": [
{
"first": "Vikrant",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sourav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "162--168",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-srw.22"
]
},
"num": null,
"urls": [],
"raw_text": "Vikrant Goyal, Sourav Kumar, and Dipti Misra Sharma. 2020. Efficient neural machine translation for low- resource languages via exploiting related languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 162-168, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Processing parallel text corpora for three south african language pairs in the autshumato project",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Groenewald",
"suffix": ""
},
{
"first": "Liza",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Plooy",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Second Workshop on African Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hendrik Groenewald and Liza du Plooy. 2010. Pro- cessing parallel text corpora for three south african language pairs in the autshumato project. In Pro- ceedings of the Second Workshop on African Lan- guage Technology, Valletta, Malta.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "African languages: An introduction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Heine",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Nurse",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Heine and Derek Nurse. 2000. African lan- guages: An introduction. Cambridge University Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Providing gender-specific translations in google translate",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson. 2018. Providing gender-specific translations in google translate.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00065"
]
},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joey NMT: A minimalist NMT toolkit for novices",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "109--114",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3019"
]
},
"num": null,
"urls": [],
"raw_text": "Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A minimalist NMT toolkit for novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP): Sys- tem Demonstrations, pages 109-114, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Assessing human translations from french to bambara for machine learning: a pilot study",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Leventhal",
"suffix": ""
},
{
"first": "Allahsera",
"middle": [],
"last": "Tapo",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Homan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Leventhal, Allahsera Tapo, Sarah Luger, Mar- cos Zampieri, and Christopher M. Homan. 2020. As- sessing human translations from french to bambara for machine learning: a pilot study.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ethnologue: Languages of Africa and Europe",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gary",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"D"
],
"last": "Simons",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fennig",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Paul Lewis, Gary F Simons, and Charles D Fennig. 2014. Ethnologue: Languages of Africa and Europe. SIL international.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards a crowdsourcing platform for low resource languages -a semi-supervised approach",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Luger",
"suffix": ""
},
{
"first": "Auguste",
"middle": [],
"last": "Tapo",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Homan",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leventhal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Eighth Conference on Human Computations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Luger, Allashera Auguste Tapo, Christopher M. Homan, Marcos Zampieri, and Michael Leventhal. 2020. Towards a crowdsourcing platform for low resource languages -a semi-supervised approach. In Proceedings of the Eighth Conference on Human Computations. AAAI.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A focus on neural machine translation for african languages",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Martinus",
"suffix": ""
},
{
"first": "Jade",
"middle": [
"Z"
],
"last": "Abbott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Martinus and Jade Z. Abbott. 2019. A focus on neural machine translation for african languages. CoRR, abs/1906.05685.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards neural machine translation for edoid languages",
"authors": [
{
"first": "",
"middle": [],
"last": "Iroro Orife",
"suffix": ""
}
],
"year": 2020,
"venue": "Workshop at the 8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iroro Orife. 2020. Towards neural machine translation for edoid languages. \"AfricaNLP\" Workshop at the 8th International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8026-8037. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "chrf: character n-gram f-score for automatic mt evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrf: character n-gram f-score for automatic mt evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bpe-dropout: Simple and effective subword regularization",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Provilkov",
"suffix": ""
},
{
"first": "Dmitrii",
"middle": [],
"last": "Emelianenko",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. Bpe-dropout: Simple and effective subword regularization.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural machine translation of low-resource and similar languages with backtranslation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Przystupa",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "224--235",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5431"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Przystupa and Muhammad Abdul-Mageed. 2019. Neural machine translation of low-resource and similar languages with backtranslation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 224-235, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "An evaluation of subword segmentation strategies for neural machine translation of morphologically rich languages",
"authors": [
{
"first": "Aquia",
"middle": [],
"last": "Richburg",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "151--155",
"other_ids": {
"DOI": [
"10.18653/v1/2020.winlp-1.40"
]
},
"num": null,
"urls": [],
"raw_text": "Aquia Richburg, Ramy Eskander, Smaranda Mure- san, and Marine Carpuat. 2020. An evaluation of subword segmentation strategies for neural ma- chine translation of morphologically rich languages. In Proceedings of the The Fourth Widening Natu- ral Language Processing Workshop, pages 151-155, Seattle, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 2818-2826.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS), Long Beach, CA, USA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Where corpus methods hit their limits: the case of separable adjectives in bambara",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Vydrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Rhema",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.31862/2500-2953-2018-4-34-49"
]
},
"num": null,
"urls": [],
"raw_text": "Valentin Vydrin. 2018. Where corpus methods hit their limits: the case of separable adjectives in bambara. Rhema, (4).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Corpus bambara de r\u00e9f\u00e9rence",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Vydrin",
"suffix": ""
},
{
"first": "Kirill",
"middle": [],
"last": "Maslinsky",
"suffix": ""
},
{
"first": "Jean-Jacques",
"middle": [],
"last": "M\u00e9ric",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rovenchak",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin Vydrin, Kirill Maslinsky, Jean-Jacques M\u00e9ric, and A Rovenchak. 2011. Corpus bambara de r\u00e9f\u00e9rence.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Language discrimination and transfer learning for similar languages: experiments with feature combinations and adaptation",
"authors": [
{
"first": "Nianheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "DeMattos",
"suffix": ""
},
{
"first": "Kwok Him",
"middle": [],
"last": "So",
"suffix": ""
},
{
"first": "Pin-zhen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianheng Wu, Eric DeMattos, Kwok Him So, Pin-zhen Chen, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2019. Language discrim- ination and transfer learning for similar languages: experiments with feature combinations and adapta- tion. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 54-63.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Tigrinya neural machine translation with transfer learning for humanitarian response",
"authors": [
{
"first": "Alp",
"middle": [],
"last": "\u00d6ktem",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Plitt",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "Workshop at the 8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alp \u00d6ktem, Mirko Plitt, and Grace Tang. 2020. Tigrinya neural machine translation with transfer learning for humanitarian response. \"AfricaNLP\" Workshop at the 8th International Conference on Learning Representations.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A Appendices",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Appendices",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "est n\u00e9 \u00e0 Bamako en 1938.\" bam: \"A bangera Bamak\u00aa san 1938.\" and fr: \"Elle est n\u00e9 \u00e0 Bamako en 1938.\" bam: \"A bangera Bamak\u00aa san 1938.\""
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "The dictionary data set from SIL Mali and the medical health guide \"Where there is no doctor\" with examples in French, English, and Bambara."
},
"TABREF2": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">bam\u2192fr</td><td colspan=\"2\">fr\u2192bam</td></tr><tr><td>NMT Model</td><td>Configuration</td><td colspan=\"4\">BLEU ChrF BLEU ChrF</td></tr><tr><td>(1) Word</td><td colspan=\"2\">2464 (fr) / 1724 (bam) words 18.9</td><td>0.3</td><td>20.6</td><td>0.3</td></tr><tr><td>(2) Char</td><td>97 (fr) / 89 (bam) chars</td><td>11.7</td><td>0.2</td><td>10.6</td><td>0.2</td></tr><tr><td>(2) BPE</td><td>500 subword merges each</td><td>19.1</td><td>0.3</td><td>21.1</td><td>0.3</td></tr><tr><td>(3) BPE</td><td>1000 subword merges each</td><td>19.2</td><td>0.3</td><td>20.4</td><td>0.3</td></tr><tr><td>(4) (2) + BPE dropout</td><td>dropout=0.1</td><td>17.8</td><td>0.3</td><td>16.6</td><td>0.2</td></tr><tr><td>(5) (3) + BPE dropout</td><td>dropout=0.1</td><td>16.0</td><td>0.3</td><td>17.7</td><td>0.2</td></tr><tr><td>Test scores for the best models from above</td><td/><td>20.9</td><td>0.3</td><td>21.4</td><td>0.3</td></tr><tr><td colspan=\"4\">Table 2: bam\u2192en</td><td colspan=\"2\">en\u2192bam</td></tr><tr><td>NMT Model</td><td>Configuration</td><td colspan=\"4\">BLEU ChrF BLEU ChrF</td></tr><tr><td>(1) Word</td><td colspan=\"2\">2364 (en) / 1745 (bam) words 19.1</td><td>0.3</td><td>17.7</td><td>0.3</td></tr><tr><td>(2) Char</td><td>81 (en) / 86 (bam) chars</td><td>13.2</td><td>0.2</td><td>12.5</td><td>0.2</td></tr><tr><td>(3) BPE</td><td>500 subword merges each</td><td>18.1</td><td>0.3</td><td>21.3</td><td>0.3</td></tr><tr><td>(4) BPE</td><td>1000 subword merges each</td><td>19.1</td><td>0.3</td><td>20.0</td><td>0.3</td></tr><tr><td>(5) (3) + BPE dropout</td><td>dropout=0.1</td><td>17.0</td><td>0.2</td><td>19.3</td><td>0.3</td></tr><tr><td>(6) (4) + BPE dropout</td><td>dropout=0.1</td><td>15.2</td><td>0.2</td><td>18.2</td><td>0.2</td></tr><tr><td>Test scores for the best models from above</td><td/><td>14.8</td><td>0.3</td><td>20.9</td><td>0.3</td></tr></table>",
"num": null,
"type_str": "table",
"text": "NMT results for French and Bambara translations in corpus BLEU and ChrF on the dev set. The best model (in bold) is evaluated on the test set, results are reported in the last row."
},
"TABREF3": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "NMT results for English and Bambara translations in corpus BLEU and ChrF on the dev set. The best model (in bold) is evaluated on the test set, results are reported in the last row."
}
}
}
}