{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:11:53.656277Z" }, "title": "The Relevance of the Source Language in Transfer Learning for ASR", "authors": [ { "first": "Nils", "middle": [], "last": "Hjortnaes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": {} }, "email": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki Helsinki", "location": { "country": "Finland" } }, "email": "niko.partanen@helsinki.fi" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Eastern Finland Joensuu", "location": { "country": "Finland" } }, "email": "michael.riessler@uef.fi" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": { "settlement": "Bloomington", "region": "IN" } }, "email": "ftyers@iu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This study presents new experiments on Zyrian Komi speech recognition. We use Deep-Speech to train ASR models from a language documentation corpus that contains both contemporary and archival recordings. Earlier studies have shown that transfer learning from English and using a domain matching Komi language model both improve the CER and WER. In this study we experiment with transfer learning from a more relevant source language, Russian, and including Russian text in the language model construction. The motivation for this is that Russian and Komi are contemporary contact languages, and Russian is regularly present in the corpus. We found that despite the close contact of Russian and Komi, the size of the English speech corpus yielded greater performance when used as the source language. 
Additionally, we can report that an update of the DeepSpeech version alone improved the CER by 3.9% over the earlier studies, which is an important step in the development of Komi ASR.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This study presents new experiments on Zyrian Komi speech recognition. We use DeepSpeech to train ASR models from a language documentation corpus that contains both contemporary and archival recordings. Earlier studies have shown that transfer learning from English and using a domain-matching Komi language model both improve the CER and WER. In this study we experiment with transfer learning from a more relevant source language, Russian, and including Russian text in the language model construction. The motivation for this is that Russian and Komi are contemporary contact languages, and Russian is regularly present in the corpus. We found that despite the close contact of Russian and Komi, the size of the English speech corpus yielded greater performance when used as the source language. Additionally, we can report that an update of the DeepSpeech version alone improved the CER by 3.9% over the earlier studies, which is an important step in the development of Komi ASR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This study describes an Automatic Speech Recognition (ASR) experiment on Zyrian Komi, an endangered, low-resource Uralic language spoken in Russia. Komi has approximately 160,000 speakers, and its writing system, based on the Cyrillic script, is well established. Although Zyrian Komi is endangered, it is used widely in various media, and also in the education system in the Komi Republic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper continues our experiments on Zyrian Komi ASR using DeepSpeech, which started in Hjortnaes et al. (2020b) .
The scores reported there were very low, but in a later study we found that a language model built on more data increased the performance dramatically (Hjortnaes et al., 2020a) . We are not yet at a level that would be immediately useful for our goals, but we continue to explore different ways to improve our results. This study uses the same dataset, but attempts to better account for the multilingual processes found in the corpus.", "cite_spans": [ { "start": 91, "end": 115, "text": "Hjortnaes et al. (2020b)", "ref_id": "BIBREF20" }, { "start": 269, "end": 294, "text": "(Hjortnaes et al., 2020a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "ASR has progressed greatly for high resource languages, and several advances have been made to extend that progress to low resource languages. There are still, however, numerous challenges in situations where the available data is limited. For larger languages, the training data for speech recognition is often collected expressly for the purpose of ASR. Such approaches, for example the Common Voice platform, can also be extended to endangered languages (see e.g. Berkson et al., 2019) , so there is no clear-cut boundary between the resources available for different languages. While having dedicated, purpose-built data is good for the performance of ASR, it also leaves a large quantity of more challenging but usable data untapped. At the same time, materials not explicitly produced for this purpose may be more representative of the resources on which we intend to use ASR in later stages.", "cite_spans": [ { "start": 477, "end": 498, "text": "Berkson et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The data collected in language documentation work customarily originates from recorded and transcribed conversations and/or elicitations in the target language.
While such data lacks the desirable features of a purpose-built speech recognition dataset, such as a large variety of speakers and accents, and typically comprises far fewer recorded hours, for many endangered languages language documentation corpora are the only available source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, attention should also be paid to the differences in endangered language contexts globally. There is enormous variation in how thoroughly a given language has previously been documented and in whether earlier resources exist. This also bears on the new materials collected, as some languages need a full linguistic description, while others already have established orthographies and a variety of descriptions available. For example, in the case of Komi our transcription choice is the existing orthography, which is also used in other resources (Gerstenberger et al., 2016, 32) . Our spoken language corpus is connected with the entire NLP ecosystem of the Komi language, which includes a finite-state transducer (Rueter, 2000) , well-developed dictionaries both online 1 and in print (Rueter et al., 2020; Beznosikova et al., 2012; Alnajjar et al., 2019) , several treebanks (Partanen et al., 2018) and also written language corpora (Fedina, 2019) 2 . We use this technical stack to annotate our corpus directly in the ELAN files (Gerstenberger et al., 2017) , but also to create versions where identifiable information has been restricted (Partanen et al., 2020) . Our goal is thus not to describe the language from scratch, but to create a spoken language corpus that is not separate from the rest of the work and infrastructure built around this language. From this point of view we need an ASR system that produces the contemporary orthography, rather than a purely phonetic or phonemic transcription.
It can be expected that entirely undocumented languages and languages with a long tradition of documentation need very different ASR approaches, even though both fall under the umbrella of endangered language documentation.", "cite_spans": [ { "start": 535, "end": 567, "text": "(Gerstenberger et al., 2016, 32)", "ref_id": null }, { "start": 701, "end": 715, "text": "(Rueter, 2000)", "ref_id": "BIBREF30" }, { "start": 773, "end": 794, "text": "(Rueter et al., 2020;", "ref_id": "BIBREF29" }, { "start": 795, "end": 820, "text": "Beznosikova et al., 2012;", "ref_id": "BIBREF6" }, { "start": 821, "end": 843, "text": "Alnajjar et al., 2019)", "ref_id": "BIBREF2" }, { "start": 864, "end": 887, "text": "(Partanen et al., 2018)", "ref_id": "BIBREF26" }, { "start": 1021, "end": 1049, "text": "(Gerstenberger et al., 2017)", "ref_id": "BIBREF14" }, { "start": 1131, "end": 1154, "text": "(Partanen et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we expand on the use of transfer learning to improve the quality of a speech recognition system for dialectal Zyrian Komi. Our data consists of about 35 hours of transcribed speech that will be available as an independent dataset in the Language Bank of Finland (Blokland et al., forthcoming) . While this collection is under preparation, the original raw multimedia is available by request in The Language Archive in Nijmegen (Blokland et al., 2021) .
This is a relatively large dataset for a low resource language, but it is still nowhere near the size of high resource datasets such as Librispeech (Panayotov et al., 2015) , which has about 1000 hours of English.", "cite_spans": [ { "start": 280, "end": 310, "text": "(Blokland et al., forthcoming)", "ref_id": null }, { "start": 445, "end": 468, "text": "(Blokland et al., 2021)", "ref_id": null }, { "start": 604, "end": 628, "text": "(Panayotov et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the largest challenges in our dataset is that there is significant code switching between Komi and Russian. This is a feature shared with other similar corpora (compare e.g. Shi et al., 2021) . All speakers are bi- or multilingual and use several languages regularly, so there are large segments in Russian, although the main language is Komi. There are also very fragmentary occurrences of the Tundra Nenets, Kildin Saami and Northern Mansi languages, but these are so rare at the moment that we have not addressed them separately. In addition, none of the data is annotated for which language is being spoken, and we only have transcriptions in the contemporary Cyrillic orthographies of these languages, as explained above. We propose two possible methods to accommodate these properties of the data. First, we compare whether it is better to transfer from a high resource language, English, or from the contact language, Russian. Second, we analyze the impact of constructing language models from different combinations of Komi and Russian sources.
The goal is to make the language model more representative of the data and thereby improve performance.", "cite_spans": [ { "start": 181, "end": 198, "text": "Shi et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A majority of the work on speech recognition focuses on improving the performance of models for high resource languages. Small improvements may be made through advances such as improving the network and the information available to it, as in Han et al. (2020) or Li et al. (2019) , though as performance increases the gains of these new methods decrease. Another avenue is to try to make these systems more robust to noise through data augmentation (Braun et al., 2017; Park et al., 2019) . As with improving the networks, however, these improvements become more and more marginal as performance increases.", "cite_spans": [ { "start": 247, "end": 264, "text": "Han et al. (2020)", "ref_id": null }, { "start": 268, "end": 284, "text": "Li et al. (2019)", "ref_id": "BIBREF21" }, { "start": 454, "end": 474, "text": "(Braun et al., 2017;", "ref_id": "BIBREF9" }, { "start": 475, "end": 493, "text": "Park et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "As more models for ASR become available as open source (Hannun et al., 2014; Pratap et al., 2019) , it becomes easier to develop these tools for low resource languages and to create best-practice standards for doing so. This is the fundamental goal of Common Voice (Ardila et al., 2020) . Others also work on individual languages, such as Fantaye et al. (2020) and Dalmia et al. 
(2018) .", "cite_spans": [ { "start": 55, "end": 76, "text": "(Hannun et al., 2014;", "ref_id": "BIBREF17" }, { "start": 77, "end": 97, "text": "Pratap et al., 2019)", "ref_id": "BIBREF28" }, { "start": 265, "end": 286, "text": "(Ardila et al., 2020)", "ref_id": null }, { "start": 365, "end": 385, "text": "Dalmia et al. (2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "In the language documentation context we have seen a large number of experiments on endangered languages in the last few years, but often focusing on datasets with a single speaker. Under this constraint, a few hours of transcribed data have already been shown to yield relatively good accuracy, as shown by Adams et al. (2018) . Also Partanen et al. (2020) report very good results on the extinct Samoyedic language Kamas, where the model was likewise trained on one speaker, for whom, however, a relatively large corpus exists. Under many circumstances it is realistic and important to record individual speakers in numerous recording sessions, and such collections appear to be numerous in the archives containing past field recordings, so there is no doubt that single-speaker systems can also be useful, although not ideal.", "cite_spans": [ { "start": 311, "end": 330, "text": "Adams et al. (2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "Recently, Shi et al. (2021) also report very encouraging results on Yolox\u00f3chitl Mixtec and Puebla Nahuatl, especially as the corpus contains multiple speakers. Our corpus is of a size comparable to that used in their experiments (Shi et al., 2021) . Compared to their results, our Komi results, including the latest ones reported in this paper, are tens of percentage points worse than what could be expected from the size of our corpus.
This calls for wider experimentation with our dataset using different systems, which, we hope, will reveal more about how the particularities of individual corpora impact the results.", "cite_spans": [ { "start": 10, "end": 27, "text": "Shi et al. (2021)", "ref_id": "BIBREF31" }, { "start": 227, "end": 245, "text": "(Shi et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "Zahrer et al. (2020) also describe a language documentation project design where ASR tools are being integrated into actual workflows during the project. The end goal of our work is in line with this: we want Komi ASR to reach a level where it is useful for work on this language, and we see this happening through gradual steps in which the system used is improved through different experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "When it comes to the usability and accessibility of ASR systems, Adams et al. (2020) describe their work on a user-friendly interface for language workers to train and use ASR tools. Cox (2019) has created an ELAN plugin, and the same approach was recently extended for DeepSpeech by Partanen (2021) .", "cite_spans": [ { "start": 65, "end": 84, "text": "Adams et al. (2020)", "ref_id": "BIBREF1" }, { "start": 288, "end": 303, "text": "Partanen (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "The Russian speech corpus we use is from Mozilla's Common Voice 3 project and contains about 105 hours of speech data (Ardila et al., 2020) . The Komi data consists of about 35 hours of dialectal speech, and is described in Hjortnaes et al. (2020b) .", "cite_spans": [ { "start": 118, "end": 139, "text": "(Ardila et al., 2020)", "ref_id": null }, { "start": 224, "end": 248, "text": "Hjortnaes et al. 
(2020b)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "To prepare both our Komi and Russian data, we split it 8/1/1 into training, dev, and testing sets and removed any segments that were too long or too short as defined by DeepSpeech. The alphabet is based on the Komi data rather than on the text used to construct the language models, since it is the alphabet that determines the output of the network. We obtained our English model from DeepSpeech's publicly available release models 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preprocessing", "sec_num": "3.2" }, { "text": "We trained our models using Mozilla's open source DeepSpeech 5 (Hannun et al., 2014) version 0.8.2. We used this version because it was the latest release version at the time of these experiments. DeepSpeech is an end-to-end bidirectional LSTM neural network specifically designed for speech recognition. It consists of five hidden layers followed by a softmax layer, where the fourth hidden layer is the LSTM layer. The other hidden layers all use the ReLU activation function. The whole structure can be seen in Figure 1, which shows an older version of the architecture with a unidirectional LSTM. For this experiment we used a dropout of 10% and a learning rate of 0.0001 with batch sizes of 128 for the training, testing, and development sets. DeepSpeech automatically detects plateaus and reduces the learning rate by a factor of 10 when no further improvement is being made on the dev set.", "cite_spans": [ { "start": 63, "end": 84, "text": "(Hannun et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "DeepSpeech", "sec_num": "3.3" }, { "text": "We trained a Russian model using DeepSpeech from standard random initialization with the hyperparameters defined above.
DeepSpeech saves the best-performing model, which we then use for transfer learning later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DeepSpeech", "sec_num": "3.3" }, { "text": "DeepSpeech outputs its best guess at the transcription, but that guess is based entirely on the contents of the audio and does not account for spelling or punctuation. In order to address this, the output is put through a function which attempts to maximize the weighted probability of the model output and a probabilistic language model with two tuneable parameters. The first, \u03b1, determines how much the language model is allowed to edit the network output. The second, \u03b2, controls the insertion of spaces (Hannun et al., 2014) . Our language models were constructed using KenLM (Heafield, 2011) on the 500,000 most common words in the relevant text corpus. Figure 1 shows Mozilla's DeepSpeech architecture (Meyer, 2019) .", "cite_spans": [ { "start": 498, "end": 519, "text": "(Hannun et al., 2014)", "ref_id": "BIBREF17" }, { "start": 571, "end": 587, "text": "(Heafield, 2011)", "ref_id": "BIBREF18" }, { "start": 632, "end": 645, "text": "(Meyer, 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "3.4" }, { "text": "We constructed three language models using various quantities of the available Komi and Russian data. The first is exclusively Komi and is constructed using the Komi-Zyrian corpora 6 , which consist of a main corpus of 1.39 million words in the literary domain and a 1.37 million word corpus from social media (see Arkhangelskiy, 2019) . This serves as our baseline language model. The second includes all available text data from both the Komi corpora and the Russian Wikipedia dump from September 1st, 2020, which contains over 786 million tokens. We did not expect this largely Russian model to perform especially well, but include it anyway as an additional point of comparison.
The last language model was constructed by cutting the Russian Wikipedia dump down to the same number of tokens as the combined Komi corpora, such that the model is based on an equal amount of Komi and Russian data.", "cite_spans": [ { "start": 310, "end": 330, "text": "Arkhangelskiy, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "3.4" }, { "text": "We trained our models using the transfer learning feature built into DeepSpeech, which is based on Meyer (2019). This starts by training the model on a high resource language, re-initializing the last n layers, and switching to the target language. Both Meyer and Hjortnaes et al. (2020b) found that re-initializing the last two layers, the softmax and ReLU layers after the LSTM layer, yielded the best performance. The softmax function outputs a letter of the alphabet at each time step, and re-initializing is necessary to accommodate the target language having a different alphabet than the source language.", "cite_spans": [ { "start": 253, "end": 287, "text": "Meyer and Hjortnaes et al. (2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning", "sec_num": "3.5" }, { "text": "To train the Komi models, we used transfer learning from English to Komi with each of the three language models defined above, and likewise transfer learning from the Russian model we trained to Komi, again with each of the three language models. We obtained the English model from DeepSpeech's released models for version 0.8.2.
When using Russian as the source language, performance drops regardless of the language model used. The worst performance was achieved when using Russian as the source language together with the language model leveraging all available text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The biggest difference in performance between the models was between the English-sourced models and the Russian-sourced models. Though there is a significant amount of Russian data alongside the Komi due to code switching and borrowing, training from a Russian model did not yield any improvement. We interpret this as demonstrating that the amount of data in the source language is more important than the relevance of the source language to the dataset. Common Voice for English has over 1400 hours of validated data, as compared to the 105 hours of Russian data. It is unsurprising that the language model constructed from all available text caused a reduction in performance, as the focus of the dataset is on Komi. The 786 million tokens in the Russian Wikipedia corpus dwarfed the 2.76 million tokens available for Komi, and because the language model was constructed using only the 500,000 most common words in the text, it probably accounted for very little Komi. This language model combined with the English source language still achieves better performance than the Russian source language runs. We conclude from this that while both the source language and the language model are important, the source language is more important.
While a language model is capable of handling new words it has not encountered, it will presume them to be less likely regardless of their validity. In the case of Komi, new morphologically complex word forms are continuously encountered for the first time in new recordings, and there is no way that a relatively small corpus would cover them perfectly, not to mention the dialectal forms that are common. Still, the language model has proven to have an important role in our approach, and other systems could possibly benefit from using one in some form as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We can report continuous improvements over the earlier studies by Hjortnaes et al. (2020b) and Hjortnaes et al. (2020a) , and our CER improves by several percentage points over the earlier best score. This appears, however, to be due simply to a different DeepSpeech version, as otherwise the test setup was identical. Our results indicate that when using transfer learning to create ASR tools for minority languages, the amount of data in the source language is more important than similarity or contact. Having a larger quantity of training data in the source language allows the model to learn to interpret speech on a phonetic level. This improvement in phonetic understanding is more valuable and has a greater impact on the performance of the model than having a more relevant but lower resource source language after transferring to the target language. We do note, however, that while this is true for this particular case, it does not necessarily hold for every source/target language pairing.", "cite_spans": [ { "start": 66, "end": 90, "text": "Hjortnaes et al. (2020b)", "ref_id": "BIBREF20" }, { "start": 95, "end": 119, "text": "Hjortnaes et al. 
(2020a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Multilinguality is by no means the only challenge the dataset presents. For example, the corpus has a large amount of overlapping speech, which is very frequent in the interviews. Most of the recordings have more than two participants, and participants were not discouraged from using interruptions and small verbal cues, as these are essential for normal communication, and the goal was to collect natural speech. Additionally, the corpus has a large number of speakers, and many speakers are present in one recording only, so the untranscribed recordings are prone to contain speakers who are entirely unseen by the ASR model. Further work is needed to effectively leverage resources in closely related languages and contact languages, as the current choice of English in transfer learning is not motivated by anything other than the amount of available data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "https://dict.fu-lab.ru 2 http://komicorpora.ru", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://commonvoice.mozilla.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/mozilla/DeepSpeech/releases 5 https://github.com/mozilla/DeepSpeech/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
Niko Partanen and Michael Rie\u00dfler collaborate within the project Language Documentation meets Language Technology: The Next Step in the Description of Komi, funded by Kone Foundation, Finland.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating phonemic transcription of low-resource tonal languages for language documentation", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Hilaria", "middle": [], "last": "Cruz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" } ], "year": 2018, "venue": "Proceedings of", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Trevor Cohn, Graham Neubig, Hilaria Cruz, Steven Bird, and Alexis Michaud. 2018. Evaluating phonemic transcription of low-resource tonal languages for language documentation. 
In Proceedings of LREC 2018.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "User-friendly automatic transcription of low-resource languages", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Galliot", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lambourne", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Foley", "suffix": "" }, { "first": "Rahasya", "middle": [], "last": "Sanders-Dwyer", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Wiles", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" }, { "first": "Katya", "middle": [], "last": "Aplonova", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Jacques", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Hill", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Benjamin Galliot, Guillaume Wisniewski, Nicholas Lambourne, Ben Foley, Rahasya Sanders-Dwyer, Janet Wiles, Alexis Michaud, S\u00e9verine Guillaume, Laurent Besacier, Christopher Cox, Katya Aplonova, Guillaume Jacques, and Nathan Hill. 2020. 
User-friendly automatic transcription of low-resource languages: Plugging ESPnet into Elpis.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The open dictionary infrastructure for Uralic languages", "authors": [ { "first": "Khalid", "middle": [], "last": "Alnajjar", "suffix": "" }, { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" } ], "year": 2019, "venue": "II Me\u017edunarodna\u00e2 nau\u010dna\u00e2 konferenci\u00e2 \u00c8lektronna\u00e2 pismennost narodov Rossijskoj Federacii: opyt, problemy i perspektivy", "volume": "", "issue": "", "pages": "49--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khalid Alnajjar, Mika H\u00e4m\u00e4l\u00e4inen, Niko Partanen, Jack Rueter, et al. 2019. The open dictionary infrastructure for Uralic languages. In II Me\u017edunarodna\u00e2 nau\u010dna\u00e2 konferenci\u00e2 \u00c8lektronna\u00e2 pismennost narodov Rossijskoj Federacii: opyt, problemy i perspektivy, pages 49-51. Ba\u0161kirska\u00e2 \u00e8nciklopedi\u00e2.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "
Common Voice", "authors": [ { "first": "R", "middle": [], "last": "Ardila", "suffix": "" }, { "first": "M", "middle": [], "last": "Branson", "suffix": "" }, { "first": "K", "middle": [], "last": "Davis", "suffix": "" }, { "first": "M", "middle": [], "last": "Henretty", "suffix": "" }, { "first": "M", "middle": [], "last": "Kohler", "suffix": "" }, { "first": "J", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "R", "middle": [], "last": "Morais", "suffix": "" }, { "first": "L", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "F", "middle": [ "M" ], "last": "Tyers", "suffix": "" }, { "first": "G", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber. 2020. Common Voice. In Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Corpora of social media in minority Uralic languages", "authors": [ { "first": "Timofey", "middle": [], "last": "Arkhangelskiy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages", "volume": "", "issue": "", "pages": "125--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timofey Arkhangelskiy. 2019. Corpora of social media in minority Uralic languages. 
In Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages, pages 125-140.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Building a common voice corpus for laiholh (hakha chin)", "authors": [ { "first": "Kelly", "middle": [], "last": "Berkson", "suffix": "" }, { "first": "Samson", "middle": [], "last": "Lotven", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Peng Hlei Thang", "suffix": "" }, { "first": "Zai", "middle": [], "last": "Thawngza", "suffix": "" }, { "first": "", "middle": [], "last": "Sung", "suffix": "" }, { "first": "C", "middle": [], "last": "James", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Wamsley", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Van Bik", "suffix": "" }, { "first": "Donald", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "", "middle": [], "last": "Williamson", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Computational Methods for Endangered Languages", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelly Berkson, Samson Lotven, Peng Hlei Thang, Thomas Thawngza, Zai Sung, James C Wamsley, Francis Tyers, Kenneth Van Bik, Sandra K\u00fcbler, Donald Williamson, et al. 2019. Building a common voice corpus for laiholh (hakha chin). In Proceed- ings of the Workshop on Computational Methods for Endangered Languages, volume 2.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Slovar dialektov komi\u00e2zyka", "authors": [ { "first": "L", "middle": [ "M" ], "last": "Beznosikova", "suffix": "" }, { "first": "E", "middle": [ "A" ], "last": "Ajbabina", "suffix": "" }, { "first": "R", "middle": [ "I" ], "last": "Kosnyreva", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. M. 
Beznosikova, E. A. Ajbabina, and R. I. Kosnyreva. 2012. Slovar dialektov komi \u00e2zyka. Kola.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Spoken Komi Corpus. The Language Archive version", "authors": [ { "first": "Rogier", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "Vasily", "middle": [], "last": "Chuprov", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Fedina", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fedina", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Levchenko", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rogier Blokland, Vasily Chuprov, Maria Fedina, Marina Fedina, Dmitry Levchenko, Niko Partanen, and Michael Rie\u00dfler. 2021. Spoken Komi Corpus. The Language Archive version.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Spoken Komi Corpus. The Language Bank of Finland version", "authors": [ { "first": "Rogier", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "Vasily", "middle": [], "last": "Chuprov", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Fedina", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fedina", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Levchenko", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rogier Blokland, Vasily Chuprov, Maria Fedina, Marina Fedina, Dmitry Levchenko, Niko Partanen, and Michael Rie\u00dfler. forthcoming. Spoken Komi Corpus.
The Language Bank of Finland version.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A curriculum learning method for improved noise robustness in automatic speech recognition", "authors": [ { "first": "Stefan", "middle": [], "last": "Braun", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Neil", "suffix": "" }, { "first": "Shih-Chii", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "25th European Signal Processing Conference", "volume": "", "issue": "", "pages": "548--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Braun, Daniel Neil, and Shih-Chii Liu. 2017. A curriculum learning method for improved noise robustness in automatic speech recognition. In 2017 25th European Signal Processing Conference (EUSIPCO), pages 548-552. IEEE.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Persephone-ELAN: Automatic phoneme recognition for ELAN users", "authors": [ { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Cox. 2019. Persephone-ELAN: Automatic phoneme recognition for ELAN users. Version 0.1.2.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sequence-based multilingual low resource speech recognition", "authors": [ { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Ramon", "middle": [], "last": "Sanabria", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "4909--4913", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W Black. 2018.
Sequence-based multilingual low resource speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4909-4913. IEEE.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Investigation of automatic speech recognition systems via the multilingual deep neural network modeling methods for a very low-resource language", "authors": [ { "first": "Junqing", "middle": [], "last": "Tessfu Geteye Fantaye", "suffix": "" }, { "first": "Tulu Tilahun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "", "middle": [], "last": "Hailu", "suffix": "" } ], "year": 2020, "venue": "Chaha. Journal of Signal and Information Processing", "volume": "11", "issue": "1", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tessfu Geteye Fantaye, Junqing Yu, and Tulu Tilahun Hailu. 2020. Investigation of automatic speech recognition systems via the multilingual deep neural network modeling methods for a very low-resource language, Chaha. Journal of Signal and Information Processing, 11(1):1-21.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Korpus komi \u00e2zyka kak baza dl\u00e2 nau\u010dnyh issledovanij", "authors": [ { "first": "Marina", "middle": [], "last": "Serafimovna Fedina", "suffix": "" } ], "year": 2019, "venue": "II Me\u017edunarodna\u00e2 nau\u010dna\u00e2 konferenci\u00e2 \u00c8lektronna\u00e2 pismennost narodov Rossijskoj Federacii: opyt, problemy i perspektivy", "volume": "", "issue": "", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Serafimovna Fedina. 2019. Korpus komi \u00e2zyka kak baza dl\u00e2 nau\u010dnyh issledovanij. In II Me\u017edunarodna\u00e2 nau\u010dna\u00e2 konferenci\u00e2 \u00c8lektronna\u00e2 pismennost narodov Rossijskoj Federacii: opyt, problemy i perspektivy, pages 45-48.
Ba\u0161kirska\u00e2 \u00e8nciklopedi\u00e2.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Instant annotations in ELAN corpora of spoken and written Komi, an endangered language of the Barents Sea region", "authors": [ { "first": "Ciprian", "middle": [], "last": "Gerstenberger", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" } ], "year": 2017, "venue": "Workshop on the Use of Computational Methods in the Study of Endangered Languages (ComputEl-2)", "volume": "", "issue": "", "pages": "57--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Gerstenberger, Niko Partanen, and Michael Rie\u00dfler. 2017. Instant annotations in ELAN corpora of spoken and written Komi, an endangered language of the Barents Sea region. In Antti Arppe, Jeff Good, Mans Hulden, Jordan Lachler, Alexis Palmer, and Lane Schwartz, editors, Workshop on the Use of Computational Methods in the Study of Endangered Languages (ComputEl-2), pages 57-66. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Utilizing language technology in the documentation of endangered Uralic languages", "authors": [ { "first": "Ciprian", "middle": [], "last": "Gerstenberger", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Wilbur", "suffix": "" } ], "year": 2016, "venue": "Northern European Journal of Language Technology", "volume": "4", "issue": "", "pages": "29--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Gerstenberger, Niko Partanen, Michael Rie\u00dfler, and Joshua Wilbur. 2016. Utilizing language technology in the documentation of endangered Uralic languages.
Northern European Journal of Language Technology, 4:29-47.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ContextNet: Improving convolutional neural networks for automatic speech recognition with global context", "authors": [ { "first": "Wei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiahui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "James", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Anmol", "middle": [], "last": "Gulati", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.03191" ] }, "num": null, "urls": [], "raw_text": "Wei Han, Zhengdong Zhang, Yu Zhang, Jiahui Yu, Chung-Cheng Chiu, James Qin, Anmol Gulati, Ruoming Pang, and Yonghui Wu. 2020. ContextNet: Improving convolutional neural networks for automatic speech recognition with global context.
arXiv preprint arXiv:2005.03191.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep Speech", "authors": [ { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Erich", "middle": [], "last": "Elsen", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "Shubho", "middle": [], "last": "Sengupta", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Y. Ng. 2014. Deep Speech.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "KenLM: Faster and smaller language model queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "187--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM.
In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improving the language model for low-resource ASR with online text corpora", "authors": [ { "first": "Nils", "middle": [], "last": "Hjortnaes", "suffix": "" }, { "first": "Timofey", "middle": [], "last": "Arkhangelskiy", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st joint SLTU and CCURL workshop (SLTU-CCURL 2020)", "volume": "", "issue": "", "pages": "336--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Hjortnaes, Timofey Arkhangelskiy, Niko Partanen, Michael Rie\u00dfler, and Francis M. Tyers. 2020a. Improving the language model for low-resource ASR with online text corpora. In Proceedings of the 1st joint SLTU and CCURL workshop (SLTU-CCURL 2020), pages 336-341, Marseille. European Language Resources Association (ELRA).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Towards a speech recognizer for Komi, an endangered and low-resource Uralic language", "authors": [ { "first": "Nils", "middle": [], "last": "Hjortnaes", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "31--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Hjortnaes, Niko Partanen, Michael Rie\u00dfler, and Francis M. Tyers. 2020b. Towards a speech recognizer for Komi, an endangered and low-resource Uralic language.
In Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 31-37. Association for Computational Linguistics, Vienna.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving RNN transducer modeling for end-to-end speech recognition", "authors": [ { "first": "Jinyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Hu", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "Gong", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "114--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinyu Li, Rui Zhao, Hu Hu, and Yifan Gong. 2019. Improving RNN transducer modeling for end-to-end speech recognition. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 114-121. IEEE.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multi-task and transfer learning in low-resource speech recognition", "authors": [ { "first": "Josh", "middle": [], "last": "Meyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josh Meyer. 2019. Multi-task and transfer learning in low-resource speech recognition. Ph.D.
thesis, University of Arizona.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Librispeech: an ASR corpus based on public domain audio books", "authors": [ { "first": "Vassil", "middle": [], "last": "Panayotov", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "5206--5210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "SpecAugment: A simple data augmentation method for automatic speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "William", "middle": [], "last": "Park", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "D", "middle": [], "last": "Ekin", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Cubuk", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.08779" ] }, "num": null, "urls": [], "raw_text": "Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019.
SpecAugment: A simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The first Komi-Zyrian Universal Dependencies treebanks", "authors": [ { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Rogier", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "Kyungtae", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Poibeau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)", "volume": "", "issue": "", "pages": "126--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Partanen, Rogier Blokland, KyungTae Lim, Thierry Poibeau, and Michael Rie\u00dfler. 2018. The first Komi-Zyrian Universal Dependencies treebanks. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 126-132. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A pseudonymisation method for language documentation corpora", "authors": [ { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Rogier", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Partanen, Rogier Blokland, and Michael Rie\u00dfler. 2020. A pseudonymisation method for language documentation corpora. In Tommi A. Pirinen, Francis M. Tyers, and Michael Rie\u00dfler, editors, Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 1-8.
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Wav2letter++: A fast open-source speech recognition system", "authors": [ { "first": "Vineel", "middle": [], "last": "Pratap", "suffix": "" }, { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "Qiantong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Kahn", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Synnaeve", "suffix": "" } ], "year": 2019, "venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6460--6464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vineel Pratap, Awni Hannun, Qiantong Xu, Jeff Cai, Jacob Kahn, Gabriel Synnaeve, Vitaliy Liptchinsky, and Ronan Collobert. 2019. Wav2letter++: A fast open-source speech recognition system. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6460-6464. IEEE.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Komi-Zyrian to X lexica", "authors": [ { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Kokkonen", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fedina", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4309763" ] }, "num": null, "urls": [], "raw_text": "Jack Rueter, Paula Kokkonen, and Marina Fedina. 2020. Komi-Zyrian to X lexica. Version 0.5.1, December 7, 2020.
10.5281/zenodo.4309763.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Helsinkisa universitetyn kyv tu\u00e2lys I\u017ekaryn perymsa simpozium vylyn lydd\u00f6mtor", "authors": [ { "first": "Jack", "middle": [ "M" ], "last": "Rueter", "suffix": "" } ], "year": 2000, "venue": "Permistika 6 (Proceedings of Permistika 6 conference)", "volume": "", "issue": "", "pages": "154--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack M. Rueter. 2000. Helsinkisa universitetyn kyv tu\u00e2lys I\u017ekaryn perymsa simpozium vylyn lydd\u00f6mtor. In Permistika 6 (Proceedings of Permistika 6 conference), pages 154-158.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Leveraging end-to-end ASR for endangered language documentation: An empirical study on Yolox\u00f3chitl Mixtec", "authors": [ { "first": "Jiatong", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jonathan", "middle": [ "D" ], "last": "Amith", "suffix": "" }, { "first": "Rey", "middle": [], "last": "Castillo Garc\u00eda", "suffix": "" }, { "first": "Esteban", "middle": [ "Guadalupe" ], "last": "Sierra", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "2101--10877", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatong Shi, Jonathan D. Amith, Rey Castillo Garc\u00eda, Esteban Guadalupe Sierra, Kevin Duh, and Shinji Watanabe. 2021. Leveraging end-to-end ASR for endangered language documentation: An empirical study on Yolox\u00f3chitl Mixtec.
ArXiv:2101.10877.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards building an automatic transcription system for language documentation: Experiences from muyu", "authors": [ { "first": "Alexander", "middle": [], "last": "Zahrer", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Zgank", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Schuppler", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2893--2900", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Zahrer, Andrej Zgank, and Barbara Schup- pler. 2020. Towards building an automatic transcrip- tion system for language documentation: Experi- ences from muyu. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 2893-2900.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "num": null, "text": "http://komi-zyrian.web-corpora.net", "content": "
CorpusSize
Komi Speech35 hours
Russian Speech105 hours
Komi Literary1.39M tokens
Komi Social1.37M tokens
Komi Text Combined 2.76M tokens
Russian Wiki786M tokens
", "type_str": "table" }, "TABREF1": { "html": null, "num": null, "text": "Token counts for the speech corpora and text corpora used to create the language models.", "content": "", "type_str": "table" }, "TABREF3": { "html": null, "num": null, "text": "The best Character Error Rate (CER) and Word Error Rate (WER) for each combination of source language and language model (lower is better). Note the best WER and best CER may have come from different language model \u03b1 and \u03b2 parameters.", "content": "
", "type_str": "table" } } } }