{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:12:01.301454Z" }, "title": "User-Friendly Automatic Transcription of Low-Resource Languages: Plugging ESPnet into Elpis", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "", "affiliation": {}, "email": "oliver.adams@gmail.com" }, { "first": "Benjamin", "middle": [], "last": "Galliot", "suffix": "", "affiliation": { "laboratory": "Langues et Civilisations \u00e0 Tradition Orale (LACITO)", "institution": "CNRS-Sorbonne Nouvelle", "location": { "country": "France" } }, "email": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "", "affiliation": { "laboratory": "Laboratoire de Linguistique Formelle (LLF)", "institution": "CNRS", "location": { "country": "France" } }, "email": "guillaume.wisniewski@u-paris.fr" }, { "first": "Nicholas", "middle": [], "last": "Lambourne D E", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Queensland", "location": { "settlement": "Brisbane", "country": "Australia" } }, "email": "" }, { "first": "Ben", "middle": [], "last": "Foley D E", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Queensland", "location": { "settlement": "Brisbane", "country": "Australia" } }, "email": "" }, { "first": "Rahasya", "middle": [], "last": "Sanders-Dwyer D E", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Queensland", "location": { "settlement": "Brisbane", "country": "Australia" } }, "email": "" }, { "first": "Janet", "middle": [], "last": "Wiles D E", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Queensland", "location": { "settlement": "Brisbane", "country": "Australia" } }, "email": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "", "affiliation": { "laboratory": "Langues et Civilisations \u00e0 Tradition Orale (LACITO)", "institution": "CNRS-Sorbonne Nouvelle", "location": { "country": "France" } }, "email": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "", "affiliation": { "laboratory": "Langues et Civilisations \u00e0 Tradition Orale (LACITO)", "institution": "CNRS-Sorbonne Nouvelle", "location": { "country": "France" } }, "email": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "", "affiliation": { "laboratory": "Laboratoire d'Informatique de Grenoble (LIG)", "institution": "Universit\u00e9 Grenoble Alpes", "location": { "country": "France" } }, "email": "laurent.besacier@univ-grenoble-alpes.fr" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta", "location": { "country": "Canada" } }, "email": "cox.christopher@gmail.com" }, { "first": "Katya", "middle": [], "last": "Aplonova", "suffix": "", "affiliation": { "laboratory": "Langues et Civilisation d'Afrique (LLACAN)", "institution": "CNRS-INALCO", "location": { "settlement": "Langage", "country": "France" } }, "email": "" }, { "first": "Guillaume", "middle": [], "last": "Jacques", "suffix": "", "affiliation": { "laboratory": "Centre de Recherches Linguistiques sur l'Asie Orientale (CRLAO)", "institution": "CNRS-EHESS", "location": { "country": "France" } }, "email": "" }, { "first": "Nathan", "middle": [], "last": "Hill", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of London", "location": { "country": "United Kingdom" } }, "email": 
"" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper reports on progress integrating the speech recognition toolkit ESPnet into Elpis, a web front-end originally designed to provide access to the Kaldi automatic speech recognition toolkit. The goal of this work is to make end-to-end speech recognition models available to language workers via a user-friendly graphical interface. Encouraging results are reported on (i) development of an ESPnet recipe for use in Elpis, with preliminary results on data sets previously used for training acoustic models with the Persephone toolkit along with a new data set that had not previously been used in speech recognition, and (ii) incorporating ESPnet into Elpis along with UI enhancements and a CUDA-supported Dockerfile.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper reports on progress integrating the speech recognition toolkit ESPnet into Elpis, a web front-end originally designed to provide access to the Kaldi automatic speech recognition toolkit. The goal of this work is to make end-to-end speech recognition models available to language workers via a user-friendly graphical interface. Encouraging results are reported on (i) development of an ESPnet recipe for use in Elpis, with preliminary results on data sets previously used for training acoustic models with the Persephone toolkit along with a new data set that had not previously been used in speech recognition, and (ii) incorporating ESPnet into Elpis along with UI enhancements and a CUDA-supported Dockerfile.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transcription of speech is an important part of language documentation, and yet speech recognition technology has not been widely harnessed to aid linguists. Despite revolutionary progress in the performance of speech recognition systems in the past decade (Hinton et al., 2012; Hannun et al., 2014; Zeyer et al., 2018; Hadian et al., 2018; Ravanelli et al., 2019; Zhou et al., 2020) , including its application to low-resource languages (Besacier et al., 2014; Blokland et al., 2015; van Esch et al., 2019; Hjortnaes et al., 2020) , these advances are yet to play a common role in language documentation workflows. Speech recognition software often requires effective command line skills and a reasonably detailed understanding of the underlying modeling. People involved in language documentation, language description, and language revitalization projects (this includes, but is not limited to, linguists who carry out fieldwork) seldom have such knowledge. 
Thus, the tools are largely inaccessible by many people who would benefit from their use.", "cite_spans": [ { "start": 257, "end": 278, "text": "(Hinton et al., 2012;", "ref_id": "BIBREF25" }, { "start": 279, "end": 299, "text": "Hannun et al., 2014;", "ref_id": "BIBREF21" }, { "start": 300, "end": 319, "text": "Zeyer et al., 2018;", "ref_id": "BIBREF65" }, { "start": 320, "end": 340, "text": "Hadian et al., 2018;", "ref_id": "BIBREF20" }, { "start": 341, "end": 364, "text": "Ravanelli et al., 2019;", "ref_id": "BIBREF47" }, { "start": 365, "end": 383, "text": "Zhou et al., 2020)", "ref_id": "BIBREF66" }, { "start": 438, "end": 461, "text": "(Besacier et al., 2014;", "ref_id": "BIBREF4" }, { "start": 462, "end": 484, "text": "Blokland et al., 2015;", "ref_id": "BIBREF5" }, { "start": 485, "end": 507, "text": "van Esch et al., 2019;", "ref_id": "BIBREF14" }, { "start": 508, "end": 531, "text": "Hjortnaes et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Elpis 1 is a tool created to allow language workers with minimal computational experience to build their own speech recognition models and automatically transcribe audio (Foley et al., 2018 . Elpis uses the Kaldi 2 automatic speech recognition (ASR) toolkit (Povey et al., 2011) as its backend. Kaldi is a mature, widely used and well-supported speech recognition toolkit which supports a range of hidden Markov model based speech recognition models.", "cite_spans": [ { "start": 170, "end": 189, "text": "(Foley et al., 2018", "ref_id": "BIBREF15" }, { "start": 258, "end": 278, "text": "(Povey et al., 2011)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we report on the ongoing integration of ESPnet 3 into Elpis as an alternative to the current Kaldi system. We opted to integrate ESPnet as it is a widely used and actively developed tool with state-of-the-art end-to-end neural network models. By supporting ESPnet in Elpis, we aim to bring a wider range of advances in speech recognition to a broad group of users, and provide alternative model options that may better suit some data circumstances, such as an absence of a pronunciation lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the rest of this paper, we describe changes to the Elpis toolkit to support the new backend, and preliminary experiments applying our ESPnet recipe to several datasets from a language documentation context. Finally, we discuss plans going forward with this project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic phonetic/phonemic transcription in language documentation As a subset of speech recognition research, work has been done in applying speech recognition systems to the very low-resource phonemic data scenarios typical in the language documentation context. Encouraging results capitalizing on the advances in speech recognition technology for automatic phonemic transcription in a language documentation context were reported by . Their work used a neural network architecture with connectionist temporal classification (Graves et al., 2006) for phonemic (including tonal) transcription. A command line toolkit was released called Persephone. 
To assess the reproducibility of the results on other languages, experiments were extended beyond the Chatino, Na and Tsuut'ina data sets, to a sample of languages from the Pangloss Collection, an online archive of under-resourced languages (Michailovsky et al., 2014) . The results confirmed that end-to-end models for automatic phonemic transcription deliver promising performance, and also suggested that preprocessing tasks can to a large extent be automated, thereby increasing the attractiveness of the tool for language documentation workflows (Wisniewski et al., 2020) . Another effort in this space is Allosaurus (Li et al., 2020) , which leverages multilingual models for phonetic transcription and jointly models language independent phones and language-dependent phonemes. This stands as a 3 https://github.com/espnet/espnet promising step towards effective universal phonetic recognition, which would be of great value in the language documentation process.", "cite_spans": [ { "start": 529, "end": 550, "text": "(Graves et al., 2006)", "ref_id": "BIBREF18" }, { "start": 893, "end": 920, "text": "(Michailovsky et al., 2014)", "ref_id": "BIBREF36" }, { "start": 1203, "end": 1228, "text": "(Wisniewski et al., 2020)", "ref_id": "BIBREF61" }, { "start": 1274, "end": 1291, "text": "(Li et al., 2020)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since such research tools do not have user friendly interfaces, efforts have been put into making these tools accessible to wider audience of users. The authors of Allosaurus provide a web interface online. 4 To integrate Persephone into the language documentation workflow, a plugin, Persephone-ELAN, 5 was developed for ELAN, 6 a piece of software that is widely used for annotation in language documentation (Cox, 2019) .", "cite_spans": [ { "start": 207, "end": 208, "text": "4", "ref_id": null }, { "start": 411, "end": 422, "text": "(Cox, 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "User-friendly speech recognition interfaces", "sec_num": null }, { "text": "Meanwhile, Elpis is a toolkit that provides a user-friendly front-end to the Kaldi speech recognition system. The interface steps the user through the process of preparing language recordings using existing ELAN transcription files, training a model and applying the model to obtain a hypothesis orthographic transcription for untranscribed speech recordings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-friendly speech recognition interfaces", "sec_num": null }, { "text": "ESPnet is an end-to-end neural network-based speech recognition toolkit. Developed with Pytorch (Paszke et al., 2019) in a research context, the tool satisfies three desiderata for our purposes: (a) it is easy to modify training recipes, which consist of collections of scripts and configuration files that make it easy to perform training and decoding by calling a wrapper script. These recipes describe a wide range of the hyperparameters and architecture choices of the model; (b) it is actively developed, with frequent integration of the latest advances in end-to-end speech recognition; and (c) it supports Kaldi-style data formatting, which makes it a natural end-to-end counterpart to Kaldi backend that was already supported in Elpis. 
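Because ESPnet consumes Kaldi-style data directories, the data preparation Elpis already performs for its Kaldi backend carries over largely unchanged. As a rough illustration of what that formatting involves (the file names text, wav.scp, utt2spk and segments follow the usual Kaldi conventions; the utterance list and paths below are hypothetical), a training set boils down to a handful of plain-text index files:

```python
from pathlib import Path

# Hypothetical utterances extracted from a time-aligned ELAN tier:
# (utterance id, speaker id, audio file, start and end in seconds, transcription).
utterances = [
    ("spk1-story1-0001", "spk1", "recordings/story1.wav", 0.00, 4.20, "the first utterance"),
    ("spk1-story1-0002", "spk1", "recordings/story1.wav", 4.20, 7.85, "the second utterance"),
]

data_dir = Path("data/train")
data_dir.mkdir(parents=True, exist_ok=True)

recordings = {}  # recording id -> audio path (each wav listed once in wav.scp)
with open(data_dir / "text", "w", encoding="utf-8") as text_f, \
     open(data_dir / "utt2spk", "w", encoding="utf-8") as u2s_f, \
     open(data_dir / "segments", "w", encoding="utf-8") as seg_f:
    for utt_id, spk, wav, start, end, transcript in sorted(utterances):
        rec_id = Path(wav).stem
        recordings[rec_id] = wav
        text_f.write(f"{utt_id} {transcript}\n")                    # transcription per utterance
        u2s_f.write(f"{utt_id} {spk}\n")                            # speaker per utterance
        seg_f.write(f"{utt_id} {rec_id} {start:.2f} {end:.2f}\n")   # time alignment per utterance

with open(data_dir / "wav.scp", "w", encoding="utf-8") as wav_f:
    for rec_id, wav in sorted(recordings.items()):
        wav_f.write(f"{rec_id} {wav}\n")                            # audio path per recording
```

A recipe can then point its training and decoding stages at such a directory, which is what makes the Kaldi and ESPnet backends easy to drive from the same front-end.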
These points make it a more appealing candidate backend than Persephone, primarily due to ESPnet's larger developer base.", "cite_spans": [ { "start": 96, "end": 117, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Bringing ESPnet to Elpis", "sec_num": "3" }, { "text": "One goal of the integration is to create a default ESPnet recipe for Elpis to use that performs well across a variety of languages and with the small amount and type of data typically available in a language documentation context. To get a sense of how easily ESPnet could match previously attained performance, we applied it to the single-speaker Na and Chatino datasets as used in Adams et al. (2018) (see Table 1, which includes other details of the datasets used, including the amount of training data). We report character error rate (CER) rather than phoneme error rate (PER) because it is general, does not require a subsequent language-specific post-processing step, and also captures characters that a linguist might want transcribed that aren't strictly phonemic. Because of minor differences in the training sets, their preprocessing, and the metrics used, these numbers are not directly comparable with previous work; nevertheless, the performance was good enough to confirm that integrating ESPnet was preferable to Persephone. We do no language-specific preprocessing, though the Elpis interface allows the user to define a character set for which instances of those characters will be removed from the text. For the Na data and the Japhug data in \u00a74, the Pangloss XML format is converted to ELAN XML using an XSLT-based tool, Pangloss-Elpis (gitlab.com/lacito/pangloss-elpis). While we did not aggressively tune hyperparameters and architecture details, they do have a substantial impact on performance and computational requirements. Owing to the small datasets and limited computational resources of many of the machines that Elpis may run on, we used a relatively small neural network. In the future we aim to grow a representative suite of evaluation languages from a language documentation setting for further tuning to determine what hyperparameters and architecture best suit different scenarios. Though we aim for a recipe that does well across a range of possible language documentation data circumstances, the best architecture and hyperparameters will vary depending on the characteristics of the input dataset. Rather than have the user fiddle with such hyperparameters directly, which would undermine the user-friendliness of the tool, there is potential to automatically adjust the hyperparameters of the model on the basis of the data supplied to the model. For example, the parameters could be automatically set depending on the number of speakers in the ELAN file and the total amount of speech.", "cite_spans": [], "ref_spans": [ { "start": 430, "end": 437, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Development of an ESPnet recipe for Elpis", "sec_num": "3.1" }, { "text": "The architecture we used for these experiments is a hybrid CTC-attention model (Watanabe et al., 2017b) with a 3-layer BiLSTM encoder and a single-layer decoder.
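To make the training objective of such a hybrid model concrete, the following minimal PyTorch sketch interpolates a CTC loss computed on the encoder outputs with the cross-entropy loss of a small attention decoder run with teacher forcing. It follows the dimensions given here (3-layer BiLSTM, hidden size 320, equal weighting of the two objectives), but the tensor shapes, toy vocabulary and simple dot-product attention are our own illustration rather than the actual ESPnet implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, hidden = 30, 320                 # toy character inventory; hidden size from the recipe
batch, frames, feat_dim, out_len = 4, 200, 80, 12

# Encoder: 3-layer bidirectional LSTM over acoustic features, projected back to `hidden`.
encoder = nn.LSTM(feat_dim, hidden, num_layers=3, bidirectional=True, batch_first=True)
enc_proj = nn.Linear(2 * hidden, hidden)
ctc_head = nn.Linear(hidden, vocab_size)     # frame-level predictions for the CTC branch
ctc_loss_fn = nn.CTCLoss(blank=0, zero_infinity=True)

# Single-layer attention decoder (dot-product attention over encoder states).
embed = nn.Embedding(vocab_size, hidden)
dec_cell = nn.LSTMCell(2 * hidden, hidden)   # input = [embedded previous char ; attention context]
dec_out = nn.Linear(hidden, vocab_size)

feats = torch.randn(batch, frames, feat_dim)               # dummy acoustic features
targets = torch.randint(1, vocab_size, (batch, out_len))   # dummy character targets (0 = blank)

enc, _ = encoder(feats)
enc = torch.tanh(enc_proj(enc))                            # (batch, frames, hidden)

# --- CTC branch ---
log_probs = F.log_softmax(ctc_head(enc), dim=-1).transpose(0, 1)   # (frames, batch, vocab)
input_lens = torch.full((batch,), frames, dtype=torch.long)
target_lens = torch.full((batch,), out_len, dtype=torch.long)
loss_ctc = ctc_loss_fn(log_probs, targets, input_lens, target_lens)

# --- Attention branch (teacher forcing) ---
h = enc.new_zeros(batch, hidden)
c = enc.new_zeros(batch, hidden)
prev = torch.zeros(batch, dtype=torch.long)                # index 0 doubles as <sos> in this toy setup
att_losses = []
for t in range(out_len):
    scores = torch.bmm(enc, h.unsqueeze(-1)).squeeze(-1)   # (batch, frames) attention scores
    context = torch.bmm(F.softmax(scores, dim=-1).unsqueeze(1), enc).squeeze(1)
    h, c = dec_cell(torch.cat([embed(prev), context], dim=-1), (h, c))
    att_losses.append(F.cross_entropy(dec_out(h), targets[:, t]))
    prev = targets[:, t]                                   # teacher forcing
loss_att = torch.stack(att_losses).mean()

# Equal interpolation of the two objectives, as in the recipe described here.
ctc_weight = 0.5
loss = ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att
loss.backward()
```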
We use a hidden size of 320 and an equal weighting between the CTC and attention objectives.", "cite_spans": [ { "start": 79, "end": 102, "text": "(Watanabe et al., 2017b", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Development of an ESPnet recipe for Elpis", "sec_num": "3.1" }, { "text": "Beyond integration of ESPnet into Elpis, several other noteworthy enhancements have been made to Elpis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Elpis enhancements", "sec_num": "3.2" }, { "text": "Detailed training feedback Prior to the work reported in this paper, the progress of the training and transcribing stages was shown as a spinning icon with no other feedback. Due to the amount of time it takes to train even small speech recognition models, the lack of detailed feedback may cause a user to wonder what stage the training is at, or whether a fault has caused the system to fail. During training and transcription, the backend processes' logs are now output to the screen (see Figure 1). Although the information in these logs may be more detailed than what the intended audience of the tool needs to understand, it does serve to give any user feedback on how training is going, and reassure them that it is still running (or notify them if a process has failed). The logs can also provide useful contextual information when debugging an experiment in collaborations between language workers and software engineers.", "cite_spans": [], "ref_spans": [ { "start": 489, "end": 498, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Elpis enhancements", "sec_num": "3.2" }, { "text": "CUDA-supported Docker image The type of Kaldi model which Elpis trains (using a Gaussian mixture model as the acoustic model) was originally selected to be computationally efficient, and able to run on the type of computers commonly used by language researchers, which often don't have a GPU (graphics processing unit). With the addition of ESPnet, the benefit of using a GPU will be felt through vastly reduced training times for the neural network. To this end, Elpis has been adapted to include Compute Unified Device Architecture (CUDA) support (see https://github.com/persephone-tools/espnet/commit/1c529eab738cc8e68617aebbae520f7c9c919081), which is essential in order to leverage a GPU when training ESPnet on a machine that has one available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Elpis enhancements", "sec_num": "3.2" }, { "text": "The point of this work is to provide a tool that can be used by linguists in their limited-data scenarios. To this end, we aim to experiment with diverse datasets that reflect the breadth of language documentation contexts. Going forward, this will be useful in getting a sense of what sort of model performance users can expect given the characteristics of their dataset. In this section we report on further application of the model underpinning the Elpis-ESPnet integration to another data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application to a new data set: Japhug", "sec_num": "4" }, { "text": "Japhug is a Sino-Tibetan language with a rich system of consonant clusters, as well as flamboyant morphology. In Japhug, syllables can have initial clusters containing at most three consonants, and at most one coda (Jacques, 2019). Japhug does not have lexical tones. 
The language's phonological profile is thus very different from Na (about which see Michaud, 2017) and Chatino (Cruz, 2011; Cruz and Woodbury, 2014; Cavar et al., 2016) .", "cite_spans": [ { "start": 215, "end": 230, "text": "(Jacques, 2019)", "ref_id": null }, { "start": 353, "end": 367, "text": "Michaud, 2017)", "ref_id": "BIBREF37" }, { "start": 380, "end": 392, "text": "(Cruz, 2011;", "ref_id": "BIBREF9" }, { "start": 393, "end": 417, "text": "Cruz and Woodbury, 2014;", "ref_id": "BIBREF10" }, { "start": 418, "end": 437, "text": "Cavar et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Application to a new data set: Japhug", "sec_num": "4" }, { "text": "The data set comprises a total of about 30 hours of transcribed recordings of narratives, timealigned at the level of the sentence (Macaire, 2020) , which is a huge amount in a language documentation context. The recordings were made in the course of field trips from the first years of the century until now, in a quiet environment, and almost all of a single speaker. Our tests on various data sets so far suggest that these settings (one speaker -hence no speaker overlap -and clean audio) are those in which performance is most likely to be good when one happens to be training an acoustic model from scratch.", "cite_spans": [ { "start": 131, "end": 146, "text": "(Macaire, 2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Application to a new data set: Japhug", "sec_num": "4" }, { "text": "The full data set is openly accessible online from the Pangloss Collection, under a Creative Commons license, allowing visitors to browse the texts, and computer scientists to try their hand at the data set. 9 The data collector's generous approach to data sharing sets an impressive example, putting into practice some principles which gather increasing support, but which are not yet systematically translated into institutional and editorial policies (Garellek et al., 2020) .", "cite_spans": [ { "start": 208, "end": 209, "text": "9", "ref_id": null }, { "start": 454, "end": 477, "text": "(Garellek et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Application to a new data set: Japhug", "sec_num": "4" }, { "text": "The dataset can be downloaded by sending a request to the Cocoon data repository, which hosts the Pangloss Collection. A script, retriever.py, 10 retrieves resources with a certain language name. Data sets can then be created in various ways, such as sorting by speaker (tests suggest that singlespeaker models are a good way to start) and by genre, e.g. excluding materials such as songs, which are a very different kettle of fish from ordinary speech and complicate model training. Figure 2 shows how the phoneme error rate decreases as the amount of training data increases up to 170 minutes. Tests are currently being conducted to verify whether performance stagnates when the amount of data is increased beyond 170 minutes. As with the other experiments the recipe described in \u00a73.1 was used. For each amount of training data, the model was trained for 20 epochs for each of these training runs, with the smaller sets always as a subset of all larger sets. Figure 3 shows the training profile for a given training run using 170 minutes of data. 
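The data selection just described (keeping a single speaker, excluding genres such as songs, then training on nested subsets of increasing duration) can be sketched in a few lines. The record fields, the speaker and genre labels, and the 20-minute step below are hypothetical placeholders rather than the exact script used for the Japhug experiments:

```python
# Hypothetical metadata for each time-aligned utterance retrieved from the Pangloss Collection.
utterances = [
    {"id": "crdo-JYA_0001", "speaker": "spk_main", "genre": "narrative", "minutes": 0.07},
    {"id": "crdo-JYA_0002", "speaker": "spk_main", "genre": "song", "minutes": 0.05},
    # ... many more records ...
]

# Keep a single speaker and exclude genres (e.g. songs) that complicate model training.
usable = [u for u in utterances
          if u["speaker"] == "spk_main" and u["genre"] != "song"]

def nested_subsets(utts, step_minutes=20, max_minutes=170):
    """Yield training sets of growing size; each smaller set is a subset of all larger sets."""
    subsets, current, total = [], [], 0.0
    budget = step_minutes
    for u in utts:
        current.append(u)
        total += u["minutes"]
        while total >= budget and budget <= max_minutes:
            subsets.append(list(current))   # snapshot: nested within every later subset
            budget += step_minutes
    return subsets

for subset in nested_subsets(usable):
    minutes = sum(u["minutes"] for u in subset)
    print(f"{len(subset)} utterances, ~{minutes:.0f} min of speech")
```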
", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 2", "ref_id": null }, { "start": 962, "end": 970, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Application to a new data set: Japhug", "sec_num": "4" }, { "text": "Devoting a section to reflections about adoption of automatic speech recognition tools in language documentation may seem superfluous here. The audience of a conference on the use of computational methods in the study of endangered languages is highly knowledgeable about the difficulties and the rewards of interdisciplinary projects, as a matter of course. But it seemed useful to include a few general thoughts on this topic nonetheless, for the attention of the broader readership which we hope will probe into the Proceedings of the ComputEL-4 conference: colleagues who may consider joining international efforts for wider adoption of natural language processing tools in language documentation workflows. We briefly address a few types of doubts and misgivings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges concerning adoption of automatic speech recognition tools in language documentation", "sec_num": "5" }, { "text": "A first concern is that automatic speech recognition software is simply too complex for language workers. But it should be recalled that new technologies that seem inaccessible to language workers can be game-changers in linguistics. For instance, the L A T E X software is the typesetting backend used by the journal Glossa (Rooryck, 2016) and by the publishing house Language Science Press (Nordhoff, 2018) , which publish research in linguistics, offering high-quality open-access venues with no author fees or reader fees. Thus, L A T E X, a piece of software which is notorious for its complexity, is used on a large scale in linguistics publishing: Glossa publishes more than 100 articles a year, and Language Science Press about 30 books a year. Key to this success is an organizational setup whereby linguists receive not only a set of stylesheets and instructions, but also hands-on support from a L A T E X expert all along the typesetting process. Undeniably complex software is only accessible to people with no prior knowledge of it if support is available. Automatic speech recognition software should be equally accessible for language workers, given the right organization and setup. Accordingly, special emphasis is placed on user design in the Elpis project. This aspect of the work falls outside of the scope of the present paper, but we wanted to reassure potential users that it is clear to Elpis developers that the goal is to make the technology available to people who do not use the command line. If users can operate software such as ELAN then they will be more than equipped for the skills of uploading ELAN files to Elpis and clicking the Train button.", "cite_spans": [ { "start": 325, "end": 340, "text": "(Rooryck, 2016)", "ref_id": null }, { "start": 392, "end": 408, "text": "(Nordhoff, 2018)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Is automatic speech recognition software too complex for language workers?", "sec_num": "5.1" }, { "text": "A second concern among language workers is whether the technology can deliver on its promise, or whether transcription acceleration projects are a case of \"digital innovation fetishism\" (Ampuja, 2020) . 
Some language workers have reported a feeling that integration of automatic transcription into the language documentation workflow (as described in ) feels out of reach for them. There is no denying that natural language processing tools such as ESPnet and Kaldi are very complex, and that currently, the help of specialists is still needed to make use of this technology in language documentation. However, progress is clearly being made, and a motivated interdisciplinary community is growing at the intersection of language documentation and computer science, comprising linguists who are interested in investing time to learn about natural language processing and computer scientists who want to achieve \"great things with small languages\" (Thieberger and Nordlinger, 2006) . It seems well worth investing in computational methods to assist in the urgent task of documenting the world's languages.", "cite_spans": [ { "start": 186, "end": 200, "text": "(Ampuja, 2020)", "ref_id": "BIBREF2" }, { "start": 947, "end": 980, "text": "(Thieberger and Nordlinger, 2006)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Will the technology deliver on its promise?", "sec_num": "5.2" }, { "text": "5.3 Keeping up with the state of the art vs. stabilizing the tool", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Will the technology deliver on its promise?", "sec_num": "5.2" }, { "text": "Finally, a concern among linguists is that the state of the art in computer science is evolving so rapidly that the tool cannot be stabilized, and hence cannot be proposed to language workers for enduring integration into the language documentation workflow. In cases where significant, high-frequency updates are required to keep up with changes in speech recognition software, the investment could be too much for the relatively small communities of programmers involved in transcription acceleration projects. Our optimistic answer is that state-of-the-art code, or code close to the state of the art, need not be difficult to integrate, use or maintain. For example, the developers of Huggingface's Transformers 11 do an impressive job of wrapping the latest and greatest in natural language processing into an easy-to-use interface (Wolf et al., 2019) . They have shown an ability to integrate new models quickly after their initial publication. Usability and stability of the interface is dictated by the quality of the code that is written by the authors of the backend library. If this is done well then the state of the art can be integrated with minimal coding effort by users of the library. For this reason, we are not so concerned about the shifting sands of the underlying building blocks, but the choice of quality backend library does count here. While it is true that there will have to be some modest effort to keep up to date with ESPnet (as would be the case using any other tool), in using ESPnet we are optimistic that the models supported by Elpis can remain up to date with the state of the art without too much hassle.", "cite_spans": [ { "start": 837, "end": 856, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Will the technology deliver on its promise?", "sec_num": "5.2" }, { "text": "The broader context to the work reported here is a rapidly evolving field in which various initiatives aim to package natural language processing toolkits in intuitive interfaces so as to allow a wider audience to leverage the power of these toolkits. 
Directions for new developments in Elpis include (i) refining the ESPnet recipe, (ii) refining the user interface through user design processes, (iii) preparing pre-trained models that can be adapted to a small amount of data in a target language, and (iv) providing Elpis as a web service.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further improvements", "sec_num": "6" }, { "text": "Refinement of the ESPnet recipe that is used in the Elpis pipeline, such that it works as well as possible given the type of data found in language documentation contexts, is a top priority. This work focuses on achieving lower error rates across data sets, starting with refining hyperparameters for model training and extends to other project objectives including providing pre-trained models (see \u00a76.3). This work is of a more experimental nature and can be done largely independently of the Elpis front-end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Refining the ESPnet recipe", "sec_num": "6.1" }, { "text": "In parallel with the technical integration of ESPnet with Elpis, a user-design process has been investigating how users expect to use these new features. In a series of sessions, linguists and language workers discussed their diverse needs with a designer. The feedback from this process informed the building of a prototype interface based on the latest version of Elpis at the time. The test interface was then used in individual testing sessions to discover points of confusion and uncertainty in the interface. Results of the design process will guide an update to the interface and further work on writing supporting documentation and user guides. The details of this process are beyond the scope of this paper and will be reported separately in future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Refining the interface", "sec_num": "6.2" }, { "text": "Adapting a trained model to a new language has a long history in speech recognition, having been used both for Hidden Markov Model based systems (Schultz and Waibel, 2001; Le and Besacier, 2005; Stolcke et al., 2006; T\u00f3th et al., 2008; Plahl et al., 2011; Thomas et al., 2012; Imseng et al., 2014; Do et al., 2014; Heigold et al., 2013; Scharenborg et al., 2017) and end-to-end neural systems (Toshniwal et al., 2017; Chiu et al., 2018; M\u00fcller et al., 2017; Dalmia et al., 2018; Watanabe et al., 2017a; Inaguma et al., 2018; Yi et al., 2018; Adams et al., 2019) . In scenarios where data in the target domain or language is limited, leveraging models trained on a number of speakers in different languages often can result in a better performance. 
The model can learn to cope with acoustic and phonetic characteristics that are common between languages, such as building robustness to channel variability due to different recording conditions, as well as learning common features of phones and sequences of phones between languages.", "cite_spans": [ { "start": 145, "end": 171, "text": "(Schultz and Waibel, 2001;", "ref_id": "BIBREF51" }, { "start": 172, "end": 194, "text": "Le and Besacier, 2005;", "ref_id": "BIBREF32" }, { "start": 195, "end": 216, "text": "Stolcke et al., 2006;", "ref_id": "BIBREF52" }, { "start": 217, "end": 235, "text": "T\u00f3th et al., 2008;", "ref_id": "BIBREF56" }, { "start": 236, "end": 255, "text": "Plahl et al., 2011;", "ref_id": "BIBREF45" }, { "start": 256, "end": 276, "text": "Thomas et al., 2012;", "ref_id": "BIBREF54" }, { "start": 277, "end": 297, "text": "Imseng et al., 2014;", "ref_id": "BIBREF28" }, { "start": 298, "end": 314, "text": "Do et al., 2014;", "ref_id": "BIBREF13" }, { "start": 315, "end": 336, "text": "Heigold et al., 2013;", "ref_id": null }, { "start": 337, "end": 362, "text": "Scharenborg et al., 2017)", "ref_id": "BIBREF50" }, { "start": 393, "end": 417, "text": "(Toshniwal et al., 2017;", "ref_id": "BIBREF55" }, { "start": 418, "end": 436, "text": "Chiu et al., 2018;", "ref_id": "BIBREF7" }, { "start": 437, "end": 457, "text": "M\u00fcller et al., 2017;", "ref_id": "BIBREF40" }, { "start": 458, "end": 478, "text": "Dalmia et al., 2018;", "ref_id": "BIBREF11" }, { "start": 479, "end": 502, "text": "Watanabe et al., 2017a;", "ref_id": "BIBREF58" }, { "start": 503, "end": 524, "text": "Inaguma et al., 2018;", "ref_id": "BIBREF29" }, { "start": 525, "end": 541, "text": "Yi et al., 2018;", "ref_id": "BIBREF63" }, { "start": 542, "end": 561, "text": "Adams et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained models and transfer learning", "sec_num": "6.3" }, { "text": "In recent years pre-training of models on large amounts of unannotated data has led to breakthrough results in text-based natural language processing, initially gaining widespread popularity with the context-independent embeddings of word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , before the recent contextual word embedding revolution (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2020) that has harnessed the transformer architecture (Vaswani et al., 2017) . It is now the case that the best approaches in natural language processing are typically characterized by pre-training of a model on a large amount of unannotated data using the cloze task (a.k.a masked language model training) before fine-tuning to a target task. Models pre-trained in this way best make use of available data since the amount of unannotated data far outweighs annotated data and such pre-training is advantageous to downstream learning, whether a small or large amount of data is available in the target task (Gururangan et al., 2020) . 
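In practice, adapting a pre-trained acoustic model to a new language amounts to reloading its weights, replacing the output layer so that it matches the target language's symbol inventory, and optionally freezing the lower layers before continuing training on the small target-language set. The sketch below illustrates these steps with generic PyTorch calls; the model class, checkpoint name and layer choices are hypothetical and not part of Elpis or ESPnet:

```python
import torch
import torch.nn as nn

# A generic acoustic model: a BiLSTM encoder plus an output projection over a symbol inventory.
class AcousticModel(nn.Module):
    def __init__(self, feat_dim=80, hidden=320, vocab_size=60):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        self.output = nn.Linear(2 * hidden, vocab_size)

    def forward(self, feats):
        enc, _ = self.encoder(feats)
        return self.output(enc)

# Stand-in for a checkpoint pre-trained on other languages (saved here only to keep the
# example self-contained; in practice this file would come from a model repository).
torch.save(AcousticModel(vocab_size=60).state_dict(), "pretrained_multilingual.pt")

# 1. Rebuild the architecture and load the pre-trained weights.
model = AcousticModel(vocab_size=60)
model.load_state_dict(torch.load("pretrained_multilingual.pt", map_location="cpu"))

# 2. Swap the output layer for the target language's character inventory.
target_vocab_size = 42
model.output = nn.Linear(2 * 320, target_vocab_size)

# 3. Optionally freeze the encoder so that only the new output layer is adapted
#    when very little transcribed target-language data is available.
for param in model.encoder.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
# ... continue training on the small transcribed set in the target language ...
```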
Despite the established nature of pre-training in natural language processing, it is less well established in speech recognition, though there has been recent work (Rivi\u00e8re et al., 2020; Baevski et al., 2020).", "cite_spans": [ { "start": 243, "end": 265, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF39" }, { "start": 276, "end": 301, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF43" }, { "start": 359, "end": 380, "text": "(Peters et al., 2018;", "ref_id": "BIBREF44" }, { "start": 381, "end": 401, "text": "Devlin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 402, "end": 419, "text": "Liu et al., 2020)", "ref_id": null }, { "start": 468, "end": 490, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF57" }, { "start": 1021, "end": 1046, "text": "(Gururangan et al., 2020)", "ref_id": "BIBREF19" }, { "start": 1213, "end": 1235, "text": "(Rivi\u00e8re et al., 2020;", "ref_id": "BIBREF48" }, { "start": 1236, "end": 1257, "text": "Baevski et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained models and transfer learning", "sec_num": "6.3" }, { "text": "The language documentation scenario, where annotated data is very limited, is a scenario that we argue stands most to gain from such pre-training (both supervised and self-supervised out-of-domain), followed by model adaptation to limited target-language data. One of the features Elpis could provide is to include pre-trained models in its distribution or via an online service. Such models may be pre-trained in a self-supervised manner on lots of untranscribed speech, trained in a supervised manner on transcribed speech in other languages, or use a combination of both pre-training tasks. In cases where the pre-trained model was trained in a supervised manner, there is scope to deploy techniques to reconcile the differences in acoustic realization between phonemes of different languages via methods such as that of Allosaurus (Li et al., 2020), which uses a joint model of language-independent phones and language-dependent phonemes. Providing a variety of pre-trained models would be valuable, since the best seed model for adaptation may vary on the basis of the data in the target language (Adams et al., 2019).", "cite_spans": [ { "start": 832, "end": 849, "text": "(Li et al., 2020)", "ref_id": "BIBREF33" }, { "start": 1097, "end": 1117, "text": "(Adams et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained models and transfer learning", "sec_num": "6.3" }, { "text": "A recognized problem in language documentation is that, owing to the transcription bottleneck, a large amount of unannotated and untranscribed data ends up in data graveyards (Himmelmann, 2006): archived recordings that go unused in linguistic research. It is frequently the case that the vast majority of speech collected by field linguists is untranscribed. Here too, self-supervised pre-training in the target language is likely a promising avenue to pursue, perhaps in tandem with supervised pre-training regimens. For this reason, we are optimistic that automatic transcription will have a role to play in almost all data scenarios found in the language documentation context, even when training data is extremely limited, and is not just reserved for certain single-speaker corpora with consistently high-quality audio and clean alignments with text. 
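One simple form of such self-supervised pre-training is masked acoustic-feature prediction: spans of feature frames are masked and the network is trained to reconstruct them from context, requiring no transcriptions at all. The sketch below is an illustrative toy objective in this spirit (the cited approaches such as wav2vec 2.0 use more elaborate contrastive and quantized targets), not the method of any particular toolkit:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, frames, feat_dim, hidden = 8, 300, 80, 320

# Context network: predicts every frame of the original features from a corrupted input.
encoder = nn.LSTM(feat_dim, hidden, num_layers=2, bidirectional=True, batch_first=True)
reconstruct = nn.Linear(2 * hidden, feat_dim)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(reconstruct.parameters()), lr=1e-4)

feats = torch.randn(batch, frames, feat_dim)   # stand-in for features of untranscribed speech

# Mask contiguous 10-frame spans covering roughly 15% of each utterance.
mask = torch.zeros(batch, frames, dtype=torch.bool)
for b in range(batch):
    for start in torch.randint(0, frames - 10, (5,)).tolist():
        mask[b, start:start + 10] = True

corrupted = feats.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out the masked frames

enc, _ = encoder(corrupted)
pred = reconstruct(enc)

# The reconstruction loss is computed only on the masked positions.
loss = F.l1_loss(pred[mask], feats[mask])
loss.backward()
optimizer.step()
```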
In the past one could plausibly argue that the limited amount of transcribed speech as training data is an insurmountable hurdle in a language documentation context, but that will likely not remain the case.", "cite_spans": [ { "start": 175, "end": 193, "text": "(Himmelmann, 2006)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained models and transfer learning", "sec_num": "6.3" }, { "text": "One of the next steps planned for Elpis is to allow for acoustic models to be exported and loaded. Beyond the immediate benefit of saving the trouble of training models anew each time, having a library of acoustic models available in an online repository would facilitate further research on adaptation of acoustic models to (i) more speakers, and (ii) more language varieties. Building universal phone recognition systems is an active area of research (Li et al., 2020) ; these developments could benefit from the availability of acoustic models on a range of languages. Hosting acoustic models in an online repository, and using them for transfer learning, appear as promising perspectives.", "cite_spans": [ { "start": 453, "end": 470, "text": "(Li et al., 2020)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained models and transfer learning", "sec_num": "6.3" }, { "text": "Training models requires a lot of computing power. Elpis now supports high-speed parallel processing in situations where the user's operating system has compatible GPUs (see Section 3.2). However, many users don't have this technology in the computers they have ready access to, so we also plan to investigate possibilities for host-ing Elpis on a high-capacity server for end-user access. Providing language technologies via web services appears to be a successful method of making tools widely available, with examples including the WebMAUS forced-alignment tool. 12 The suite of tools provided by the Bavarian Speech Archive (Kisler et al., 2017) have successfully processed more than ten million media files since their introduction in 2012. For users who want to avoid sending data to a server, there are other possibilities: Kaldi can be compiled to Web Assembly so it can do decoding in a browser (Hu et al., 2020) . But for the type of user scenarios considered here, hosting on a server would have major advantages, and transfer over secure connection is a strong protection against data theft (for those data sets that must not be made public, to follow the consultants' wishes or protect the data collectors' exclusive access rights to the data so that they will not be scooped in research and placed at a disadvantage in job applications).", "cite_spans": [ { "start": 566, "end": 568, "text": "12", "ref_id": null }, { "start": 628, "end": 649, "text": "(Kisler et al., 2017)", "ref_id": "BIBREF31" }, { "start": 904, "end": 921, "text": "(Hu et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Providing Elpis as a web service", "sec_num": "6.4" }, { "text": "This context suggests that it would be highly desirable to design web hosting for Elpis. 
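To make the idea of a hosted Elpis concrete, the sketch below shows what a user's machine could send to such a service over a secure connection: upload a bundle of ELAN and WAV files, start a training job, and poll its status. The base URL, endpoints and JSON fields are entirely hypothetical; they illustrate the kind of interface a web-hosted Elpis might expose, not an existing API:

```python
import time
import requests

BASE = "https://elpis.example.org/api"   # hypothetical hosted Elpis instance (HTTPS only)

# Upload a bundle of time-aligned ELAN transcriptions and their WAV files.
with open("my_corpus.zip", "rb") as corpus:
    dataset = requests.post(f"{BASE}/datasets", files={"corpus": corpus}).json()

# Ask the server to train an acoustic model on that dataset.
job = requests.post(f"{BASE}/models", json={"dataset_id": dataset["id"]}).json()

# Poll until training finishes; the server's training logs could be streamed here as well.
while True:
    status = requests.get(f"{BASE}/models/{job['id']}").json()
    if status["state"] in ("finished", "failed"):
        break
    time.sleep(30)

print("Training finished with state:", status["state"])
```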
It would facilitate conducting broad sets of tests training acoustic models, and would also facilitate the transcription of untranscribed recordings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Providing Elpis as a web service", "sec_num": "6.4" }, { "text": "In this paper we have reported on integrating ES-Pnet, an end-to-end neural network speech recognition system, into Elpis, the user-friendly speech recognition interface. We described changes that have been made to the front-end, the addition of a CUDA supported Elpis Dockerfile, and the creation of an ESPnet recipe for Elpis. We reported preliminary results on several languages and articulated plans going forward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Religion, region, language and the state\" ) and Agence Nationale de la Recherche (as part of two projects, \"Computational Language Documentation by 2025\" [ANR-19-CE38-0015-04] and \"Empirical Foundations of Linguistics\" [ANR-10-LABX-0083]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Linguistic resources used in the present study were collected as part of projects funded by the European Research Council (\"Discourse reporting in African storytelling\" ) and by Agence Nationale de la Recherche (\"Parallel corpora in languages of the Greater Himalayan area\" [ANR-12-CORP-0006]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/CoEDL/elpis 2 https://github.com/kaldi-asr/kaldi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.dictate.app 5 https://github.com/coxchristopher/ persephone-elan 6 https://archive.mpi.nl/tla/elan", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Each text has a Digital Object Identifier, allowing for one-click access. Readers are invited to take a look: https: //doi.org/10.24397/pangloss-000336010 https://gitlab.com/lacito/panglosselpis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/huggingface/ transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://clarin.phonetik.uni-muenchen. 
de/BASWebServices/interface", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Many thanks to the three reviewers for comments and suggestions.We are grateful for financial support to the Elpis project from the Australian Research Council Centre of Excellence for the Dynamics of Language, the University of Queensland, the Institut des langues rares (ILARA) at \u00c9cole Pratique des Hautes \u00c9tudes, the European Research Council (as part of the project \"Beyond boundaries:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating phonemic transcription of low-resource tonal languages for language documentation", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Hilaria", "middle": [], "last": "Cruz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" } ], "year": 2018, "venue": "Proceedings of LREC 2018 (Language Resources and Evaluation Conference)", "volume": "", "issue": "", "pages": "3356--3365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Trevor Cohn, Graham Neubig, Hi- laria Cruz, Steven Bird, and Alexis Michaud. 2018. Evaluating phonemic transcription of low-resource tonal languages for language documentation. In Proceedings of LREC 2018 (Language Resources and Evaluation Conference), pages 3356-3365, Miyazaki. https://halshs.archives- ouvertes.fr/halshs-01709648.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Massively multilingual adversarial speech recognition", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Wiesner", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "96--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Matthew Wiesner, Shinji Watanabe, and David Yarowsky. 2019. Massively multilin- gual adversarial speech recognition. In Proceed- ings of the 2019 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 96-108, Minneapolis, Minnesota. Association for Compu- tational Linguistics. https://www.aclweb. org/anthology/N19-1009.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The blind spots of digital innovation fetishism", "authors": [ { "first": "Marko", "middle": [], "last": "Ampuja", "suffix": "" } ], "year": 2020, "venue": "The digital age and its discontents: Critical reflections in education", "volume": "", "issue": "", "pages": "31--54", "other_ids": { "DOI": [ "10.33134/HUP-4-2" ] }, "num": null, "urls": [], "raw_text": "Marko Ampuja. 2020. The blind spots of digital in- novation fetishism. In Matteo Stocchetti, editor, The digital age and its discontents: Critical reflec- tions in education, pages 31-54. Helsinki Univer- sity Press, Helsinki. 
https://doi.org/10. 33134/HUP-4-2.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "2020. wav2vec 2.0: A framework for self-supervised learning of speech representations", "authors": [ { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.11477" ] }, "num": null, "urls": [], "raw_text": "Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representa- tions. arXiv preprint arXiv:2006.11477. https: //arxiv.org/abs/2006.11477.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic speech recognition for under-resourced languages: A survey", "authors": [ { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Karpov", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Schultz", "suffix": "" } ], "year": 2014, "venue": "Speech Communication", "volume": "56", "issue": "", "pages": "85--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurent Besacier, Etienne Barnard, Alexey Karpov, and Tanja Schultz. 2014. Automatic speech recog- nition for under-resourced languages: A survey. Speech Communication, 56:85-100.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Language documentation meets language technology", "authors": [ { "first": "Rogier", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fedina", "suffix": "" }, { "first": "Ciprian", "middle": [], "last": "Gerstenberger", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Wilbur", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the First International Workshop on Computational Linguistics for Uralic Languages -Septentrio Conference Series", "volume": "", "issue": "", "pages": "8--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rogier Blokland, Marina Fedina, Ciprian Gersten- berger, Niko Partanen, Michael Rie\u00dfler, and Joshua Wilbur. 2015. Language documen- tation meets language technology. In Pro- ceedings of the First International Workshop on Computational Linguistics for Uralic Lan- guages -Septentrio Conference Series, pages 8- 18. http://septentrio.uit.no/index. php/SCS/article/view/3457/3386.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Endangered language documentation: bootstrapping a Chatino speech corpus, forced aligner, ASR", "authors": [ { "first": "Ma\u0142gorzata", "middle": [], "last": "Cavar", "suffix": "" }, { "first": "Damir", "middle": [], "last": "Cavar", "suffix": "" }, { "first": "Hilaria", "middle": [], "last": "Cruz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of LREC 2016, Tenth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "4004--4011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma\u0142gorzata Cavar, Damir Cavar, and Hilaria Cruz. 2016. 
Endangered language documentation: boot- strapping a Chatino speech corpus, forced aligner, ASR. In Proceedings of LREC 2016, Tenth Interna- tional Conference on Language Resources and Eval- uation, pages 4004-4011, Portoro\u017e, Slovenia.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "State-of-the-art speech recognition with sequence-to-sequence models", "authors": [ { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rohit", "middle": [], "last": "Prabhavalkar", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Ron", "middle": [ "J" ], "last": "Weiss", "suffix": "" }, { "first": "Kanishka", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Katya", "middle": [], "last": "Gonina", "suffix": "" } ], "year": 2018, "venue": "ICASSP", "volume": "", "issue": "", "pages": "4774--4778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung-Cheng Chiu, Tara N Sainath, Yonghui Wu, Ro- hit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J Weiss, Kanishka Rao, Katya Gonina, et al. 2018. State-of-the-art speech recogni- tion with sequence-to-sequence models. In ICASSP, pages 4774-4778. https://arxiv.org/abs/ 1712.01769.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Persephone-ELAN (software)", "authors": [ { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Cox. 2019. Persephone-ELAN (software). https://github.com/coxchristopher/persephone-elan. https://github.com/coxchristopher/ persephone-elan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Phonology, tone and the functions of tone in San Juan Quiahije Chatino", "authors": [ { "first": "Emiliana", "middle": [], "last": "Cruz", "suffix": "" } ], "year": 2011, "venue": "Ph.D", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiliana Cruz. 2011. Phonology, tone and the functions of tone in San Juan Quiahije Chatino. Ph.D., University of Texas at Austin, Austin. http://hdl.handle.net/2152/ ETD-UT-2011-08-4280.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Finding a way into a family of tone languages: The story and methods of the Chatino Language Documentation Project. Language Documentation and Conservation", "authors": [ { "first": "Emiliana", "middle": [], "last": "Cruz", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Woodbury", "suffix": "" } ], "year": 2014, "venue": "", "volume": "8", "issue": "", "pages": "490--524", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiliana Cruz and Tony Woodbury. 2014. Finding a way into a family of tone languages: The story and methods of the Chatino Language Documentation Project. Language Documentation and Conserva- tion, 8:490-524. 
http://hdl.handle.net/ 10125/24615.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sequence-based multilingual low resource speech recognition", "authors": [ { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Ramon", "middle": [], "last": "Sanabria", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W Black. 2018. Sequence-based multi- lingual low resource speech recognition. In ICASSP. https://arxiv.org/abs/1802.07420.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics. https:// www.aclweb.org/anthology/N19-1423.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cross-lingual phone mapping for large vocabulary speech recognition of underresourced languages", "authors": [ { "first": "Xiong", "middle": [], "last": "Van Hai Do", "suffix": "" }, { "first": "", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Eng Siong Chng", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2014, "venue": "IEICE Transactions on Information and Systems", "volume": "", "issue": "2", "pages": "285--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van Hai Do, Xiong Xiao, Eng Siong Chng, and Haizhou Li. 2014. Cross-lingual phone mapping for large vocabulary speech recognition of under- resourced languages. IEICE Transactions on Infor- mation and Systems, E97-D(2):285-295.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Future directions in technological support for language documentation", "authors": [ { "first": "Ben", "middle": [], "last": "Daan Van Esch", "suffix": "" }, { "first": "Nay", "middle": [], "last": "Foley", "suffix": "" }, { "first": "", "middle": [], "last": "San", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Computational Methods for Endangered Languages", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daan van Esch, Ben Foley, and Nay San. 2019. Future directions in technological support for language documentation. 
In Proceedings of the Workshop on Computational Methods for Endan- gered Languages, volume 1, Honolulu, Hawai'i.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building speech recognition systems for language documentation: the CoEDL Endangered Language Pipeline and Inference System (ELPIS)", "authors": [ { "first": "Ben", "middle": [], "last": "Foley", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" }, { "first": "Gautier", "middle": [], "last": "Durantin", "suffix": "" }, { "first": "T", "middle": [ "Mark" ], "last": "Ellison", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)", "volume": "", "issue": "", "pages": "200--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Foley, Josh Arnold, Rolando Coto-Solano, Gau- tier Durantin, and T. Mark Ellison. 2018. Building speech recognition systems for language documen- tation: the CoEDL Endangered Language Pipeline and Inference System (ELPIS). In Proceedings of the 6th Intl. Workshop on Spoken Language Tech- nologies for Under-Resourced Languages (SLTU), 29-31 August 2018, pages 200-204, Gurugram, India. ISCA. https://www.isca-speech. org/archive/SLTU_2018/pdfs/Ben.pdf.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Elpis, an accessible speech-to-text tool", "authors": [ { "first": "Ben", "middle": [], "last": "Foley", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Rakhi", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lambourne", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Buckeridge", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Wiles", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Interspeech 2019", "volume": "", "issue": "", "pages": "306--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Foley, Alina Rakhi, Nicholas Lambourne, Nicholas Buckeridge, and Janet Wiles. 2019. Elpis, an accessible speech-to-text tool. In Proceedings of Interspeech 2019, pages 306-310, Graz. https: //www.isca-speech.org/archive/ Interspeech_2019/pdfs/8006.pdf.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Toward open data policies in phonetics: What we can gain and how we can avoid pitfalls", "authors": [ { "first": "Marc", "middle": [], "last": "Garellek", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "James", "middle": [], "last": "Kirby", "suffix": "" }, { "first": "Wai-Sum", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Mooshammer", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Niebuhr", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Timo", "middle": [], "last": "Roettger", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "Kristine", "middle": [ "M" ], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Journal of Speech Science", "volume": "9", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Garellek, Matthew Gordon, James Kirby, Wai- Sum Lee, Alexis Michaud, Christine Mooshammer, Oliver Niebuhr, Daniel Recasens, Timo Roettger, Adrian Simpson, and Kristine M. Yu. 2020. 
Toward open data policies in phonetics: What we can gain and how we can avoid pitfalls. Journal of Speech Science, 9(1). https://halshs.archives- ouvertes.fr/halshs-02894375.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Connectionist Temporal Classification : Labelling Unsegmented Sequence Data with Recurrent Neural Networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fernandez", "suffix": "" }, { "first": "Faustino", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine Learning", "volume": "", "issue": "", "pages": "369--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. 2006. Connection- ist Temporal Classification : Labelling Unseg- mented Sequence Data with Recurrent Neural Networks. Proceedings of the 23rd interna- tional conference on Machine Learning, pages 369-376. http://www.cs.utoronto.ca/ graves/icml_2006.pdf.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8342- 8360. Association for Computational Linguistics. https://arxiv.org/abs/2004.10964.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "End-to-end speech recognition using lattice-free MMI", "authors": [ { "first": "Hossein", "middle": [], "last": "Hadian", "suffix": "" }, { "first": "Hossein", "middle": [], "last": "Sameti", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2018, "venue": "Interspeech", "volume": "", "issue": "", "pages": "12--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hossein Hadian, Hossein Sameti, Daniel Povey, and Sanjeev Khudanpur. 2018. End-to-end speech recognition using lattice-free MMI. In Interspeech, pages 12-16. 
https://danielpovey.com/ files/2018_interspeech_end2end.pdf.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep speech: Scaling up end-to-end speech recognition", "authors": [ { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Erich", "middle": [], "last": "Elsen", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "Shubho", "middle": [], "last": "Sengupta", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.5567" ] }, "num": null, "urls": [], "raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567. https://arxiv.org/abs/ 1412.5567.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Multilingual acoustic models using distributed deep neural networks", "authors": [ { "first": "Devin", "middle": [], "last": "", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "8619--8623", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devin, and Jeffrey Dean. 2013. Multilingual acous- tic models using distributed deep neural networks. In Proceedings of ICASSP, pages 8619-8623.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Language documentation: what is it and what is it good for?", "authors": [ { "first": "Nikolaus", "middle": [], "last": "Himmelmann", "suffix": "" } ], "year": 2006, "venue": "Josh Gippert, Nikolaus Himmelmann, and Ulrike Mosel", "volume": "", "issue": "", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaus Himmelmann. 2006. Language documenta- tion: what is it and what is it good for? In Josh Gip- pert, Nikolaus Himmelmann, and Ulrike Mosel, ed- itors, Essentials of language documentation, pages 1-30. 
De Gruyter, Berlin/New York.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Dahl", "suffix": "" }, { "first": "Abdel-Rahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Senior", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Others", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "Signal Processing Magazine", "volume": "29", "issue": "6", "pages": "82--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, and Others. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Towards a speech recognizer for Komi, an endangered and low-resource Uralic language", "authors": [ { "first": "Nils", "middle": [], "last": "Hjortnaes", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "31--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Hjortnaes, Niko Partanen, Michael Rie\u00dfler, and Francis M. Tyers. 2020. Towards a speech recognizer for Komi, an endangered and low-resource Uralic language. In Proceed- ings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 31-37, Wien. Association for Computa- tional Linguistics. https://www.aclweb.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Kaldi-web: An installationfree, on-device speech recognition system", "authors": [ { "first": "Mathieu", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Pierron", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Jouvet", "suffix": "" } ], "year": 2020, "venue": "Proceedings of Interspeech 2020 Show & Tell, Shanghai", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathieu Hu, Laurent Pierron, Emmanuel Vincent, and Denis Jouvet. 2020. Kaldi-web: An installation- free, on-device speech recognition system. In Pro- ceedings of Interspeech 2020 Show & Tell, Shang- hai. https://hal.archives-ouvertes. 
fr/hal-02910876.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Using out-of-language data to improve an under-resourced speech recognizer", "authors": [ { "first": "David", "middle": [], "last": "Imseng", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Motlicek", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "Bourlard", "suffix": "" }, { "first": "Philip N", "middle": [], "last": "Garner", "suffix": "" } ], "year": 2014, "venue": "Speech Communication", "volume": "56", "issue": "", "pages": "142--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Imseng, Petr Motlicek, Herv\u00e9 Bourlard, and Philip N Garner. 2014. Using out-of-language data to improve an under-resourced speech recognizer. Speech Communication, 56:142-151.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Transfer learning of languageindependent end-to-end ASR with language model fusion", "authors": [ { "first": "Hirofumi", "middle": [], "last": "Inaguma", "suffix": "" }, { "first": "Jaejin", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Murali", "middle": [ "Karthick" ], "last": "Baskar", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.02134" ] }, "num": null, "urls": [], "raw_text": "Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, and Shinji Watan- abe. 2018. Transfer learning of language- independent end-to-end ASR with language model fusion. arXiv:1811.02134. https://arxiv. org/abs/1811.02134.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Multilingual processing of speech via web services", "authors": [ { "first": "Thomas", "middle": [], "last": "Kisler", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Reichel", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Schiel", "suffix": "" } ], "year": 2017, "venue": "Computer Speech & Language", "volume": "45", "issue": "", "pages": "885--2308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Kisler, Uwe Reichel, and Florian Schiel. 2017. Multilingual processing of speech via web services. Computer Speech & Language, 45:326-347. ISBN: 0885-2308 Publisher: Elsevier.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "First steps in fast acoustic modeling for a new target language: application to Vietnamese", "authors": [ { "first": "Bac", "middle": [], "last": "Viet", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2005, "venue": "ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viet Bac Le and Laurent Besacier. 2005. First steps in fast acoustic modeling for a new target language: application to Vietnamese. 
In ICASSP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Universal phone recognition with a multilingual allophone system", "authors": [ { "first": "Xinjian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Juncheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Jiali", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "8249--8253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anas- tasopoulos, David R. Mortensen, Graham Neu- big, and Alan W. Black. 2020. Universal phone recognition with a multilingual allophone system. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE. https:// arxiv.org/abs/2002.11800.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "2020. A survey on contextual embeddings", "authors": [ { "first": "Qi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [], "last": "Matt", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.07278" ] }, "num": null, "urls": [], "raw_text": "Qi Liu, Matt J Kusner, and Phil Blunsom. 2020. A survey on contextual embeddings. arXiv preprint arXiv:2003.07278. https://arxiv. org/abs/2003.07278.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Alignement temporel entre transcriptions et audio de donn\u00e9es de langue japhug", "authors": [ { "first": "C\u00e9cile", "middle": [], "last": "Macaire", "suffix": "" } ], "year": 2020, "venue": "Actes des Journ\u00e9es scientifiques du Groupement de Recherche \"Linguistique informatique, formelle et de terrain\" (LIFT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C\u00e9cile Macaire. 2020. Alignement temporel entre tran- scriptions et audio de donn\u00e9es de langue japhug. In Actes des Journ\u00e9es scientifiques du Groupement de Recherche \"Linguistique informatique, formelle et de terrain\" (LIFT), Paris. https://hal. archives-ouvertes.fr/hal-03047146.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Documenting and researching endangered languages: the Pangloss Collection. 
Language Documentation and Conservation", "authors": [ { "first": "Boyd", "middle": [], "last": "Michailovsky", "suffix": "" }, { "first": "Martine", "middle": [], "last": "Mazaudon", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" } ], "year": 2014, "venue": "", "volume": "8", "issue": "", "pages": "119--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyd Michailovsky, Martine Mazaudon, Alexis Michaud, S\u00e9verine Guillaume, Alexandre Fran\u00e7ois, and Evangelia Adamou. 2014. Doc- umenting and researching endangered lan- guages: the Pangloss Collection. Language Documentation and Conservation, 8:119- 135. https://halshs.archives- ouvertes.fr/halshs-01003734.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Tone in Yongning Na: lexical tones and morphotonology", "authors": [ { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" } ], "year": 2017, "venue": "Number 13 in Studies in Diversity Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Michaud. 2017. Tone in Yongning Na: lex- ical tones and morphotonology. Number 13 in Studies in Diversity Linguistics. Language Science Press, Berlin. http://langsci-press.org/ catalog/book/109.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Integrating automatic transcription into the language documentation workflow: experiments with Na data and the Persephone toolkit. Language Documentation and Conservation", "authors": [ { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" } ], "year": 2018, "venue": "", "volume": "12", "issue": "", "pages": "393--429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Michaud, Oliver Adams, Trevor Cohn, Graham Neubig, and S\u00e9verine Guillaume. 2018. Integrat- ing automatic transcription into the language docu- mentation workflow: experiments with Na data and the Persephone toolkit. Language Documentation and Conservation, 12:393-429. http://hdl. handle.net/10125/24793.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. 
In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Phonemic and graphemic multilingual CTC based speech recognition", "authors": [ { "first": "Markus", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "St\u00fcker", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.04564" ] }, "num": null, "urls": [], "raw_text": "Markus M\u00fcller, Sebastian St\u00fcker, and Alex Waibel. 2017. Phonemic and graphemic multilingual CTC based speech recognition. arXiv:1711.04564. https://arxiv.org/abs/1711.04564.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Language Science Press business model: Evaluated version of the 2015 model", "authors": [ { "first": "Sebastian", "middle": [], "last": "Nordhoff", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.1286972" ] }, "num": null, "urls": [], "raw_text": "Sebastian Nordhoff. 2018. Language Science Press business model: Evaluated version of the 2015 model. Language Science Press, Berlin. https: //doi.org/10.5281/zenodo.1286972.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems. Proceedings of the 33rd Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8026--8037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, and Luca Antiga. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems. Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), pages 8026-8037, Vancouver, Canada. https://papers.nips.cc/paper/9015- pytorch-an-imperative-style- high-performance-deep-learning- library.pdf.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Glove: global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empiricial Methods in Natural Language Processing", "volume": "12", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. 
Glove: global vectors for word rep- resentation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12:1532-1543.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Deep contextualized word representations", "authors": [ { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. https://arxiv.org/abs/1802.05365.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Cross-lingual portability of Chinese and English neural network features for French and German LVCSR", "authors": [ { "first": "Christian", "middle": [], "last": "Plahl", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2011, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "371--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Plahl, Ralf Schl\u00fcter, and Hermann Ney. 2011. Cross-lingual portability of Chinese and En- glish neural network features for French and Ger- man LVCSR. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 371-376.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The Kaldi speech recognition toolkit", "authors": [ { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Arnab", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Glembek", "suffix": "" }, { "first": "Nagendra", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Hannemann", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Motlicek", "suffix": "" }, { "first": "Yanmin", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Schwarz", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Silovsky", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Stemmer", "suffix": "" }, { "first": "Karel", "middle": [], "last": "Vesely", "suffix": "" } ], "year": 2011, "venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. 
IEEE Signal Processing Society. https://infoscience. epfl.ch/record/192584/files/Povey_ ASRU2011_2011.pdf.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "The pytorch-kaldi speech recognition toolkit", "authors": [ { "first": "Mirco", "middle": [], "last": "Ravanelli", "suffix": "" }, { "first": "Titouan", "middle": [], "last": "Parcollet", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2019, "venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6465--6469", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirco Ravanelli, Titouan Parcollet, and Yoshua Ben- gio. 2019. The pytorch-kaldi speech recogni- tion toolkit. In ICASSP 2019-2019 IEEE Interna- tional Conference on Acoustics, Speech and Sig- nal Processing (ICASSP), pages 6465-6469. IEEE. https://arxiv.org/abs/1811.07453.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Unsupervised pretraining transfers well across languages", "authors": [ { "first": "Morgane", "middle": [], "last": "Rivi\u00e8re", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Pierre-Emmanuel", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020 -2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7414--7418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morgane Rivi\u00e8re, Armand Joulin, Pierre-Emmanuel Mazar\u00e9, and Emmanuel Dupoux. 2020. Unsuper- vised pretraining transfers well across languages. In ICASSP 2020 -2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 7414-7418. https://arxiv. org/abs/2002.02848.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Building an ASR system for a low-research language through the adaptation of a high-resource language ASR system: preliminary results", "authors": [ { "first": "Odette", "middle": [], "last": "Scharenborg", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Ciannella", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Palaskar", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Black", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Ondel", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hasegawa-Johnson", "suffix": "" } ], "year": 2017, "venue": "International Conference on Natural Language, Signal and Speech Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Odette Scharenborg, Francesco Ciannella, Shruti Palaskar, Alan Black, Florian Metze, Lucas Ondel, and Mark Hasegawa-Johnson. 2017. Building an ASR system for a low-research language through the adaptation of a high-resource language ASR system: preliminary results. 
In International Conference on Natural Language, Signal and Speech Processing (ICNLSSP).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Experiments on cross-language acoustic modeling", "authors": [ { "first": "Tanja", "middle": [], "last": "Schultz", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "2721--2724", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanja Schultz and Alex Waibel. 2001. Experi- ments on cross-language acoustic modeling. EU- ROSPEECH'01, pages 2721-2724.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Cross-domain and cross-language portability of acoustic features estimated by multilayer perceptrons", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Frantisek", "middle": [], "last": "Grezl", "suffix": "" }, { "first": "Mei-Yuh", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Morgan", "suffix": "" }, { "first": "Dimitra", "middle": [], "last": "Vergyri", "suffix": "" } ], "year": 2006, "venue": "ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke, Frantisek Grezl, Mei-Yuh Hwang, Xin Lei, Nelson Morgan, and Dimitra Vergyri. 2006. Cross-domain and cross-language portabil- ity of acoustic features estimated by multilayer per- ceptrons. In ICASSP. http://ieeexplore. ieee.org/document/1660022/.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Doing great things with small languages (Australian Research Council grant DP0984419", "authors": [ { "first": "Nick", "middle": [], "last": "Thieberger", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Nordlinger", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nick Thieberger and Rachel Nordlinger. 2006. Doing great things with small languages (Australian Research Council grant DP0984419). https: //arts.unimelb.edu.au/school-of- languages-and-linguistics/our- research/past-research-projects/ great-things-small-languages.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Multilingual MLP features for low-resource LVCSR systems", "authors": [ { "first": "Samuel", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Sriram", "middle": [], "last": "Ganapathy", "suffix": "" } ], "year": 2012, "venue": "ICASSP", "volume": "", "issue": "", "pages": "4269--4272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Thomas, Sriram Ganapathy, Hynek Herman- sky, and Speech Processing. 2012. Multilingual MLP features for low-resource LVCSR systems. 
In ICASSP, pages 4269-4272.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Multilingual speech recognition with a single end-to-end model", "authors": [ { "first": "Shubham", "middle": [], "last": "Toshniwal", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Ron", "middle": [ "J" ], "last": "Weiss", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Moreno", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Weinstein", "suffix": "" }, { "first": "Kanishka", "middle": [], "last": "Rao", "suffix": "" } ], "year": 2017, "venue": "ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, and Kan- ishka Rao. 2017. Multilingual speech recognition with a single end-to-end model. In ICASSP. http: //arxiv.org/abs/1711.01694.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Cross-lingual portability of MLP-based tandem features -a case study for English and Hungarian", "authors": [ { "first": "L\u00e1szl\u00f3", "middle": [], "last": "T\u00f3th", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Frankel", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Gosztolya", "suffix": "" }, { "first": "Simon", "middle": [], "last": "King", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00e1szl\u00f3 T\u00f3th, Joe Frankel, G\u00e1bor Gosztolya, and Simon King. 2008. Cross-lingual portability of MLP-based tandem features -a case study for English and Hun- garian. INTERSPEECH.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Language independent end-to-end architecture for joint language identification and speech recognition", "authors": [ { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "John", "middle": [ "R" ], "last": "Hershey", "suffix": "" } ], "year": 2017, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "265--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shinji Watanabe, Takaaki Hori, and John R. Her- shey. 2017a. 
Language independent end-to-end architecture for joint language identification and speech recognition. In IEEE Workshop on Automatic Speech Recognition and Under- standing Workshop (ASRU), pages 265-271. https://www.merl.com/publications/ docs/TR2017-182.pdf.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "ESPnet: End-to-end speech processing toolkit", "authors": [ { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Shigeki", "middle": [], "last": "Karita", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Jiro", "middle": [], "last": "Nishitoba", "suffix": "" }, { "first": "Yuya", "middle": [], "last": "Unno", "suffix": "" }, { "first": "Nelson", "middle": [ "Enrique" ], "last": "", "suffix": "" }, { "first": "Yalta", "middle": [], "last": "Soplin", "suffix": "" }, { "first": "Jahn", "middle": [], "last": "Heymann", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Wiesner", "suffix": "" }, { "first": "Nanxin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.00015" ] }, "num": null, "urls": [], "raw_text": "Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, and Nanxin Chen. 2018. ESPnet: End-to-end speech processing toolkit. arXiv preprint arXiv:1804.00015. https://arxiv. org/abs/1804.00015.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Hybrid CTC/attention architecture for end-to-end speech recognition", "authors": [ { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Suyoun", "middle": [], "last": "Kim", "suffix": "" }, { "first": "R", "middle": [], "last": "John", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Hershey", "suffix": "" }, { "first": "", "middle": [], "last": "Hayashi", "suffix": "" } ], "year": 2017, "venue": "IEEE Journal of Selected Topics in Signal Processing", "volume": "11", "issue": "8", "pages": "1240--1253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi. 2017b. Hy- brid CTC/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240-1253. https://www.merl.com/publications/ docs/TR2017-190.pdf.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Phonemic transcription of low-resource languages: To what extent can preprocessing be automated?", "authors": [ { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint SLTU (Spoken Language Technologies for Under-resourced languages) and CCURL (Collaboration and Computing for Under-Resourced Languages) Workshop", "volume": "", "issue": "", "pages": "306--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Wisniewski, S\u00e9verine Guillaume, and Alexis Michaud. 2020. Phonemic transcription of low-resource languages: To what extent can pre- processing be automated? 
In Proceedings of the 1st Joint SLTU (Spoken Language Technologies for Under-resourced languages) and CCURL (Col- laboration and Computing for Under-Resourced Languages) Workshop, pages 306-315, Marseille, France. European Language Resources Associa- tion (ELRA). https://halshs.archives- ouvertes.fr/hal-02513914.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natu- ral language processing. ArXiv, abs/1910.03771. https://arxiv.org/abs/1910.03771.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Adversarial multilingual training for lowresource speech recognition", "authors": [ { "first": "Jiangyan", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Jianhua", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Zhengqi", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Bai", "suffix": "" } ], "year": 2018, "venue": "ICASSP", "volume": "", "issue": "", "pages": "4899--4903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiangyan Yi, Jianhua Tao, Zhengqi Wen, and Ye Bai. 2018. Adversarial multilingual training for low- resource speech recognition. 
ICASSP, pages 4899- 4903.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Adadelta: an adaptive learning rate method", "authors": [ { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Zeiler", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1212.5701" ] }, "num": null, "urls": [], "raw_text": "Matthew D. Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701. https://arxiv.org/abs/1212.5701.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Improved training of end-to-end attention models for speech recognition", "authors": [ { "first": "Albert", "middle": [], "last": "Zeyer", "suffix": "" }, { "first": "Kazuki", "middle": [], "last": "Irie", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Zeyer, Kazuki Irie, Ralf Schl\u00fcter, and Hermann Ney. 2018. Improved training of end-to-end at- tention models for speech recognition. https: //arxiv.org/abs/1805.03294.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "The RWTH ASR system for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment", "authors": [ { "first": "Wei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wilfried", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Kazuki", "middle": [], "last": "Irie", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kitza", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Zhou, Wilfried Michel, Kazuki Irie, Markus Kitza, Ralf Schl\u00fcter, and Hermann Ney. 2020. The RWTH ASR system for TED-LIUM Release 2: Improv- ing Hybrid HMM with SpecAugment. https: //arxiv.org/abs/2004.00960.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Training stages of the Elpis interface. Notice the choice of backend in the upper right-hand corner.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Character error rate for Japhug as a function of the amount of training data, using the ESPnet recipe included in Elpis. Character error rate on the training set (blue) and validation set (orange) for Japhug as training progresses (up to 20 epochs), using the ESPnet recipe included in Elpis.", "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "num": null, "type_str": "table", "text": "Information on the evaluation datasets used and the character error rate performance of the current recipe.", "content": "
Language Num speakers TypeTrain (minutes) CER (%)
Na1Spontaneous narratives27314.5
Na1Elicited words & phrases 1884.7
Chatino1Read speech8123.5
Japhug1Spontaneous narratives17012.8
attention objectives. For optimization we use a batch length of 30 and the Adadelta gradient descent algorithm (Zeiler, 2012). For more details, we include a link to the recipe. 8
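As a rough illustration of the setup just described, the sketch below restates these hyperparameters as a plain Python dictionary and shows how a character error rate like the one reported in the table can be computed. The dictionary keys are placeholder names chosen for readability, not the actual field names of the ESPnet recipe (the authoritative settings are in the recipe linked in footnote 8).

```python
# A minimal sketch, not the actual Elpis/ESPnet recipe: the dictionary keys are
# illustrative placeholders; the values restate what the text and figure report
# (hybrid CTC/attention objective, batch length 30, Adadelta, up to 20 epochs).
train_config = {
    "objective": "hybrid CTC/attention",
    "batch_size": 30,          # "a batch length of 30"
    "optimizer": "adadelta",   # Adadelta (Zeiler, 2012)
    "max_epochs": 20,          # training curves in the figure run up to 20 epochs
}


def character_error_rate(reference: str, hypothesis: str) -> float:
    """Character error rate as reported in the table: the character-level
    Levenshtein (edit) distance, normalised by the reference length."""
    previous = list(range(len(hypothesis) + 1))
    for i, ref_char in enumerate(reference, start=1):
        current = [i]
        for j, hyp_char in enumerate(hypothesis, start=1):
            current.append(min(
                previous[j] + 1,                           # deletion
                current[j - 1] + 1,                        # insertion
                previous[j - 1] + (ref_char != hyp_char),  # substitution
            ))
        previous = current
    return previous[-1] / max(len(reference), 1)


if __name__ == "__main__":
    # One substitution in a 10-character reference gives a CER of 0.10 (10%).
    print(character_error_rate("transcribe", "transcripe"))
```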
" } } } }