{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:46:49.569594Z" }, "title": "Learning Robust and Multilingual Speech Representations", "authors": [ { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "kawakamik@google.com" }, { "first": "Luyu", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "luyuwang@google.com" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "cdyer@google.com" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "pblunsom@google.com" }, { "first": "Aaron", "middle": [], "last": "Van Den Oord", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "" }, { "first": "\u2663", "middle": [ "\u2663" ], "last": "Deepmind", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "" }, { "first": "", "middle": [], "last": "London", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": { "settlement": "Oxford", "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structures and improve downstream speech recognition performance. However, most research has been focused on evaluating the representations in terms of their ability to improve the performance of speech recognition systems on read English (e.g. Wall Street Journal and Lib-riSpeech). This evaluation methodology overlooks two important desiderata that speech representations should have: robustness to domain shifts and transferability to other languages. In this paper we learn representations from up to 8000 hours of diverse and noisy speech data and evaluate the representations by looking at their robustness to domain shifts and their ability to improve recognition performance in many languages. We find that our representations confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets and the features likewise provide improvements in 25 phonetically diverse languages.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structures and improve downstream speech recognition performance. However, most research has been focused on evaluating the representations in terms of their ability to improve the performance of speech recognition systems on read English (e.g. Wall Street Journal and Lib-riSpeech). 
This evaluation methodology overlooks two important desiderata that speech representations should have: robustness to domain shifts and transferability to other languages. In this paper we learn representations from up to 8000 hours of diverse and noisy speech data and evaluate the representations by looking at their robustness to domain shifts and their ability to improve recognition performance in many languages. We find that our representations confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets and the features likewise provide improvements in 25 phonetically diverse languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The input representation of machine learning model strongly determines the difficulty faced by the learning algorithm, how much data the learner will require to find a good solution, and whether the learner generalizes out of sample and out of the domain of the training data. Representations (or features) that encode relevant information about data enable models to achieve good performance on downstream tasks, while representations that are invariant to factors that are not relevant to downstream tasks can further improve generalization. Traditionally, many invariances were hard-coded in feature extraction methods. For example, in image representations, geometric and photometric invariance has been investigated (Mundy et al., 1992; Van De Weijer et al., 2005) . For acoustic representations, standard MFCC features are sensitive to additive noise and many modifications have been proposed to overcome those limitations (Dev and Bansal, 2010; Kumar et al., 2011) .", "cite_spans": [ { "start": 721, "end": 741, "text": "(Mundy et al., 1992;", "ref_id": "BIBREF35" }, { "start": 742, "end": 769, "text": "Van De Weijer et al., 2005)", "ref_id": "BIBREF46" }, { "start": 929, "end": 951, "text": "(Dev and Bansal, 2010;", "ref_id": "BIBREF8" }, { "start": 952, "end": 971, "text": "Kumar et al., 2011)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, unsupervised representation learning algorithms have shown significant improvements at learning representations that correlate well with phonetic structure (van den Oord et al., 2018; Kahn et al., 2019b) and improving downstream speech recognition performance . Most of this work focused on learning representations from read English speech (from the LibriSpeech and LibriVox datasets) and evaluating the features when used to recognize speech in a rather similar domain (read English text). 
However, this approach to evaluation fails to test for the invariances that we would like good speech representations to have: robustness to domain shifts and transferability to other languages.", "cite_spans": [ { "start": 166, "end": 193, "text": "(van den Oord et al., 2018;", "ref_id": "BIBREF36" }, { "start": 194, "end": 213, "text": "Kahn et al., 2019b)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our experiments we learn representations from 8000 hours of diverse and noisy speech, using an extended version of contrastive predictive coding model: bidirectional predictive models with dense residual connections ( \u00a72- \u00a74), and evaluate the robustness and transferability of our representations by estimating how invariant they are to domain and language shifts. To do so, an ASR model is trained using our representations on one dataset but evaluated on the test sets of other datasets. In this experiment, we find that the representations derived from the large pretraining dataset lead the ASR model to be much more robust to domain shifts, compared to both log filterbank features as well as to pretraining just on LibriSpeech. We also train ASR models on 25 languages, including low-resource languages (e.g. Amharic, Fongbe, Swahili, Wolof), and show that our representations significantly outperform both standard features and those pretrained only on clean English data in the language transfer setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, we confirm several increasingly common patterns that may be discerned in the literature on unsupervised representation learning, across a variety of modalities. First, scale matters: good representation learning requires a large amount of data. Second, unsupervised representations consistently improve robustness on downstream tasks. And finally, representations learned from multilingual data can transfer across many languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unsupervised representation learning methods rely on differentiable objectives which quantify the degree to which representations have succeeded at capturing the relevant characteristics in data. Mutual information measures relationships between random variables (Fano and Hawkins, 1961) . Mutual information maximization techniques, that learn representations that describe data by maximizing mutual information between data and representation variables, have been explored for a long time in unsupervised representation learning (Linsker, 1988; Bell and Sejnowski, 1995) . 
However, since the exact computation of mutual information is not tractable for continuous variables, recently many estimators have been proposed for enabling unsupervised representation learning with neural networks (Belghazi et al., 2018; van den Oord et al., 2018; Poole et al., 2019) .", "cite_spans": [ { "start": 263, "end": 287, "text": "(Fano and Hawkins, 1961)", "ref_id": "BIBREF15" }, { "start": 531, "end": 546, "text": "(Linsker, 1988;", "ref_id": "BIBREF33" }, { "start": 547, "end": 572, "text": "Bell and Sejnowski, 1995)", "ref_id": "BIBREF5" }, { "start": 792, "end": 815, "text": "(Belghazi et al., 2018;", "ref_id": "BIBREF4" }, { "start": 816, "end": 842, "text": "van den Oord et al., 2018;", "ref_id": "BIBREF36" }, { "start": 843, "end": 862, "text": "Poole et al., 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "Contrastive predictive coding (van den Oord et al., 2018, CPC) is a mutual information maximization method that has been successfully applied to many modalities such as images and speech (H\u00e9naff et al., 2019; . The objective is designed to extract features that allow the model to make long-term predictions about future observations. This is done by maximizing the mutual information of these features with those extracted from future timesteps. The intuition is that the representations capture different levels of structure dependent on how far ahead the model predicts. For example, if the model only predicts a few steps ahead, the resulting representations can capture local structures. On the other hand, if the model predicts further in the future, the representations will need to infer \"slow features\" (Wiskott and Sejnowski, 2002) ; more global structures such as phonemes, words and utterances in speech.", "cite_spans": [ { "start": 187, "end": 208, "text": "(H\u00e9naff et al., 2019;", "ref_id": "BIBREF23" }, { "start": 812, "end": 841, "text": "(Wiskott and Sejnowski, 2002)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "The overall unsupervised learning process is visualized in Figure 1 . Given a raw audio signal of length L (", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "x = x 1 , x 2 , . . . , x L , x i \u2208 R where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "x i represents the acoustic amplitude at time i), a function g enc encodes the audio signals into vector representations (z = z 1 , z 2 . . . , z M , z \u2208 R dz ). Next, an autoregressive function g ar , such as a recurrent neural network, summarizes the past representations and produces context vectors (c = c 1 , c 2 . . . , c M , c \u2208 R dc ). 
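To make this two-stage computation concrete, the following is a minimal PyTorch sketch of g_enc and g_ar; the layer sizes and the GRU context network here are illustrative placeholders rather than the configuration used in our experiments (the architectures we actually use are given in Section 3.1).

```python
import torch
import torch.nn as nn

# Illustrative sketch of the CPC data flow: raw audio x -> latent frames z -> context vectors c.
# The layer sizes below are placeholders, not the configuration used in this paper (see Section 3.1).

class TinyEncoder(nn.Module):
    """g_enc: strided 1-d convolutions that downsample the waveform into latent vectors z."""
    def __init__(self, d_z=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, d_z, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(d_z, d_z, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(d_z, d_z, kernel_size=4, stride=2), nn.ReLU(),
        )

    def forward(self, x):                      # x: (batch, 1, L) raw waveform samples
        return self.conv(x).transpose(1, 2)    # -> (batch, M, d_z) with M << L

class TinyContext(nn.Module):
    """g_ar: an autoregressive summary of past latents (here a GRU), producing context vectors c."""
    def __init__(self, d_z=256, d_c=256):
        super().__init__()
        self.rnn = nn.GRU(d_z, d_c, batch_first=True)

    def forward(self, z):                      # z: (batch, M, d_z)
        c, _ = self.rnn(z)                     # c: (batch, M, d_c); c_t depends only on z_1..z_t
        return c

x = torch.randn(2, 1, 16000)                   # two one-second utterances at 16 kHz
z = TinyEncoder()(x)
c = TinyContext()(z)
print(z.shape, c.shape)                        # latent frames and their contexts
```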
The representations are learned to maximize mutual information between context vectors (c_t) and future latent representations (z_{t+k}) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "I(c_t, z_{t+k}) = \u2211_{c_t, z_{t+k}} p(c_t, z_{t+k} | k) log [ p(z_{t+k} | c_t, k) / p(z_{t+k}) ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "Since the mutual information is not tractable for high dimensional data, it is common to use a lower-bound on the mutual information such as InfoNCE (van den Oord et al., 2018) which is a loss function based on noise contrastive estimation (Gutmann and Hyv\u00e4rinen, 2010) . Given a set Z = {z_1, . . . , z_N} which contains one positive sample from p(z_{t+k} | c_t) and N \u2212 1 negative samples from a \"noise\" distribution p(z), the approximate lower bound is written as:", "cite_spans": [ { "start": 149, "end": 176, "text": "(van den Oord et al., 2018)", "ref_id": "BIBREF36" }, { "start": 240, "end": 269, "text": "(Gutmann and Hyv\u00e4rinen, 2010)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "I(c_t, z_{t+k}) \u2265 E_Z [ log ( f_k(c_t, z_{t+k}) / ( (1/N) \u2211_{z \u2208 Z} f_k(c_t, z) ) ) ] = L^{NCE}_{tk},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "where f_k(c_t, z_{t+k}) is a scoring function. We used the standard log-bilinear model as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "f_k(c_t, z_{t+k}) = exp(c_t^T W_k z_{t+k}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "The loss function we maximize is the sum of the InfoNCE losses over all steps, L^{NCE} = \u2211_t \u2211_k L^{NCE}_{tk}, and the negatives are uniformly sampled from representations in the same audio signal (z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contrastive Predictive Coding: CPC", "sec_num": "2" }, { "text": "In this section, we describe our models and objectives for unsupervised representation learning and downstream speech recognition. First, an acoustic feature extractor is trained with a bidirectional variant of contrastive predictive coding on an unlabeled audio dataset. Next, the parameters of this model are frozen and its output representations are used as input to train various speech recognition models, potentially on a different or smaller labeled dataset (Figure 1 ). ", "cite_spans": [], "ref_spans": [ { "start": 465, "end": 474, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "Following the success of bidirectional models in representation learning (Peters et al., 2018; Devlin et al., 2019) , we extend the original CPC method explained above with bidirectional context networks. The encoder function g_enc is shared for both directions, but there are two autoregressive models (g_ar^{fwd} and g_ar^{bwd}) which read encoded observations (z) from the forward and backward contexts, respectively. The forward and backward context representations c_t^{fwd}, c_t^{bwd} are learned with separate InfoNCE losses. When they are used for downstream tasks, a concatenation of the two representations, c_t = [c_t^{fwd}; c_t^{bwd}], is used. 
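As a concrete illustration of the objective above, the following is a minimal sketch of a single InfoNCE term, assuming the log-bilinear score, negatives drawn uniformly from the same utterance, and hypothetical dimensions; the full objective additionally sums over time steps t, prediction offsets k, and the two directions.

```python
import torch
import torch.nn.functional as F

def info_nce_step(c_t, z_future, z_all, W_k, n_negatives=10):
    """One InfoNCE term L^NCE_{tk}: score the true future latent against negatives drawn
    uniformly from the same utterance.  scores[i] = z_i^T W_k c_t is the log-bilinear score
    from the text (W_k maps contexts into the latent space).
    Shapes: c_t (d_c,), z_future (d_z,), z_all (M, d_z)."""
    neg_idx = torch.randint(0, z_all.size(0), (n_negatives,))
    candidates = torch.cat([z_future.unsqueeze(0), z_all[neg_idx]], dim=0)  # (1 + n_negatives, d_z)
    scores = candidates @ (W_k @ c_t)                 # log-bilinear scores, shape (1 + n_negatives,)
    # The bound is maximized when the positive (index 0) receives the highest score,
    # so the loss is a cross-entropy with the positive as the target class.
    return F.cross_entropy(scores.unsqueeze(0), torch.zeros(1, dtype=torch.long))

d_c, d_z, M = 256, 256, 400
W_k = torch.randn(d_z, d_c) * 0.01                    # one projection matrix per prediction step k
c = torch.randn(M, d_c)                               # context vectors for one utterance
z = torch.randn(M, d_z)                               # latent vectors for the same utterance
t, k = 100, 12
loss = info_nce_step(c[t], z[t + k], z, W_k)
print(float(loss))
```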
A similar technique has been used in image representation learning where representations are learned along different spatial dimensions (H\u00e9naff et al., 2019) .", "cite_spans": [ { "start": 73, "end": 94, "text": "(Peters et al., 2018;", "ref_id": "BIBREF40" }, { "start": 95, "end": 115, "text": "Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 779, "end": 800, "text": "(H\u00e9naff et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised learning with bi-directional CPC", "sec_num": "3.1" }, { "text": "All audio signals have a sampling rate of 16kHz and we normalize the mean and variance of the input signals over each utterance in order to mitigate volume differences between samples. For architectures, we use encoder and autoregressive models similar to . The encoder function g enc , is a stack of causal convolutions with kernel sizes (10, 8, 4, 4, 4, 1, 1) and stride sizes (5, 4, 2, 2, 2, 1, 1), corresponding to a receptive field of 10 ms of audio. For autoregressive functions, we use a 13 layer causal convolution architecture with kernel sizes (1, 2, . . . , 12, 13) and stride size 1, for both forward and backward functions. Layer-normalization across the temporal and feature dimensions is applied to every layer. Also, each layer has dense skip connections with layers below as in DenseNet (Huang et al., 2017) . The objective function we optimize is the sum of the forward and backward InfoNCE losses (eq.2).", "cite_spans": [ { "start": 804, "end": 824, "text": "(Huang et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised learning with bi-directional CPC", "sec_num": "3.1" }, { "text": "Once the acoustic representations are trained, the resulting context vectors (c) are used as inputs to character-level speech recognition models which predict transcriptions of audio-signals character by character. The model first predicts frame-level character probabilities with a series of convolution layers while the CTC forward algorithm (Graves et al., 2006) calculates conditional probabilities of a transcription given an audio signal. The model parameters are trained to maximize the log likelihood of the data. The training terminates when the word error rate on the development set stops improving or the model has trained for more than a certain number of epochs. The models are evaluated on the standard word error rate (WER) metric on held-out test data. During training, the parameters in the speech recognition models are trained with supervision but the parameters of the representation models remain fixed. For decoding, we use greedy CTC decoding. In most experiments, we do not use a language model (LM) in order to isolate the effects of the acoustic representations, but we do include results with a 4-gram LM to facilitate comparisons with published results.", "cite_spans": [ { "start": 344, "end": 365, "text": "(Graves et al., 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised speech recognition", "sec_num": "3.2" }, { "text": "Common practice in unsupervised representation learning is to evaluate learned representations using a linear classifier rather than a more complex nonlinear model. However, we find that a simple linear layer followed by a CTC decoder does not have enough capacity to recognize speech. 
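The overall recipe, independent of the particular classifier, can be sketched as follows: the pretrained feature extractor stays frozen and only a frame-level character classifier is trained with a CTC loss and greedy decoding. The classifier below is a hypothetical placeholder rather than the models used in our experiments, which are described next.

```python
import torch
import torch.nn as nn

# Sketch of the semi-supervised recipe: the pretrained feature extractor is frozen and only a
# small frame-level character classifier is trained with CTC.  The classifier here is a
# placeholder; the experiments use DeepSpeech2-small and TDNN recognizers.

vocab_size = 29                          # e.g. 26 letters + space + apostrophe + CTC blank (index 0)
d_c = 512                                # assumed dimensionality of the frozen context vectors

classifier = nn.Sequential(              # hypothetical frame-level classifier on top of CPC features
    nn.Conv1d(d_c, 512, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(512, vocab_size, kernel_size=1),
)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.Adam(classifier.parameters(), lr=2e-4)

def train_step(features, targets, target_lengths):
    """features: (batch, d_c, frames) frozen CPC context vectors; targets: padded label ids."""
    logits = classifier(features)                        # (batch, vocab, frames)
    log_probs = logits.permute(2, 0, 1).log_softmax(-1)  # CTC expects (frames, batch, vocab)
    input_lengths = torch.full((features.size(0),), logits.size(-1), dtype=torch.long)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def greedy_decode(frame_log_probs):                      # (frames, vocab) for one utterance
    """Greedy CTC decoding: argmax per frame, collapse repeats, drop blanks."""
    best, out, prev = frame_log_probs.argmax(-1).tolist(), [], None
    for idx in best:
        if idx != prev and idx != 0:
            out.append(idx)
        prev = idx
    return out

# Dummy batch to exercise the training step.
features = torch.randn(2, d_c, 200)
targets = torch.randint(1, vocab_size, (2, 30))
target_lengths = torch.full((2,), 30, dtype=torch.long)
print(train_step(features, targets, target_lengths))
```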
Thus, for our first set of experiments we use a smaller version of DeepSpeech2 (Amodei et al., 2016) to predict the frame-level character probabilities. The model has two 2d-convolutions with kernel sizes (11, 41) and (11, 21) and stride sizes (2, 2) and (1, 2) and one unidirectional recurrent neural network (GRU) on top of the output from the convolution layers. A linear transformation and a softmax function are applied to predict frame-level character probabilities. We refer to DeepSpeech2 small for the model specifics (Amodei et al., 2016) . In order to further investigate how the representations interact with larger speech recognition models, we use the timedelay neural networks (TDNN) that are commonly used in speech recognition (Collobert et al., 2016; Kuchaiev et al., 2018) . These consist of 17 layers of 1d-convolutions followed by 2 fully connected layers. Refer to OpenSeq2Seq for a detailed description. 1 These large models have been designed to perform well with log-filterbank features and purely supervised learning on large datasets, so they represent a challenging and informative test case for the value of learned representations.", "cite_spans": [ { "start": 365, "end": 386, "text": "(Amodei et al., 2016)", "ref_id": "BIBREF1" }, { "start": 491, "end": 495, "text": "(11,", "ref_id": null }, { "start": 496, "end": 499, "text": "41)", "ref_id": null }, { "start": 504, "end": 508, "text": "(11,", "ref_id": null }, { "start": 509, "end": 512, "text": "21)", "ref_id": null }, { "start": 813, "end": 834, "text": "(Amodei et al., 2016)", "ref_id": "BIBREF1" }, { "start": 1030, "end": 1054, "text": "(Collobert et al., 2016;", "ref_id": "BIBREF7" }, { "start": 1055, "end": 1077, "text": "Kuchaiev et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised speech recognition", "sec_num": "3.2" }, { "text": "We collected publicly available speech datasets which cover a variety of types of speech (e.g. read and spoken), noise conditions and languages. For unsupervised pretraining we use a combination of datasets, using the audio but not any transcriptions, even when they are available. For semi-supervised learning (i.e., evaluation) on top of the representations we use the transcribed datasets following their standard train-test splits. Table 1 summarizes the datasets used for unsupervised learning and English speech recognition tasks.", "cite_spans": [], "ref_spans": [ { "start": 436, "end": 443, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Unlabeled speech pretraining corpus For pretraining, we collected a diverse and noisy speech corpus from several existing datasets: the subset of Audio Set (Gemmeke et al., 2017) containing speech examples, the audio part of AVSpeech (Ephrat et al., 2018) , and the Common Voice (CV) 2 dataset in all 29 available languages. In addition we used the audio from TIMIT (Garofolo, 1993) tions. Finally, we include the audio (again ignoring transcriptions) from the standard training splits of the evaluation datasets below. 
This collection spans a range of recording conditions, noise levels, speaking styles, and languages and amounts to about 8000 hours of audio.", "cite_spans": [ { "start": 234, "end": 255, "text": "(Ephrat et al., 2018)", "ref_id": "BIBREF14" }, { "start": 366, "end": 382, "text": "(Garofolo, 1993)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Transcribed read English For evaluation, we look at the performance of our representations on a variety of standard English recognition tasks, as well as their ability to be trained on one and tested on another. For read English, we use Lib-riSpeech (Panayotov et al., 2015) and the Wall Street Journal (Paul and Baker, 1992) .", "cite_spans": [ { "start": 250, "end": 274, "text": "(Panayotov et al., 2015)", "ref_id": "BIBREF37" }, { "start": 303, "end": 325, "text": "(Paul and Baker, 1992)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Transcribed spoken English To explore more extreme domain shifts, we additionally used conversational speech and public speaking datasets. We used Switchboard (Godfrey et al., 1992), a standard conversational speech recognition dataset consisting of two-sided telephone conversations (test only). Since the data was recorded more than 10 years ago and at a lower sampling rate than the other corpora, it presents a noisy and challenging recognition problem. Finally, we also use the Tedlium-3 (Hernandez et al., 2018) corpus, a large spoken English dataset containing 450 hours of speech extracted from TED conference talks. The recordings are clear, but there is some reverberation.", "cite_spans": [ { "start": 493, "end": 517, "text": "(Hernandez et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Transcription normalization Since we are comparing ASR systems trained on one dataset but evaluated on the test set of another, we normalize transcriptions to reduce systematic biases in the transfer condition. To do so, we use the format of the LibriSpeech dataset, which also ensures that our results are comparable with standard speech recognition systems on that task (Kuchaiev et al., 2018) . For the other datasets, transcriptions are lowercased and unpronounced symbols (e.g., punctuation, silence markers) are removed. We also remove utterances containing numbers as they are transcribed inconsistently across and within datasets.", "cite_spans": [ { "start": 372, "end": 395, "text": "(Kuchaiev et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Transcribed multilingual speech In order to evaluate the transferability of the representations, we use speech recognition datasets in 4 African languages collected by the ALFFA project, 3 Amharic (Tachbelie et al., 2014) , Fongbe (A. A Laleye et al., 2016), Swahili (Gelas et al., 2012) , Wolof (Gauthier et al., 2016) , for evaluation. These languages have unique phonological properties (e.g. height harmony) and phonetic inventories, making them a good contrast to English. These African languages are low-resource, each with 20 hours or less of transcribed speech. We also use 21 phonetically diverse languages from OpenSLR. 
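As a concrete illustration of the transcription normalization described above, the sketch below lowercases, strips non-speech markers and punctuation, and discards utterances containing digits; the marker patterns and symbol list are illustrative rather than the exact per-corpus rules.

```python
import re

def normalize_transcript(text):
    """Normalize a transcript toward the LibriSpeech-style format used for cross-dataset
    evaluation: lowercase, strip punctuation and non-speech markers, and drop utterances
    containing digits (numbers are transcribed inconsistently across corpora).
    The marker patterns below are illustrative, not the exact set used in the paper."""
    if re.search(r"\d", text):
        return None                                      # discard utterances with numbers
    text = text.lower()
    text = re.sub(r"\[[^\]]*\]|<[^>]*>", " ", text)      # e.g. [noise], <sil> style markers
    text = re.sub(r"[^a-z' ]", " ", text)                # keep letters, apostrophes, spaces
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcript("He said, [noise] yes!"))     # -> "he said yes"
print(normalize_transcript("Flight 370 departed"))        # -> None (contains digits)
```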
4 See Appendix A for more detail.", "cite_spans": [ { "start": 197, "end": 221, "text": "(Tachbelie et al., 2014)", "ref_id": "BIBREF45" }, { "start": 267, "end": 287, "text": "(Gelas et al., 2012)", "ref_id": "BIBREF18" }, { "start": 296, "end": 319, "text": "(Gauthier et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We train the model described above ( \u00a73.1) using the datasets described in the previous section ( \u00a74.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Representation Learning", "sec_num": "4.2" }, { "text": "Similarly to Schneider et al. 2019), audio signals are randomly cropped with a window size 149,600 observations (9.35 seconds) and encoded with the model. The bidirectional contrastive predictive coding objective (Eq. 2) with prediction steps (k) 12 and negatives (N ) 10 is optimized with the Adam optimizer with learning rate 0.0001. A batch size of 128 is used as well as a polynomial learning rate scheduler with power 2 and gradient clipping with maximum norm 5.0. Training was terminated at 4.2 million steps based on speech recognition performance on the dev (= validation) set of the LibriSpeech corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Representation Learning", "sec_num": "4.2" }, { "text": "Robustness to shifts in domain, recording conditions, and noise levels is an important desideratum for a good ASR system, and we hypothesized that the diversity of our largest pretraining regime would improve robustness along these dimensions. In contrast, standard MFCC features have been tested in terms of noise robustness and it is known that such representations are sensitive to additive noise (Zhao and Wang, 2013) . Moreover, speech recognition systems developed on top of such features are not robust when they are evaluated on out-of-domain datasets (Amodei et al., 2016) .", "cite_spans": [ { "start": 400, "end": 421, "text": "(Zhao and Wang, 2013)", "ref_id": "BIBREF50" }, { "start": 560, "end": 581, "text": "(Amodei et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Robustness", "sec_num": "4.3" }, { "text": "To test whether our pretraining approach improves robustness, we evaluate speech recognition models trained on the learned representations on many different datasets so as to investigate benefit of using the representations learned from largescale data. We compare ASR systems on all of the Wall Street Journal and LibriSpeech corpora with the same optimization as explained above and evaluate word error rate on different evaluation sets, such as phone call conversations (Switchboard). Table 2 summarizes the results on models trained on Wall Street Journal, LibriSpeech or the Tedlium corpora and evaluated on different evaluation sets. CPC-LibriSpeech and CPC-8k indicate representations are learned from LibriSpeech and 8000h of speech datasets listed above respectively. The features trained on large-scale data consistently outperform other representations across different evaluation sets. The speech recognition models trained on the Wall Street Journal perform badly on phone call data in general. 
However, CPC representations learned on large datasets are more robust than those trained only on read English data (LibriSpeech).", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 495, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Robustness", "sec_num": "4.3" }, { "text": "Thus far, all our experiments have compared our representations in terms of their impacts on English recognition tasks (although we know that the pretraining dataset contains samples from many languages). We now turn to the question of whether these representations are suitable for driving recognition different languages with substantially different phonetic properties than English has. Specifically, we look at the performance on four languages-Amharic, Fongbe, Swahili, and Wolof-which manifest a variety of interesting phonological properties that are quite different from English. Evaluating on such languages will provide insights into the phonetic space learned in the representations. Moreover, our non-English languages are low-resource in terms of speech recognition data, but have 2-20 million native speakers each. It is therefore valuable if the representations learned from large-scale unlabelled data can improve low-resource speech recognition. Although there is a chance that the large-scale pretraining dataset may contain some examples from those languages, we did not add any extra data specifically from those languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-resource Languages", "sec_num": "4.4" }, { "text": "To test the cross-linguistic value of these features, we trained speech recognition models on low-resource languages ( \u00a74.1) and compare the relative reduction in WER by switching from standard spectrogram features and the learned representations. As these are very small datasets, we trained the same DeepSpeech2 small architecture with the Adam optimizer with a fixed learning rate of 0.0002 and gradient clipping with maximum norm 25.0 for all languages. Figure 2 summarizes results. Again, we find that the CPC-8k representations outperform other features by a large margin and that the models trained on the representations trained on using the audio of (English-only) LibriSpeech do not perform even as well as basic spectrogram features. This suggests that the representations learned on large-scale data capture a phonetic space that generalizes across different languages, but that diversity of linguistic inputs is crucial for developing this universality.", "cite_spans": [], "ref_spans": [ { "start": 458, "end": 466, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Low-resource Languages", "sec_num": "4.4" }, { "text": "As a final exploration of the transferability of the representations, we evaluate the representations on a diverse language set of languages with varying amounts of training data and compare the relative reductions in word error rate we obtain when using standard features and switching to the CPC-8k representations. As most of the dataset are small, we trained DeepSpeech2 small models with the Adam optimizer with a fixed learning rate of 0.0002 and applied gradient clipping with maximum norm 25.0, using the same configuration for all languages. Since the experiments above showed that CPC-LibriSpeech features performed badly, we only compare the relative error rediction with CPC-8k features over spectrogram features. 
In all cases, we find that the CPC-8k representations improve performance relative to spectrogram feature baselines. The largest improvement was obtained on Sundanese, where the WER with spectrogram features was 27.85 but dropped to 11.49 with CPC-8k features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Transfer", "sec_num": "4.5" }, { "text": "Discussion As our pre-training data did not have any language labels, it is unclear how many samples were seen for each language during pretraining. However, it is notable that uncurated multilingual pre-training can improve speech recognition performance in many languages. These results suggest, in practice, that one could use a universal speech feature extractor for many languages instead of training one for each language individually (Kannan et al., 2019) .", "cite_spans": [ { "start": 456, "end": 477, "text": "(Kannan et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Transfer", "sec_num": "4.5" }, { "text": "Thus far, we have focused on robustness and transferability and seen that CPC-8k features offer considerable benefits in these dimensions compared to traditional features. It remains to demonstrate how well they work in powerful architectures where large amounts of labeled training data are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Control: English Speech Recognition", "sec_num": "4.6" }, { "text": "To test this, we used 10% and 100% portions of the LibriSpeech dataset to train speech recognition models, again comparing different features. Our architecture is a standard TDNN. The speech recognition models are trained in a similar way to standard models (Collobert et al., 2016; Kuchaiev et al., 2018) . The models are trained with the Adam optimizer with learning rate 0.0002, gradient clipping with maximum norm 5.0, and polynomial learning rate decay with power 2.0 over 200 epochs. 5 Table 3 summarizes the results for TDNN models trained on different amounts of LibriSpeech data. We see that even if the speech recognition models have a large number of parameters and are trained on plenty of supervised data, the learned representations still provide significant improvements. The pattern continues to hold if we use beam search decoding with a language model. 
6 Our + LM decoding results are comparable to the OpenSeq2Seq benchmark, since we used the exact same LM and decoding algorithm as they used (Kuchaiev et al., 2018) .", "cite_spans": [ { "start": 257, "end": 281, "text": "(Collobert et al., 2016;", "ref_id": "BIBREF7" }, { "start": 282, "end": 304, "text": "Kuchaiev et al., 2018)", "ref_id": "BIBREF31" }, { "start": 1042, "end": 1065, "text": "(Kuchaiev et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 520, "end": 527, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Control: English Speech Recognition", "sec_num": "4.6" }, { "text": "Although better results contain be obtained using newer architectures than TDNN (Park et al., 2019; Synnaeve et al., 2019) , it still represents a standard and important recognition architecture and the results prove that the representations learned from diverse and noisy data can improve large speech", "cite_spans": [ { "start": 75, "end": 99, "text": "TDNN (Park et al., 2019;", "ref_id": null }, { "start": 100, "end": 122, "text": "Synnaeve et al., 2019)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Control: English Speech Recognition", "sec_num": "4.6" }, { "text": "Unsupervised learning played an import role in the reintroduction of deep networks to speech processing (Hinton et al., 2012) , as well as other application areas (Hinton et al., 2006; Bengio et al., 2007; Vincent et al., 2010) . After a period of focusing on supervised techniques, unsupervised representation learning has recently seen a resurgence in a variety of modalities (Doersch and Zisserman, 2017; van den Oord et al., 2018; Donahue and Simonyan, 2019; Bachman et al., 2019) and has led to improved results, especially in low-data regimes (H\u00e9naff et al., 2019; . In natural language processing, pretrained representations can outperform state-of-the-art system even in high data regimes (Mikolov et al., 2013; Devlin et al., 2019) .", "cite_spans": [ { "start": 104, "end": 125, "text": "(Hinton et al., 2012)", "ref_id": "BIBREF25" }, { "start": 163, "end": 184, "text": "(Hinton et al., 2006;", "ref_id": "BIBREF26" }, { "start": 185, "end": 205, "text": "Bengio et al., 2007;", "ref_id": "BIBREF6" }, { "start": 206, "end": 227, "text": "Vincent et al., 2010)", "ref_id": "BIBREF47" }, { "start": 378, "end": 407, "text": "(Doersch and Zisserman, 2017;", "ref_id": "BIBREF10" }, { "start": 408, "end": 434, "text": "van den Oord et al., 2018;", "ref_id": "BIBREF36" }, { "start": 435, "end": 462, "text": "Donahue and Simonyan, 2019;", "ref_id": "BIBREF11" }, { "start": 463, "end": 484, "text": "Bachman et al., 2019)", "ref_id": "BIBREF2" }, { "start": 549, "end": 570, "text": "(H\u00e9naff et al., 2019;", "ref_id": "BIBREF23" }, { "start": 697, "end": 719, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF34" }, { "start": 720, "end": 740, "text": "Devlin et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The last two years have produced a large amount of work on unsupervised speech representation learning. Some of this work has been evaluated in terms of its ability to perform phone recognition and similar audio classification tasks (van den Oord et al., 2018) . Like us, Schneider et al. (2019) ; applied learned representations to speech recognition tasks and evaluated on how well in-domain WER was improved. 
However, as we argued in the paper, such an evaluation misses the opportunity to assess whether these systems become more robust to domain shift and to what extent the learned representations appropriate for different languages.", "cite_spans": [ { "start": 233, "end": 260, "text": "(van den Oord et al., 2018)", "ref_id": "BIBREF36" }, { "start": 263, "end": 295, "text": "Like us, Schneider et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Finally, the ZeroSpeech challenges have explicitly looked at correlations between learned representations and phonetic structures that generalize across many languages and adapt to new speakers (Dunbar et al., 2017 (Dunbar et al., , 2019 . Kahn et al. (2019b) ; Rivi\u00e8re et al. (2020) learned representations with contrastive predictive coding on 60,000 hours of English speech and could show that their representations are correlated well with phonetic structure of English and other languages; however, they did not evaluate these representations in a supervised speech recognizer.", "cite_spans": [ { "start": 194, "end": 214, "text": "(Dunbar et al., 2017", "ref_id": "BIBREF13" }, { "start": 215, "end": 237, "text": "(Dunbar et al., , 2019", "ref_id": "BIBREF12" }, { "start": 240, "end": 259, "text": "Kahn et al. (2019b)", "ref_id": "BIBREF29" }, { "start": 262, "end": 283, "text": "Rivi\u00e8re et al. (2020)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Recently, there have been considerable improvements in purely supervised speech recognition systems. Data augmentation (Park et al., 2019), selftraining (Synnaeve et al., 2019; Kahn et al., 2019a) have advanced the state-of-the-art performance on English speech recognition. It is likely that augmentation methods are orthogonal to the proposed improvements on universal speech representation learning, and that one could combine both to improve results even further. Additionally, the impact of data augmentation and self-training can be further assessed in terms of its impact on robustness using the methods proposed in this paper.", "cite_spans": [ { "start": 153, "end": 176, "text": "(Synnaeve et al., 2019;", "ref_id": "BIBREF44" }, { "start": 177, "end": 196, "text": "Kahn et al., 2019a)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We have introduced an unsupervised speech representation learning method that discovers acoustic representations from up to 8000 hours of diverse and noisy speech data. We have shown, for the first time, that such pretrained representations lead speech recognition systems to be robust to domain shifts compared to standard acoustic representations, and compared to representations trained on smaller and more domain-narrow pretraining datasets. These representations were evaluated on a standard speech recognition setup where the models are trained and evaluated on in-domain data and also on transfer tasks where the models are evaluated on out-of-domain data. We obtained consistent improvements on 25 phonetically diverse languages including tonal and low-resource languages. 
This suggests we are making progress toward models that implicitly discover phonetic structure from large-scale unlabelled audio signals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For the multilingual evaluation, we only include (labeled) datasets from OpenSLR that containing more than 1GB of audio. When there is more than one dataset available for one language, we used the largest dataset. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Multilingual evaluation datasets", "sec_num": null }, { "text": "https://nvidia.github.io/OpenSeq2Seq/ html/speech-recognition/wave2letter.html 2 https://voice.mozilla.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://alffa.imag.fr 4 https://openslr.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These hyperparameters were chosen to give optimal performance with baseline log filterbank features, and used, unchanged for our learned features.6 http://www.openslr.org/resources/11/ 4-gram.arpa.gz recognition model on English in both low-data and high-data regimes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "First automatic fongbe continuous speech recognition system: Development of acoustic models and language models", "authors": [ { "first": "A", "middle": [], "last": "Fr\u00e9jus", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Laleye", "suffix": "" }, { "first": "", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "C", "middle": [], "last": "Eug\u00e8ne", "suffix": "" }, { "first": "Cina", "middle": [], "last": "Ezin", "suffix": "" }, { "first": "", "middle": [], "last": "Motamed", "suffix": "" } ], "year": 2016, "venue": "Proc. FedCSIS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fr\u00e9jus A. A Laleye, Laurent Besacier, Eug\u00e8ne C. Ezin, and Cina Motamed. 2016. First automatic fongbe continuous speech recognition system: Develop- ment of acoustic models and language models. In Proc. FedCSIS.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "authors": [ { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Rishita", "middle": [], "last": "Sundaram Ananthanarayanan", "suffix": "" }, { "first": "Jingliang", "middle": [], "last": "Anubhai", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Case", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Guoliang", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guo- liang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In Proc. 
ICML.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning representations by maximizing mutual information across views", "authors": [ { "first": "Philip", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "Devon", "middle": [], "last": "Hjelm", "suffix": "" }, { "first": "William", "middle": [], "last": "Buchwalter", "suffix": "" } ], "year": 2019, "venue": "Proc. NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Bachman, R Devon Hjelm, and William Buch- walter. 2019. Learning representations by maxi- mizing mutual information across views. In Proc. NeurIPS.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "vq-wav2vec: Self-supervised learning of discrete speech representations", "authors": [ { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.05453" ] }, "num": null, "urls": [], "raw_text": "Alexei Baevski, Steffen Schneider, and Michael Auli. 2019. vq-wav2vec: Self-supervised learning of discrete speech representations. arXiv preprint arXiv:1910.05453.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mine: mutual information neural estimation", "authors": [ { "first": "Mohamed", "middle": [ "Ishmael" ], "last": "Belghazi", "suffix": "" }, { "first": "Aristide", "middle": [], "last": "Baratin", "suffix": "" }, { "first": "Sai", "middle": [], "last": "Rajeswar", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "R Devon", "middle": [], "last": "Hjelm", "suffix": "" } ], "year": 2018, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. 2018. Mine: mutual information neural estimation. In Proc. ICML.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An information-maximization approach to blind separation and blind deconvolution", "authors": [ { "first": "J", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Terrence", "middle": [ "J" ], "last": "Bell", "suffix": "" }, { "first": "", "middle": [], "last": "Sejnowski", "suffix": "" } ], "year": 1995, "venue": "Neural computation", "volume": "7", "issue": "6", "pages": "1129--1159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony J Bell and Terrence J Sejnowski. 1995. An information-maximization approach to blind separa- tion and blind deconvolution. Neural computation, 7(6):1129-1159.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Greedy layer-wise training of deep networks", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Lamblin", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Popovici", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2007, "venue": "Proc. NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. 
Greedy layer-wise training of deep networks. In Proc. NeurIPS.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Wav2letter: an end-to-end convnetbased speech recognition system", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Synnaeve", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.03193" ] }, "num": null, "urls": [], "raw_text": "Ronan Collobert, Christian Puhrsch, and Gabriel Syn- naeve. 2016. Wav2letter: an end-to-end convnet- based speech recognition system. arXiv preprint arXiv:1609.03193.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Robust features for noisy speech recognition using mfcc computation from magnitude spectrum of higher order autocorrelation coefficients", "authors": [ { "first": "Amita", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Poonam", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2010, "venue": "International Journal of Computer Applications", "volume": "10", "issue": "8", "pages": "36--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amita Dev and Poonam Bansal. 2010. Robust features for noisy speech recognition using mfcc computa- tion from magnitude spectrum of higher order au- tocorrelation coefficients. International Journal of Computer Applications, 10(8):36-38.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proc. of NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multitask self-supervised visual learning", "authors": [ { "first": "Carl", "middle": [], "last": "Doersch", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2017, "venue": "Proc. ICCV", "volume": "", "issue": "", "pages": "2051--2060", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Doersch and Andrew Zisserman. 2017. Multi- task self-supervised visual learning. In Proc. ICCV, pages 2051-2060.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Large scale adversarial representation learning", "authors": [ { "first": "Jeff", "middle": [], "last": "Donahue", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.02544" ] }, "num": null, "urls": [], "raw_text": "Jeff Donahue and Karen Simonyan. 2019. Large scale adversarial representation learning. 
arXiv preprint arXiv:1907.02544.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The zero resource speech challenge 2019: Tts without t", "authors": [ { "first": "Ewan", "middle": [], "last": "Dunbar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Algayres", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Karadayi", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Bernard", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Benjumea", "suffix": "" }, { "first": "Xuan-Nga", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lucie", "middle": [], "last": "Miskic", "suffix": "" }, { "first": "Charlotte", "middle": [], "last": "Dugrain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Ondel", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2019, "venue": "Proc. INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ewan Dunbar, Robin Algayres, Julien Karadayi, Math- ieu Bernard, Juan Benjumea, Xuan-Nga Cao, Lucie Miskic, Charlotte Dugrain, Lucas Ondel, Alan W Black, et al. 2019. The zero resource speech chal- lenge 2019: Tts without t. In Proc. INTERSPEECH.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The zero resource speech challenge 2017", "authors": [ { "first": "Ewan", "middle": [], "last": "Dunbar", "suffix": "" }, { "first": "Xuan", "middle": [ "Nga" ], "last": "Cao", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Benjumea", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Karadayi", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Bernard", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Anguera", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "323--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. 2017. The zero resource speech challenge 2017. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 323-330. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation", "authors": [ { "first": "A", "middle": [], "last": "Ephrat", "suffix": "" }, { "first": "I", "middle": [], "last": "Mosseri", "suffix": "" }, { "first": "O", "middle": [], "last": "Lang", "suffix": "" }, { "first": "T", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "A", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "W", "middle": [ "T" ], "last": "Hassidim", "suffix": "" }, { "first": "M", "middle": [], "last": "Freeman", "suffix": "" }, { "first": "", "middle": [], "last": "Rubinstein", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.03619" ] }, "num": null, "urls": [], "raw_text": "A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein. 2018. Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation. 
arXiv preprint arXiv:1804.03619.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transmission of information: A statistical theory of communications", "authors": [ { "first": "M", "middle": [], "last": "Robert", "suffix": "" }, { "first": "David", "middle": [], "last": "Fano", "suffix": "" }, { "first": "", "middle": [], "last": "Hawkins", "suffix": "" } ], "year": 1961, "venue": "American Journal of Physics", "volume": "29", "issue": "", "pages": "793--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M Fano and David Hawkins. 1961. Transmis- sion of information: A statistical theory of communi- cations. American Journal of Physics, 29:793-794.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Timit acoustic phonetic continuous speech corpus", "authors": [ { "first": "S", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Garofolo", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S Garofolo. 1993. Timit acoustic phonetic contin- uous speech corpus. Linguistic Data Consortium.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Collecting resources in sub-saharan african languages for automatic speech recognition: a case study of wolof", "authors": [ { "first": "Elodie", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Sylvie", "middle": [], "last": "Voisin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Melese", "suffix": "" }, { "first": "Uriel", "middle": [ "Pascal" ], "last": "Elingui", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elodie Gauthier, Laurent Besacier, Sylvie Voisin, Michael Melese, and Uriel Pascal Elingui. 2016. Collecting resources in sub-saharan african lan- guages for automatic speech recognition: a case study of wolof. LREC.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Developments of swahili resources for an automatic speech recognition system", "authors": [ { "first": "Hadrien", "middle": [], "last": "Gelas", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Pellegrino", "suffix": "" } ], "year": 2012, "venue": "Workshop Proc. SLTU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hadrien Gelas, Laurent Besacier, and Francois Pelle- grino. 2012. Developments of swahili resources for an automatic speech recognition system. In Work- shop Proc. 
SLTU.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Audio set: An ontology and human-labeled dataset for audio events", "authors": [ { "first": "F", "middle": [], "last": "Jort", "suffix": "" }, { "first": "", "middle": [], "last": "Gemmeke", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Daniel", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "Aren", "middle": [], "last": "Freedman", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "R", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Marvin", "middle": [], "last": "Plakal", "suffix": "" }, { "first": "", "middle": [], "last": "Ritter", "suffix": "" } ], "year": 2017, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In Proc. ICASSP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Switchboard: Telephone speech corpus for research and development", "authors": [ { "first": "J", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Godfrey", "suffix": "" }, { "first": "C", "middle": [], "last": "Edward", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Holliman", "suffix": "" }, { "first": "", "middle": [], "last": "Mc-Daniel", "suffix": "" } ], "year": 1992, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John J Godfrey, Edward C Holliman, and Jane Mc- Daniel. 1992. Switchboard: Telephone speech cor- pus for research and development. In Proc. ICASSP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Faustino", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks. In Proc. ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Noisecontrastive estimation: A new estimation principle for unnormalized statistical models", "authors": [ { "first": "Michael", "middle": [], "last": "Gutmann", "suffix": "" }, { "first": "Aapo", "middle": [], "last": "Hyv\u00e4rinen", "suffix": "" } ], "year": 2010, "venue": "Proc. AIS-TATS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gutmann and Aapo Hyv\u00e4rinen. 2010. Noise- contrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. 
AIS- TATS.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Data-efficient image recognition with contrastive predictive coding", "authors": [ { "first": "J", "middle": [], "last": "Olivier", "suffix": "" }, { "first": "Ali", "middle": [], "last": "H\u00e9naff", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Razavi", "suffix": "" }, { "first": "", "middle": [], "last": "Doersch", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Sm Eslami", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den Oord", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.09272" ] }, "num": null, "urls": [], "raw_text": "Olivier J H\u00e9naff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. 2019. Data-efficient im- age recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Ted-lium 3: twice as much data and corpus repartition for experiments on speaker adaptation", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Hernandez", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Sahar", "middle": [], "last": "Ghannay", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Tomashenko", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Est\u00e8ve", "suffix": "" } ], "year": 2018, "venue": "Proc. SPECOM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Hernandez, Vincent Nguyen, Sahar Ghan- nay, Natalia Tomashenko, and Yannick Est\u00e8ve. 2018. Ted-lium 3: twice as much data and corpus reparti- tion for experiments on speaker adaptation. In Proc. SPECOM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "George", "middle": [], "last": "Dahl", "suffix": "" }, { "first": "Abdel-Rahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Senior", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. 2012. Deep neural networks for acoustic modeling in speech recognition. 
IEEE Sig- nal processing magazine, 29.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A fast learning algorithm for deep belief nets", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Osindero", "suffix": "" }, { "first": "Yee-Whye", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2006, "venue": "Neural computation", "volume": "18", "issue": "7", "pages": "1527--1554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527-1554.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Densely connected convolutional networks", "authors": [ { "first": "Gao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhuang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Kilian Q", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2017, "venue": "Proc. CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected con- volutional networks. In Proc. CVPR.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Selftraining for end-to-end speech recognition", "authors": [ { "first": "Jacob", "middle": [], "last": "Kahn", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.09116" ] }, "num": null, "urls": [], "raw_text": "Jacob Kahn, Ann Lee, and Awni Hannun. 2019a. Self- training for end-to-end speech recognition. arXiv preprint arXiv:1909.09116.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Libri-light: A benchmark for asr with limited or no supervision", "authors": [ { "first": "Jacob", "middle": [], "last": "Kahn", "suffix": "" }, { "first": "Morgane", "middle": [], "last": "Rivi\u00e8re", "suffix": "" }, { "first": "Weiyi", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Kharitonov", "suffix": "" }, { "first": "Qiantong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Pierre-Emmanuel", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Karadayi", "suffix": "" }, { "first": "Vitaliy", "middle": [], "last": "Liptchinsky", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Fuegen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.07875" ] }, "num": null, "urls": [], "raw_text": "Jacob Kahn, Morgane Rivi\u00e8re, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazar\u00e9, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. 2019b. Libri-light: A benchmark for asr with limited or no supervision. 
arXiv preprint arXiv:1912.07875.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Large-scale multilingual speech recognition with a streaming end-to-end model", "authors": [ { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Arindrima", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Weinstein", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Seungji", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Proc. INTER-SPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anjuli Kannan, Arindrima Datta, Tara N Sainath, Eu- gene Weinstein, Bhuvana Ramabhadran, Yonghui Wu, Ankur Bapna, Zhifeng Chen, and Seungji Lee. 2019. Large-scale multilingual speech recognition with a streaming end-to-end model. In Proc. INTER- SPEECH.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Mixed-precision training for nlp and speech recognition with openseq2seq", "authors": [ { "first": "Oleksii", "middle": [], "last": "Kuchaiev", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Ginsburg", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Gitman", "suffix": "" }, { "first": "Vitaly", "middle": [], "last": "Lavrukhin", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Li", "suffix": "" }, { "first": "Huyen", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Paulius", "middle": [], "last": "Micikevicius", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.10387" ] }, "num": null, "urls": [], "raw_text": "Oleksii Kuchaiev, Boris Ginsburg, Igor Gitman, Vitaly Lavrukhin, Jason Li, Huyen Nguyen, Carl Case, and Paulius Micikevicius. 2018. Mixed-precision train- ing for nlp and speech recognition with openseq2seq. arXiv preprint arXiv:1805.10387.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Delta-spectral cepstral coefficients for robust speech recognition", "authors": [ { "first": "Kshitiz", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Chanwoo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Richard M", "middle": [], "last": "Stern", "suffix": "" } ], "year": 2011, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kshitiz Kumar, Chanwoo Kim, and Richard M Stern. 2011. Delta-spectral cepstral coefficients for robust speech recognition. In Proc. ICASSP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "An application of the principle of maximum information preservation to linear systems", "authors": [ { "first": "Ralph", "middle": [], "last": "Linsker", "suffix": "" } ], "year": 1988, "venue": "Proc. NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Linsker. 1988. An application of the principle of maximum information preservation to linear sys- tems. In Proc. 
NeurIPS.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proc. NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Proc. NeurIPS.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Geometric invariance in computer vision", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mundy", "suffix": "" }, { "first": "", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 1992, "venue": "", "volume": "92", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L Mundy, Andrew Zisserman, et al. 1992. Ge- ometric invariance in computer vision, volume 92. MIT press Cambridge, MA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Representation learning with contrastive predictive coding", "authors": [ { "first": "Aaron", "middle": [], "last": "Van Den Oord", "suffix": "" }, { "first": "Yazhe", "middle": [], "last": "Li", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.03748" ] }, "num": null, "urls": [], "raw_text": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Librispeech: an asr corpus based on public domain audio books", "authors": [ { "first": "Vassil", "middle": [], "last": "Panayotov", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2015, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr cor- pus based on public domain audio books. In Proc. ICASSP.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "William", "middle": [], "last": "Park", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "D", "middle": [], "last": "Ekin", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Cubuk", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Proc. 
INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. Specaugment: A simple data augmentation method for automatic speech recognition. In Proc. INTERSPEECH.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The design for the wall street journal-based csr corpus", "authors": [ { "first": "B", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "Janet", "middle": [ "M" ], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Baker", "suffix": "" } ], "year": 1992, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas B Paul and Janet M Baker. 1992. The design for the wall street journal-based csr corpus. In Proc. ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proc. of NAACL.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "On variational bounds of mutual information", "authors": [ { "first": "Ben", "middle": [], "last": "Poole", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "Alexander", "middle": [ "A" ], "last": "Oord", "suffix": "" }, { "first": "George", "middle": [], "last": "Alemi", "suffix": "" }, { "first": "", "middle": [], "last": "Tucker", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.06922" ] }, "num": null, "urls": [], "raw_text": "Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexan- der A Alemi, and George Tucker. 2019. On varia- tional bounds of mutual information. arXiv preprint arXiv:1905.06922.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Unsupervised pretraining transfers well across languages", "authors": [ { "first": "Morgane", "middle": [], "last": "Rivi\u00e8re", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Pierre-Emmanuel", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" } ], "year": 2020, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morgane Rivi\u00e8re, Armand Joulin, Pierre-Emmanuel Mazar\u00e9, and Emmanuel Dupoux. 2020. Unsuper- vised pretraining transfers well across languages. In Proc. 
ICASSP.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "wav2vec: Unsupervised pre-training for speech recognition", "authors": [ { "first": "Steffen", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proc. INTER-SPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In Proc. INTER- SPEECH.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "End-to-end asr: from supervised to semi-supervised learning with modern architectures", "authors": [ { "first": "Gabriel", "middle": [], "last": "Synnaeve", "suffix": "" }, { "first": "Qiantong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Kahn", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tatiana", "middle": [], "last": "Likhomanenko", "suffix": "" }, { "first": "Vineel", "middle": [], "last": "Pratap", "suffix": "" }, { "first": "Anuroop", "middle": [], "last": "Sriram", "suffix": "" }, { "first": "Vitaliy", "middle": [], "last": "Liptchinsky", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.08460" ] }, "num": null, "urls": [], "raw_text": "Gabriel Synnaeve, Qiantong Xu, Jacob Kahn, Edouard Grave, Tatiana Likhomanenko, Vineel Pratap, Anuroop Sriram, Vitaliy Liptchinsky, and Ronan Collobert. 2019. End-to-end asr: from supervised to semi-supervised learning with modern architectures. arXiv preprint arXiv:1911.08460.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Using different acoustic, lexical and language modeling units for asr of an underresourced language -amharic", "authors": [ { "first": "Martha", "middle": [], "last": "Tachbelie", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Solomon Teferra Abate", "suffix": "" }, { "first": "", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2014, "venue": "", "volume": "56", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Tachbelie, Solomon Teferra Abate, and Lau- rent Besacier. 2014. Using different acoustic, lexi- cal and language modeling units for asr of an under- resourced language -amharic. Speech Communica- tion, 56.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Robust photometric invariant features from the color tensor", "authors": [ { "first": "Joost", "middle": [], "last": "Van De Weijer", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Gevers", "suffix": "" }, { "first": "Arnold Wm", "middle": [], "last": "Smeulders", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Image Processing", "volume": "15", "issue": "1", "pages": "118--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joost Van De Weijer, Theo Gevers, and Arnold WM Smeulders. 2005. Robust photometric invariant fea- tures from the color tensor. 
IEEE Transactions on Image Processing, 15(1):118-127.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "authors": [ { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Lajoie", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2010, "venue": "Journal of machine learning research", "volume": "11", "issue": "", "pages": "3371--3408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local de- noising criterion. Journal of machine learning re- search, 11(Dec):3371-3408.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Towards a typology of english accents", "authors": [ { "first": "H", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Kunath", "suffix": "" } ], "year": 2009, "venue": "", "volume": "104", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven H Weinberger and Stephen Kunath. 2009. To- wards a typology of english accents. AACL Abstract Book, 104.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Slow feature analysis: Unsupervised learning of invariances", "authors": [ { "first": "Laurenz", "middle": [], "last": "Wiskott", "suffix": "" }, { "first": "Terrence", "middle": [ "J" ], "last": "Sejnowski", "suffix": "" } ], "year": 2002, "venue": "Neural computation", "volume": "14", "issue": "4", "pages": "715--770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurenz Wiskott and Terrence J Sejnowski. 2002. Slow feature analysis: Unsupervised learning of invari- ances. Neural computation, 14(4):715-770.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Analyzing noise robustness of mfcc and gfcc features in speaker identification", "authors": [ { "first": "Xiaojia", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Deliang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojia Zhao and DeLiang Wang. 2013. Analyzing noise robustness of mfcc and gfcc features in speaker identification. In Proc. ICASSP.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Left, unsupervised representation learning with forward contrastive predictive coding. The learned representations are fixed and used as inputs to a speech recognition model (Right)." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Speech recognition performance on lowresource African languages (in word error rate). CPC features trained on diverse datasets features significantly outperform baseline log-filterbank features whereas the features trained only on English underperform the baseline." }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "summarizes results." 
}, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": "si bn es gl wo ta kn en gu eu ca fon ml am ug sw ar fr cs" }, "FIGREF6": { "uris": null, "type_str": "figure", "num": null, "text": "Relative improvements (in percentage) on speech recognition on many languages with CPC-8k features over Spectrogram features. Each column correspond to language code explained in" }, "TABREF0": { "type_str": "table", "content": "
Name Language Type Hours
Audio Set Multilingual - 2500
AVSpeech Multilingual - 3100
Common Voice Multilingual read 430
LibriSpeech English read 960
WSJ English read 80
TIMIT English read 5
SSA English read <1
Tedlium English spoken 440
Switchboard English spoken 310
", "html": null, "num": null, "text": "and the Speech Accent Archive (Weinberger andKunath, 2009), ignoring the transcrip-" }, "TABREF1": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Summary of English Datasets." }, "TABREF3": { "type_str": "table", "content": "
", "html": null, "num": null, "text": "Domain transfer experiments to test the robustness of the representations to domain shifts. The models are trained on the Wall Street Journal, LibriSpeech or Tedlium and evaluated on different evaluation sets. The results on in-domain evaluation sets are in gray color. All the results are without a language model." }, "TABREF4": { "type_str": "table", "content": "
LibriSpeech
dev-clean dev-other test-clean test-other
10% 100% 10% 100% 10% 100% 10% 100%
LibriSpeech
LogFilterbank (OpenSeq2Seq) - 6.67 - 18.67 - 6.58 - 19.61
LogFilterbank (ours) 19.83 6.63 38.97 18.77 19.65 6.43 41.26 20.16
CPC-LibriSpeech 15.07 6.70 33.55 19.77 14.96 6.91 36.05 21.60
CPC-8k 13.92 6.20 30.85 17.93 13.69 6.25 32.81 19.10
+ LM decoding
LogFilterbank (OpenSeq2Seq) - 4.75 - 13.87 - 4.94 - 15.06
LogFilterbank (ours) 12.49 4.87 28.71 14.14 12.29 5.04 31.03 15.25
CPC-LibriSpeech 9.66 4.87 24.72 14.34 9.41 5.05 26.77 16.06
CPC-8k 8.86 4.35 22.10 12.96 8.70 4.72 24.15 14.47
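As a sanity check on the sample-efficiency numbers above, the relative improvements reported for CPC-8k over the log-filterbank baseline can be derived directly from rows of this table. The sketch below is illustrative only, not code from the paper; the function name and the choice of the 10% dev-clean column are assumptions made for the example.

```python
# Illustrative sketch (not from the paper): relative WER reduction of
# CPC-8k features over the log-filterbank baseline, using the 10%
# LibriSpeech dev-clean numbers from the table above.
def relative_improvement(baseline_wer: float, new_wer: float) -> float:
    """Return the relative reduction in word error rate, in percent."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer


if __name__ == "__main__":
    baseline = 19.83  # LogFilterbank (ours), dev-clean, 10% of training data
    cpc_8k = 13.92    # CPC-8k, dev-clean, 10% of training data
    print(f"{relative_improvement(baseline, cpc_8k):.1f}% relative WER reduction")
    # prints roughly 29.8%
```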
", "html": null, "num": null, "text": "Note that en is Nigerian English and fr is African French." }, "TABREF5": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Sample efficiency experiments with the TDNN trained and evaluated on LibriSpeech. The results are word error rate on the LibriSpeech development and evaluation sets. 10% vs. 100% indicates the amount of training data used. The section in + LM decoding contain results with beamsearch decoding with a 4-gram language model. The underlined (OpenSeq2Seq) scores are taken from public benchmarks. 7" }, "TABREF6": { "type_str": "table", "content": "
Language name Code Dataset Hours
Amharic am ALFFA 18.3
Fongbe fon ALFFA 5.2
Swahili sw ALFFA 8.9
Wolof wo ALFFA 16.8
Czech cs OpenSLR-6 15.0
Uyghur ug OpenSLR-22 20.2
Javanese jv OpenSLR-35 236.8
Sundanese su OpenSLR-36 265.9
Tunisian Arabic ar OpenSLR-46 4.5
Sinhala si OpenSLR-52 179.6
Bengali bn OpenSLR-53 172.3
Nepali ne OpenSLR-54 123.6
African French fr OpenSLR-57 13.7
Catalan ca OpenSLR-59 71.9
Malayalam ml OpenSLR-63 4.4
Tamil ta OpenSLR-65 5.7
Spanish es OpenSLR-67 19.6
Nigerian English en OpenSLR-70 39.5
Chilean Spanish es OpenSLR-71 5.7
Colombian Spanish es OpenSLR-72 6.1
Peruvian Spanish es OpenSLR-73 7.3
Basque eu OpenSLR-76 11.0
Galician gl OpenSLR-77 8.2
Gujarati gu OpenSLR-78 6.3
Kannada kn OpenSLR-79 6.7
", "html": null, "num": null, "text": "summarizes the multilingual dataset statistics used in our evaluation." }, "TABREF7": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Summary of Multilingual Datasets." } } } }