{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:43:50.986984Z" }, "title": "Howl: A Deployed, Open-Source Wake Word Detection System", "authors": [ { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Jaejun", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Afsaneh", "middle": [], "last": "Razi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Julia", "middle": [], "last": "Cambre", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Ian", "middle": [], "last": "Bicking", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Jofish", "middle": [], "last": "Kaye", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": { "addrLine": "2 Mozilla" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe Howl, an open-source wake word detection toolkit with native support for open speech datasets such as Mozilla Common Voice (MCV) and Google Speech Commands (GSC). We report benchmark results of various models supported by our toolkit on GSC and our own freely available wake word detection dataset, built from MCV. One of our models is deployed in Firefox Voice, a plugin enabling speech interactivity for the Firefox web browser. Howl represents, to the best of our knowledge, the first fully productionized, open-source wake word detection toolkit with a web browser deployment target. Our codebase is at howl.ai. * Equal contribution. Order decided by coin flip.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We describe Howl, an open-source wake word detection toolkit with native support for open speech datasets such as Mozilla Common Voice (MCV) and Google Speech Commands (GSC). We report benchmark results of various models supported by our toolkit on GSC and our own freely available wake word detection dataset, built from MCV. One of our models is deployed in Firefox Voice, a plugin enabling speech interactivity for the Firefox web browser. Howl represents, to the best of our knowledge, the first fully productionized, open-source wake word detection toolkit with a web browser deployment target. Our codebase is at howl.ai. * Equal contribution. Order decided by coin flip.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Wake word detection is the task of recognizing an utterance for activating a speech assistant, such as \"Hey, Alexa\" for the Amazon Echo. Given that such systems are meant to support fully automatic speech recognition, the task seems simple. 
However, it introduces a different set of challenges because these systems have to be always listening, computationally efficient, and, most of all, privacy-respecting. Therefore, researchers treat it as a separate line of work, with most recent advancements driven by neural networks (Sainath and Parada, 2015; Tang and Lin, 2018) .", "cite_spans": [ { "start": 526, "end": 552, "text": "(Sainath and Parada, 2015;", "ref_id": "BIBREF13" }, { "start": 553, "end": 572, "text": "Tang and Lin, 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, most existing toolkits are closed source and often specific to a target platform. Such design choices restrict the flexibility of the application and add unnecessary maintenance as the number of target domains increases. We argue that JavaScript offers a solution: unlike many languages and their runtimes, the JavaScript engine powers a wide range of modern user-facing applications, from mobile to desktop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To this end, we have previously developed Honkling, a JavaScript-based keyword spotting system (Lee et al., 2019) . Leveraging one of the lightest models available for the task from Tang and Lin (2018) , Honkling efficiently detects the target commands with high precision. However, we notice that Honkling is still quite far from being a stable wake word detection system. This gap mainly arises from the model being trained as a speech commands classifier instead of a wake word detector; its high false alarm rate results from the limited number of negative samples in the training dataset (Warden, 2018) .", "cite_spans": [ { "start": 95, "end": 113, "text": "(Lee et al., 2019)", "ref_id": "BIBREF5" }, { "start": 182, "end": 201, "text": "Tang and Lin (2018)", "ref_id": "BIBREF17" }, { "start": 593, "end": 607, "text": "(Warden, 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, to achieve greater real-world impact, we close this gap in the Honkling ecosystem and present Howl, an open-source wake word detection toolkit with support for open datasets such as Mozilla Common Voice (MCV; Ardila et al., 2019) and the Google Speech Commands dataset (GSC; Warden, 2018) . Howl is the first in-browser wake word detection system which powers a widely deployed consumer application, Firefox Voice. 1 By processing the audio in the browser and being completely open source, including the datasets and models, Howl is a privacy-respecting, non-eavesdropping toolkit that users can trust. With a false reject rate of 16% at five false alarms per hour of speech, our deployed model has enabled Firefox Voice to provide a completely hands-free experience to over 8,000 users in the 9 days since its launch in August 2020. 2 Figure 1 : An illustration of Howl's end-to-end pipeline and its control flow. First, we preprocess the incoming audio dataset by filtering for the wake word vocabulary, aligning the speech, and saving the negative and positive examples to disk. Next, we introduce a noise dataset and augment the data on the fly at training time. Finally, we evaluate the optimized model and, if the results are satisfactory, export it for deployment. Ecosystems such as Snowboy also exist. Such ecosystems provide an open-source modeling toolkit, some data, and deployment capabilities.
Unfortunately, these ecosystems are still closed at heart; they keep their data, models, or deployment proprietary. As far as open-source ecosystems go, Precise 3 represents a step in the right direction, but its datasets are limited, and its deployment target is the Raspberry Pi.", "cite_spans": [ { "start": 224, "end": 244, "text": "Ardila et al., 2019)", "ref_id": "BIBREF1" }, { "start": 290, "end": 303, "text": "Warden, 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 850, "end": 858, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We further make the distinction between wake word detection and speech commands classification toolkits such as Honk (Tang and Lin, 2017) . These frameworks focus on classifying fixed-length audio as one of a few dozen keywords, with no evaluation on a sizable negative set, as required in wake word detection. While these trained models may be used in detection applications, they are not rigorously tested for such.", "cite_spans": [ { "start": 117, "end": 137, "text": "(Tang and Lin, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "We present a high-level description of our toolkit and its goals (see Howl's architecture in Figure 1 ). For specific details, we refer users to our code repository, as linked in the abstract.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "3" }, { "text": "Howl is written in Python 3.7+, with notable dependencies being PyTorch (Paszke et al., 2019) for model training, Librosa (McFee et al., 2015) for audio preprocessing, and the Montreal Forced Aligner (MFA; McAuliffe et al., 2017) for speech data alignment. We release Howl under the Mozilla Public License v2, a file-level copyleft free license. For speedy model training, we recommend a CUDA-enabled graphics card with at least 4GB of VRAM; we used an Nvidia Titan RTX in all of our experiments. For resource-restricted users, we suggest exploring Google Colab 4 and other cloud-based solutions.", "cite_spans": [ { "start": 72, "end": 93, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF11" }, { "start": 114, "end": 142, "text": "Librosa (McFee et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Requirements", "sec_num": "3.1" }, { "text": "Howl consists of the following three major components: audio preprocessing, data augmentation, and model training and evaluation. These components form a pipeline, in the written order, for producing deployable models from raw audio data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" }, { "text": "Preprocessing. A wake word dataset must first be preprocessed from an annotated data source, which is defined as a collection of (audio, transcription) pairs, with predefined training, development, and test splits. Since Howl is a frame-level keyword spotting system, it relies on a forced aligner to provide word- or phone-based alignment. We choose MFA for its popularity and free license, and hence Howl structures the processed datasets to interface well with MFA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" }, { "text": "Another preprocessing task is to parse the global configuration settings for the framework. Such settings include the learning rate, the dataset path, and model-specific hyperparameters. The toolkit reads in most of these settings as environment variables, which enable easy shell scripting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" },
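{ "text": "To make this concrete, here is a minimal sketch of reading such settings from environment variables with fallbacks to defaults; the variable names (HOWL_LEARNING_RATE, HOWL_NUM_EPOCHS, HOWL_DATASET_PATH) and the train.py entry point are illustrative assumptions, not Howl's actual configuration keys.

import os

# Illustrative sketch only: read hyperparameters from environment
# variables, falling back to defaults when a variable is unset.
learning_rate = float(os.environ.get('HOWL_LEARNING_RATE', '0.001'))
num_epochs = int(os.environ.get('HOWL_NUM_EPOCHS', '20'))
dataset_path = os.environ.get('HOWL_DATASET_PATH', 'data/hey-firefox')

print(f'lr={learning_rate} epochs={num_epochs} data={dataset_path}')

Under this pattern, a shell script can configure an entire run inline, e.g., HOWL_NUM_EPOCHS=40 HOWL_LEARNING_RATE=0.01 python train.py.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" },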
{ "text": "Augmentation. For improved robustness and better model quality, we implement a set of popular augmentation routines: time stretching, time shifting, synthetic noise addition, recorded noise mixing, SpecAugment (without time warping; Park et al., 2019), and vocal tract length perturbation (Jaitly and Hinton, 2013) . These are readily extensible, so practitioners may easily add new augmentation modules.", "cite_spans": [ { "start": 289, "end": 314, "text": "(Jaitly and Hinton, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" },
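{ "text": "As a concrete illustration of two such routines, the following NumPy sketch implements time shifting and recorded noise mixing over raw waveforms; it is a minimal example of the general technique under names of our own choosing, not Howl's actual module interface.

import numpy as np

def time_shift(audio, max_shift):
    # Shift the waveform by up to max_shift samples in either
    # direction, zero-padding the vacated region.
    shift = np.random.randint(-max_shift, max_shift + 1)
    out = np.zeros_like(audio)
    if shift > 0:
        out[shift:] = audio[:-shift]
    elif shift < 0:
        out[:shift] = audio[-shift:]
    else:
        out = audio.copy()
    return out

def mix_noise(audio, noise, snr_db):
    # Mix a recorded noise clip into the waveform at a given
    # signal-to-noise ratio in decibels.
    noise = np.resize(noise, audio.shape)
    audio_power = np.mean(audio ** 2) + 1e-8
    noise_power = np.mean(noise ** 2) + 1e-8
    scale = np.sqrt(audio_power / (noise_power * 10 ** (snr_db / 10)))
    return audio + scale * noise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" },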
{ "text": "Training and evaluation. Howl provides several off-the-shelf neural models, as well as training and evaluation routines using PyTorch for computing the loss gradient and the task-specific metrics, such as the false alarm and false reject rates. These routines are also responsible for serializing the model and exporting it to our browser-side deployment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" }, { "text": "Pipeline. Given these components, our pipeline, visually presented in Figure 1 , is as follows: First, users produce a wake word detection dataset, either manually or from a data source like Common Voice or Google Speech Commands, setting the appropriate environment variables. This can be quickly accomplished using Common Voice, whose ample breadth and coverage of popular English words allow for a wide selection of custom wake words; for example, it has about a thousand occurrences of the word \"next.\" In addition to a positive subset containing the vocabulary and wake word, this dataset ideally contains a sizable negative set, which is necessary for more robust models and a more accurate evaluation of the false positive rate.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 78, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" }, { "text": "Next, users (optionally) select which augmentation modules to use, and they train a model with the provided hyperparameters on the selected dataset, which is first processed into log-Mel frames with zero mean and unit variance, as is standard. This training process should take less than a few hours on a GPU-capable device for most use cases, including ours. Finally, users may run the model in the included command line interface demo or deploy it to the browser using Honkling, our in-browser keyword spotting (KWS) system, if the model is supported (Lee et al., 2019) .", "cite_spans": [ { "start": 552, "end": 570, "text": "(Lee et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Components and Pipeline", "sec_num": "3.2" }, { "text": "For the data sources, Howl works out of the box with Mozilla Common Voice, a general speech corpus, and Google Speech Commands, a commands recognition dataset. Users can quickly extend Howl to accept other speech corpora such as LibriSpeech (Panayotov et al., 2015) or the Hey Snips dataset (Coucke et al., 2019) . Howl also accepts any folder that contains audio files and interprets them as recorded noise for data augmentation, which covers popular noise datasets such as MUSAN (Snyder et al., 2015) and Microsoft SNSD (Reddy et al., 2019) .", "cite_spans": [ { "start": 241, "end": 265, "text": "(Panayotov et al., 2015)", "ref_id": "BIBREF9" }, { "start": 291, "end": 312, "text": "(Coucke et al., 2019)", "ref_id": "BIBREF3" }, { "start": 123, "end": 144, "text": "(Snyder et al., 2015)", "ref_id": "BIBREF15" }, { "start": 149, "end": 184, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Models", "sec_num": "3.3" },
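{ "text": "To illustrate the (audio, transcription) abstraction that a custom data source must provide, here is a minimal, self-contained sketch that loads a hypothetical folder of WAV files described by a whitespace-separated transcript file; the folder layout and function name are assumptions for illustration, not part of Howl's API.

import os
import librosa

def load_pairs(folder, transcript_path, sample_rate=16000):
    # Hypothetical layout: each transcript line holds a filename
    # followed by its spoken text, e.g.: clip0001.wav hey firefox
    pairs = []
    with open(transcript_path) as f:
        for line in f:
            name, text = line.strip().split(maxsplit=1)
            audio, _ = librosa.load(os.path.join(folder, name), sr=sample_rate)
            pairs.append((audio, text))
    return pairs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Models", "sec_num": "3.3" },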
{ "text": "For modeling, Howl provides implementations of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for wake word detection. These models are from the existing literature, such as residual CNNs (Tang and Lin, 2018) , a modified listen-attend-spell (LAS) encoder (Chan et al., 2015; Park et al., 2019) , and MobileNetv2 (Sandler et al., 2018) . Most of the models are lightweight since the end application requires efficient inference, though some are parameter-heavy to establish a rough upper bound on attainable quality. Of particular focus is the lightweight res8 model (Tang and Lin, 2018) , which is directly exportable to Honkling, the in-browser KWS system. For this reason, we choose it in our deployment to Firefox Voice.", "cite_spans": [ { "start": 402, "end": 422, "text": "(Tang and Lin, 2018)", "ref_id": "BIBREF17" }, { "start": 470, "end": 489, "text": "(Chan et al., 2015;", "ref_id": "BIBREF2" }, { "start": 490, "end": 508, "text": "Park et al., 2019)", "ref_id": "BIBREF10" }, { "start": 527, "end": 549, "text": "(Sandler et al., 2018)", "ref_id": "BIBREF14" }, { "start": 799, "end": 819, "text": "(Tang and Lin, 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Models", "sec_num": "3.3" }, { "text": "To verify the correctness of our implementation, we first train and evaluate our models on the Google Speech Commands dataset, for which there exist many known results. Next, we curate a wake word detection dataset and report the resulting model quality. Training details are in the repository.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" }, { "text": "Commands recognition. Table 1 summarizes the metrics collected from Howl for the twelve-keyword recognition task from Speech Commands (v1), where we classify a one-second clip as one of \"yes,\" \"no,\" \"up,\" \"down,\" \"left,\" \"right,\" \"on,\" \"off,\" \"stop,\" \"go,\" unknown, or silence. We report average accuracy collected from fifty iterations. Figure 2 : Receiver operating characteristic (ROC) curves for the wake word. The threshold ranges from 0.05 to 1.00 with a marker for every increment of 0.05.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 1", "ref_id": null }, { "start": 392, "end": 400, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" }, { "text": "The results indicate that our implementations are competitive with the state of the art, with the res8 model achieving the highest accuracy of 97.8% on the test set, despite having fewer parameters. Our other implemented models, the LSTM, LAS encoder, and MobileNetv2, compare favorably. Wake word detection. For wake word detection, we target \"hey, Firefox\" for waking up Firefox Voice. From the single-word segment of MCV, we use 1,894 and 1,877 recordings of \"hey\" and \"Firefox,\" respectively; from the MCV general speech corpus, we select all 1,037 recordings containing \"hey,\" \"fire,\" or \"fox.\" We additionally collect 632 recordings of \"hey, Firefox\" from volunteers. For the negative set, we use about 10% of the entire MCV speech corpus. We choose the training, dev, and test splits to be 80%, 10%, and 10% of the resulting corpus, stratified by speaker IDs for the positive set. For robustness to noise, we use portions of MUSAN and SNSD as the noise dataset. We arrive at 31 hours of data for training and 3 hours each for dev and test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" }, { "text": "For the model, we select res8 (Tang and Lin, 2018) for its high quality on Speech Commands (see evaluation results above) and its easy adaptability to our browser deployment target. We follow the pipeline described in the previous section to train ten models with different seeds; we do not repeat the details here, and the hyperparameters can be found in the repository.", "cite_spans": [ { "start": 30, "end": 50, "text": "(Tang and Lin, 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" }, { "text": "In Figure 2 , we present the resulting receiver operating characteristic (ROC) curves generated from the averaged metrics. As we increase the threshold in increments of 0.05, we naturally observe lower false alarm rates at the expense of higher false reject rates. From the figure, we find that, at a threshold of 0.8, Howl achieves five false alarms per hour of speech with an acceptable 16% false reject rate. Our negative set contains diverse adversarial examples that misrepresent real-world usage, e.g., many utterances of \"Firefox,\" which are responsible for at least 90% of the false positives. Thus, combined with preliminary results from live testing the system ourselves, we comfortably choose the operating point at five false alarms per hour.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" }, { "text": "We finally note that the discrepancy between the dev and test curves is likely explained by differences in the data distribution, not hyperparameter fiddling, because there are only 76 and 54 clips in the positive dev and test sets, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" },
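{ "text": "To make the evaluation procedure concrete, the following sketch sweeps the detection threshold over held-out detector scores and reports the false reject rate against false alarms per hour, the trade-off plotted in Figure 2; the score arrays below are synthetic stand-ins, not our actual model outputs.

import numpy as np

# Synthetic stand-ins for detector scores on positive clips and on
# negative audio, plus the hours of negative speech evaluated.
rng = np.random.default_rng(0)
positive_scores = rng.beta(8, 2, size=200)
negative_scores = rng.beta(2, 8, size=5000)
negative_hours = 3.0

for threshold in np.arange(0.05, 1.0001, 0.05):
    false_reject_rate = np.mean(positive_scores < threshold)
    false_alarms_per_hour = np.sum(negative_scores >= threshold) / negative_hours
    print(f'threshold={threshold:.2f} frr={false_reject_rate:.2%} fa/h={false_alarms_per_hour:.1f}')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Results", "sec_num": "4" },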
{ "text": "To protect user security and privacy, wake word detection must be performed directly on the user's device. This setting introduces various technical challenges, as the available resources are often limited and may not be accessible. In the case of Firefox Voice, our target application, the platform is Firefox, where the major challenge is the limited support for machine learning frameworks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Browser Deployment", "sec_num": "5" }, { "text": "However, our previous work demonstrates the feasibility of in-browser wake word detection with Honkling (Lee et al., 2019) . Our application is written purely in JavaScript and supports different models using TensorFlow.js. Since our task is to provide an accurate wake word detection system for Firefox Voice, we rewrite the audio processing logic to match the new Python pipeline and optimize various preprocessing routines to substantially reduce the computational burden.", "cite_spans": [ { "start": 104, "end": 122, "text": "(Lee et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Browser Deployment", "sec_num": "5" }, { "text": "To measure the performance of our application, we refer to the built-in energy impact metric of Firefox, which reports the CPU consumption of each open tab. To establish a reference, playing a YouTube video reports an average energy impact of 10, while a static Google search reports 0.1. Our wake word detection model yields an energy impact of only 3, which efficiently enables hands-free interaction for initiating the speech recognition engine. Our wake word detection demo and browser-side integration details can be found at https://github.com/castorini/howl-deploy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Browser Deployment", "sec_num": "5" }, { "text": "This paper introduces Howl, the first in-browser wake word detection system which powers a widely deployed application, Firefox Voice. Leveraging a continuously growing speech dataset, Howl enables a community-based endeavour for building a privacy-respecting and non-eavesdropping wake word detection system. To expand the scope of Howl, our future work includes embedded systems as deployment targets, where the computational resources are even more constrained, with some systems lacking even modern memory managers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "https://github.com/MycroftAI/mycroft-precise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://colab.research.google.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council of Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural attention model for speech command recognition", "authors": [ { "first": "Douglas", "middle": [], "last": "Coimbra De Andrade", "suffix": "" }, { "first": "Sabato", "middle": [], "last": "Leo", "suffix": "" }, { "first": "Martin Loesener Da Silva", "middle": [], "last": "Viana", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Bernkopf", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.08929" ] }, "num": null, "urls": [], "raw_text": "Douglas Coimbra de Andrade, Sabato Leo, Martin Loesener Da Silva Viana, and Christoph Bernkopf. 2018. A neural attention model for speech command recognition.
arXiv:1808.08929.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Common Voice: A massively-multilingual speech corpus", "authors": [ { "first": "Rosana", "middle": [], "last": "Ardila", "suffix": "" }, { "first": "Megan", "middle": [], "last": "Branson", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Henretty", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kohler", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "Reuben", "middle": [], "last": "Morais", "suffix": "" }, { "first": "Lindsay", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" }, { "first": "Gregor", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.06670" ] }, "num": null, "urls": [], "raw_text": "Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2019. Common Voice: A massively-multilingual speech corpus. arXiv:1912.06670.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Listen, attend and spell", "authors": [ { "first": "William", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01211" ] }, "num": null, "urls": [], "raw_text": "William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv:1508.01211.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Efficient keyword spotting using dilated convolutions and gating", "authors": [ { "first": "Alice", "middle": [], "last": "Coucke", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Chlieh", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Gisselbrecht", "suffix": "" }, { "first": "David", "middle": [], "last": "Leroy", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Poumeyrol", "suffix": "" }, { "first": "Thibaut", "middle": [], "last": "Lavril", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice Coucke, Mohammed Chlieh, Thibault Gisselbrecht, David Leroy, Mathieu Poumeyrol, and Thibaut Lavril. 2019. Efficient keyword spotting using dilated convolutions and gating. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Vocal Tract Length Perturbation (VTLP) improves speech recognition", "authors": [ { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the ICML Workshop on Deep Learning for Audio, Speech and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navdeep Jaitly and Geoffrey E. Hinton. 2013. Vocal Tract Length Perturbation (VTLP) improves speech recognition.
In Proceedings of the ICML Workshop on Deep Learning for Audio, Speech and Language.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Honkling: In-browser personalization for ubiquitous keyword spotting", "authors": [ { "first": "Jaejun", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. Honkling: In-browser personalization for ubiquitous keyword spotting. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "EdgeSpeechNets: Highly efficient deep neural networks for speech recognition on the edge", "authors": [ { "first": "Zhong Qiu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Audrey", "middle": [ "G" ], "last": "Chung", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.08559" ] }, "num": null, "urls": [], "raw_text": "Zhong Qiu Lin, Audrey G. Chung, and Alexander Wong. 2018. EdgeSpeechNets: Highly efficient deep neural networks for speech recognition on the edge. arXiv:1810.08559.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Montreal Forced Aligner: Trainable text-speech alignment using Kaldi", "authors": [ { "first": "Michael", "middle": [], "last": "McAuliffe", "suffix": "" }, { "first": "Michaela", "middle": [], "last": "Socolof", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Mihuc", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Sonderegger", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighteenth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal Forced Aligner: Trainable text-speech alignment using Kaldi. In Proceedings of the Eighteenth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "librosa: Audio and music signal analysis in Python", "authors": [ { "first": "Brian", "middle": [], "last": "McFee", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Dawen", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Daniel", "middle": [ "P", "W" ], "last": "Ellis", "suffix": "" }, { "first": "Matt", "middle": [], "last": "McVicar", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Nieto", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 14th Python in Science Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian McFee, Colin Raffel, Dawen Liang, Daniel P. W. Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in Python.
In Proceedings of the 14th Python in Science Conference.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "LibriSpeech: An ASR corpus based on public domain audio books", "authors": [ { "first": "Vassil", "middle": [], "last": "Panayotov", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. LibriSpeech: An ASR corpus based on public domain audio books. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SpecAugment: A simple augmentation method for automatic speech recognition", "authors": [ { "first": "Daniel", "middle": [ "S" ], "last": "Park", "suffix": "" }, { "first": "William", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Ekin Dogus", "middle": [], "last": "Cubuk", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twentieth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin Dogus Cubuk, and Quoc V. Le. 2019. SpecAugment: A simple augmentation method for automatic speech recognition. In Proceedings of the Twentieth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "PyTorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library.
In Advances in Neural Information Processing Systems, pages 8024-8035.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A scalable noisy speech dataset and online subjective test framework", "authors": [ { "first": "Chandan", "middle": [ "K", "A" ], "last": "Reddy", "suffix": "" }, { "first": "Ebrahim", "middle": [], "last": "Beyrami", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Pool", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Cutler", "suffix": "" }, { "first": "Sriram", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Gehrke", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twentieth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandan K. A. Reddy, Ebrahim Beyrami, Jamie Pool, Ross Cutler, Sriram Srinivasan, and Johannes Gehrke. 2019. A scalable noisy speech dataset and online subjective test framework. In Proceedings of the Twentieth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Convolutional neural networks for small-footprint keyword spotting", "authors": [ { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Parada", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Sixteenth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tara N. Sainath and Carolina Parada. 2015. Convolutional neural networks for small-footprint keyword spotting. In Proceedings of the Sixteenth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "MobileNetv2: Inverted residuals and linear bottlenecks", "authors": [ { "first": "Mark", "middle": [], "last": "Sandler", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Menglong", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Andrey", "middle": [], "last": "Zhmoginov", "suffix": "" }, { "first": "Liang-Chieh", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. MobileNetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MUSAN: A music, speech, and noise corpus", "authors": [ { "first": "David", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.08484" ] }, "num": null, "urls": [], "raw_text": "David Snyder, Guoguo Chen, and Daniel Povey. 2015. MUSAN: A music, speech, and noise corpus.
arXiv:1510.08484.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Honk: A PyTorch reimplementation of convolutional neural networks for keyword spotting", "authors": [ { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.06554" ] }, "num": null, "urls": [], "raw_text": "Raphael Tang and Jimmy Lin. 2017. Honk: A PyTorch reimplementation of convolutional neural networks for keyword spotting. arXiv:1710.06554.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep residual learning for small-footprint keyword spotting", "authors": [ { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Tang and Jimmy Lin. 2018. Deep residual learning for small-footprint keyword spotting. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Speech commands: A dataset for limited-vocabulary speech recognition", "authors": [ { "first": "Pete", "middle": [], "last": "Warden", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.03209" ] }, "num": null, "urls": [], "raw_text": "Pete Warden. 2018. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv:1804.03209.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Effective combination of DenseNet and BiLSTM for keyword spotting", "authors": [ { "first": "Mengjun", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Nanfeng", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengjun Zeng and Nanfeng Xiao. 2019. Effective combination of DenseNet and BiLSTM for keyword spotting. IEEE Access.", "links": null } }, "ref_entries": {} } }