{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:13:43.452205Z" }, "title": "Semi-supervised Acoustic and Language Model Training for English-isiZulu Code-Switched Speech Recognition", "authors": [ { "first": "Astik", "middle": [], "last": "Biswas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stellenbosch University", "location": { "country": "South Africa" } }, "email": "abiswas@sun.ac.za" }, { "first": "Febe", "middle": [], "last": "De Wet", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stellenbosch University", "location": { "country": "South Africa" } }, "email": "" }, { "first": "Ewald", "middle": [], "last": "Van Der Westhuizen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stellenbosch University", "location": { "country": "South Africa" } }, "email": "ewaldvdw@sun.ac.za" }, { "first": "Thomas", "middle": [], "last": "Niesler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stellenbosch University", "location": { "country": "South Africa" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an analysis of semi-supervised acoustic and language model training for English-isiZulu code-switched ASR using soap opera speech. Approximately 11 hours of untranscribed multilingual speech was transcribed automatically using four bilingual code-switching transcription systems operating in English-isiZulu, English-isiXhosa, English-Setswana and English-Sesotho. These transcriptions were incorporated into the acoustic and language model training sets. Results showed that the TDNN-F acoustic models benefit from the additional semi-supervised data and that even better performance could be achieved by including additional CNN layers. Using these CNN-TDNN-F acoustic models, a first iteration of semi-supervised training achieved an absolute mixed-language WER reduction of 3.4%, and a further 2.2% after a second iteration. Although the languages in the untranscribed data were unknown, the best results were obtained when all automatically transcribed data was used for training and not just the utterances classified as English-isiZulu. Despite reducing perplexity, the semi-supervised language model was not able to improve the ASR performance.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present an analysis of semi-supervised acoustic and language model training for English-isiZulu code-switched ASR using soap opera speech. Approximately 11 hours of untranscribed multilingual speech was transcribed automatically using four bilingual code-switching transcription systems operating in English-isiZulu, English-isiXhosa, English-Setswana and English-Sesotho. These transcriptions were incorporated into the acoustic and language model training sets. Results showed that the TDNN-F acoustic models benefit from the additional semi-supervised data and that even better performance could be achieved by including additional CNN layers. Using these CNN-TDNN-F acoustic models, a first iteration of semi-supervised training achieved an absolute mixed-language WER reduction of 3.4%, and a further 2.2% after a second iteration. Although the languages in the untranscribed data were unknown, the best results were obtained when all automatically transcribed data was used for training and not just the utterances classified as English-isiZulu. 
Despite reducing perplexity, the semi-supervised language model was not able to improve the ASR performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "South Africa is a multilingual country with 11 official languages, including highly-resourced English which usually serves as a lingua-franca. The largely multilingual population commonly mix these geographically co-located languages in casual conversation. An ASR system deployed in this environment should therefore be able to process speech that includes two or more languages in one utterance. The study and development of code-switching speech recognition systems has recently attracted increased research attention (Li and Fung, 2013; Y\u0131lmaz et al., 2018b; Adel et al., 2015; Emond et al., 2018) . Language pairs that are of current research interest include English-Mandarin (Li and Fung, 2013; Vu et al., 2012; Zeng et al., 2018) , Frisian-Dutch (Y\u0131lmaz et al., 2018b; Y\u0131lmaz et al., 2018a) and Hindi-English (Pandey et al., 2018) . In South Africa, code-switching most often occurs between highly resourced English and one of the nine under-resourced, officially-recognised African languages. In previous work, we showed that multilingual acoustic model training is effective for English-isiZulu codeswitched ASR if additional training data from closely related languages is used (Biswas et al., 2018a) . However, the 12.2 hours of training data provided by combining all our code-switching data is still too little to develop robust ASR systems. A related study indicated that increasing the pool of in-domain training data using semi-supervised training achieved a significant improvement over the baseline acoustic model (Biswas et al., 2019) . These findings motivated us to further optimise semi-supervised acoustic and language modelling training. Specifically, the effect of multiple iterations of semi-supervised training along with the application of a confidence threshold to filter the semisupervised data was considered. We focus our investigation on one language pair, English-isiZulu, to allow for a detailed analysis of various aspects of the semi-supervised training despite the limited computational resources at our disposal.", "cite_spans": [ { "start": 521, "end": 540, "text": "(Li and Fung, 2013;", "ref_id": "BIBREF9" }, { "start": 541, "end": 562, "text": "Y\u0131lmaz et al., 2018b;", "ref_id": "BIBREF19" }, { "start": 563, "end": 581, "text": "Adel et al., 2015;", "ref_id": "BIBREF1" }, { "start": 582, "end": 601, "text": "Emond et al., 2018)", "ref_id": "BIBREF7" }, { "start": 682, "end": 701, "text": "(Li and Fung, 2013;", "ref_id": "BIBREF9" }, { "start": 702, "end": 718, "text": "Vu et al., 2012;", "ref_id": null }, { "start": 719, "end": 737, "text": "Zeng et al., 2018)", "ref_id": "BIBREF20" }, { "start": 754, "end": 776, "text": "(Y\u0131lmaz et al., 2018b;", "ref_id": "BIBREF19" }, { "start": 777, "end": 798, "text": "Y\u0131lmaz et al., 2018a)", "ref_id": "BIBREF18" }, { "start": 817, "end": 838, "text": "(Pandey et al., 2018)", "ref_id": "BIBREF11" }, { "start": 1189, "end": 1211, "text": "(Biswas et al., 2018a)", "ref_id": "BIBREF4" }, { "start": 1533, "end": 1554, "text": "(Biswas et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The multilingual speech corpus was compiled from 626 South African soap opera episodes. 
Speech from these soap operas is typically spontaneous and fast, rich in codeswitching and often expresses emotion, making it a challenging corpus for ASR development. The data contains examples of code-switching between South African English and four Bantu languages: isiZulu, isiXhosa, Setswana and Sesotho.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Soap Opera Corpus", "sec_num": "2." }, { "text": "Four language-balanced sets, transcribed by mother tongue speakers, were derived from the soap opera speech (van der Westhuizen and Niesler, 2018). In addition, a large but language-unbalanced (English dominated) dataset containing 21.1 hours of code-switched speech data was created (Biswas et al., 2019) . The composition of this larger but unbalanced corpus is summarised in ", "cite_spans": [ { "start": 284, "end": 305, "text": "(Biswas et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Manually Transcribed Data", "sec_num": "2.1." }, { "text": "In addition to the transcribed data introduced in the previous section, 23 290 segmented but untranscribed soap opera utterances were generated during the creation of the multilingual corpus. These utterances correspond to 11.1 hours of speech from 127 speakers (69 male; 57 female). The languages in the untranscribed utterances are not labelled. Several South African languages not among the five present in the transcribed data are known to occur in these segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manually Segmented Untranscribed Data", "sec_num": "2.2." }, { "text": "Semi-supervised techniques were used to transcribe the data introduced in Section 2.2. (Y\u0131lmaz et al., 2018b; Nallasamy et al., 2012; Thomas et al., 2013) , starting with our best existing code-switching speech recognition system. In this study the manually-segmented data was transcribed twice, as illustrated in Figure 1 . After each transcription pass, the acoustic models were retrained and recognition performance was evaluated in terms of WER. We distinguish between the acoustic models used to transcribe data (AutoT) and those that were used to evaluate WER (ASR) on the test set introduced in Table 2 .1.. These two models differ in the composition of their training sets. The acoustic models indicated by AutoT 1 in Figure 1 were trained on all the manually transcribed (ManT) data described in Section 2.1. as well as monolingual data from the NCHLT Speech Corpus (Barnard et al., 2014) . These were the best available models to start semi-supervised training. The ManT and NCHLT data were subsequently pooled with the transcriptions produced by the AutoT 1 models to train an updated set of acoustic models (AutoT 2 in Figure 1 ) which were used to obtain a new set of transcriptions of the untranscribed data for semi-supervised training. In contrast, the acoustic models ASR 1 and ASR 2 were trained by pooling only the ManT and AutoT soap opera data; no outof-domain NCHLT data was used. Separate AutoT and ASR acoustic models are maintained because we use only in-domain data for semi-supervised training. This is computationally much easier, since the out-of-domain NCHLT datasets are approximately five times larger than the in-domain sets. However, it was found that better performance can be achieved in the second pass of semi-supervised training if the acoustic models maintain a similar training set composition to that used in the first pass. 
Hence, AutoT 1 and AutoT 2 were purpose-built, intermediate systems used solely to generate semi-supervised data. Figure 1 also shows that each untranscribed utterance was decoded by four bilingual ASR systems. The highest confidence score was used to assign a language pair label to an utterance. In initial experiments, we added only EZ data identified in this way to the pool of multilingual training data. However, it was found that better performance could be achieved when all the AutoT data was added, and this was therefore done in the experiments reported here. Two ways of augmenting the acoustic model training set with automatically-transcribed data were considered. First, all automatic transcriptions were pooled with the manuallylabelled data. Second, utterances with a recognition confidence score below a threshold were excluded. The average confidence score across each language pair was used as a threshold. A larger variety of thresholds was not considered for computational reasons, but this remains part of ongoing work. Confidence thresholds were applied in three ways.", "cite_spans": [ { "start": 87, "end": 109, "text": "(Y\u0131lmaz et al., 2018b;", "ref_id": "BIBREF19" }, { "start": 110, "end": 133, "text": "Nallasamy et al., 2012;", "ref_id": "BIBREF10" }, { "start": 134, "end": 154, "text": "Thomas et al., 2013)", "ref_id": "BIBREF15" }, { "start": 875, "end": 897, "text": "(Barnard et al., 2014)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 314, "end": 322, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 602, "end": 609, "text": "Table 2", "ref_id": "TABREF0" }, { "start": 726, "end": 734, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1131, "end": 1139, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1981, "end": 1989, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Semi-Supervised Training", "sec_num": "3." }, { "text": "1. No threshold applied in either iteration 1 or 2 of semisupervised training. The ManT data (21.1 h) was pooled with the AutoT 1 data to train ASR 1 and with the AutoT 2 data to train ASR 2 . The duration of both AutoT 1 and AutoT 2 was 11.1 h.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Training", "sec_num": "3." }, { "text": "2. Threshold applied only in iteration 1. In this case only a subset of the AutoT 1 data (4.2 h) was pooled with the ManT data to train ASR 1 . All 11.1 h of AutoT 2 data was used to train ASR 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Training", "sec_num": "3." }, { "text": "3. Threshold applied in both iteration 1 and iteration 2. This resulted in a 4.2 h subset of AutoT 1 used to train ASR 1 and a 4.3 h subset of AutoT 2 used to train ASR 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Training", "sec_num": "3." }, { "text": "These three scenarios are indicated by N T , T P 1 and T P 1P 2 respectively in Table 3 ., which shows the number of utterances assigned to each language pair. The total number of utterances and corresponding duration of the data included in the training set is shown in the last column. ", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Semi-Supervised Training", "sec_num": "3." }, { "text": "The English-isiZulu vocabulary consisted of 11 292 unique word types and was closed with respect to the training, development and test sets. 
The SRILM toolkit (Stolcke, 2002) was used to train a bilingual trigram language model (LM) using the transcriptions described in Section 2.1. This LM was interpolated with two monolingual trigrams trained on 471 million English and 3.2 million isiZulu words of newspaper text, respectively. The interpolation weights were chosen to minimise the development set perplexity. The resulting language model was further interpolated with LMs derived from the transcriptions produced by the process illustrated in Figure 1 to obtain a semi-supervised LM.", "cite_spans": [ { "start": 159, "end": 174, "text": "(Stolcke, 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 649, "end": 657, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Language Modelling", "sec_num": "4.1." }, { "text": "All ASR experiments were performed using the Kaldi toolkit (Povey and others, 2011) and the data described in Section 2. The automatic transcription systems were implemented using factorized time-delay neural networks (TDNN-F) (Povey et al., 2018) . For multilingual training, the training sets of all four language pairs were combined. However, the acoustic models were language dependent and no phone merging across languages took place. A context-dependent GMM-HMM was trained to provide the alignments for neural network training. Three-fold data augmentation was applied prior to feature extraction (Ko et al., 2015) and the acoustic features comprised 40dimensional MFCCs (without derivatives), 3-dimensional pitch features and 100-dimensional i-vectors for speaker adaptation.", "cite_spans": [ { "start": 59, "end": 83, "text": "(Povey and others, 2011)", "ref_id": null }, { "start": 227, "end": 247, "text": "(Povey et al., 2018)", "ref_id": "BIBREF13" }, { "start": 604, "end": 621, "text": "(Ko et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Acoustic Modelling", "sec_num": "4.2." }, { "text": "We used two types of neural network-based acoustic model architectures: (1) TDNN-F with 10 time-delay layers followed by a rank reduction layer trained using the Kaldi Librispeech recipe (version 5.2.164) and (2) CNN-TDNN-F consisting of two CNN layers followed by the TDNN-F architecture. TDNN-F models have been shown to be effective in under-resourced scenarios (Povey et al., 2018) . The locality, weight sharing and pooling properties of the CNNs have been shown to benefit ASR (Abdel-Hamid et al., 2014). The default recipe parameters were used during neural network training. In a final training step the multilingual acoustic models were adapted with English-isiZulu code-switched speech.", "cite_spans": [ { "start": 365, "end": 385, "text": "(Povey et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Acoustic Modelling", "sec_num": "4.2." }, { "text": "Table 5.1. shows the test set perplexities (PP) for the LM configurations described in Section 4.1. The baseline language model, LM 0 , was trained on the English-isiZulu acoustic training data transcriptions as well as monolingual English and isiZulu text (Biswas et al., 2018b) . LM 0 was also interpolated with trigram LMs trained on the 1-best and 10-best outputs of AutoT 2 respectively. MPP indicates monolingual perplexity and is calculated over monolingual stretches of text only, omitting points at which the language alternates. CPP indicates code-switch perplexity and is calculated only over language switch points. 
Therefore CPP indicates the uncertainty of the first word following a language switch. The table shows that including the automatic transcriptions reduces the overall test set perplexity as well as the isiZulu monolingual perplexity, while English suffers a small deterioration. CPP is reduced when incorporating the 1-best automatic transcriptions but less so when incorporating the 10-best. This indicates that the code-switches present in the 1-best outputs are more representative of the unseen test set switches than those present in the 10-best output.", "cite_spans": [ { "start": 257, "end": 279, "text": "(Biswas et al., 2018b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modelling", "sec_num": "5.1." }, { "text": "ASR performance was evaluated on the English-isiZulu test set for various configurations of the ASR 1 and ASR 2 systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acoustic Modelling", "sec_num": "5.2." }, { "text": "5.2.1. ASR 1 Table 5.2.1. reports WER results for different configurations of ASR 1. Previously reported results using a balanced subset of the corpus described in Section 2.1. are reproduced in rows 1 and 2. Language-specific WERs are provided for the test set but not the development set. The results in row 4 of the table show that, when the TDNN-F network is preceded by two CNN layers, test set recognition performance improves by 1.9% absolute. Row 5, on the other hand, shows that the inclusion of the automatically transcribed English-isiZulu utterances reduces the test set WER of the TDNN-F models by 1.8% absolute. This improvement increases by an additional 0.8% absolute when including all the automatically transcribed data and not just the English-isiZulu utterances, as shown in row 6. Row 7 shows that the performance of the CNN-TDNN-F system is also enhanced by including the automatically transcribed data. In all the above cases, the WER improvements are seen not only overall but also in the English and isiZulu language-specific error rates. Finally, the results in row 8 illustrate the impact of applying a confidence threshold to decide which automatically transcribed utterances to include in the training set. The values in the table indicate that the mixed WER deteriorates marginally and that the English WER improves at the cost of a higher isiZulu WER. A comparison between row 1 in Table 5.2.2. and row 7 in Table 5.2.1. reveals that a second pass of retraining affords a further 1.5% absolute reduction in test set WER. This was found to be statistically significant at more than 95% confidence level using bootstrap interval estimation (Bisani and Ney, 2004). Retraining ASR 2 with a threshold applied only to the output of AutoT 1 results in a slightly higher WER on the test set (row 2). Applying thresholds in both passes (row 3) improved the English WER but resulted in a substantial deterioration in isiZulu WER. This result suggests that, for the threshold value used here, English benefits from the exclusion of low-confidence automatically transcribed data while isiZulu does not. Thus, further study on the optimum threshold configuration is required.", "cite_spans": [ { "start": 1673, "end": 1695, "text": "(Bisani and Ney, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1415, "end": 1422, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1442, "end": 1449, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Acoustic Modelling", "sec_num": "5.2." }, { "text": "The results in row 4 of Table 5.2.2.
show that a further 0.6% absolute WER reduction can be achieved for the test set by tuning the learning rate during adaptation. Rows 5 and 6 show that retraining the LM on text that includes automatic transcriptions hardly influences recognition performance. Thus, although semi-supervised training led to appreciable improvements in the acoustic models, the corresponding positive effects on the language model were marginal. A detailed analysis of different ASR outputs is shown in ", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Acoustic Modelling", "sec_num": "5.2." }, { "text": "We have applied semi-supervised training to improve ASR for under-resourced code-switched English-isiZulu speech. Four different automatic transcription systems were used in two phases to decode 11 hours of multilingual, manually segmented but untranscribed soap opera speech. We found that by including CNN layers, CNN-TDNN-F acoustic models outperformed TDNN-F models on the codeswitched speech. Furthermore, semi-supervised training provided a further absolute reduction of 5.5% in WER for the CNN-TDNN-F system. While the automatically transcribed English-isiZulu text data reduced language model perplexity, this improvement did not lead to a reduction in WER. By selective data inclusion using a confidence threshold, approximately 60% of the automatically transcribed data could be discarded at minimal loss in recognition performance. A more thorough investigation of this threshold remains part of ongoing work. We also aim to further extend the pool of training data by incorporating speaker and language diarisation systems to allow automatic segmentation of new audio.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "We would like to thank the Department of Arts & Culture (DAC) of the South African government for funding this research. We are grateful to e.tv and Yula Quinn at Rhythm City, as well as the SABC and Human Stark at Generations: The Legacy, for assistance with data compilation. We also gratefully acknowledge the support of the NVIDIA corporation for the donation of GPU equipment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Convolutional neural networks for speech recognition", "authors": [ { "first": "O", "middle": [], "last": "Abdel-Hamid", "suffix": "" }, { "first": "A.-R", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "H", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "G", "middle": [], "last": "Penn", "suffix": "" }, { "first": "Yu", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 2014, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "volume": "22", "issue": "10", "pages": "1533--1545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdel-Hamid, O., Mohamed, A.-R., Jiang, H., Deng, L., Penn, G., and Yu, D. (2014). Convolutional neu- ral networks for speech recognition. 
IEEE/ACM Trans- actions on Audio, Speech, and Language Processing, 22(10):1533-1545.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Syntactic and semantic features for codeswitching factored language models", "authors": [ { "first": "H", "middle": [], "last": "Adel", "suffix": "" }, { "first": "N", "middle": [ "T" ], "last": "Vu", "suffix": "" }, { "first": "K", "middle": [], "last": "Kirchhoff", "suffix": "" }, { "first": "D", "middle": [], "last": "Telaar", "suffix": "" }, { "first": "T", "middle": [], "last": "Schultz", "suffix": "" } ], "year": 2015, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "23", "issue": "3", "pages": "431--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adel, H., Vu, N. T., Kirchhoff, K., Telaar, D., and Schultz, T. (2015). Syntactic and semantic features for code- switching factored language models. IEEE Transactions on Audio, Speech, and Language Processing, 23(3):431- 440.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The NCHLT speech corpus of the South African languages", "authors": [ { "first": "E", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Davel", "suffix": "" }, { "first": "C", "middle": [ "V" ], "last": "Heerden", "suffix": "" }, { "first": "F", "middle": [], "last": "De Wet", "suffix": "" }, { "first": "J", "middle": [], "last": "Badenhorst", "suffix": "" } ], "year": 2014, "venue": "Proc. SLTU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barnard, E., Davel, M. H., Heerden, C. v., de Wet, F., and Badenhorst, J. (2014). The NCHLT speech corpus of the South African languages. In Proc. SLTU, St Petersburg, Russia.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bootstrap estimates for confidence intervals in ASR performance evaluation", "authors": [ { "first": "M", "middle": [], "last": "Bisani", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bisani, M. and Ney, H. (2004). Bootstrap estimates for confidence intervals in ASR performance evaluation. In Proc. ICASSP, Montreal, Canada.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multilingual neural network acoustic modelling for ASR of under-resourced English-isiZulu code-switched speech", "authors": [ { "first": "A", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "F", "middle": [], "last": "De Wet", "suffix": "" }, { "first": "E", "middle": [], "last": "Van Der Westhuizen", "suffix": "" }, { "first": "E", "middle": [], "last": "Y\u0131lmaz", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Niesler", "suffix": "" } ], "year": 2018, "venue": "Proc. Interspeech, Hyderabad", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biswas, A., de Wet, F., van der Westhuizen, E., Y\u0131lmaz, E., and Niesler, T. R. (2018a). Multilingual neural network acoustic modelling for ASR of under-resourced English- isiZulu code-switched speech. In Proc. 
Interspeech, Hy- derabad, India.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improving ASR for codeswitched speech in under-resourced languages using outof-domain data", "authors": [ { "first": "A", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "E", "middle": [], "last": "Van Der Westhuizen", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Niesler", "suffix": "" }, { "first": "F", "middle": [], "last": "De Wet", "suffix": "" } ], "year": 2018, "venue": "Proc. SLTU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biswas, A., van der Westhuizen, E., Niesler, T. R., and de Wet, F. (2018b). Improving ASR for code- switched speech in under-resourced languages using out- of-domain data. In Proc. SLTU, Gurugram, India.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semi-supervised acoustic model training for five-lingual code-switched ASR", "authors": [ { "first": "A", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "E", "middle": [], "last": "Y\u0131lmaz", "suffix": "" }, { "first": "F", "middle": [], "last": "De Wet", "suffix": "" }, { "first": "E", "middle": [], "last": "Van Der Westhuizen", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Niesler", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biswas, A., Y\u0131lmaz, E., de Wet, F., van der Westhuizen, E., and Niesler, T. R. (2019). Semi-supervised acoustic model training for five-lingual code-switched ASR. In Proc. Interspeech, Graz, Austria.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transliteration based approaches to improve code-switched speech recognition performance", "authors": [ { "first": "J", "middle": [], "last": "Emond", "suffix": "" }, { "first": "B", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "B", "middle": [], "last": "Roark", "suffix": "" }, { "first": "P", "middle": [], "last": "Moreno", "suffix": "" }, { "first": "M", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2018, "venue": "Proc. SLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emond, J., Ramabhadran, B., Roark, B., Moreno, P., and Ma, M. (2018). Transliteration based approaches to im- prove code-switched speech recognition performance. In Proc. SLT, Athens, Greece.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Audio augmentation for speech recognition", "authors": [ { "first": "T", "middle": [], "last": "Ko", "suffix": "" }, { "first": "V", "middle": [], "last": "Peddinti", "suffix": "" }, { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2015, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ko, T., Peddinti, V., Povey, D., and Khudanpur, S. (2015). Audio augmentation for speech recognition. In Proc. In- terspeech, Dresdan, Germany.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improved mixed language speech recognition using asymmetric acoustic model and language model with code-switch inversion constraints", "authors": [ { "first": "Y", "middle": [], "last": "Li", "suffix": "" }, { "first": "P", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2013, "venue": "Proc. 
ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Y. and Fung, P. (2013). Improved mixed language speech recognition using asymmetric acoustic model and language model with code-switch inversion constraints. In Proc. ICASSP, Vancouver, Canada.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semisupervised learning for speech recognition in the context of accent adaptation", "authors": [ { "first": "U", "middle": [], "last": "Nallasamy", "suffix": "" }, { "first": "F", "middle": [], "last": "Metze", "suffix": "" }, { "first": "T", "middle": [], "last": "Schultz", "suffix": "" } ], "year": 2012, "venue": "Symposium on Machine Learning in Speech and Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nallasamy, U., Metze, F., and Schultz, T. (2012). Semi- supervised learning for speech recognition in the context of accent adaptation. In Symposium on Machine Learn- ing in Speech and Language Processing, Portland, Ore- gon, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Phonetically balanced code-mixed speech corpus for Hindi-English automatic speech recognition", "authors": [ { "first": "A", "middle": [], "last": "Pandey", "suffix": "" }, { "first": "B", "middle": [ "M L" ], "last": "Srivastava", "suffix": "" }, { "first": "R", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "B", "middle": [ "T" ], "last": "Nellore", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Teja", "suffix": "" }, { "first": "S", "middle": [ "V" ], "last": "Gangashetty", "suffix": "" } ], "year": 2018, "venue": "Proc. LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pandey, A., Srivastava, B. M. L., Kumar, R., Nellore, B. T., Teja, K. S., and Gangashetty, S. V. (2018). Phonetically balanced code-mixed speech corpus for Hindi-English automatic speech recognition. In Proc. LREC, Miyazaki, Japan.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Kaldi speech recognition toolkit", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" } ], "year": 2011, "venue": "Proc. ASRU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D. et al. (2011). The Kaldi speech recognition toolkit. In Proc. ASRU, Hawaii, USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semi-orthogonal low-rank matrix factorization for deep neural networks", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "G", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "K", "middle": [], "last": "Li", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" }, { "first": "M", "middle": [], "last": "Yarmohammadi", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2018, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D., Cheng, G., Wang, Y., Li, K., Xu, H., Yarmoham- madi, M., and Khudanpur, S. (2018). Semi-orthogonal low-rank matrix factorization for deep neural networks. In Proc. 
Interspeech, Graz, Austria.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SRILM -An extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A. (2002). SRILM -An extensible language mod- eling toolkit. In Proc. ICSLP, Denver, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deep neural network features and semisupervised training for low resource speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "M", "middle": [ "L" ], "last": "Seltzer", "suffix": "" }, { "first": "K", "middle": [], "last": "Church", "suffix": "" }, { "first": "H", "middle": [ ";" ], "last": "Hermansky", "suffix": "" }, { "first": "Canada", "middle": [], "last": "Vancouver", "suffix": "" }, { "first": "E", "middle": [], "last": "Van Der Westhuizen", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Niesler", "suffix": "" } ], "year": 2013, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas, S., Seltzer, M. L., Church, K., and Hermansky, H. (2013). Deep neural network features and semi- supervised training for low resource speech recognition. In Proc. ICASSP, Vancouver, Canada. van der Westhuizen, E. and Niesler, T. R. (2018). A first South African corpus of multilingual code-switched soap opera speech. In Proc. LREC, Miyazaki, Japan.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A first speech recognition system for Mandarin-English code-switch conversational speech", "authors": [], "year": null, "venue": "Proc. ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A first speech recognition system for Mandarin-English code-switch conversational speech. In Proc. ICASSP, Kyoto, Japan.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Building a unified codeswitching ASR system for South African languages", "authors": [ { "first": "E", "middle": [], "last": "Y\u0131lmaz", "suffix": "" }, { "first": "A", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "E", "middle": [], "last": "Van Der Westhuizen", "suffix": "" }, { "first": "F", "middle": [], "last": "De Wet", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Niesler", "suffix": "" } ], "year": 2018, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y\u0131lmaz, E., Biswas, A., van der Westhuizen, E., de Wet, F., and Niesler, T. R. (2018a). Building a unified code- switching ASR system for South African languages. In Proc. Interspeech, Hyderabad, India.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semi-supervised acoustic model training for speech with code-switching", "authors": [ { "first": "E", "middle": [], "last": "Y\u0131lmaz", "suffix": "" }, { "first": "M", "middle": [], "last": "Mclaren", "suffix": "" }, { "first": "H", "middle": [], "last": "Van Den Heuvel", "suffix": "" }, { "first": "D", "middle": [ "A" ], "last": "Van Leeuwen", "suffix": "" } ], "year": 2018, "venue": "Speech Communication", "volume": "105", "issue": "", "pages": "12--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y\u0131lmaz, E., McLaren, M., van den Heuvel, H., and van Leeuwen, D. A. (2018b). 
Semi-supervised acoustic model training for speech with code-switching. Speech Communication, 105:12-22.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On the end-to-end solution to Mandarin-English code-switching speech recognition", "authors": [ { "first": "Z", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Y", "middle": [], "last": "Khassanov", "suffix": "" }, { "first": "V", "middle": [ "T" ], "last": "Pham", "suffix": "" }, { "first": "H", "middle": [], "last": "Xu", "suffix": "" }, { "first": "E", "middle": [ "S" ], "last": "Chng", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.00241" ] }, "num": null, "urls": [], "raw_text": "Zeng, Z., Khassanov, Y., Pham, V. T., Xu, H., Chng, E. S., and Li, H. (2018). On the end-to-end solution to Mandarin-English code-switching speech recognition. arXiv preprint arXiv:1811.00241.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Semi-supervised training framework for English-isiZulu code-switched (CS) ASR." }, "TABREF0": { "text": ".1.. Note that all utterances in the development and test sets contain code-switching and that the balanced data is a subset of the unbalanced data.", "num": null, "html": null, "type_str": "table", "content": "
      | Language | Mono (m) | CS (m) | Subtotal (m) | Word tokens | Word types
Train | English  |    755.0 |  121.8 |        876.6 |     194 426 |      7 908
      | isiZulu  |     92.8 |   57.4 |        150.0 |      24 412 |      6 789
      | isiXhosa |     65.1 |   23.8 |         88.8 |      13 825 |      5 630
      | Sesotho  |     44.7 |   34.0 |         78.6 |      22 226 |      2 321
      | Setswana |     36.9 |   34.5 |         71.4 |      21 409 |      1 525
Dev   | EZ       |        - |    8.0 |          8.0 |       1 572 |        858
Test  | EZ       |        - |   30.4 |         30.4 |       5 658 |      3 711
      | Total    |    994.4 |  271.5 |       1304.4 |     283 520 |     24 933
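For clarity, the last two columns count running words (tokens) and distinct words (types). A minimal Python sketch of this counting, over a hypothetical list of utterance transcription strings, is:

def corpus_stats(transcriptions):
    # transcriptions: hypothetical list of utterance strings, e.g. ["uyazi you know", ...]
    tokens = [word for utt in transcriptions for word in utt.split()]
    return len(tokens), len(set(tokens))  # (word tokens, word types)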
" }, "TABREF1": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Duration, in minutes (m), word type and word token counts for the unbalanced soap opera corpus. Both monolingual and code-switched (CS) durations are given.
" }, "TABREF3": { "text": "", "num": null, "html": null, "type_str": "table", "content": "" }, "TABREF4": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
LM             | PP (dev) |    PP | MPP_E | MPP_Z |   MPP |     CPP
LM0 (baseline) |    425.8 | 601.7 | 121.2 | 777.8 | 358.1 | 3 292.0
LM0 + 1-best   |    416.1 | 587.4 | 123.1 | 743.6 | 351.1 | 3 160.3
LM0 + 10-best  |    408.2 | 583.6 | 124.4 | 722.8 | 346.9 | 3 205.2
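Section 4.1. chooses the interpolation weights that minimise development set perplexity. As a minimal Python sketch of that selection for two component LMs, assuming hypothetical lists p_a and p_b of per-word probabilities that each component assigns to the development text (the actual experiments used the SRILM toolkit), a simple grid search suffices:

import math

def interp_perplexity(lam, p_a, p_b):
    # Perplexity of the mixture lam*P_a + (1-lam)*P_b over the dev-set words;
    # p_a and p_b hold per-word probabilities (assumed non-zero).
    log_prob = sum(math.log(lam * pa + (1.0 - lam) * pb) for pa, pb in zip(p_a, p_b))
    return math.exp(-log_prob / len(p_a))

def best_weight(p_a, p_b, steps=101):
    # Grid search over the interpolation weight lambda in [0, 1].
    grid = [i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda lam: interp_perplexity(lam, p_a, p_b))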
" }, "TABREF5": { "text": "Perplexity of bilingual English-isiZulu trigram LMs.", "num": null, "html": null, "type_str": "table", "content": "" }, "TABREF7": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
WER (%) on the English-isiZulu development (dev) and test sets for different configurations of ASR 1.

5.2.2. ASR 2
The results for the second iteration of semi-supervised training are reported in Table 5.2.2. In all cases the ManT data was pooled with all the AutoT data and not just the EZ subset, as was done in row 5 of Table 5.2.1. Only the results using the CNN-TDNN-F acoustic models are shown, since this architecture gave consistently superior performance in Table 5.2.1.

# | Training data              | LM            |  Dev | Test | WER_E | WER_Z
1 | ManT + AutoT2 (NT)         | LM0           | 38.6 | 42.5 |  36.2 |  47.6
2 | ManT + AutoT2 (TP 1)       | LM0           | 38.0 | 43.1 |  37.5 |  47.4
3 | ManT + AutoT2 (TP 1P 2)    | LM0           | 40.1 | 43.9 |  34.2 |  51.3
4 | ManT + AutoT2 (NT, tuned)  | LM0           | 36.7 | 42.0 |  34.0 |  48.8
5 | ManT + AutoT2 (NT, tuned)  | LM0 + 1-best  | 36.5 | 41.9 |  33.0 |  47.9
6 | ManT + AutoT2 (NT, tuned)  | LM0 + 10-best | 36.5 | 41.8 |  33.9 |  48.1
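The significance quoted in Section 5.2. was obtained with bootstrap interval estimation (Bisani and Ney, 2004). Below is a minimal percentile-bootstrap sketch in Python, assuming hypothetical per-utterance error and reference word counts taken from the scoring alignment; it is an illustration, not the authors' implementation.

import random

def bootstrap_wer_ci(errors, words, n_resamples=10000, alpha=0.05, seed=1):
    # errors[i] and words[i]: edit errors and reference word count of utterance i.
    random.seed(seed)
    n = len(errors)
    stats = []
    for _ in range(n_resamples):
        sample = [random.randrange(n) for _ in range(n)]  # resample utterances with replacement
        stats.append(100.0 * sum(errors[i] for i in sample) / sum(words[i] for i in sample))
    stats.sort()
    # two-sided (1 - alpha) percentile interval for the WER
    return stats[int(alpha / 2 * n_resamples)], stats[int((1 - alpha / 2) * n_resamples) - 1]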
" }, "TABREF8": { "text": "WER (%) on the English-isiZulu development (dev) and test sets for different configurations of ASR 2 .", "num": null, "html": null, "type_str": "table", "content": "" }, "TABREF9": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
The analysis confirms that semi-supervised training resulted in substantial improvements in the English and isiZulu word correct accuracy. The results also reveal a substantial improvement in bigram correct accuracy at the 1 464 code-switch points occurring in the test set, where bigram correct accuracy (%) is defined as the percentage of words correctly recognised immediately after code-switch points.
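As an illustration of this definition, the sketch below computes bigram correct accuracy from a reference word sequence with per-word language labels and a hypothesis assumed to be already aligned word-for-word with the reference (the alignment step, and any insertion or deletion handling, is omitted):

def bigram_correct_accuracy(ref_words, ref_langs, hyp_words):
    # Code-switch points: positions whose language label differs from the previous word's.
    switches = [i for i in range(1, len(ref_words)) if ref_langs[i] != ref_langs[i - 1]]
    if not switches:
        return 0.0
    correct = sum(hyp_words[i] == ref_words[i] for i in switches)
    return 100.0 * correct / len(switches)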
Accuracy (%) | Table 4 (Row 3)
" }, "TABREF10": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
(Row 4)
" }, "TABREF11": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
(Row 7)
" }, "TABREF12": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
(Row 8)
" }, "TABREF13": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
(Row 4)
" }, "TABREF14": { "text": "Detailed analysis of ASR accuracy for different acoustic models.", "num": null, "html": null, "type_str": "table", "content": "" } } } }