{ "paper_id": "U13-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:10:14.849420Z" }, "title": "Error Detection in Automatic Speech Recognition", "authors": [ { "first": "Farshid", "middle": [], "last": "Zavareh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "VICTORIA", "country": "Australia" } }, "email": "" }, { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "VICTORIA", "country": "Australia" } }, "email": "ingrid.zukerman@monash.edu" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "VICTORIA", "country": "Australia" } }, "email": "su.kim@monash.edu" }, { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "VICTORIA", "country": "Australia" } }, "email": "thomas.kleinbauer@monash.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We offer a supervised machine learning approach for recognizing erroneous words in the output of a speech recognizer. We have investigated several sets of features combined with two word configurations, and compared the performance of two classifiers: Decision Trees and Na\u00efve Bayes. Evaluation was performed on a corpus of 400 spoken referring expressions, with Decision Trees yielding a high recognition accuracy.", "pdf_parse": { "paper_id": "U13-1014", "_pdf_hash": "", "abstract": [ { "text": "We offer a supervised machine learning approach for recognizing erroneous words in the output of a speech recognizer. We have investigated several sets of features combined with two word configurations, and compared the performance of two classifiers: Decision Trees and Na\u00efve Bayes. Evaluation was performed on a corpus of 400 spoken referring expressions, with Decision Trees yielding a high recognition accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "One of the main stumbling blocks for spoken Natural Language Understanding (NLU) systems is the lack of reliability of Automatic Speech Recognizers (ASRs) (Pellegrini and Trancoso, 2010) . Recent research prototypes of ASRs yield Word Error Rates (WERs) between 15.6% (Pellegrini and Trancoso, 2010) and 18.7% (Sainath et al., 2011) for broadcast news. However, the WER of the ASR we employed (Microsoft Speech SDK 6.1) is 34% when trained on an open vocabulary plus a small language model for our corpus. 
This WER is consistent with that obtained in the 2010 Spoken Dialogue Challenge (Black et al., 2011).", "cite_spans": [ { "start": 155, "end": 186, "text": "(Pellegrini and Trancoso, 2010)", "ref_id": "BIBREF9" }, { "start": 268, "end": 299, "text": "(Pellegrini and Trancoso, 2010)", "ref_id": "BIBREF9" }, { "start": 310, "end": 332, "text": "(Sainath et al., 2011)", "ref_id": "BIBREF12" }, { "start": 586, "end": 606, "text": "(Black et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we offer a supervised machine learning approach to detect erroneous words in ASR output (this step will be followed by automatic error correction). Our approach was evaluated on a corpus of 400 spoken referring expressions, with the best-performing option yielding an average accuracy of 89% (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. In the next section, we discuss related work. In Section 3, we describe our corpus, and in Section 4, we describe our experimental design, focusing on the features considered for our machine-learning approach. In Section 5, we discuss our results, followed by concluding remarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Approaches for improving the performance of spoken NLU systems may be classified into prevention and recovery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Research", "sec_num": "2" }, { "text": "Prevention avoids errors by constraining the vocabulary (Gorniak and Roy, 2005; Sugiura et al., 2009) and grammatical constructs (Brooks and Breazeal, 2006) understood by an ASR. ASRs that employ this approach can process expected utterances efficiently, and work well in restricted domains. However, these ASRs have trouble processing unexpected utterances.", "cite_spans": [ { "start": 56, "end": 79, "text": "(Gorniak and Roy, 2005;", "ref_id": "BIBREF4" }, { "start": 80, "end": 101, "text": "Sugiura et al., 2009)", "ref_id": "BIBREF13" }, { "start": 129, "end": 156, "text": "(Brooks and Breazeal, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Research", "sec_num": "2" }, { "text": "Recovery involves error detection followed by correction. During detection, an NLU system posits that a word in an utterance was incorrectly recognized. Three approaches to error recovery are described in (L\u00f3pez-C\u00f3zar and Griol, 2010; Ringger and Allen, 1996; Zhou et al., 2006).", "cite_spans": [ { "start": 205, "end": 234, "text": "(L\u00f3pez-C\u00f3zar and Griol, 2010;", "ref_id": "BIBREF8" }, { "start": 235, "end": 259, "text": "Ringger and Allen, 1996;", "ref_id": "BIBREF11" }, { "start": 260, "end": 278, "text": "Zhou et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Research", "sec_num": "2" }, { "text": "L\u00f3pez-C\u00f3zar and Griol (2010) consider statistical, lexical, syntactic, semantic and dialogue-related information to correct ASR errors (i.e., replace, insert or delete words in a textual ASR output), as well as syntactic approaches that modify the tenses of verbs and grammatical number to better match grammatical expectations. 
Ringger and Allen (1996) use statistical information to construct a language model that quantifies the likelihood of word sequences, and a noisy channel model that predicts errors made by an ASR. They perform error detection and correction simultaneously on the basis of these models, which are trained using the words expected in the domain. Zhou et al. (2006) perform error detection and correction of utterances, words and characters in Mandarin. They experiment with the Generalized Word Posterior Probability (GWPP) of an utterance, computed from word hypotheses, utterance length, language model, and acoustic observations; and with features based on the N-best hypotheses, obtained from acoustic, language model and purity scores. When an erroneous word is detected, all the characters in it are deemed to be wrong. Correction is then performed using a list of candidate alternatives for each erroneous character to generate a list of word hypotheses, and a linguistic model based on mutual information and trigrams to select the best word hypothesis.", "cite_spans": [ { "start": 333, "end": 357, "text": "Ringger and Allen (1996)", "ref_id": "BIBREF11" }, { "start": 671, "end": 689, "text": "Zhou et al. (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Research", "sec_num": "2" }, { "text": "Like these researchers, we offer corpus-based techniques to detect ASR errors. However, we employ features of the ASR output, rather than actual words or expectations from the context. By doing this, we hope to avoid over-fitting to domain-specific words and expectations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Research", "sec_num": "2" }, { "text": "Error detection performance was evaluated using the corpus constructed by Kleinbauer et al. (2013). The corpus originally comprised 432 free-form descriptions spoken by 26 trial subjects to refer to 12 designated objects in four scenarios (three objects per scenario, where a scenario contains between 8 and 16 objects; two scenarios appear in Figure 1). Half of the participants were native English speakers, and half were non-native. All the speakers were proficient in English, but the non-native speakers had a foreign accent, and some had idiosyncratic turns of phrase.", "cite_spans": [], "ref_spans": [ { "start": 344, "end": 352, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Corpus", "sec_num": "3" }, { "text": "We manually filtered out 32 descriptions that were broken up by the ASR due to pauses made by the speakers, leaving 400 descriptions, which comprise 3,128 words in total, of which 118 are unique. The descriptions, which varied in length and complexity, had an average length of 10 words and a median length of 8 words, with the longest description containing 21 words. Sample descriptions are: \"the green plate next to the screwdriver at the top of the table\", \"the large pink ball in the middle of the room\", \"the plate in the corner of the table\", and \"the picture on the wall\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpus", "sec_num": "3" }, { "text": "The ASR produced up to 50 alternative textual interpretations for each spoken description, ranked in descending order of probability. In total, 4,249 texts, with 33,927 words (706 unique), were generated. It is worth noting that more alternatives, with a higher average WER for the top-ranked options, were generated for non-native speakers than for native speakers. 
We used the Levenshtein distance to align each alternative produced by the ASR with the reference (correct) description. The words in the alternative were then labeled as follows: Correct; Inserted (absent from the reference interpretation); Replaced (an incorrect word instead of the reference word); and Deleted (a placeholder for a reference word that is not in the alternative). The Inserted and Replaced words comprise the Wrong class (Deleted words cannot be modeled).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpus", "sec_num": "3" }, { "text": "In this section, we discuss the classifiers we considered, our feature sets, and our evaluation method. Classifiers. We investigated two classifiers to decide whether a word in a text produced by the ASR is correct: Decision Trees (DT) (Quinlan, 1993) and Na\u00efve Bayes classifiers (NB) (Domingos and Pazzani, 1997) (cs.waikato.ac.nz/ml/weka/). 1 For NB, we used equal-width binning to discretize continuous features (Catlett, 1991; Kerber, 1992). Features. The target classes are Correct and Wrong, and three types of features were computed for each word w in a text: word-based (five features), sentence-based (six features), and phoneme-based (two features). Word-based features. (1) Part of Speech (PoS), as determined by the Stanford PoS Tagger (nlp.stanford.edu/software/tagger.shtml); (2) Stop Word, as determined by the list in webconfs.com/stop-words.php; (3) Position of w in the text, defined as a nominal feature taking one of the values Beginning, Middle or End; (4) Time taken by the speaker to pronounce word w (in fractions of a second); and (5) Confidence Score given to word w by the ASR.", "cite_spans": [ { "start": 233, "end": 248, "text": "(Quinlan, 1993)", "ref_id": "BIBREF10" }, { "start": 413, "end": 428, "text": "(Catlett, 1991;", "ref_id": "BIBREF2" }, { "start": 429, "end": 442, "text": "Kerber, 1992)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "Sentence-based features. (6) Repetition Count - number of alternatives in which w is repeated; (7) Repetition Ratio (equivalent to purity score (Zhou et al., 2006)) - Repetition Count divided by the total number of alternatives; (8) Replacement Ratio - number of alternatives which, when aligned with the current alternative, label w with \"R\", divided by the total number of alternatives; (9) Insertion Ratio - number of alternatives which, when aligned with the current one, label w with \"I\", divided by the total number of alternatives; (10) Rank of the alternative containing w in the ASR output; and (11) Sentence Length - number of words in the current alternative.", "cite_spans": [ { "start": 140, "end": 159, "text": "(Zhou et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "Phoneme-based features (according to the CMU Pronunciation Dictionary, speech.cs.cmu.edu/cgi-bin/cmudict). (12) Broad Sound Groups (BSGs) - a vector of length 8 that represents the number of times each BSG occurs in word w, e.g., the word \"problem\" has 2 vowels, 2 stops, 2 liquids, and 1 nasal; and (13) Phonemes - a vector of length 39 that represents the number of times a phonetic symbol appears in w's phonetic transcription.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "We experimented with the following sets of features: (1) Word+Sentence features, (2) BSGs, and (3) Phonemes. These features were computed for the current word (C), which is being classified, and for the previous, current and next word (PCN). For example, the following vector is produced when all 58 features are used for the current word (the first and last word in an alternative have missing features for P and N respectively):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "f_1, ..., f_5 (Word), f_6, ..., f_11 (Sentence), f_12, ..., f_19 (BSGs), f_20, ..., f_58 (Phonemes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "Sets of features that included actual words produced accuracies of over 95%, but were unlikely to generalize. This was evident from inspecting the generated decision tree, which was shallow and wide. In fact, when w was used, most other features were ignored. Consequently, we decided not to include the actual words in our feature sets. Evaluation method. We employed 13-fold cross-validation to train and test on our corpus, where each fold comprises descriptions spoken by one native English speaker and one non-native speaker (Section 3). The per-speaker split ensures that sentences spoken by one trial subject do not appear in both training and test sets; and the native/non-native pairing balances the test sets, in the sense that they are of similar size, and ASR performance is similar for all sets (Section 3). Table 1 shows the results of our initial tests, which compare the performance of DT with that of NB in terms of micro- and macro-averaged accuracy (recall that the majority class of Correct words constitutes 66% of the data, Section 1). The odd-numbered rows contain the results for the three sets of features computed only for C, and the even-numbered rows contain the results for PCN. The statistically significant best result is boldfaced (statistical significance was calculated using the paired Student's t-test).", "cite_spans": [], "ref_spans": [ { "start": 815, "end": 822, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "As seen in Table 1, compared to C, PCN has a mixed effect on NB's performance, depending on the base features: PCN yields a statistically significant drop in accuracy for Word+Sentence (p-value=0.03), no statistically significant change for BSGs, and an improvement for Phonemes (p-value=0.015). The results are more consistent for DT: there is no significant difference in performance between C and PCN for Word+Sentence, but PCN yields statistically significant improvements for the other feature sets (p-value \u2264 0.05).", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "There were no statistically significant differences in accuracy between DT and NB for Word+Sentence with C and PCN. However, DT significantly outperformed NB in the remaining tests (p-values << 0.01). In addition, PCN yielded a better performance than C for DT. Hence, our next tests are carried out using DT with PCN only. Table 2 shows the results of combining Phonemes, which give the best accuracy (Table 1), with three feature sets: Word+Sentence, BSGs and PoS. The last two rows in Table 2 (boldfaced) show the feature sets that yield the highest (statistically equivalent) accuracies. These results, which were obtained with PoS, with and without BSGs, are significantly better than those achieved when Word+Sentence features or BSGs were used (p-value \u2264 0.05). 
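A note on the two averages reported in Tables 1-3: micro-averaging pools all classified words before computing accuracy, whereas macro-averaging weights both target classes equally. The small sketch below uses made-up counts and assumes the macro average is taken over the two classes; it is purely illustrative:

```python
# Hedged illustration of micro- vs macro-averaged accuracy for a binary
# Correct/Wrong classifier; the counts below are made up, not our results.
correct_class = {"right": 1900, "total": 2100}  # Correct words (majority class)
wrong_class   = {"right": 700,  "total": 1100}  # Wrong words

# Micro: pool all words, so the majority class dominates.
micro = (correct_class["right"] + wrong_class["right"]) / \
        (correct_class["total"] + wrong_class["total"])

# Macro: average the per-class accuracies, treating both classes equally.
macro = (correct_class["right"] / correct_class["total"] +
         wrong_class["right"] / wrong_class["total"]) / 2

print(f"micro={micro:.4f}, macro={macro:.4f}")  # micro=0.8125, macro=0.7706
```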
Also, combining Phonemes with Word+Sentence, with BSGs, or with both does not yield significant performance changes.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 402, "end": 411, "text": "(Table 1)", "ref_id": "TABREF1" }, { "start": 489, "end": 496, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The most significant features in the best-performing decision trees are (in descending order): presence of the phonemes TH and Z, number of occurrences of N (\u22641 versus >1), whether PoS=JJ (adjective), and whether the next word contains a stop BSG (at level 5 in the tree). This indicates that certain phonemes are prone to ASR misinterpretation - an insight that has significant implications for the next stage of the ASR process, which consists of proposing replacements for words that are classified as Wrong. For example, we could create a confusion matrix between error-prone phonemes produced by the ASR and likely replacement phonemes, and suggest replacement words that include these hypothesized phonemes (Thomas et al., 1997; Zhou et al., 2006). It is worth noting that the ASR's Confidence Score was not used in the best-performing DTs. In fact, we observed that this score is often inconsistent with the Correct/Wrong class of a word.", "cite_spans": [ { "start": 712, "end": 733, "text": "(Thomas et al., 1997;", "ref_id": "BIBREF14" }, { "start": 734, "end": 752, "text": "Zhou et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "As mentioned in Section 4, using the actual words as a classification feature yielded decision trees that over-fitted the data. Thus, it is possible that a similar effect takes place when Phonemes are used. Additional tests on different datasets should be conducted to rule out this possibility. Notice, however, that BSGs with PCN yield a creditable performance (third-last row in Table 1), which improves statistically significantly (p-value << 0.01) when BSGs are combined with PoS and Word+Sentence (Table 3). This is noteworthy because BSGs are abstractions of Phonemes, and hence are less likely than Phonemes to fit a small number of words. Further, a correction procedure similar to that suggested for Phonemes would be applicable for BSGs.", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 292, "text": "(Table 3)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We have proposed a supervised learning method to predict the correctness of words in an ASR output. Our best classifier yields 89% accuracy. However, these results were obtained on a relatively small corpus with a limited vocabulary (Section 3). Hence, further tests with larger, more diverse corpora are needed to verify our results. As mentioned in Section 3, we aligned the alternatives returned by the ASR with the reference text in order to label the words in each alternative. In addition, we aligned the alternatives with each other to compute multi-alternative features, such as Repetition Count and Replacement Ratio. In doing so, we implicitly assumed that there is a one-to-one mapping between the words in an alternative and those in the reference text, and also between the words in alternatives generated for the same spoken description. However, this assumption is not always valid: we have observed cases where one word has been split into two words by the ASR, or a few words have been merged into one. 
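To make this failure mode concrete, the following hedged sketch (all names are ours, not our actual implementation) re-creates the one-to-one Levenshtein labeling of Section 3 and shows how a merged word defeats it; it illustrates the problem rather than solving it:

```python
# Illustrative re-implementation (not our actual code) of the Section 3
# labeling: align reference and hypothesis words by Levenshtein distance,
# then label hypothesis words Correct ("C"), Replaced ("R") or Inserted ("I");
# Deleted reference words have no hypothesis word to label.
def align_and_label(reference: str, hypothesis: str):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),
                          d[i - 1][j] + 1, d[i][j - 1] + 1)
    labels, i, j = [], len(ref), len(hyp)
    while i > 0 or j > 0:  # backtrace to recover one-to-one word labels
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            labels.append((hyp[j - 1], "C" if ref[i - 1] == hyp[j - 1] else "R"))
            i, j = i - 1, j - 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            labels.append((hyp[j - 1], "I"))
            j -= 1
        else:
            i -= 1  # a Deleted reference word: nothing to label in hyp
    return labels[::-1]

# A merged word: "screw driver" heard as "screwdriver". The one-to-one
# alignment can only call "screwdriver" Replaced and drop one reference word.
print(align_and_label("the screw driver on the table",
                      "the screwdriver on the table"))
# [('the','C'), ('screwdriver','R'), ('on','C'), ('the','C'), ('table','C')]
```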
Ringger and Allen (1996) have proposed a statistical solution to this problem, but unfortunately their method relies heavily on the vocabulary on which the system was trained. This problem will be addressed in the future.", "cite_spans": [ { "start": 1019, "end": 1043, "text": "Ringger and Allen (1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "The methods offered in this paper do not distinguish between a Wrong word and Noise (sighs or hesitations that are often misheard by the ASR as \"and\", \"on\" or \"in\"). In the future, we propose to retrain our system to deal with three classes, viz. Correct, Wrong and Noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Initially, we also considered linear-chain Conditional Random Fields (CRFs) (Lafferty et al., 2001) (mallet.cs.umass.edu), but they exhibited inferior performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by grants DP110100500 and DP120100103 from the Australian Research Council. The authors thank Masud Moshtaghi for his help with statistical issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Spoken dialog challenge 2010: Comparison of live and control test results", "authors": [ { "first": "A", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Burger", "suffix": "" }, { "first": "A", "middle": [], "last": "Conkie", "suffix": "" }, { "first": "H", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "S", "middle": [], "last": "Keizer", "suffix": "" }, { "first": "O", "middle": [], "last": "Lemon", "suffix": "" }, { "first": "N", "middle": [], "last": "Merigaud", "suffix": "" }, { "first": "G", "middle": [], "last": "Parent", "suffix": "" }, { "first": "G", "middle": [], "last": "Schubiner", "suffix": "" }, { "first": "B", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "S", "middle": [], "last": "Young", "suffix": "" }, { "first": "M", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 11th SIGdial Conference on Discourse and Dialogue", "volume": "", "issue": "", "pages": "2--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Black, S. Burger, A. Conkie, H. Hastie, S. Keizer, O. Lemon, N. Merigaud, G. Parent, G. Schubiner, B. Thomson, J.D. Williams, K. Yu, S. Young, and M. Eskenazi. 2011. Spoken dialog challenge 2010: Comparison of live and control test results. In Proceedings of the 11th SIGdial Conference on Discourse and Dialogue, pages 2-7, Portland, Oregon.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Working with robots and objects: Revisiting deictic reference for achieving spatial common ground", "authors": [ { "first": "A", "middle": [ "G" ], "last": "Brooks", "suffix": "" }, { "first": "C", "middle": [], "last": "Breazeal", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction", "volume": "", "issue": "", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.G. Brooks and C. Breazeal. 2006. 
Working with robots and objects: Revisiting deictic reference for achieving spatial common ground. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction, pages 297-304, Salt Lake City, Utah.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On changing continuous attributes into ordered discrete attributes", "authors": [ { "first": "J", "middle": [], "last": "Catlett", "suffix": "" } ], "year": 1991, "venue": "EWSL-91 -Proceedings of the European Working Session on Learning", "volume": "", "issue": "", "pages": "164--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Catlett. 1991. On changing continuous attributes into ordered discrete attributes. In EWSL-91 -Proceedings of the European Working Session on Learning, pages 164-178, Porto, Portugal.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the optimality of the simple Bayesian classifier under zero-one loss", "authors": [ { "first": "P", "middle": [], "last": "Domingos", "suffix": "" }, { "first": "M", "middle": [], "last": "Pazzani", "suffix": "" } ], "year": 1997, "venue": "Machine Learning", "volume": "29", "issue": "", "pages": "103--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Domingos and M. Pazzani. 1997. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103-130.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Probabilistic grounding of situated speech using plan recognition and reference resolution", "authors": [ { "first": "P", "middle": [], "last": "Gorniak", "suffix": "" }, { "first": "D", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2005, "venue": "ICMI'05 -Proceedings of the 7th International Conference on Multimodal Interfaces", "volume": "", "issue": "", "pages": "138--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Gorniak and D. Roy. 2005. Probabilistic grounding of situated speech using plan recognition and reference resolution. In ICMI'05 -Proceedings of the 7th International Conference on Multimodal Interfaces, pages 138-143, Trento, Italy.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "ChiMerge: Discretization of numeric attributes", "authors": [ { "first": "R", "middle": [], "last": "Kerber", "suffix": "" } ], "year": 1992, "venue": "AAAI92 -Proceedings of the 10th National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kerber. 1992. ChiMerge: Discretization of numeric attributes. In AAAI92 -Proceedings of the 10th National Conference on Artificial Intelligence, pages 123-128, San Jose, California.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evaluation of the Scusi? spoken language interpretation system -A case study", "authors": [ { "first": "Th", "middle": [], "last": "Kleinbauer", "suffix": "" }, { "first": "I", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "S", "middle": [ "N" ], "last": "Kim", "suffix": "" } ], "year": 2013, "venue": "IJCNLP2013 -Proceedings of the 6th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "225--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th. Kleinbauer, I. Zukerman, and S.N. Kim. 2013. Evaluation of the Scusi? spoken language interpretation system -A case study. 
In IJCNLP2013 -Proceedings of the 6th International Joint Conference on Natural Language Processing, pages 225-233, Nagoya, Japan.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML'2001 -Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.D. Lafferty, A. McCallum, and F.C.N. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In ICML'2001 -Proceedings of the 18th International Conference on Machine Learning, pages 282-289, Williamstown, Massachusetts.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "New technique to enhance the performance of spoken dialogue systems based on dialogue states-dependent language models and grammatical rules", "authors": [ { "first": "R", "middle": [], "last": "L\u00f3pez-C\u00f3zar", "suffix": "" }, { "first": "D", "middle": [], "last": "Griol", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Interspeech 2010", "volume": "", "issue": "", "pages": "2998--3001", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L\u00f3pez-C\u00f3zar and D. Griol. 2010. New technique to enhance the performance of spoken dialogue systems based on dialogue states-dependent language models and grammatical rules. In Proceedings of Interspeech 2010, pages 2998-3001, Makuhari, Japan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving ASR error detection with non-decoder based features", "authors": [ { "first": "T", "middle": [], "last": "Pellegrini", "suffix": "" }, { "first": "I", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Interspeech 2010", "volume": "", "issue": "", "pages": "1950--1953", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Pellegrini and I. Trancoso. 2010. Improving ASR error detection with non-decoder based features. In Proceedings of Interspeech 2010, pages 1950-1953, Makuhari, Japan.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, California.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A fertility channel model for post-correction of continuous speech recognition", "authors": [ { "first": "E", "middle": [], "last": "Ringger", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1996, "venue": "ICSLP-96 -Proceedings of the 4th International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "897--900", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Ringger and J.F. Allen. 1996. A fertility channel model for post-correction of continuous speech recognition. 
In ICSLP-96 -Proceedings of the 4th International Conference on Spoken Language Processing, pages 897-900, Philadelphia, Pennsylvania.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exemplar-based sparse representation features: From TIMIT to LVCSR", "authors": [ { "first": "T", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "B", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "M", "middle": [], "last": "Picheny", "suffix": "" }, { "first": "D", "middle": [], "last": "Nahamoo", "suffix": "" }, { "first": "D", "middle": [], "last": "Kanevsky", "suffix": "" } ], "year": 2011, "venue": "IEEE Transactions on Audio, Speech and Language Processing", "volume": "19", "issue": "8", "pages": "2598--2613", "other_ids": {}, "num": null, "urls": [], "raw_text": "T.N. Sainath, B. Ramabhadran, M. Picheny, D. Nahamoo, and D. Kanevsky. 2011. Exemplar-based sparse representation features: From TIMIT to LVCSR. IEEE Transactions on Audio, Speech and Language Processing, 19(8):2598-2613.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bayesian learning of confidence measure function for generation of utterances and motions in object manipulation dialogue task", "authors": [ { "first": "K", "middle": [], "last": "Sugiura", "suffix": "" }, { "first": "N", "middle": [], "last": "Iwahashi", "suffix": "" }, { "first": "H", "middle": [], "last": "Kashioka", "suffix": "" }, { "first": "S", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "2483--2486", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura. 2009. Bayesian learning of confidence measure function for generation of utterances and motions in object manipulation dialogue task. In Proceedings of Interspeech 2009, pages 2483-2486, Brighton, United Kingdom.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lexical access for speech understanding using Minimum Message Length encoding", "authors": [ { "first": "I", "middle": [ "E" ], "last": "Thomas", "suffix": "" }, { "first": "I", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "I", "middle": [], "last": "Oliver", "suffix": "" }, { "first": "D", "middle": [], "last": "Albrecht", "suffix": "" }, { "first": "B", "middle": [], "last": "Raskutti", "suffix": "" } ], "year": 1997, "venue": "UAI'97 -Proceedings of the 13th Annual Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "464--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "I.E. Thomas, I. Zukerman, I. Oliver, D. Albrecht, and B. Raskutti. 1997. Lexical access for speech understanding using Minimum Message Length encoding. In UAI'97 -Proceedings of the 13th Annual Conference on Uncertainty in Artificial Intelligence, pages 464-471, Providence, Rhode Island.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A multipass error detection and correction framework for Mandarin LVCSR", "authors": [ { "first": "Z", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Meng", "suffix": "" }, { "first": "W", "middle": [ "K" ], "last": "Lo", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "17--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Zhou, H.M. Meng, and W.K. Lo. 2006. A multipass error detection and correction framework for Mandarin LVCSR. 
In Proceedings of Interspeech 2006, pages 17-21, Pittsburgh, Pennsylvania.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "(a) Projective relations and \"end, edge, corner\" and \"center\" of a table (b) Colour, size, positional relation and intervening object in a room", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Two of the scenarios used to construct our corpus.", "uris": null }, "TABREF1": { "html": null, "type_str": "table", "content": "
Classifier | Features | Micro-average | Macro-average
NB | Word+Sentence, C | 0.8156 | 0.8146
NB | Word+Sentence, PCN | 0.8060 | 0.8066
NB | BSGs, C | 0.6479 | 0.6446
NB | BSGs, PCN | 0.6476 | 0.6479
NB | Phonemes, C | 0.6610 | 0.6605
NB | Phonemes, PCN | 0.6722 | 0.6731
DT | Word+Sentence, C | 0.8110 | 0.8110
DT | Word+Sentence, PCN | 0.8082 | 0.8121
DT | BSGs, C | 0.7959 | 0.7974
DT | BSGs, PCN | 0.8308 | 0.8324
DT | Phonemes, C | 0.8614 | 0.8591
DT | Phonemes, PCN | 0.8771 | 0.8770
", "num": null, "text": "Accuracy of DT versus NB: Different feature combinations." }, "TABREF2": { "html": null, "type_str": "table", "content": "
Features (Phonemes, PCN +) | Micro-average | Macro-average
(none) | 0.8771 | 0.8770
Word+Sentence | 0.8775 | 0.8787
BSGs | 0.8776 | 0.8783
Word+Sentence and BSGs | 0.8741 | 0.8754
PoS | 0.8902 | 0.8906
PoS and BSGs | 0.8972 | 0.8971
", "num": null, "text": "Accuracy comparison for DT with Phonemes plus different feature combinations." }, "TABREF3": { "html": null, "type_str": "table", "content": "
Features (BSGs, PCN +) | Micro-average | Macro-average
(none) | 0.8308 | 0.8324
Word+Sentence | 0.8640 | 0.8626
PoS | 0.8639 | 0.8632
", "num": null, "text": "Accuracy comparison for DT with BSGs plus different feature combinations." } } } }