{
"paper_id": "H91-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:04.218669Z"
},
"title": "SPEECH RECOGNITION IN SRI'S RESOURCE MANAGEMENT AND ATIS SYSTEMS",
"authors": [
{
"first": "Hy",
"middle": [],
"last": "Murveit",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"postCode": "94025 OVERVIEW",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Butzberger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"postCode": "94025 OVERVIEW",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Weintraub",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"postCode": "94025 OVERVIEW",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes improvements to DECIPHER, the speech recognition component in SKI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (I) the addition of tied-mixture I-IMM modeling (2) rejection of outof-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",
"pdf_parse": {
"paper_id": "H91-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes improvements to DECIPHER, the speech recognition component in SKI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (I) the addition of tied-mixture I-IMM modeling (2) rejection of outof-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The DARPA ATIS Spoken Language System (SLS) task represents significant new challenges for speech and natural language technologies. For speech recognition, the SIS task is more difficult than our previous task, DARPA Resource Management, along several dimensions: it is recorded in a noisier environment, the vocabulary is not fixed, and, most important, it is spontaneous speech, which differs significantly from read speech. Spontaneous speech is a significant challenge to speech recognition, since it contains false starts, and non-words, and because it tends to be more casual than read speech. It is also a major challenge to natural language technologies because the structure of spontaneous language differs dramatically from the structure of written language, and almost all natural language research has been focused on written language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "SRI has developed a spoken language system (SLS) for DARPA's ATIS benchmark task [1] . This system can be broken up into two distinct components, the speech recognition and natural language components. DECIPHER, the speech recognition component, accepts the speech waveform as input and produces a word list. The word list is processed by the natural language (NL) component, which generates a data base query (or no response). This simple serial integration of speech and natural language processing works well because the speech recognition system uses a statistical language model to improve recognition performance, and because the natural language processing uses a template matching approach that makes it somewhat insensitive to recognition errors. SRI's SLS achieves relatively high performance because the SLS-level system integration acknowledges the imperfect performance of the speech and natural language technologies. Our natural language component is described in another paper in this volume [2] . This paper focuses on the speech recognition system and the evaluation of the speech recognition and overall ATIS SLS systems.",
"cite_spans": [
{
"start": 81,
"end": 84,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 1008,
"end": 1011,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Architecture",
"sec_num": null
},
{
"text": "SRI has also evaluated DECIPHER using DARPA's Resource Management task [3, 4] . The system architecture for this task is simply the speech recognition system with no NL postprocessing. There are two language models used in the evaluation: a perplexity 60 word-pair grammar, and a perplexity 1000 all-word grammar. The output is simply an attempted transcription of the input speech.",
"cite_spans": [
{
"start": 71,
"end": 74,
"text": "[3,",
"ref_id": "BIBREF2"
},
{
"start": 75,
"end": 77,
"text": "4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resource Management Architecture",
"sec_num": null
},
{
"text": "This section reviews the structure of the DECIPHER system [5] . The following sections describe changes to DECIPHER.",
"cite_spans": [
{
"start": 58,
"end": 61,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DECIPHER",
"sec_num": null
},
{
"text": "DECIPHER uses an FFT-based Mel-cepstra front end. Twentyfive FFT-Mel filters spanning 100 to 6400 Hz are used to derive 12 Mel-cepslxa coefficients every 10-ms frame. Four features are derived every frame from this cepstra sequence. They are We use 256-word speaker-independent codebooks to vectorquantize the Mel-cepstra and the Mel-cepstral differences. The resulting four-feature-per-frame vector is used as input to the DECIPHER HMM-based speech recognition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Front End Analysis",
"sec_num": null
},
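A minimal numpy sketch of this front end follows; the filter design, windowing, and codebook training are not specified here, and the function names and shapes are illustrative rather than DECIPHER's actual implementation:

```python
import numpy as np

def mel_cepstra(filterbank_energies, n_cep=12):
    """DCT of log Mel-filterbank energies -> 12 cepstral coefficients per frame."""
    log_e = np.log(np.maximum(filterbank_energies, 1e-10))   # (frames, 25)
    n_filt = log_e.shape[1]
    k = np.arange(n_filt) + 0.5
    basis = np.cos(np.outer(np.arange(1, n_cep + 1), k * np.pi / n_filt))
    return log_e @ basis.T                                   # (frames, 12)

def smoothed_delta(x, span=4):
    """Smoothed 40-ms (+/- 2 frame at 10 ms/frame) time differences."""
    pad = np.pad(x, ((span // 2, span // 2), (0, 0)), mode="edge")
    return (pad[span:] - pad[:-span]) / span                 # (frames, dims)

def vector_quantize(frames, codebook):
    """Index of the nearest codeword (e.g., 256-entry codebook) per frame."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)                              # (frames,)
```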
{
"text": "DECIPHER uses pronunciation models generated by applying a phonological rule set to word baseforms. The techniques used to generate the rules are described in [6] and [5] , These generate approximately 40 pronunciations per word as measured on the DARPA Resource Management vocabulary and 75 per word on the ATIS vocabulary. Speaker-independent pronunciation probabilities are then estimated using these bushy word networks and the forward-backward algorithm in DECIPHER. The networks are then pruned so that only the likely pronunciations remain--typically about 4 per word for the resource management task and 2.6 per word on the ATIS task. This modeling of pronunciation is one of the ways that DECIPHER is distinguished from other HMM-based systems. We have shown in [6] that this modeling reduces error rate.",
"cite_spans": [
{
"start": 159,
"end": 162,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 167,
"end": 170,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 771,
"end": 774,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Models",
"sec_num": null
},
{
"text": "Acoustic Modeling DECIPHER builds and trains word models by using contextdependent phone models arranged according to the pronunciation networks for the word being modeled. Models used inelode uniquephone-in-word, phone-in-word, triphone, biphone, and generalized biphones and Wiphones, as well as context-independent models. Similar contexts are automatically smoothed together, if they do not adequately model the training data, according to a deletedestimation interpolation algorithm similar to [7] . The acoustic models reflect both inter-word and across-word eoarticulatory effects. Training proceeds as follows:",
"cite_spans": [
{
"start": 499,
"end": 502,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Models",
"sec_num": null
},
{
"text": "\u2022 Initially, context-independent boot models are estimated from hand-labels in the TIMIT training database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Models",
"sec_num": null
},
{
"text": "\u2022 The boot models are used as input for a two-iteration context-independent model training run, where context-independent models are refined and pronunciation probabilities are calculated using the full word networks. These large networks are then pruned by eliminating low probability pronunciations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Models",
"sec_num": null
},
{
"text": "\u2022 Context-dependent models are then estimated from a seeond two-iteration forward-backward run, which uses the context-independent models and the pruned networks from the previous iterations as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Models",
"sec_num": null
},
{
"text": "We have implemented tied-mixture HMMs (TM-HMMs) in the DECIPHER system. Tied mixtures were first described by Huang [9] and more recently in by Bellegarda and Nahamoo [8] . TM-HMMs use Gaussian mixtures as HMM output probabilities. The mixture weights are unique to each phonetic model used, but the set of Gaussians is shared among the states. The tied Ganssians could be viewed as forming a Gaussian-based VQ codebook that is reestimated by the HMM forward -backward algorithm.",
"cite_spans": [
{
"start": 116,
"end": 119,
"text": "[9]",
"ref_id": "BIBREF9"
},
{
"start": 167,
"end": 170,
"text": "[8]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "Our implementation of TM-HMMs has the following characteristics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "\u2022 We used 12-dimensional diagonal-eovariance Gaussians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "The variances were estimated and then smoothed with grand variances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "\u2022 Computation can be significantly reduced in TM-HMMs by pruning either the mixture weights or the Gaussians themselves. We found that shortfall threshold Gaussian pruning---discarding all Gaussians whose probability density of input at a frame is less than a constant times the best probability density for that flame--works as well for us as standard top-N pruning (keeping the N best Gaussians) and requires less computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "\u2022 We use two separate sets of Gaussian mixtures for our TM-HMMs; one for Mel cepstra and one for Mel-cepstral derivatives. We retained our discrete distribution models for our energy features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
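As an illustration of the tied-mixture output probability and the shortfall pruning described in this list, here is a minimal numpy sketch; the Gaussian parameters, mixture weights, and threshold are placeholders, not DECIPHER's:

```python
import numpy as np

def tied_mixture_log_prob(x, means, variances, mix_weights, shortfall=1e-3):
    """Log output probability of one HMM state under a tied Gaussian mixture.

    x: (d,) feature vector; means/variances: (K, d) diagonal-covariance
    Gaussians shared by all states; mix_weights: (K,) state-specific weights.
    Gaussians whose density falls below `shortfall` times the best density
    are discarded (shortfall thresholding), as an alternative to keeping a
    fixed top-N.
    """
    # Diagonal-covariance Gaussian log densities for all K tied Gaussians.
    log_dens = -0.5 * (((x - means) ** 2 / variances).sum(-1)
                       + np.log(2 * np.pi * variances).sum(-1))
    keep = log_dens >= log_dens.max() + np.log(shortfall)   # prune the rest
    dens = np.exp(log_dens[keep] - log_dens.max())          # stable exp
    return log_dens.max() + np.log((mix_weights[keep] * dens).sum())
```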
{
"text": "Corrective training [5, 10, 11] was used to update the mixture weights for the TM-HMMs. The algorithm is identical to that used for discrete HMMs. That is, the mixture weights are updated as ff they were discrete output probabilities. No mixture means or variances were corrected.",
"cite_spans": [
{
"start": 20,
"end": 23,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 24,
"end": 27,
"text": "10,",
"ref_id": "BIBREF10"
},
{
"start": 28,
"end": 31,
"text": "11]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "We evaluated TM-HMMs on the RM task using the perplexity 60 word-pair grammar. Our training corpus was the standard 3990 sentence training set. We used the combined DARPA 1988, February 1989, and October 1989 test sets for our development set. This contains 900 sentences from 32 speakers. We achieved a 6.8 percent word error rate using our discrete HMM system on this test set. The TM-HMM approach achieved an error rate of 5.5 percent. Thus, the TM-HMMs improved word recognition error rate by 20 percent compared to discrete HMMs. In the June 1990 DARPA Speech and Natural Language meeting [5] , we reported a 20 percent reduction in RM word-error rate by training separate male and female recognizers, decoding using recognizers from both sexes, and then choosing the sex according to the recognizer with the highest probability hypothesis. This improvement was achieved using a recognizer trained on 11,190 sentences. We did not achieve a significant improvement using malefemale separation on the smaller 3990 sentence training set. We set out to see, as has been claimed in [8] , whether TM-HMMs can take advantage of male-female separation with smaller (3990 sentence) training sets. Our results were mixed. Although performance did improve from 5.5 percent word error with combined models, to 4.9 percent word error with separate male-female models (a 10 percent improvement) we note that 2/3 of the overall improvement was due to the dramatic improvement for speaker HXS. Aside from this one speaker, the performance gain was not significant. Based on our last study, however, we are confident that male-female separation does improve performance with sufficient training data. The table below shows performance for tied-mixture HMMs using combined and sexseparated models. There was no significant additional gain from using corrective training in addition to male-female separation. Performance improved from 4.9 percent error (male-female only) or 4.7 percent error (corrective training only) to 4.5 percent error (both methods). This lack of further improvement is due to the reduction in training data.",
"cite_spans": [
{
"start": 594,
"end": 597,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 1082,
"end": 1085,
"text": "[8]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Mixtures",
"sec_num": null
},
{
"text": "We have begun experiments into speaker-adaptation, converting speaker-independent models into speaker-dependent ones. Our experiment involved using VQ codebook adaptation via tied-mixture HMMs as proposed by Rtischev [13] . That is, we adjusted VQ codeword locations based on forward-backward alignments of adaptation sentences. However, since we are using a tied-mixture recognition system, we adapted the Gaussian means instead of the codebook.",
"cite_spans": [
{
"start": 217,
"end": 221,
"text": "[13]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
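A sketch of the kind of Gaussian-mean update this describes, assuming per-frame occupancy counts from the forward-backward algorithm are available; the smoothing weight tau is an illustrative addition, not a parameter reported here:

```python
import numpy as np

def adapt_means(means, frames, gamma, tau=10.0):
    """Shift each tied Gaussian mean toward the adaptation data it aligned to.

    means: (K, d) speaker-independent Gaussian means; frames: (T, d)
    adaptation speech; gamma: (T, K) per-frame Gaussian occupancies from
    forward-backward. tau (hypothetical) smooths the update back toward
    the speaker-independent mean when a Gaussian has little adaptation data.
    """
    counts = gamma.sum(axis=0)                    # (K,) occupancy per Gaussian
    sums = gamma.T @ frames                       # (K, d) weighted data sums
    return (sums + tau * means) / (counts + tau)[:, None]
```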
{
"text": "We selected 21 of the speakers in our development test set for use in an adaptation experiment. We had either 25 or 30 Resource Management sentences recorded for each of these speakers. We chose to use their first 20 sentences for adaptation, and the other 5 or 10 sentences for adaptation testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "Using our original TM-HMM models, we achieved an error rate of 7.4 percent (114 errors in 1541 reference words) on this adaptation test set. After adjusting means for each speaker using the 20 adaptation sentences, we achieved an error rate of 6.1 percent (94 errors in 1541 reference words) on the adaptation test sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "This improvement with adaptation leads to performance that is still quite short of speaker-dependent accuracy (the ultimate goal of adaptation). Thus, it does not seem worth the added inconvenience of obtaining 20 known sentences from a potential system user, though it is promising for on-line adaptation. We plan to look into several areas for further improvement. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "1. Rtischev et al. [14] have shown that adapting mixture weights is at least as important as adapting means.",
"cite_spans": [
{
"start": 19,
"end": 23,
"text": "[14]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "2. Kubala [15] et al. have shown that adapting speaker-dependent models can be superior to adapting from speaker-independent models.",
"cite_spans": [
{
"start": 10,
"end": 14,
"text": "[15]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "3. It is possible that the adaptation sentences need not be supervised given the relatively good (7.4 percent error) initial performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Adaptation",
"sec_num": null
},
{
"text": "We implemented a version of DECIPHER that rejects false input as well as recognizing legal input (our standard recognizer attempts to classify all the inpu0. In addition to standard word models, it uses an out-of-vocabulary word model to recognize the extraneous input. The word model has the following pronunciation network similar to [17] .",
"cite_spans": [
{
"start": 336,
"end": 340,
"text": "[17]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rejection of Out-of-Vocabulary Input",
"sec_num": null
},
{
"text": "FIGURE 1. Out-of-vocabulary word model: a network in which every arc allows all context-independent phones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rejection of Out-of-Vocabulary Input",
"sec_num": null
},
{
"text": "There are 67 phonetic models on each of the arcs in the above word network. All phonetic transition probabilities in this word network are equal, and are scaled by a parameter that adjusts the amount of false rejection vs. false acceptance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q",
"sec_num": null
},
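A sketch of how such a rejection word model might be represented, assuming a simple arcs-to-phone-distributions encoding; the state names and penalty value are illustrative, not from the paper:

```python
# Hypothetical construction of the rejection ("filler") word model: each arc
# carries all 67 context-independent phone models with equal transition
# probability, scaled by a penalty that trades false rejection against
# false acceptance.

def oov_word_model(ci_phones, penalty=0.1):
    per_arc = {ph: penalty / len(ci_phones) for ph in ci_phones}
    return {
        ("start", "body"): dict(per_arc),   # enter the filler word
        ("body", "body"): dict(per_arc),    # loop to absorb more speech
        ("body", "end"): dict(per_arc),     # exit the filler word
    }

# e.g. oov_word_model(["aa", "ae", "ah", ...])  # all 67 CI phone models
```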
{
"text": "Thus far, we have performed a pilot study that shows this method to be promising. We gathered a database of 58 sentences total from six people. About half of the sentences are digit strings and the other half are digits mixed with other things. There are a total of 426 digits in the database, and 176 additional non-digit words. Example sentences are outlined in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Q",
"sec_num": null
},
{
"text": "We considered correct recognition for these sentences to be the digits in the string without the rest of the words (i.e. 2138767287, 3876541104, 33589170429 are the correct answers for the top three sentences in Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 219,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Q",
"sec_num": null
},
{
"text": "We trained a digit recognizer with rejection from the Resource Management training set and achieved a word error rate of 5.3 percent for the 27 sentences that contained only digits (13 errors: 1 insertion, 3 deletions, and 9 substitutions in 243 reference words), which is within one error of the system without rejection. Thus, in this pilot study, using rejection did not hurt performance for \"clean\" input. The overall error rate was 11.7 percent (26 insertions, 15 deletions, and 9 substitutions in 426 reference words). That is, 402 of 426 digits were detected, and at least 141 of the 176 extraneous words were rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rejection of Out-of-Vocabulary Input",
"sec_num": null
},
{
"text": "We used a bigram language model to constrain the speech recognition system for the ATIS evaluation. A back-off estimation algorithm [16] was used for estimation of the bigram parameters. The training data for the grammar consisted of 5,050 sentences of spontaneous speech from various sites: 1,606 from MIT's ATIS data collection project, 774 from NIST CD-ROM releases, 538 from SRI's ATIS data collection project, and 2,132 from various other sites.",
"cite_spans": [
{
"start": 132,
"end": 136,
"text": "[16]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Modeling",
"sec_num": null
},
{
"text": "Robust estimates for many of the bigram probabilities cannot be achieved since the vast majority of them are seen very infrequently (because of the lack of sufficient training data). Furthermore, frequencies of words such as months and cities were biased by the data collection scenarios and the time of year the data was collected. To reduce these effects, words with effectively similar usage were assigned to groups, and instead of collecting counts for the individual words, counts were collected for the groups. After estimation of the bigram probabilities, the probabilities of transitioning to individual words were assigned the group probability divided by the number of words in the group. This scheme not only reduced some of the problems due to the sparse training data, but also allowed some unseen words (other city names, restriction codes, etc.) to be easily added to the grammar. We also explored various class-grammar implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "my parents number is 2 1 3 urn 8 7 6 ok 7 2 8 7 if you have questions please dial extension 3 8 7 6 at5 4 1 1 oh 4 please call3 3 5 89 1 urn 7oh4 2 9 hmm let's see what's this 1 2 3 4 5 uh that's not right 2 3 4 5 1 2 3 oh no that's wrong 2 4 5 8 9 yeah i think that's it this is a test I 2 3 4 5 8 7 this was only a test <grunt> 1 2 <cough> 3 4 5 <sneeze> 8 7 <mic-noise> 4 1 dollars and 3 1 8 cents what's this oh 4 1 0 8 well let's see 3 1 4 7 8 ok",
"sec_num": null
},
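A sketch of this group-tying scheme in Python, with hypothetical data structures; Katz discounting and back-off [16], which the system also used, are omitted for brevity:

```python
from collections import Counter

def group_bigram(sentences, group_of, group_members):
    """Estimate P(w | v) by counting over word groups, then splitting the
    group probability uniformly among the group's members.

    group_of: word -> group id (a word may be its own singleton group);
    group_members: group id -> list of words in the group.
    """
    bigrams, unigrams = Counter(), Counter()
    for words in sentences:
        gs = [group_of[w] for w in words]
        unigrams.update(gs)
        bigrams.update(zip(gs, gs[1:]))

    def prob(w, prev):
        # No discounting or back-off shown; unseen pairs get probability 0.
        g_prev, g = group_of[prev], group_of[w]
        p_group = bigrams[(g_prev, g)] / unigrams[g_prev]
        return p_group / len(group_members[g])  # share equally within group

    return prob
```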
{
"text": "These grammars were generated by interpolating word-based bigrams with class-based bigrams. We were able to vary the grammars and their perplexities by varying the interpolation coefficients. However, recognition performance never improved over that for the back-off bigram. In fact, accuracy remained relatively constant throughout a large range of perplexities. These tables also illustrate that recognition performance did not depend strongly on the test-set perplexity. Clearly, other factors are dominating performance. We believe that one of our most pressing needs in this research is to understand what this bottleneck is, and to develop ways that express it better than perplexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "my parents number is 2 1 3 urn 8 7 6 ok 7 2 8 7 if you have questions please dial extension 3 8 7 6 at5 4 1 1 oh 4 please call3 3 5 89 1 urn 7oh4 2 9 hmm let's see what's this 1 2 3 4 5 uh that's not right 2 3 4 5 1 2 3 oh no that's wrong 2 4 5 8 9 yeah i think that's it this is a test I 2 3 4 5 8 7 this was only a test <grunt> 1 2 <cough> 3 4 5 <sneeze> 8 7 <mic-noise> 4 1 dollars and 3 1 8 cents what's this oh 4 1 0 8 well let's see 3 1 4 7 8 ok",
"sec_num": null
},
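A sketch of the interpolation and of the test-set perplexity measure referred to above; the mixing weight lam and the component models are placeholders:

```python
import math

def interpolated_prob(w, prev, p_word, p_class, lam=0.5):
    """Mix a word-based bigram with a class-based bigram."""
    return lam * p_word(w, prev) + (1 - lam) * p_class(w, prev)

def perplexity(sentences, prob):
    """2 ** (average negative log2 probability per word transition)."""
    log_sum, n = 0.0, 0
    for words in sentences:
        for prev, w in zip(words, words[1:]):
            log_sum += math.log2(prob(w, prev))
            n += 1
    return 2 ** (-log_sum / n)
```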
{
"text": "Many words occur with sufficient frequency and with significant cross-word coarticulation that a better acoustic model might be made by training these word combinations as a single word model. These words include \"what-are-the,\" \"give-me,\" etc., which can have a variety of pronunciations best modeled with a network of phones representing the phonetic and phonological variation of the whole sequence (\"what're-the,\" \"gimme,\" etc.) instead of each word separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Word Lexieal Units",
"sec_num": null
},
{
"text": "Also, when considering class grammars, multiple word sequences allow classes which could not be constructed by considering every word separately. For instance, having distinct models of all the restriction codes (e.g. \"v-u-slash-one\") might be more appropriate than modeling alpha->alpha->slash->number in the bigram. The latter form would allow all the alphabet letters to transition to all the alphabet letters, with probabilities as prescribed by the bigram, and would incorrectly increase the probability for invalid restriction codes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Word Lexieal Units",
"sec_num": null
},
{
"text": "This multi-word technique allows all the probabilities of all the restriction codes to be tied together, so that all are equally covered at the appropriate place in the grammar, instead of depending completely on the individual words' statistics estimated from sparse training data. The multi-word approach resulted in only a slight performance improvement compared to a system where non-coarticulatory multiwords were left separated. That is, for the \"separate words\" system, words like \"a p slash eighty\" were separate words, but coarticulatory word models like \"what-are-the\" and \"list-the\" were retained. On a ll9-sentence subset of the June 90 evaluation set, the results were as shown in Table 6 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 694,
"end": 701,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Word Lexieal Units",
"sec_num": null
},
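A sketch of the kind of preprocessing such multi-word units imply: greedily merging known word sequences into single lexical tokens before grammar estimation. The unit list here is abbreviated and hypothetical; Tables 7 and 8 list the actual units:

```python
MULTI_WORDS = {("what", "are", "the"): "what-are-the",
               ("give", "me"): "give-me",
               ("list", "the"): "list-the"}  # abbreviated; see Tables 7 and 8

def merge_multi_words(words, units=MULTI_WORDS, max_len=3):
    """Greedy left-to-right longest-match merge of multi-word units."""
    out, i = [], 0
    while i < len(words):
        for n in range(max_len, 1, -1):        # try longest sequences first
            key = tuple(words[i:i + n])
            if key in units:
                out.append(units[key])
                i += n
                break
        else:
            out.append(words[i])               # no unit matched here
            i += 1
    return out

# merge_multi_words("what are the fares".split()) -> ["what-are-the", "fares"]
```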
{
"text": "Note that the higher perplexity of the multi-word system is deceiving since high probability grammar transitions are now hidden within the multi-word models, and are not seen by the grammar. Tables 7 and 8 ",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 205,
"text": "Tables 7 and 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "TABLE 6. Effectiveness of multi-word modeling",
"sec_num": null
},
{
"text": "Our performance is severely limited by training data[S], and many further improvements for the RM task may only be ways to work around RM's artificial limit on training data. Thus, we expect to develop and evaluate our system in the future with the ATIS task which both has more training data available and uses more realistic (spontaneous) speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DARPA-RM February 1991 speaker-independent evaluation",
"sec_num": null
},
{
"text": "We evaluated on DARPA's February 1991 ATIS test set using a system similar to the one described above except:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "\u2022 The system was trained on 17,042 sentences (3990 RM-SI, 4200 TIM1T, 7932 read ATIS, 920 spontaneous ATIS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "\u2022 1,139 word vocabulary (the test set vocabulary was not revealed in advance) using multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "\u2022 Discrete distribution HMM modeling was used for all features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "\u2022 A back-off bigram language model [16] with tied wordgroups was used, with a test set perplexity of 43 (not counting 26 words out of vocabulary).",
"cite_spans": [
{
"start": 35,
"end": 39,
"text": "[16]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "\u2022 A template-matcher natural language component [2] was used to generate ATIS database queries based on the speech recognition output.",
"cite_spans": [
{
"start": 48,
"end": 51,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "We achieved the performance shown in Table 10 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "SLS Evaluation",
"sec_num": null
},
{
"text": "As can be seen, speakers CI and CM contributed significantly to the overall error rate. Furthermore, many of the errors occurred despite their relatively small bigram probabilities, indicating that the grammar is still not completely effective in overriding poor acoustic matches. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "speech evaluation",
"sec_num": "1991"
},
{
"text": "The most interesting result of this evaluation (see the paper by PaUett in this proceedings) was that, though SRI along with BBN achieved the best speech recognition accuracy, and SRI along with CMU had the best natural-language-only performance, the accuracy of SRI's combined speech and natural language systems 1. NA is no answer 2. WErr or weighted error is percent no answer plus two times the percent wrong. 3. Score = 100 -Werr was far better than that for the other sites. We attribute this to the error tolerant nature of our speech/natural-language interface. For instance, note that performance using spoken language is not much worse than the performance of the NL component given transcribed input (i.e. given a perfect speech recognition component) even though the SLS speech recognition component had a 60 percent sentence error rate (at least one word was wrong in 60 percent of the sentences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": null
},
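The weighted-error arithmetic used in the ATIS scoring (see Table 12) can be checked directly; in this sketch the 145-query denominator is inferred from the row counts themselves, though the text elsewhere reports 148 Class A sentences:

```python
def weighted_error(right, wrong, no_answer):
    """WErr = %no-answer + 2 * %wrong; Score = 100 - WErr."""
    total = right + wrong + no_answer
    werr = 100.0 * (no_answer + 2 * wrong) / total
    return werr, 100.0 - werr

print(weighted_error(96, 11, 38))    # SLS row: ~ (41.4, 58.6)
print(weighted_error(109, 9, 27))    # NL-only row: ~ (31.0, 69.0)
```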
{
"text": "The above results indicate to us that steady progress in the speech recognition and natural language technologies, together with errortolerant speech/natural-language interfaces can lead to practical spoken language systems in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The ATIS Common Task: Selection and Overview",
"authors": [
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Price, P., \"The ATIS Common Task: Selection and Overview,\" Proceedings DARPA Speech and Natural Language Workshop, June 1990.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Template Matcher for Robust NL Interpretation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Jackson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Podlozny",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackson, E., D. Appelt, J. Bear, R. Moore, and A. Podlozny, \"A Template Matcher for Robust NL Interpretation,\" Proceedings DARPA Speech and Natural Language Workshop, June 1991.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Benchmark Tests for DARPA Resource Management Database Performance Evaluations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Pallet",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings ICASSP-89",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pallet, D., \"Benchmark Tests for DARPA Resource Management Database Performance Evaluations,\" Proceedings ICASSP-89.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The DARPA 1000-Word Resource Management Database for Continuous Speech Recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "W",
"middle": [
"M"
],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bemstein",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Pallet",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings ICASSP-88",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Price, P., W.M. Fisher, J. Bemstein, and D.S. Pallet, \"The DARPA 1000-Word Resource Management Database for Continuous Speech Recognition,\" Proceedings ICASSP-88.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Training Set Issues in SRI's DECIPHER Speech Recognition System",
"authors": [
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Weintraub",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murveit, H., M. Weintraub, M. Cohen, \"Training Set Issues in SRI's DECIPHER Speech Recognition System,\" Proceedings DARPA Speech and Natural Language Workshop, June 1990.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The DECIPHER Speech Recognition System",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Weintraub",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, M., H. Murveit, J. Bernstein, P. Price, M. Weintraub, \"The DECIPHER Speech Recognition System,\" Proceedings ICASSP, April 1990.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Interpolated Estimation of Markov Source Parameters from Sparse Data",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "381--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F. and R. Mercer, \"Interpolated Estimation of Markov Source Parameters from Sparse Data,\" pp. 381-397 in E.S.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pattern Recognition in Pract/ce",
"authors": [
{
"first": "L",
"middle": [
"N"
],
"last": "Gelsima",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gelsima and L.N. Kanal (editors), Pattern Recognition in Prac- t/ce, North Holland Publishing Company, Amsterdam, The Neth- erlands.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tied Mixture Continuous Parameter Modeling for Speech Recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Beuegarda",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nahamoo",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Trans. ASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BeUegarda, J., D. Nahamoo, \"Tied Mixture Continuous Parameter Modeling for Speech Recognition,\" IEEE Trans. ASSP, December 1990.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semi-continuous hidden Markov models for speech recognition",
"authors": [
{
"first": "X",
"middle": [
"D"
],
"last": "Huang",
"suffix": ""
}
],
"year": 1989,
"venue": "Computer Speech and Language",
"volume": "3",
"issue": "",
"pages": "239--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X.D.,\"Semi-continuous hidden Markov models for speech recognition,\" Computer Speech and Language, 3 pp. 239-251 (1989)",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A New Algorithm for the Estimation of Hidden Markov Model Parameters",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bahl",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [
"De"
],
"last": "Souza",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings ICASSP-g8",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahl, L., P. Brown, P. De Souza, and R. Mercer, \"A New Algo- rithm for the Estimation of Hidden Markov Model Parameters,\" Proceedings ICASSP-g8.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corrective and Reinforcement Learning for Speaker-Independent Continuous Speech Recognition",
"authors": [
{
"first": "K-F",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mahajan",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, K-F., and S. Mahajan, Corrective and Reinforcement Learn- ing for Speaker-Independent Continuous Speech Recognition, Technical Report CMU-CS-89-100, Carnegie Mellon University, January 1989.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improved Hidden Markov Modeling for Speaker-Independent Continuous Speech Recognition",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Alleva",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hayamizu",
"suffix": ""
},
{
"first": "H.-W",
"middle": [],
"last": "Hon",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "K.-E",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X., F. Alleva, S. Hayamizu, H.-W. Hon, M.-Y. Hwang, and K.-E Lee, \"Improved Hidden Markov Modeling for Speaker-Inde- pendent Continuous Speech Recognition,\" Proceedings DARPA Speech and Natural Language Workshop, June 1990.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speaker Adaptation in a Large-Vocabulary Speech Recognition System",
"authors": [
{
"first": "Dimitry",
"middle": [],
"last": "Rtischev",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rtischev, Dimitry, Speaker Adaptation in a Large-Vocabulary Speech Recognition System, Master's Thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Speaker Adaptation via VQ Prototype Modification",
"authors": [
{
"first": "D",
"middle": [],
"last": "Rtischev",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nahamoo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Picheny",
"suffix": ""
}
],
"year": null,
"venue": "IEEE Trans. Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rtischev, D., D.. Nahamoo, and M. Picheny, \"Speaker Adapta- tion via VQ Prototype Modification,\" submitted to IEEE Trans. Signal Processing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speaker Adaptation from a Speaker Independent Training Corpus",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Kubala",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Barry",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kubala, Francis, Richard Schwartz, and Chris Barry, \"Speaker Adaptation from a Speaker Independent Training Corpus,\" Pro- ceedings ICASSP-90.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer",
"authors": [
{
"first": "S",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Trans. ASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, S., \"Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer,\" IEEE Trans. ASSP, March 1987.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Detection of New Words in a Large Vocabulary Continuous Speech Recognition System",
"authors": [
{
"first": "A",
"middle": [],
"last": "Asadi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asadi, A., R. Schwartz, and J. Makhoul, \"Automatic Detection of New Words in a Large Vocabulary Continuous Speech Rec- ognition System,\" Proceedings DARPA Speech and Natural Language Workshop, OcL 1989.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Energy-normalized Mel-cepstra \u2022 Smoothed 40-ms time derivatives of the Mel-cepstra \u2022 Energy \u2022 Smoothed 40-ms energy differences.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "list the various multi-word units. flights-from, what-is-the, show-me-the, show-me-all, show-me, how-many, one-way, what-are-the, give-me, what-is, i-would-like, i'd-like-to, what-does , washington-de .... a-l, c-o, t-w-a, u-s-air, ... d-e-ten, seven-forty-seven .... a-t-l, b-o-s, s-f-o, d-f-w, ... q-x, f-y-b-m-q, k-y, y-n .... a-p-eighty, a-p-slash-eighty,... d-u-r-a, e-q-p, r-t-n-max ....",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>LANGUAGE MODELING</td></tr><tr><td>Bigram Language Modeling</td></tr></table>",
"text": "Sample sentences for the rejection study",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>months, days, digits, teens, decades, date-ordinals, cities, airports,</td></tr><tr><td>states, airlines, class-codes, restriction-codes, fare-codes, airline-</td></tr><tr><td>codes, aircraft-codes, airport-codes, other-codes</td></tr></table>",
"text": "The table below contains the groups of words tied together.",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Tied GroupsUsing our back-off bigram on our ATIS development set (most of the June 1990 DARPA-ATIS test set), we achieved a 14.1 percent word error rate with a test-set perplexity of 19 (not counting 6 words not covered by the grammar). When we applied this grammar to the February 1991 ATIS evaluation test set (200 sentences) the perplexity was 43, excluding 26 instances of words not covered in our vocabulary. For the 148 Class A sentences, the recognition word error rate was 17.8 percent.",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>Word Error</td></tr><tr><td colspan=\"3\">Perplexity (percent)</td></tr><tr><td>Backed-off Bigram</td><td>19</td><td>14.1</td></tr><tr><td>Interpolated Bigrams</td><td>20 24 71 89 91 113</td><td>14.5 15.3 14.9 14.7 14.5 14.9</td></tr><tr><td/><td>442</td><td>29.2</td></tr></table>",
"text": "recognition accuracy using bigrams with different perplexities on our ATIS development test set. A preliminary set of models was used for recognition (with 442 words in the vocabulary) and the grammars were estimated using 2,909 sentences.",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Perplexity vs. word error on the ATIS development set",
"num": null
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>EVALUATION</td></tr><tr><td>RM Evaluation</td></tr><tr><td>SRI evaluated the DECIPHER system on DARPA's February</td></tr><tr><td>1991 speaker-independent test set. The characteristics of the</td></tr><tr><td>evaluated system were:</td></tr><tr><td>\u2022 Speaker-independent recognition</td></tr><tr><td>\u2022 3990 sentence DARPA-RM training</td></tr><tr><td>\u2022 3 state, left-to-right, context-dependent hidden Marker</td></tr><tr><td>model using deleted-interpolation estimation of parameters</td></tr><tr><td>\u2022 Input features were 12 Mel-cepstra and delta-Mel-cepstra</td></tr><tr><td>and scalar quantized energy and delta-energy</td></tr><tr><td>\u2022 Tied-mixture modeling for Mel cepstra and delta-Mel-cep-</td></tr><tr><td>stra</td></tr><tr><td>\u2022 256 diagonal covariance Gaussians for each</td></tr><tr><td>\u2022 Independent discrete density HMM models for energy and</td></tr><tr><td>delta energy</td></tr></table>",
"text": "",
"num": null
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>SPKR</td><td>Corr</td><td>Sub</td><td>Del</td><td>Ins</td><td colspan=\"2\">Err Sent Err</td></tr><tr><td>CL CJ CO</td><td>93.6 92.0 92.0</td><td>5.1 6.9 3.7</td><td>1.3 1.0 4.3</td><td>1.7 0.7 1.2</td><td>8.1 8.7 9.3</td><td>42.3 46.2 56.2</td></tr><tr><td>CP CK CH CE CI CM</td><td>90.7 83.3 84.2 81.5 73.1 75.0</td><td>7.5 8.8 5.3 12.0 24.0 23.5</td><td>1.8 7.8 10.5 6.5 2.9 1.5</td><td>2.5 1.0 5.3 3.2 5.8 26.5</td><td colspan=\"2\">11.8 17.6 21.1 100.0 59.3 58.3 21.8 70.0 32.7 90.0 51.5 100.0</td></tr><tr><td colspan=\"2\">Average 86.5</td><td>10.3</td><td>3.1</td><td>4.3</td><td>17.8</td><td>60.1</td></tr><tr><td colspan=\"3\">All-word (Perplexity 1139)</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Average 86.5</td><td>23.9</td><td>3.7</td><td>8.0</td><td>35.5</td><td>91.2</td></tr><tr><td colspan=\"5\">TABLE 10. DARPA-ATIS February 1991</td><td>speech</td><td>evaluation</td></tr><tr><td/><td colspan=\"4\">148 Class A Sentences</td><td/><td/></tr><tr><td>SPKR</td><td>Corr</td><td>Sub</td><td>Del</td><td>Ins</td><td colspan=\"2\">Err Sent Err</td></tr><tr><td>CJ</td><td>91.9</td><td>6.5</td><td>1.6</td><td>0.8</td><td>8.9</td><td>54.5</td></tr><tr><td>CP CL CK</td><td>91.7 91.4 85.0</td><td>6.6 6.7 8.7</td><td>1.7 1.9 6.3</td><td>1.7 1.9 0.5</td><td>10.0 10.4 15.5</td><td>55.2 44.8 64.0</td></tr><tr><td>CE</td><td>83.0</td><td>11.8</td><td>5.2</td><td>2.6</td><td>19.6</td><td>73.9</td></tr><tr><td>CO CH</td><td>79.4 78.6</td><td>13.7 13.1</td><td>6.9 8.3</td><td>1.4 3.6</td><td>22.0 25.0</td><td>75.9 100.0</td></tr><tr><td>CI</td><td>67.1</td><td>27.3</td><td>5.6</td><td>5.6</td><td>38.6</td><td>92.9</td></tr><tr><td colspan=\"2\">CM Average 83.5 72.5</td><td>25.2 12.6</td><td>2.3 3.9</td><td>23.9 4.2</td><td>51.4 20.7</td><td>100.0 66.5</td></tr><tr><td/><td colspan=\"4\">DARPA-ATIS February</td><td/><td/></tr><tr><td/><td colspan=\"2\">All sentences</td><td/><td/><td/><td/></tr></table>",
"text": "",
"num": null
},
"TABREF10": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"5\">describes overall spoken language system</td></tr><tr><td>System</td><td colspan=\"5\">Right Wrong NA 1 WErr 2 Score 3</td></tr><tr><td>NL Only</td><td>109</td><td>9</td><td>27</td><td>31.0</td><td>69.0</td></tr><tr><td>SLS</td><td>96</td><td>11</td><td>38</td><td>41.4</td><td>58.6</td></tr><tr><td colspan=\"6\">TABLE 12. DARPA-ATIS February 1991 SLS evaluation</td></tr><tr><td/><td colspan=\"3\">148 Class A sentences</td><td/><td/></tr></table>",
"text": "",
"num": null
}
}
}
}