|
{ |
|
"paper_id": "H91-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:33:09.074887Z" |
|
}, |
|
"title": "SESSION 2: DARPA RESOURCE MANAGEMENT AND ATIS BENCHMARK TEST POSTER SESSION", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Pailett", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Institute of Standards and Technology", |
|
"location": { |
|
"addrLine": "Building 225", |
|
"postCode": "A216, 20837", |
|
"settlement": "Room, Gaithersburg", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "I. INTRODUCTION human peripheral Following precedents established as early as the March 1987 DARPA Speech Recognition Workshop, previouslyunreleased Benchmark Test Material was selected and released to DARPA contractors and others prior to the February 1991 meeting. Results were reported to NIST and scored using \"official\" scoring software and reference answers and the results were reported to the participants. All papers in this poster session at the DARPA speech workshop reported results obtained using the Benchmark Test Material. The Workshop Planning Committee suggested a three-part session format consisting of: (1) Introductory remarks, (2) one hour to review and discuss posters, and (3) open discussion.", |
|
"pdf_parse": { |
|
"paper_id": "H91-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "I. INTRODUCTION human peripheral Following precedents established as early as the March 1987 DARPA Speech Recognition Workshop, previouslyunreleased Benchmark Test Material was selected and released to DARPA contractors and others prior to the February 1991 meeting. Results were reported to NIST and scored using \"official\" scoring software and reference answers and the results were reported to the participants. All papers in this poster session at the DARPA speech workshop reported results obtained using the Benchmark Test Material. The Workshop Planning Committee suggested a three-part session format consisting of: (1) Introductory remarks, (2) one hour to review and discuss posters, and (3) open discussion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Section H of this paper presents an overview describing the approaches used by the participants in this session, while Section III summarizes the open discussion. Section IV describes the benchmark test material selection process and benchmark test protocols. Section V presents tabulations of these results, and discussion of these results is included in Section VI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A total of fourteen papers were presented in poster form. Eleven of the papers were presented by DARPA SLS contractors, and three were from non-DARPA sites.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. SESSION OVERVIEW", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Five papers dealt with speech recognition systems:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. SESSION OVERVIEW", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) The group at Dragon Systems reported speaker-dependent system results for the Resource Management (RM) test set, using the word-pair grammar [1] . Dragon's results were obtained on a 25 Mhz 80486-based PC, with an RM vocabulary modelled using \"roughly 30,000 phonemes in context or PICs\", and making use of the Dragon rapid match module.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 148, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. SESSION OVERVIEW", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) Doug Paul of MIT Lincoln Laboratory reported speech recognition results for both the RM and ATIS SPREC test material [2] . Recent work includes: variations in semiphone modelling, a \"very simple improved duration model\" responsible for reducing the error rate by about 10%, a new training strategy, and modifications to the recognizer to use back-off bigrarn language models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 124, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. SESSION OVERVIEW", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) The Spoken Language Group at MIT's Laboratory for Computer Science also reported results for both the RM and ATIS SPREC test material [3] . The MIT SUMMIT system is a \"segment-based\" speech recognition system, including a front end that incorporates a model of the auditory system, a hierarchical segmentation algorithm to identify a network of possible acoustical segments, segmental measurements, and a statistical classifier to produce a phonetic network. The best-scoring word sequence is derived by matching the phonetic network against a pronunciation network. Recent developments have incorporated more complex context-dependency modelling as well as an improved corrective training procedure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 141, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. SESSION OVERVIEW", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for both the RM and ATIS SPREC test material [4] . The reported RM speaker-independent results include results for a SI model built using only 12 training speakers. BBN's ATIS results include speaker-independent results for two conditions. \"The first is a controlled condition using a specific training set and bigram grammar\" [similar to that used by Paul [2] ]. The second condition makes use of augmented training data (collected at BBN) and a 4-grarn class grammar.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 48, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 360, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(5) A collaborative effort involving Marl Ostendorf and her colleagues at Boston University and others at BBN makes use of a general formalism for integrating two or more speech recognition technologies [5] . \"In this formalism, one system uses the N-best search strategy to generate a list of candidate sentences; the list is restored by other systems; and the different scores are combined to optimize perforrnanee.\" Ostendorf et al. \"report on combining the BU system based on stochastic segment models and the BBN system based on hidden Markov models.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 59, |
|
"text": "Ostendorf and her", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 206, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
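
{

"text": "As an illustration of this N-best integration strategy, the following minimal Python sketch (purely illustrative; the function names, the form of the scorers, and the weighted log-score combination are assumptions, not the BU/BBN implementation) shows the general shape of rescoring an N-best list with several systems and combining their scores:\n\n# Hypothetical sketch of N-best rescoring and score combination.\n# Each scorer maps a candidate sentence to a (log-)score; the weights\n# would be tuned on development data to optimize performance.\ndef rescore_nbest(hypotheses, scorers, weights):\n    def combined_score(sentence):\n        return sum(w * score(sentence) for w, score in zip(weights, scorers))\n    return max(hypotheses, key=combined_score)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results",

"sec_num": null

},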
|
{ |
|
"text": "Six papers were presented by DARPA contractors describing integrations of speech and natural language processing into ATIS systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) The Spoken Language Group at MIT's Laboratory for Computer Science presented a status report on the MIT ATIS system [6] . A context-independent version of the SUMMIT system (described in [3] ) including a word-pair grammar with perplexity 92 has been incorporated. The back-end has been redesigned, and the parser now produces an intermediate semantic-frame representation \"which serves as the focal-point for all back-end operations.\" Results are reported for both the February '91 ATIS benchmark test set and for a test set collected at MIT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 123, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 194, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) The Speech and Natural Language Groups at SRI reported results for both the RM and ATIS SPREC speech recognition test sets and for the ATIS NL and SLS tests [7] . The primary emphasis of the SRI presentation was to describe improvements to the SRI DECIPHER speech recognition system, a component in SRI's ATIS system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 164, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recent \"significant\" performance improvements are attributed to the addition of tied-mixture HMM modelling. Other approaches discussed include experiments with malefemale separation, speaker adaptation, rejection of out-ofvocabulary input, and language modelling (including the use of multi-word lexical units). SRI's \"simple serial integration of speech and natural language processing\" is said to work well \"because the speech recognition system uses a statistical language model to improve recognition performance, and because the natural language processing uses a template matching approach (described elsewhere in this proceedings) that makes it somewhat insensitive to recognition errors\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) Wayne Ward presented one of two papers from CMU describing the CMU ATIS System, \"PHOENIX\" [8] . The speech recognition component consists of a recent vocabulary-independent version of SPHINX, presently without incorporation of out-of-vocabulary models. PHOENIX's \"concept of flexible parsing combines framebased semantics with a semantic phrase grammar,\" so that the \"operation of the parser can be viewed as 'phrase spotting.'\" Language modelling included a bigram model for the recognizer and a grammar for the parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 97, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(4) The second paper from CMU, by Sheryl Young, described the \"structure and operation of SOUL (for Semantically-Oriented Understanding of Language)\" [9] . SOUL can use semantic and pragmatic knowledge to correct, reject and/or clarify the outputs of the PHOENIX case frame parser in the ATIS domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 153, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(5) BBN's NL group reported on the BBN DELPHI natural language system and the integration of this system with the BBN BYBLOS system (described in [4] ), using an Nbest architecture [10] . The BBN authors cite a number of improvements to the DELPHI system that are described in other papers in this Proceedings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 149, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 185, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(6) Recent work on the Unisys ATIS Spoken Language System was described by Norton et al. [11] . \"Enhancements to the system's semantic processing for handling nontransparent argument structure and enhancements to the system's pragmatic processing of material in answers displayed to the user\" are described. In addition to the Unisys system's NL results, results were reported for the case of SLS systems consisting of the Unisys natural language system coupled with two ATIS speech recognition systems: (1) the MIT SUMMIT system (described in [3] ) and (2) the MIT Lincoln Labs system (described in [2] ). The Unisys system's natural language constraints were also used to select the first-best of N-best speech recognition results (for the SPREC tests) based on syntactic, semantic and pragmatic knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 93, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 547, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 603, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Three papers were presented by non-DARPA sites.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) Douglas O'Shaughnessy described \"the initial development of a natural language text processor, as the first step in an INRS [INRS-Telecommunications, University of Quebec] dialogue-by-voice system [12] . A keyword slot-filling approach is used, rather than a \"standard parser for English.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 205, |
|
"text": "[12]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) In one of two papers from AT&T Bell Laboratories included in this session, Evelyne Tzoukermarm described \"The Use of a Commercial Natural Language Interface in the ATIS Task\" [13] . Tzoukermarm relates their \"experience in adapting [a commercial natural language interface] to handle domain dependent ATIS queries.\" The discussion of error analysis notes that, in contrast to the \"well-formed\" written English for which the commercial product was designed, spontaneous speech contains repetitions, restarts, deletions, interjections and ellipsis, as well as the omission of punctuation marks that \"might give the system information\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 183, |
|
"text": "[13]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) The second AT&T Bell Laboratories paper, by Pieraccini, Levin and Lee, proposes \"a model for a statistical representation of the conceptual structure in a restricted subset of spoken natural language\" [14] . The \"technique of ease decoding\" is applied to the Class A sentences in the ATIS domain, with sentences analyzed in terms of 7 general cases: QUERY, OBJECT, ATTRIBUTE, RESTRICTION, Q[UERY] ATTRIBUTE, AND, and DUMMY. Unlike other papers in this session, this paper implements a non-standard test paradigm that prevents explicit comparisons with the results cited for other systems. To address this shortcoming, the authors indicate that they \"are developing a module that translates the conceptual representation into an SQL query\". Presumably the SQL queries, in conjunction with the ATIS relational database, will permit use of existing DARPA ATIS queryanswer performance evaluation procedures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 209, |
|
"text": "[14]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4) Francis Kubala et al. reported on BBN's BYBLOS results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following review of the posters, a number of issues were discussed. (1) Differences between ATIS Test Sets:", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 71, |
|
"text": "(1)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HI. DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It was noted that there were a number of differences between the June 1990 and February 1991 ATIS test sets, including evidence of greater-than-expected incidence of dysfluencies in the speech and \"skewed\" or disproportionate representation of some syntactic/semantic phenomena. Doug Paul noted that the test set perplexity for the June 1990 \"Class A\" test set was 18, in contrast with 22 for the present \"Class A\" test set, and a perplexity of 45 for the \"non-Class A\" test material (i.e., all other utterances). Inferences about \"progress\" or \"trends\" may thus be complicated by these differences between test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HI. DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) Limited training material:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HI. DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Also noted was the fact that only a limited amount of fully \"canonized\" training material--for training acoustic models and for studying such phenomena as dialogue modelling--was available prior to this meeting, in some cases limiting system development. This factor was cited in a number of papers e.g., [2, 4, 8, 10] ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 308, |
|
"text": "[2,", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 311, |
|
"text": "4,", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 314, |
|
"text": "8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 318, |
|
"text": "10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HI. DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) Limitations on the future value of the Resource Management Corpora: Hy Murveit noted his belief that demonstrable progress in recognizing speaker independent RM1 speech was limited by \"how much information we can tease out of [3990] training utterances\". Richard Schwartz took exception to this, citing IV. BENCHMARK TEST MATERIAL AND steady progress in recognizing RM speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HI. DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Properties of ATIS-domain speech:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Richard Schwartz shared some analysis of the ATIS-domain test set speakers. He noted that there was one speaker in the test set with \"24 instances of 'uh' in 12 sentences\", [which leads to] \"a 50% word error rate\" for that speaker. On the basis of his analysis, he noted that \"people don't know how to talk to a system\", and suggested that there ought to be more user/speaker feedback during the data collection process so that the incidence of dysfluencies would be reduced. In response, Hy Murveit noted that if we regard the two worst speakers in the test material as atypical, then \"the current word error rate is close to 15%, and with some success in modelling the 'urns' and 'ers', the error rate may be only 10%, or abeut twice as bad as for RM\". Correlation was noted between difficulty in recognizing both the speech and [in understanding] the natural language for the \"bad\" speakers, so that the suggestion that these speakers may be atypical may be warranted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Patti Price noted that the [speech recognition] error rates suggest that \"ATIS is more difficult, but we don't know why\". It may be that ATIS speech is \"more casual\", but we need to study these issues in more detail, especially as they affect data collection. 5Selection of the February ATIS test material:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Victor Zue and Rich Schwartz asked about selection of the February 1991 ATIS test set, asking if there had been screening to select or reject potential test material on the basis of the incidence of dysfluencies noted in the transcriptions. NIST noted that the only such screening was to partition some of the utterances into the \"Optional\" categories on the basis of evidence of verbal deletions in the \"lexical SNOR\" transcriptions, since this evidence does not appear in the conventional SNOR transcriptions. For the June test set, there was no such screening, since attention had not been directed to the subset containing verbal deletions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Use of \"Baseline\" or \"Reference\" Conditions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "John Makhoul noted that there \"too many uncontrolled variables\" (e.g., algorithms, training materials, grammar) to make comparisons of the ATIS speech recognition systems beneficial using the SPREC results. BBN had advocated use of a \"baseline\" condition and provided SPREC data for both a \"baseline\" and an \"augmented\" training condition to permit such comparisons [4] . MIT/LL also made use of this \"baseline\" condition [2] . Makhoul noted that a similar situation (i.e., \"too many uncontrolled variables\") applies for the case of the NL results. Hy Murveit noted that SRI's reluctance to \"lock into a baseline condition\" was based on a reluctance to choose one with the 'wrong operating point'\", based on inadequate training. Francis Kubala noted, however, that choosing a \"baseline that undershoots\" [performance] ought not to be a problem if one wished to \"demonstrate clear wins\", and that such a baseline could be changed over time. John Makhoul also noted that reporting error rates is in general preferable to reporting \"scores\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 369, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 425, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROTOCOLS (4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One portion of the test material consisted of beth \"Speaker-Dependent\" and \"Speaker-Independent\" test sets from the Resource Management (RM1) Corpus, for use in tests of speech recognition technology. Each of these test sets consisted of 300 sentence utterances. The most recent tests using the RM1 corpus were conducted prior to the October 1989 Meeting, sixteen months ago.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark Test Material", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A second portion of the test material consisted of Air Travel Information System (ATIS) domain speech material and related transcriptions. This material was collected at TI in recent months, using the \"Wizard\" protocol described by Hemphill at the June Meeting [15] . There were a total of 9 speakers in the ATIS test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 265, |
|
"text": "[15]", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark Test Material", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This material was partitioned into four subsets: one subset consisting of an extension of the \"Class A\" category used at the June meeting (expanded to include \"testably ambiguous\" queries) and containing 145 queries, a second subset consisting of 38 Class D1 query pairs, and two additional smaller \"optional\" subsets that included examples of \"verbal deletion\" and/or \"verbal correction\" (i.e., Optional Class A and Optional Class D1). The transcriptions used as input to the NL systems and for scoring the ATIS SPREC tests were provided using a recently developed \"lexical SNOR\" format.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark Test Material", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CMU reported benchmark Resource Management results that were not represented in the poster session. These data from CMU are included in the tables of reported results. A paper describing how these results were achieved appears in [17] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 234, |
|
"text": "[17]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark Test Material", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "In addition to the Resource Management speech recognition tests, for which there is considerable precedent, the ATIS material could be used for three tests: . During the June meeting, several sites reported results for NL tests, with CMU being the sole site to report complete SLS test results at that meeting [16] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 314, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The SPREC test design was outlined by an ad hoc Working Group chaired by Victor Zue, with scoring software adapted for this purpose by NIST. This is the first time that the SPREC test has been implemented.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In computing results tabulated for the NL and SLS tests, the most reeeent version of the NIST \"comparator\" was used to compare the hypothesized CAS-format answers against NIST's \"canonical\" reference answers, as described in a previous paper [16] . Answers are scored as either \"True\", \"False\", or (if the No_Answer option has been exercised) \"No Answer\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 246, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A DARPA SLS Coordinating Committee decision in November, 1990 suggested computation of a \"weighted error percentage\" on the basis that (on \"intuitive grounds\") \"a false answer is twice as bad as no answer\". The weighted error so defmed consists of two times the percentage of total queries in the subset that are scored \"False\" plus the percentage scored 53.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"No Answer\". A single-number \"score\" may be derived by subtracting the weighted error from 100%, providing a singlenumber score that may range from -100%, for the case of all false answers, to +100% for all true answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
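
{

"text": "As a concrete illustration of this scoring rule, the following short Python sketch (an assumed helper written for exposition, not the official NIST scoring software) computes the weighted error and the derived single-number score from counts of \"True\", \"False\", and \"No Answer\" responses:\n\ndef weighted_error_pct(n_true, n_false, n_no_answer):\n    # A false answer counts twice as much as no answer (SLS Coordinating\n    # Committee convention, November 1990).\n    total = n_true + n_false + n_no_answer\n    pct_false = 100.0 * n_false / total\n    pct_no_answer = 100.0 * n_no_answer / total\n    return 2.0 * pct_false + pct_no_answer\n\ndef score_pct(n_true, n_false, n_no_answer):\n    # Single-number score in [-100, +100]: 100% minus the weighted error.\n    return 100.0 - weighted_error_pct(n_true, n_false, n_no_answer)\n\nFor example, 50 \"True\", 25 \"False\", and 25 \"No Answer\" responses out of 100 queries yield a weighted error of 75% and a score of 25%.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Benchmark Test Protocols",

"sec_num": null

},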
|
{ |
|
"text": "A \"Class D1 test protocol\" was developed and used on a trial basis for these tests. Class D1 consists of query pairs for which the second query (\"Q2\") has been classified as \"context dependent\", and for which an answerable prior query (\"QI\") has been identified as defining the context for Q2. Scoring of Class D1 query pairs was for the answers provided only for Q2, regardless of the answers provided for the context-setting query, Q1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Class D1 test protocol had never previously been implemented, and its usage was regarded by many participants more as a \"debugging of a test protocol\" than as a valid indicator of systems' abilities to handle context-dependent queries. It is also the case that the amount of labelled \"Class Di\" training material was extremely small and that it was not widely available until shortly before the test --thus limiting system developers' abilities to make adequate use of the training material. Future implementations of the Class D1 test protocol will undoubtedly yield more significant results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The \"optional\" test subsets are not discussed extensively in Section VI since these subsets axe too small, and their usage too limited, to have significance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benchmark", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tables 1 -4 (included at the end of the text of this paper) present tabulations of results reported to NIST for uniform scoring against the final \"official\" sets of reference transcriptions and reference answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "V. BENCHMARK TEST RESULTS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some of these numbers may differ slightly from those reported at the meeting or in some of the papers in this proceedings, since earlier results reported at the Asilomar meeting were derived with: (1) a slightly larger Class A test set (148 vs. 145 queries), since the classification of 3 utterances, originally included in the Class A subset, was reconsidered, after the meeting, and determined to be \"unanswerable\" and thus not Class A, and (2) the reference answers for several utterances were corrected andlor modified in response to comments from the participants in these tests. However, these differences are not likely to be statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "V. BENCHMARK TEST RESULTS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Designation of a set of results as \"LATE\" signifies that the results were received at NIST some time after midnight on February 6th, 1991. \"COB\" on that date had been designated as the due date for submission of results. In some cases prior notice had been given to NIST that results would arrive \"late\", and in a few cases, late results were invited for the sake of completeness and to permit informative comparisons with earlier results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "V. BENCHMARK TEST RESULTS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tests Table 1 presents a tabulation of speech recognition system results for the (read speech) RM1 test material.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 13, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Resource Management (RM1) Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Speech Recognition Component Tests (SPREC) Table 2 presents a tabulation of SPREC results for speech recognition systems (or SLS speech recognition components) results for the spontaneous speech in the ATIS domain. Table 3 presents a tabulation of natural language system results for the ATIS NL system components (or systems).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 50, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 222, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS Spontaneous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Tables 3 and 4 , both the number of queries (and the corresponding percentage of the total number of queries in a given category) are shown for the categories \"True\" (or correct), \"False\" (incorrect) and \"No Answer\". The \"Weighted Error\" percentage was computed by multiplying the percentage of False answers by 2 and adding the percentage of \"No_Answer\" responses. The column labelled \"Score\" was computed by subtracting the Weighted Error (%) from 100%. Table 4 presents a tabulation of spoken language system results for complete ATIS systems.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 17, |
|
"text": "Tables 3 and 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 466, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS Natural Language Component Tests (NL)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Focusing on the Speaker Independent test set results, with use of the Word Pair grammar, the word error ranges from 9.7% to 3.6%, while the sentence error ranges from 47.3% to 19.3%. Using the NIST implementation of the McNemar test used in previous tests [16] , the differences between the sentence errorlevel results for the system with the lowest reported word and sentence error rates (sys4, the CMU system described in reference [18] ) and all other systems in this category are significant for all but sysl0 and sysll (two BBN systems described in reference [4] ). The sentence-error-levelperformance differences between the CMU system and the two BBN systems are not significant.", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 260, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 438, |
|
"text": "[18]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 567, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RM1 Speech Recognition Results (Table 1)", |
|
"sec_num": null |
|
}, |
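
{

"text": "For reference, the following minimal Python sketch gives the flavor of an exact McNemar test on paired per-sentence outcomes (illustrative only; NIST's actual implementation is described in [16], and the function and argument names here are assumptions). The test considers only the discordant sentences, i.e., those that exactly one of the two systems recognized correctly, and asks whether their split departs from the 50/50 expected under the null hypothesis:\n\nfrom math import comb\n\ndef mcnemar_exact_p(n_only_a_correct, n_only_b_correct):\n    # Two-sided exact binomial (McNemar) p-value over the discordant pairs.\n    n = n_only_a_correct + n_only_b_correct\n    k = min(n_only_a_correct, n_only_b_correct)\n    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n\n    return min(2.0 * tail, 1.0)\n\nA small p-value (e.g., below 0.05) would indicate a significant difference at the sentence error level.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RM1 Speech Recognition Results (Table 1)",

"sec_num": null

},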
|
{ |
|
"text": "There are three sets of results for the BU-BBN collaborative effort described in [5] . The first of these (designated sys7), with a santenee error rate of 27.0%, results from the hybrid BU-BBN system. The second of these (sysl2), with a sentence error rate of 27.7%, results from the top answer from the BBN N-best system used for this study. The third (sysl3), with a sentence error rate of 47.3%, results from the top answer of the BU context-independent, gender-dependent segment model system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 84, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RM1 Speech Recognition Results (Table 1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lowest overall word and sentence error rates (1.8% and 12.0%, respectively) are reported for the case of the speakerdependent Word-Pair grammar system results (sys5) reported by Paul, at MIT/LL, described in [2] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 211, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RM1 Speech Recognition Results (Table 1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In addition to results reported in this session, note that results were reported to NIST for two systems not described in this session. Huang et al. at CMU reported results for an HMM system incorporating a \"shared semi-continuons model\". That system is described in a paper to be presented at ICASSP-91 [17] . Gauvain and Lee at AT&T Bell Laboratories reported results for an investigation \"into the use of Bayesian learning of the parameters of a Gaussian mixture density\", and this study is reported in another paper in this proceedings [18] . (Table 2) Focusing on the word error for the 145 utterances in the Class A test set, the range is from 46.1% to 15.7%, while the sentence error ranges from 91.0% to 52.4%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 308, |
|
"text": "[17]", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 544, |
|
"text": "[18]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 556, |
|
"text": "(Table 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RM1 Speech Recognition Results (Table 1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The McNemar sentence-error-level significance test (not shown) indicates that the system with the lowest reported word and sentence error rate for the Class A utterances (sys24-a, the Unisys implementation of syntactic, semantic and pragmatic constraints in selecting the first candidate from an N-best listing provided by BBN, described in [11] ) has an error rate that is significantly less than all but two other systems, (sysl8-a, the BBN \"augmented training\" system, and sys06-a, the SRI system). Performance differences (at the sentence error level) between these three systems, however, are not significant.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 345, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS SPREC Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comparison of the results for the BBN \"baseline\" and \"augmented\" training condition (sysl8 and sysl9) gives some indication of the benefits of additional (in this case, domainspecific) training and a more powerful 4-gram statistical class grammar. The McNemar test indicates that the difference in performance between sysl8-a and 19-a is significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS SPREC Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comparisons of results for similar systems for the two larger test subsets (i.e., Class A results vs. Class D1 results) suggest that the Class D1 material is somewhat more difficult to recognize (i.e., the error rates are higher). An interesting hypothesis that may account, in part, for this phenomenon is offered by Norton et al.: \"...this higher error rate in context dependent spontaneous utterances may be due in part to the presence of prosodic phenomena common in dialogue such as destressing 'old' information\" [11] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 519, |
|
"end": 523, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS SPREC Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Typical SPREC error rates are higher still for the two \"optional\" test subsets. This ought not to be surprising in view of the fact that these utterance subsets are, by definition and selection, more dysfluent (i.e., contain verbal deletions). Table 2 , but indicated by other analyses, is high inter-subject variability for the SPREC tests as well as for the NL and SiS tests. (Table 3) For the Class A subset, results are tabulated for eight NL systems at 5 DARPA contractors' sites, and at AT&T Bell Laboratories and at INRS-Telecom. For the DARPA contractor's systems, the weighted error ranges from 51.7% to 31.0%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 251, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 387, |
|
"text": "(Table 3)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS SPREC Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The two sets of CMU results include data for the PHOENIX system described in [8] (sysOl), and for the PHOENIX system integrated with the SOUL module described by Young in [9] (sys02).", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 80, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 174, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the Class A test material, the lowest weighted error figures (31.0%) are found for both the SRI system described by Murveit et al. in [7] (sys13-a), and for the CMU PHOENIX + SOUL system of [9] (sys02-a).", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 141, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 197, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the Class D1 and Optional Class D1 subsets, the weighted error percentages are substantially higher than for the Class A results. For the Class D1 test material, the lowest weighted error figures (36.8%) are found for the Unisys system described by Norton et al. in [11] (sys09-d).", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 274, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that two sets of results are reported for the Class D1 material for BBN (denoted syslS-d and sys23-d) . Subsequent to submission of the initial results for sysl5-d, BBN's representatives notified NIST that \"...there was a small bug in the component that translates the result of the understanding (i.e., the output of the discourse component) into SQL... [and that since] the bug in our system.., was NOT in the UNDERSTANDING or the DISCOURSE component but between the output of those components and the SQL backend and ...", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 106, |
|
"text": "for BBN (denoted syslS-d and sys23-d)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[since] one small quick fix in the backend corrected the problem, we concluded that it is reasonable to send you new answers for our Class D test\" [19] . The data designated as sys23-d is derived from these \"new answers\". ATIS SLS Results (Table 4) For the Class A subset, results are tabulated for 7 SLS systems at 5 DARPA contractors' sites.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 151, |
|
"text": "[19]", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 248, |
|
"text": "(Table 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Non-DARPA contractors declined to participate in the SLS tests. The weighted error ranges considerably, from 90.3% to 41.4%, with the best (lowest weighted error) results for the SRI system described in [7] and in other SRI papers in this proceedings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 206, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The low SRI SLS weighted error rate (41.4%) appears to be a consequence of both a well-performing ATIS speech recognition component and a well-performing natural language component (i.e., a SPREC test word error rate of 18.0% and an NL weighted error rate of 31.0%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Not surprisingly, weighted error figures for complete SLS systems are higher than for corresponding NL components (processing the lexical SNOR formatted versions of the same utterances). The relative increase in weighted error rate appears to correspond to the relative performance of the speech recognition component. Tables 3 and 4 , note that for the SRI system the weighted error rate for the Class A subset increases from 31.0% (for the NL component) to 41.4% (for the complete SLS system. Two SLS systems made use of BBN's BYBLOS ATIS SPREC data: the BBN HARC system (sys16-a) and the Unisys-BBN SPREC system (sys22-a). Comparing the increases of weighted error rates for NL vs. SLS systems, one can note an approximate increase in weighted error rate of only 8 or 9 percentage points for these systems (i.e., from 49.0% for the BBN DELPHI NL system to 57.2% for the BBN HARC SIS system, and from 51.7% for the Unisys NL system to 60.7% for the Unisys-((BBN SPREC)) SLS system). This relatively small increase in error rate is probably attributable to the BBN \"augmented training\" (sysl8-a) SPREC test word error rate of (only) 16.1%, which is not significantly different from SRI's SPREC test results of 18.0%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 333, |
|
"text": "Tables 3 and 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ATIS NL Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In contrast, a substantially larger increase in error rate can be noted for the CMU systems (i.e., 35.9% and 31.0% for the two CMU NL systems vs. 65.5% for the SLS system), probably due to performance of the CMU SPREC system with error rates that are significantly higher than for the SRI SPREC system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "By comparing comparable data from", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unisys reported results for three system configurations: using speech recognition results provided by the MIT/LCS ATIS SPREC system (designated sysl0-a), by the MIT/LL ATIS SPREC system (sys ll-a), and by the BBN BYBLOS/ATIS system (sys22-a). In this case, better performance on the SLS test (i.e., lower weighted error) correlates with better performance on the SPREC results, as would be expected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "By comparing comparable data from", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As was also the case for the NL results, the weighted error results for the Class D1 test subset are substantially higher than for the Class A results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "By comparing comparable data from", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Too many individuals have served as points-of-contact at the research sites involved in these benchmark tests to be individually thanked, but their efforts and patience in seeing that information and data are made available are greatly appreciated. My colleagues at NIST deserve special thanks for their efforts and effieiancy in making these tests possible and in tabulation of the results. In particular, Bill Fisher has had a key role, both as Chairman of the DARPA SLS Performance Evaluation Working Group and as the individual responsible for ATIS test material selection and in reviewing the \"canonical\" auxiliary files. Jon Fiscus and John Garofolo, also at NIST, have been responsible for implementation of scoring software and for preparation of corpora on CD-ROM. Tables 2, 3 and 4: The following key is provided as an aid in cross-referencing the NIST-ID numbers to the sites submitting ATIS results and to descriptions of the systems in the references cited in this paper. Note: key for these tables differs from that for the RM1 results of ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 774, |
|
"end": 785, |
|
"text": "Tables 2, 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "VII. ACKNOWLEDGEMENT", |
|
"sec_num": null |
|
}
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Dragon Systems Resource Management Benchmark Test Results", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baker, J., et al., \"Dragon Systems Resource Management Benchmark Test Results--February 1991\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "New Results with the Lincoln Tied-Mixture HMM CSR System", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul, D. B. \"New Results with the Lincoln Tied-Mixture HMM CSR System\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Modelling Context Dependency in Acoustic-Phonetic and Lexical Representations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Zue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phillips, M., Glass, J. and Zue, V., \"Modelling Context Dependency in Acoustic-Phonetic and Lexical Representations\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BYBLOS Speech Recognition Benchmark Results", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kubala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kubala, F. et al., \"BYBLOS Speech Recognition Benchmark Results\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Integration of Diverse Recognition Methodologies Through Reevaluation of N-Best Sentence Hypotheses", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ostendorf, M. et al., \"Integration of Diverse Recognition Methodologies Through Reevaluation of N-Best Sentence Hypotheses\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Development and Preliminary Evaluation of the M1T ATIS System", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Seneff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seneff, S. et al., \"Development and Preliminary Evaluation of the M1T ATIS System\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "SRI's Speech and Natural Language Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Murveit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Murveit, H. et al., \"SRI's Speech and Natural Language Evaluation\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Current Status of the CMU ATIS System", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ward, W., \"Current Status of the CMU ATIS System\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Using Semantics to Correct Parser Output for ATIS Utterances", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young, S., \"Using Semantics to Correct Parser Output for ATIS Utterances\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BBN HARC and Delphi Results on the ATIS Benchmarks", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Austin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Austin, S. et al., \"BBN HARC and Delphi Results on the ATIS Benchmarks--February 1991\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Augmented Role filling Capabilities for Semantic Interpretation of Spoken Language", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Norton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Norton, L. et al., \"Augmented Role filling Capabilities for Semantic Interpretation of Spoken Language\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Textual Processor to Handle ATIS Queries", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "O'shaughnessy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O'Shaughnessy, D., \"A Textual Processor to Handle ATIS Queries\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The Use of a Commercial Natural Language Interface in the ATIS Task", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Tzoukermann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tzoukermann, E., \"The Use of a Commercial Natural Language Interface in the ATIS Task\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Stochastic Representation of Conceptual Structure in the ATIS Task", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pieraccini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pieraccini, R., Levin, E. and Lee, C.H., \"Stochastic Representation of Conceptual Structure in the ATIS Task\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The ATIS Spoken Language System Pilot Corpus", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Hemphill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Doddington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the DARPA] Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hemphill, C.T., Godfrey, J.J., and Doddington, G.R., \"The ATIS Spoken Language System Pilot Corpus\" in Proceedings of the DARPA] Speech and Natural Language Workshop\" June 1990, pp. 96 -101.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "DARPA ATIS Test Results", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Pallett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the DARPA] Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "114--121", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pallett, D.S., et al. \"DARPA ATIS Test Results: June 1990\" in Proceedings of the DARPA] Speech and Natural Language Workshop\" June 1990, pp. 114 -121.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improved Acoustic Modelling for the SPHINX Speech Recognition System", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, X. et al., \"Improved Acoustic Modelling for the SPHINX Speech Recognition System\", (to be presented at ICASSP-91).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Bayesian Learning of Gaussian Mixture Densities for Hidden Marker Models", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gauvaln", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gauvaln, J. and Lee, C.H., \"Bayesian Learning of Gaussian Mixture Densities for Hidden Marker Models\" (in this Proceedings).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "ARPANET communication from M. Bates and R. Ingria (BBN) to Dave Pallett (NIST)", |
|
"authors": [], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ARPANET communication from M. Bates and R. Ingria (BBN) to Dave Pallett (NIST), February 13, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Description CMU Class-A SPREC MIT-LL Class-A SPREC SRI Class-A SPREC MIT-LCS Class-A SPREC Unisys/MIT-LCS Class-A SPREC BBN Class-A SPREC Caugmented\")-LATE BBN Class-A SPREC", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Description CMU Class-A SPREC MIT-LL Class-A SPREC SRI Class-A SPREC MIT-LCS Class-A SPREC Unisys/MIT-LCS Class-A SPREC BBN Class-A SPREC Caugmented\")-LATE BBN Class-A SPREC (\"baseline\")-LATE Unisys/BBN Class-A Spree-LATE", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Description CMU Class-D1 SPREC M1T-LL Class-D1 SPREC-LATE SRI Class-D1 SPREC Unisys/MIT-LCS Class-D1 SPREC BBN Class-D1 SPREC (\"augmented\")-LATE BBN Class-D1 SPREC Cbaseline\")-LATE Unisys/BBN Class-D1", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Description CMU Class-D1 SPREC M1T-LL Class-D1 SPREC-LATE SRI Class-D1 SPREC Unisys/MIT-LCS Class-D1 SPREC BBN Class-D1 SPREC (\"augmented\")-LATE BBN Class-D1 SPREC Cbaseline\")-LATE Unisys/BBN Class-D1 Spree-LATE", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Feb-13 A Description CMU Optional Class-A SPREC MIT-LL Optional Class-A SPREC-LATE SRI Optional Class-A SPREC Unisys/MIT-I.,CS Optional Class-A SPREC BBN Optional Class-A SPREC (\"augmented\")-LATE BBN Optional Class-A SPREC Cbaseline", |
|
"authors": [], |
|
"year": null, |
|
"venue": "LATE Unisys/BBN Optional Class-A Spree-LATE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feb-13 A Description CMU Optional Class-A SPREC MIT-LL Optional Class-A SPREC-LATE SRI Optional Class-A SPREC Unisys/MIT-I.,CS Optional Class-A SPREC BBN Optional Class-A SPREC (\"augmented\")-LATE BBN Optional Class-A SPREC Cbaseline\")-LATE Unisys/BBN Optional Class-A Spree-LATE", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Description CMU Optional Class-D1 SPREC M1T-LL Optional Class-D1 SPREC-LATE SRI Optional Class-D SPREC Unisys/MIT-LCS Optional Class-D1 SPREC BBN Optional Class-D1 SPREC", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Description CMU Optional Class-D1 SPREC M1T-LL Optional Class-D1 SPREC-LATE SRI Optional Class-D SPREC Unisys/MIT-LCS Optional Class-D1 SPREC BBN Optional Class-D1 SPREC (\"augmented\")-LATE BBN Optional Class-D1 SPREC (\"baseline\")-LATE Unisys/BBN Optional Class-D1 Spree-LATE TABLE 2.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "(1) spontaneous ATISdomain SPeech RECognition component tests (designated as SPREC in this paper), (2) ATIS-domain Natural Language system component tests (designated as NL), and (3) complete ATIS-domain Spoken Language System tests (designated as SLS)", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"6\">FEB91 RM1 SPEECH RECOGNITION TEST</td></tr><tr><td/><td/><td/><td colspan=\"6\">SPEAKER-INDEPENDENT WITHOUT</td><td>GRAMMAR</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Arr.</td><td/></tr><tr><td/><td/><td>Sub</td><td colspan=\"3\">Dei Ins Err</td><td>S.Err</td><td>Date</td><td colspan=\"2\">Description</td></tr><tr><td>sysl</td><td>84.3</td><td>12.5</td><td>3.2</td><td>1.8</td><td colspan=\"2\">17.6 66.0</td><td>Jan-31</td><td colspan=\"2\">SRI Spkr-Indep no grammar</td></tr><tr><td>sys4</td><td>86.2</td><td>11.5</td><td>2.3</td><td>3.2</td><td colspan=\"2\">17.0 66.7</td><td>Feb-5</td><td colspan=\"2\">CMU Spkr-Indep no grammar</td></tr><tr><td>sys6</td><td>83.2</td><td>14.2</td><td>2.7</td><td>2.9</td><td colspan=\"2\">19.7 71.7</td><td>Feb-ll</td><td colspan=\"2\">M1T-LL Spkr-Indep no grammar-LATE</td></tr><tr><td>sys8</td><td>81.9</td><td>14.8</td><td>3.3</td><td>2.5</td><td colspan=\"2\">20.7 74.7</td><td>Feb-6</td><td colspan=\"2\">AT&T Spkr-Indep no grammar</td></tr><tr><td>sys9</td><td>82.2</td><td>14.4</td><td>3.4</td><td>2.0</td><td colspan=\"2\">19.8 70.7</td><td>Feb-7</td><td colspan=\"2\">AT&T-R Spkr-Indep no grammar-LATE</td></tr><tr><td>sysl0</td><td>83.3</td><td>13.6</td><td>3.1</td><td>2.1</td><td colspan=\"2\">18.8 69.3</td><td>Feb-7</td><td colspan=\"2\">BBN Spkr-Indep (109) no grammar-LATE</td></tr><tr><td/><td/><td/><td colspan=\"6\">SPEAKER-INDEPENDENT WORD-PAIR</td><td>GRAMMAR</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Arr.</td><td/></tr><tr><td>NIST-ID</td><td>Corr</td><td>Sub</td><td colspan=\"3\">Del Ins Err</td><td>S.Err</td><td>Date</td><td colspan=\"2\">Description</td></tr><tr><td>sysl</td><td>95.9</td><td>3.0</td><td>1.0</td><td>0.8</td><td>4.8</td><td>26.0</td><td>Jan-31</td><td colspan=\"2\">SRI Spkr-Indep Word-Pair</td></tr><tr><td>sys2</td><td>93.3</td><td>6.0</td><td>0.7</td><td>1.2</td><td>8.0</td><td>33.7</td><td>Feb-4</td><td colspan=\"2\">M1T Spkr-Indep Word-Pair</td></tr><tr><td>sys4</td><td>96.8</td><td>2.5</td><td>0.8</td><td>0.4</td><td>3.6</td><td>19.3</td><td>Feb-5</td><td colspan=\"2\">CMU Spkr-Indep Word-Pair</td></tr><tr><td>sys6</td><td>96.2</td><td>2.8</td><td>1.0</td><td>0.6</td><td>4.4</td><td>23.3</td><td>Feb-6</td><td colspan=\"2\">MIT-LL Spkr-Indep Word-Pair</td></tr><tr><td>sys7</td><td>95.7</td><td>3.3</td><td>1.0</td><td>1.2</td><td>5.6</td><td>27.0</td><td>Feb-6</td><td colspan=\"2\">BU-BBN Spkr-Indep Word-Pair</td></tr><tr><td>sys8</td><td>95.5</td><td>3.5</td><td>1.0</td><td>0.7</td><td>5.2</td><td>28.0</td><td>Feb-6</td><td colspan=\"2\">AT&T Spkr-Indep Word-Pair</td></tr><tr><td>sysl0</td><td>96.7</td><td>2.3</td><td>0.9</td><td>0.5</td><td>3.8</td><td>21.0</td><td>Feb-7</td><td colspan=\"2\">BBN Spkr-Indep (109) Word-Pair-LATE</td></tr><tr><td>sysll</td><td>96.7</td><td>2.8</td><td>0.6</td><td>0.5</td><td>3.8</td><td>23.0</td><td>Feb-7</td><td colspan=\"2\">BBN Spkr-Indep (12) Word-Pair-LATE</td></tr><tr><td>sysl2</td><td>95.7</td><td>3.3</td><td>1.0</td><td>1.1</td><td>5.4</td><td>27.7</td><td>Feb-8</td><td colspan=\"2\">BU-BBN (W/O BU SSM) Spkr-Indep Word-Pair-LATE</td></tr><tr><td>sysl3</td><td>93.0</td><td>5.3</td><td>1.8</td><td>2.6</td><td>9.7</td><td>47.3</td><td>Feb-12</td><td colspan=\"2\">BU Segment Model Spkr-Indep Word-Pair-LATE</td></tr><tr><td>sysl4</td><td>96.1</td><td>3.0</td><td>0.8</td><td>0.7</td><td>4.5</td><td>25.7</td><td>Feb-28</td><td colspan=\"2\">AT&T Sex-Modelled Spkr-Indep Word-Pair-LATE</td></tr><tr><td/><td/><td/><td 
colspan=\"7\">SPEAKER.DEPENDENT WITHOUT GRAMMAR</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Arr.</td><td/></tr><tr><td>NIST-ID</td><td>Corr</td><td>Sub</td><td colspan=\"3\">Del Ins Err</td><td>S.Err</td><td>Date</td><td colspan=\"2\">Description</td></tr><tr><td>sys5</td><td>92.5</td><td>5.8</td><td>1.7</td><td>1.3</td><td>8.7</td><td>44.0</td><td>Feb-6</td><td colspan=\"2\">Mrr-LL Spkr-Dep no grammar</td></tr><tr><td/><td/><td/><td colspan=\"4\">SPEAKER-DEPENDENT</td><td colspan=\"3\">WORD-PAIR GRAMMAR</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Arr.</td><td/></tr><tr><td>NIST-ID</td><td>Corr</td><td>Sub</td><td colspan=\"3\">Del Ins Err</td><td>S.Err</td><td>Date</td><td colspan=\"2\">Description</td></tr><tr><td>sys3</td><td>94.1</td><td>4.5</td><td>1.4</td><td>1.5</td><td>7.5</td><td>34.3</td><td>Feb-5</td><td colspan=\"2\">Dragon Spkr-Dep Word-Pair</td></tr><tr><td>sys5</td><td>98.3</td><td>1.0</td><td>0.7</td><td>0.1</td><td>1.8</td><td>12.0</td><td>Feb-6</td><td colspan=\"2\">MIT-LL Spkr-Dep Word-Pair</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">TABLE 1.</td><td/></tr><tr><td colspan=\"10\">Key to KEY: RM1 SPEECH RECOGNITION TEST REFERENCES</td></tr><tr><td colspan=\"2\">NIST-ID</td><td>Site</td><td colspan=\"3\">Reference</td><td/><td/><td colspan=\"2\">NIST-ID</td><td>Site</td><td>Reference</td></tr><tr><td>sysl</td><td/><td>SRI</td><td/><td>[7]</td><td/><td/><td/><td>sys8</td><td>AT&T</td><td>[18]</td></tr><tr><td>sys2</td><td/><td colspan=\"2\">MIT-LCS</td><td>[3]</td><td/><td/><td/><td>sys9</td><td>AT&T</td><td>[18]</td></tr><tr><td>sys3</td><td/><td>Dragon</td><td/><td>[1]</td><td/><td/><td/><td colspan=\"2\">sysl0</td><td>BBN</td><td>[4]</td></tr><tr><td>sys4</td><td/><td>CMU</td><td/><td>[17]</td><td/><td/><td/><td colspan=\"2\">sysl 1</td><td>BBN</td><td>[4]</td></tr><tr><td>sys5</td><td/><td>MIT-LL</td><td/><td>[2]</td><td/><td/><td/><td colspan=\"2\">sys 12</td><td>BU-BBN</td><td>[5]</td></tr><tr><td>sys6</td><td/><td>AT&T</td><td/><td>[2]</td><td/><td/><td/><td colspan=\"2\">sys13</td><td>BU-BBN</td><td>[5]</td></tr><tr><td>sys7</td><td/><td>BU-BBN</td><td/><td>[5]</td><td/><td/><td/><td colspan=\"2\">sys14</td><td>AT&T</td><td>[18]</td></tr></table>", |
|
"type_str": "table", |
|
"text": "The following key is provided as an aid in cross-referencing the NIST-ID numbers to the sites submitting results and to descriptions of the systems in the references cited in this paper. Ans, W. Err Score", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>NIST-ID</td><td>Site</td><td>Reference</td><td>NIST-ID</td><td>Site</td><td>Reference</td></tr><tr><td>sysOl</td><td>CMU</td><td>[8]</td><td>sysl3</td><td>SRI</td><td>[7]</td></tr><tr><td>sys02</td><td>CMU</td><td>[9]</td><td>sys14</td><td>Unisys</td><td>[11]</td></tr><tr><td>sys03</td><td>CMU</td><td>[8]</td><td>sysl5</td><td>BBN</td><td>[10]</td></tr><tr><td>sys04</td><td>CMU</td><td>[8]</td><td>sys16</td><td>BBN</td><td>[10]</td></tr><tr><td>sys05</td><td>MIT-LL</td><td>[2]</td><td>sys 17</td><td>INRS</td><td>[12]</td></tr><tr><td>sys06</td><td>SRI</td><td>[7]</td><td>sys18</td><td>BBN</td><td>[4]</td></tr><tr><td>sys07</td><td>MIT-LCS</td><td>[6]</td><td>sys19</td><td>BBN</td><td>[4]</td></tr><tr><td>sys08</td><td>MIT-LCS</td><td>[3]</td><td>sys20</td><td>MIT-LCS</td><td>[6]</td></tr><tr><td>sys09</td><td>Unisys</td><td>[11]</td><td>sys21</td><td>SRI</td><td>[7]</td></tr><tr><td>sys 10</td><td>Unisys</td><td>[11]</td><td>sys22</td><td>Unisys</td><td>[11]</td></tr><tr><td>sys 11</td><td>Unisys</td><td>[11]</td><td>sys23</td><td>BBN</td><td>10]</td></tr><tr><td>sys12</td><td>AT&T</td><td>[13]</td><td>sys24</td><td>Unisys</td><td>[11]</td></tr></table>", |
|
"type_str": "table", |
|
"text": "KEY: ATIS SPREC, NL, AND SLS TEST REFERENCES", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |