{
"paper_id": "H90-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:36:26.157055Z"
},
"title": "Efficient, High-Performance Algorithms for N-Best Search",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologies Inc",
"location": {
"addrLine": "10 Moulton St",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Steve",
"middle": [],
"last": "Austin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologies Inc",
"location": {
"addrLine": "10 Moulton St",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present two efficient search algorithms for real-time spoken language systems. The first called the Word-Dependent N-Best algorithm is an improved algorithm for finding the top N sentence hypotheses. The new algorithm is shown to perform as well as the Exact Sentence-Dependent algorithm presented previously but with an order of magnitude less computation. The second algorithm is a fast match scheme for continuous speech recognition called the Forward-Backward Search. This algorithm, which is directly motivated by the Baum-Welch Forward-Backward training algorithm, has been shown to reduce the computation of a time-synchronous beam search by a factor of 40 with no additional search errors.",
"pdf_parse": {
"paper_id": "H90-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "We present two efficient search algorithms for real-time spoken language systems. The first called the Word-Dependent N-Best algorithm is an improved algorithm for finding the top N sentence hypotheses. The new algorithm is shown to perform as well as the Exact Sentence-Dependent algorithm presented previously but with an order of magnitude less computation. The second algorithm is a fast match scheme for continuous speech recognition called the Forward-Backward Search. This algorithm, which is directly motivated by the Baum-Welch Forward-Backward training algorithm, has been shown to reduce the computation of a time-synchronous beam search by a factor of 40 with no additional search errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In a Spoken Language System (SLS) we must use all available knowledge sources (KSs) to decide on the spoken sentence. While there are many knowledge sources, they are often grouped together into speech models, statistical language model, and natural language understanding models. To optimize accuracy we must choose the sentence that has the highest score (probability) given all of the KSs. This potentially requires a very large search space. The N-Best paradigm for integrating several diverse KSs has been described previously [2, 10] . First, we use a subset of the KSs to choose a small number of likely sentences. Then these sentences are scored using the remainder of the KSs.",
"cite_spans": [
{
"start": 532,
"end": 535,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 536,
"end": 539,
"text": "10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In Chow et. al., we also presented an efficient speech recognition search algorithm that was capable of computing the N most likely sentence hypotheses for an utterance, given the speech models and statistical language models. However, this algorithm greatly increases the needed computation over that needed for finding the best single sentence. In this paper we introduce two techniques that dramatically decrease the computation needed for the N-Best search. These algorithms are being used in a real-time SLS [1]. In the remainder of the introduction we review the exact N-Best search briefly and describe its problems. In Section 2 we describe two approximations to the exact algorithm and compare their accuracy with that of the exact algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The resulting algorithm is still not fast enough for realtime implementation. In Section 3 we present a new sentence-level fast match scheme for continuous speech recognition. The algorithm is motivated by the mathematics of the Baum-Welch Forward-Backward training algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The basic notion of the n-best paradigm is that, while we must ultimately use all the available KSs to improve recognition accuracy, the sources vary greatly in terms of perplexity reduction and required complexity. For example, a first-order statistical language model can reduce perplexity by at least a factor of 10 with little computation, while applying complete natural language (NL) models of syntax and semantics to all partial hypotheses typically requires more computation for less perplexity reduction. (Murveit [6] has shown that the use of an efficiently implemented syntax within a recognition search actually slowed down the search unless it was used very sparingly.) Therefore it is advantageous to use a strategy in which we use the most powerful, efficient KSs first to produce a scored list of all the likely sentences. This list is then filtered and reordered using the remaining KSs to arrive at the best single sentence. Figure 1 contains a block diagram that illustrates this basic idea. In addition to reducing total computation the resulting systems would be more modular ff we could separate radically different KSs.",
"cite_spans": [
{
"start": 514,
"end": 526,
"text": "(Murveit [6]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 943,
"end": 951,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The N-Best Paradigm",
"sec_num": null
},
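To make the two-stage paradigm above concrete, here is a minimal sketch (ours, not the paper's implementation; names such as ks1_score and the linear weighting are illustrative assumptions) of rescoring and reordering an N-best list with the remaining knowledge sources:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    words: List[str]
    ks1_score: float          # combined score from the first-pass knowledge sources (KS1)

def rescore_nbest(nbest: List[Hypothesis],
                  ks2_scorers: List[Callable[[List[str]], float]],
                  weights: List[float]) -> List[Hypothesis]:
    """Reorder an N-best list using the remaining knowledge sources (KS2).

    Each KS2 scorer returns a log score for a whole word sequence; a hypothesis
    rejected outright (e.g. by an NL parser) can simply be scored float('-inf').
    """
    def total(h: Hypothesis) -> float:
        return h.ks1_score + sum(w * f(h.words) for w, f in zip(weights, ks2_scorers))
    return sorted(nbest, key=total, reverse=True)
```

A knowledge source that rejects a hypothesis outright thus pushes it to the bottom of the reordered list, which implements the filtering step described above.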
{
"text": "We have previously presented an efficient time-synchronous algorithm for finding the N most likely sentence hypotheses. This algorithm was unique in that it computed the correct forward probability score for each hypothesis found. The way this is accomplished is that, at each state, we keep an independent score for each different preceding sequence of words. That is, the scores for two theories are added only if the preceding word sequences are identical. We preserve up to N different theories at each state, as long as they are above the pruning beamwidth. This algorithm guarantees finding the N best hypotheses within a threshold of the best hypothesis. The algorithm was optimized to avoid expensive sorting operations so that it required computation that was less than linear with the number of sentence hypotheses found. It is easy to show that the inaccuracy in the scores computed is bounded by the product of the sentence length Semantics, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Exact Sentence-Dependent Algorithm",
"sec_num": null
},
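As a rough illustration of the bookkeeping this implies (a sketch under assumed data structures, not the BBN decoder), each state carries one score per distinct preceding word sequence; scores are summed only when the histories match, and the table is pruned to the beam and to N:

```python
from typing import Dict, Tuple

# One forward probability per distinct preceding word sequence, kept at every HMM state.
StateTheories = Dict[Tuple[str, ...], float]      # word history -> forward probability

def propagate(dest: StateTheories, history: Tuple[str, ...], prob: float,
              n_max: int, beam: float, best_score: float) -> None:
    """Add `prob` to the theory for `history`: scores are summed only when the preceding
    word sequences are identical, then the table is pruned to the beam and to N."""
    if prob < best_score * beam:
        return                                    # outside the pruning beamwidth
    dest[history] = dest.get(history, 0.0) + prob
    while len(dest) > n_max:                      # keep at most N histories per state
        del dest[min(dest, key=dest.get)]
```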
{
"text": "Higher-order statistical Figure 1 : The N-best Search Paradigm. The most efficient knowledge sources, KS1, are used to find the N Best sentences. Then the remaining knowledge sources, KS2 are used to reorder the sentences and pick the most likely one. and the pruning beamwidth. For example, if a sentence is 1000 frarms long and a relative pruning beamwidth of 10-15 is maintained throughout the sentence, then all scores are guaranteed to be accurate to within 10 -12 of the maximum score. The proof is not given here, since it is not the subject of this paper. In the remainder of the paper we will refer to this particular algorithm as the Exact algorithm or the Sentence-Dependent algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Exact Sentence-Dependent Algorithm",
"sec_num": null
},
{
"text": "There is a problem associated with the use of this exact algorithm. If we assume that the probability of a single word being misrecognized is roughly independent of the position within a sentence, then we would expect that alonger sentence will have more errors. Consequently the typical rank of the correct answer will be lower (further from the top) on longer sentences. Therefore if we wanted the algorithm to find the correct answer within the list of hypotheses some fixed percentage of the time, the value of N will have to increase significantly for longer sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Exact Sentence-Dependent Algorithm",
"sec_num": null
},
{
"text": "When we examine the different answers found we notice that, many of the different answers are simple one-word variations of each other. This is likely to result in much duplicated computation. One might imagine that if the difference between two hypothesized word sequences were several words in the past then any difference in score due to that past word would remain constant. In the next section we present two algorithms that attempt to avoid these problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Exact Sentence-Dependent Algorithm",
"sec_num": null
},
{
"text": "While the exact N-Best algorithm is theoretically interesting, we can generate lists of sentences with much less computation if we are willing to allow for some approximations. As long as the correct sentence can be guaranteed to be within the list, the list can always be reordered by rescoring each hypothesis individually at the end. We present two such approximate algorithms with reduced computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Approximate N-Best Algorithms",
"sec_num": "2."
},
{
"text": "The first algorithm will derive an approximate list of the N Best sentences with no more computation than the usual 1-Best search. Figure 2 illustrates the algorithm. Within words we use the time-synchronous forward-pass search algorithm [8] , with only one theory at each state. We add the probabilities of all paths that come to each state. At each grammar node (for each frame) we simply store all of the theories that arrive at that node along with their respective scores in a traceback list. This requires no extra computation above the 1-Best algorithm. The score for the best hypothesis at the grammar node is sent on as in the norrnal time-synchronous forward-pass search. A pointer to the saved list is also sent on. At the end of the sentence we simply search (recursively) through the saved Iraceback lists for all of the complete sentence hypotheses that are above some threshold below the best theory. This recursive Iraceback can be performed very quickly. (We typically extract the 100 best answers, which causes no noticeable delay.) We call this algorithm the Lattice N-Best algorithm since we essentially have a dense word lattice represented by the traceback information. Another advantage of this algorithm is that it naturally produces more answers for longer sentences.",
"cite_spans": [
{
"start": 238,
"end": 241,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lattice N-Best",
"sec_num": null
},
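The traceback described above can be sketched as follows (illustrative structures and names, not the actual implementation): every word theory arriving at a grammar node is recorded with its score and a pointer to the traceback list saved where the word began, and at the end of the utterance all sentence hypotheses within a threshold of the best are enumerated recursively, charging each non-best word choice the difference to the locally best score:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TracebackEntry:
    word: str                          # word theory that arrived at this grammar node
    log_score: float                   # forward log score of the path ending with this word
    prev: Optional["TracebackList"]    # traceback list saved where this word began (None at start)

@dataclass
class TracebackList:
    entries: List[TracebackEntry]

def lattice_nbest(final: TracebackList, threshold: float) -> List[Tuple[float, List[str]]]:
    """Recursively trace back every sentence hypothesis whose approximate log score
    lies within `threshold` of the best complete hypothesis."""
    best = max(e.log_score for e in final.entries)
    results: List[Tuple[float, List[str]]] = []

    def expand(entry: TracebackEntry, suffix: List[str], score: float) -> None:
        if score < best - threshold:
            return                                    # outside the traceback threshold
        words = [entry.word] + suffix
        if entry.prev is None:                        # reached the start of the utterance
            results.append((score, words))
            return
        local_best = max(e.log_score for e in entry.prev.entries)
        for prev_entry in entry.prev.entries:
            # choosing a non-best predecessor costs the difference to the locally best score
            expand(prev_entry, words, score - (local_best - prev_entry.log_score))

    for e in final.entries:
        expand(e, [], e.log_score)
    return sorted(results, key=lambda r: r[0], reverse=True)
```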
{
"text": "Word 3 Word K Figure 2 : The Lattice N-best Algorithm. We save all theories at grammar nodes. Then we recursively Irace back all sequences.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "mam~_.~~mmWord 1 ~'~ Word 4",
"sec_num": null
},
{
"text": "This algorithm is similar to the one suggested by Steinbiss [9] , with a few differences. First, he uses the standard Viterbi algorithm rather than the time-synchronous algorithm within words. That is he takes the maximum of the path probabilities at a state rather than the sum. We have observed a 20% higher error raate when using the maximum rather than the sum. The second difference is that when several word hypotheses come together at a common grammar node at the same lime, he traces back each of the choices and keeps the N (typically 10) best sentence hypotheses up to that lime and node. This step unnecessarily limits the o,mher of sentence hypotheses that are produced to N. As above the score of the best hypothesis is sent on to all words following the grammar node. At the end of the sentence he then has an approximation to the 3r best sentences. He reports that one third of the errors made by the 1-Best search are corrected in this way. However, as with a word lattice, many of the words are constrained to end at the same time -which leads to the main problem with this algorithm.",
"cite_spans": [
{
"start": 60,
"end": 63,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "mam~_.~~mmWord 1 ~'~ Word 4",
"sec_num": null
},
{
"text": "The Lattice N-Best algorithm, while very fast, underestimates or misses high scoring hypotheses. Figure 3 shows an example in which two different words (words 1 and 2) can each be followed by the same word (word 3). Since there is only one theory at each state within a word, there is only one best beginning time. This best beginning time is determined by the best boundary between the best previous word (word 2 in the example) and the current word. But, as shown in Figure 3 , the second-best theory involving a different previous word (word 1 in the example), would naturally end at a slightly different lime. Thus the best score for the second-best theory would be severely underestimated or lost altogether. ",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 469,
"end": 477,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "mam~_.~~mmWord 1 ~'~ Word 4",
"sec_num": null
},
{
"text": "As a compromise between the exact sentence-dependent algorithm and the lattice algorithm we devised a Word-Dependent N-Best algorithm_ We reason that while the best starting lime for a word does depend on the preceding word, it probably does not depend on any word before that. Therefore instead of separating theories based on the whole preceding sequence, we separate them only ff previous word is different. At each state within the word we preserve the total probability for each of n(<< N) different preceding words. At the end of each word we record the score for each hypothesis along with the name of the previous word. Then we proceed on with a single theory with the name of the word that just ended. At the end of the sentence we perform a recursive traceback to derive a large list of the most likely sentences. The resulting theory paths are illustrated schematically in Figure 4 . Like the lattice algorithm the word-dependent algorithm naturally produces more answers for longer sentences. However, since we keep multiple theories within the word, we correctly identify the second best path. While the computation needed is greater than for the lattice algorithm it is less than for the sentence-dependent algorithm, since the number of theories only needs to account for number of possible previous words -not all possible preceding sequences. Therefore the number n, of theories kept locally only needs to be 3 to 6 instead of 20 to 100.",
"cite_spans": [],
"ref_spans": [
{
"start": 884,
"end": 892,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Word-Dependent N-Best",
"sec_num": null
},
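A minimal sketch of the per-state bookkeeping this implies (our own illustrative data structures, not the paper's code): each state keeps at most n scores keyed by the previous word only, and at each word ending the surviving (previous word, score) pairs are recorded for the final traceback before collapsing to a single theory:

```python
from typing import Dict, List, Tuple

# Each state keeps at most n theories, keyed by the previous word only
# (rather than by the whole preceding word sequence).
StateTheories = Dict[str, float]          # previous word -> forward probability

def add_theory(state: StateTheories, prev_word: str, prob: float, n: int = 6) -> None:
    """Sum probability into the theory for `prev_word`; keep only the n best."""
    state[prev_word] = state.get(prev_word, 0.0) + prob
    while len(state) > n:
        del state[min(state, key=state.get)]

def word_end(state: StateTheories, word: str, frame: int,
             traceback: List[Tuple[int, str, str, float]]) -> float:
    """Record a (frame, word, previous word, score) entry for every surviving theory,
    then collapse to the single best score, labelled with the word that just ended."""
    for prev_word, score in state.items():
        traceback.append((frame, word, prev_word, score))
    return max(state.values())            # single score sent on to following words
```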
{
"text": "We performed experiments to compare the behavior of the Figure 5 shows the cumulative distribution of the rank of the correct answer for the three algorithms. As can be seen, all three algorithms get the sentence correct on the first choice about 62% of the time. All three cumulative distributions increase substantially with more choices. However, we observe that the Word-Dependent algorithm yields accuracies quite close to that of the Exact Sentence-Dependent algorithm, while the Lattice N-Best is substantially worse. In particular, the sentence error rate at rank 100 (8%) is double that of the Word-Dependent algorithm (4%). Therefore, ff we can afford the computation of the Word-Dependent algorithm it is clearly preferred.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 64,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Comparison of N-Best Algorithms",
"sec_num": null
},
{
"text": "We also observe in Figure 5 that the Word-Dependent algorithm is actually better than the Sentence-Dependent algorithm for very high ranks. This is because the score of the correct word sequence fell outside the pruning beamwidth. However, in the Word-Dependent algorithm each hypothesis gets the benefit of the best theory two words back. Therefore the correct answer was preserved in the traceback. This is another advantage that both of the approximate algorithms have over the Sentence-Dependent algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Comparison of N-Best Algorithms",
"sec_num": null
},
{
"text": "In the next section we describe a technique that can be used to speed up all of these time-synchronous search algorithms by a large factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of N-Best Algorithms",
"sec_num": null
},
{
"text": "The time-synchronous beam search follows a large number of theories on the off chance that they will get better during the remainder of the sentence. Typically, we must keep over 1000 theories to guarantee finding the highest answer. In some sense the computation for all but one answer will have been wasted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Backward Search",
"sec_num": "3."
},
{
"text": "We need a way to speed up the beam search without causing search errors. We could prune out most of the choices if we only knew the correct answer ahead of time or if we could look ahead at the remainder of the sentence. Several papers have described fast match schemes that look ahead (incurring a delay) to determine which words are likely (e.g. [4] ). The basic idea is to perform some approximate match that can be used to eliminate most of the possible following words. However, since we cannot tell when words end in continuous speech, the predictions of the score for each word is quite approximate. In addition, even if a word matches well we cannot tell whether the remainder of the sentence will be consistent with that word without looking further ahead and incurring a longer delay.",
"cite_spans": [
{
"start": 348,
"end": 351,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Backward Search",
"sec_num": "3."
},
{
"text": "Let us consider the time-synchronous forward pass. The score at any given state and time at(s) is the probability of the input up to time t, summed over all of the paths that get to state s at t. When these scores are normalized they give the relative probability of paths ending at this state as opposed to paths ending at any other state. These forward pass probabilities are the ideal measure to predict which theories in a backward search are expected to score well. Figure 6 illustrates several paths from the beginning of an utterance to different states at time t, and several theories from the end of the utterance T backward to time t. From the Baum-Welch Forward-Backward Iraining algorithm we have",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 479,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Forward-Backward Search",
"sec_num": "3."
},
{
"text": "GT where 7t(s) is the probability of the data given all paths through state s, divided by the probability of the data for all paths, which is the probability that slate s is appropriate at time t. aT is derived from the forward pass. Of course if we have already gone through the whole utterance in the forward direction we already know the most likely sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7t(s)",
"sec_num": null
},
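For a conventional HMM with stored forward and backward passes, this quantity can be computed directly; the following is a small generic sketch (not the recognizer itself), assuming alpha and beta arrays of shape [T, S] with beta equal to one at the final frame:

```python
import numpy as np

def state_posteriors(alpha: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """gamma[t, s] = alpha[t, s] * beta[t, s] / P(data): the probability that state s
    is the appropriate state at time t.  alpha and beta have shape [T, S], and
    beta[T-1, :] is taken to be 1, so P(data) = sum_s alpha[T-1, s]."""
    p_data = alpha[-1].sum()
    return alpha * beta / p_data
```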
{
"text": "Now let us consider a practical Forward-Backward Search algorithm. First we perform a forward pass over the whole utterance using a simplified acoustics or language model. In each fran~ we save the highest forward probability and the probabilities of all words that have ending scores above the pruning beamwidth. Typically this includes about 20 words in each frame. Then we perform a search in the backward direction. This search uses the normal beam search within words. However, whenever a score is about to be transfered backwards through the language model into the end of a word we first check whether that word had an ending score for that frame in the forward pass. That is we ask, \"Was there a reasonable path from the beginning of the utterance to this time ending with this word?\" Again, referring to Figure 6 , the backward theory that is looking for word score forwardsib.",
"cite_spans": [],
"ref_spans": [
{
"start": 813,
"end": 821,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "7t(s)",
"sec_num": null
},
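A minimal sketch of this pruning test (the names and data layout are our assumptions; the real system operates on HMM state lattices): the forward pass stores, for each frame, the forward ending score of every surviving word, and a backward theory is extended into a word only if the combined normalized score clears the beam:

```python
from typing import Dict, List

def backward_extension_allowed(
    frame: int,
    word: str,
    backward_score: float,                        # score of the backward theory at this frame
    forward_word_ends: List[Dict[str, float]],    # per frame: word -> forward ending score
    alpha_T: List[float],                         # per frame: estimate of the whole-sentence score
    beam: float,                                  # relative pruning beamwidth, e.g. 1e-15
) -> bool:
    """Extend a backward theory into `word` at `frame` only if the forward pass saw
    that word ending there, and the combined normalized score clears the beam."""
    forward_score = forward_word_ends[frame].get(word)
    if forward_score is None:
        return False          # no reasonable forward path ended with this word at this time
    return forward_score * backward_score / alpha_T[frame] > beam
```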
{
"text": "~ backwards Figure 6 : Forward-Backward Search. Forward and backward scores for the same state and time are added to predict final score for each theory extension.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "7t(s)",
"sec_num": null
},
{
"text": "d cannot find any corresponding forward score, and so is aborted. When there is a score, as in the cases for words a,b,c, then we multiply the present backward score of the theory,/3t(s) by the forward pass score for this word; at(s), divided by the whole sentence score, aT. Only if this ratio is greater than the pruning beamwidth do we extend the theory backwards by this word. For example, although the backward theory looking for word c has a good score, the corresponding forward score c' is not good, and the product may be pruned out. The Forward-Backward search is only useful ff the forward pass is faster than the backward would have been. This can be true if we use a different grammar, or a less expensive acoustic model. If the forward acoustic models or language model is different than in the backward pass, then we must reestimate txa, before using it in the algorithm above. For simplicity we estimate txT at each time t as at(t) = max at(s) maxB (s) the product of the maximum state scores in each direction. (Note that since the two maxima are not necessarily on the same state it would be more accurate to use",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7t(s)",
"sec_num": null
},
{
"text": "forcing the two states to be the same. However, since most of the active states are internal to words, this would require a large computation and also require that we had stored all of the state scores in the forward direction for every time.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "at(t) = max a~(s)#t(s)",
"sec_num": null
},
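Under the same illustrative assumptions as the sketch above, the per-frame estimate of α_T used in the pruning ratio can be computed from the stored per-frame maxima (the simpler of the two estimates given above):

```python
from typing import List

def estimate_alpha_T(max_forward: List[float], max_backward: List[float]) -> List[float]:
    """alpha_T(t) ~= [max_s alpha_t(s)] * [max_s beta_t(s)]: the product of the maximum
    state scores in each direction (the two maxima need not lie on the same state)."""
    return [a * b for a, b in zip(max_forward, max_backward)]
```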
{
"text": "We observe that the average number of active phoneme arcs in the backward direction is reduced by a factor of 40 (e.g. from. 800 to 20) -with a corresonding reduction in computation and with no increase in search errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "at(t) = max a~(s)#t(s)",
"sec_num": null
},
{
"text": "As stated above, this algorithm is only useful when the forward pass can be computed differently (much more quicldy) than the backward (real) search. For example, we could use a null grammar in the forward direction and a more complex grammar in the backward search. We have used this extensively in our past work with very large RTN grammars or high-order statistical grammars [7] . When no grammar is used in the forward pass we can compact the entire dictionary into a phonetic tree, thereby greatly reducing the computation for large dictionaries.",
"cite_spans": [
{
"start": 378,
"end": 381,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uses of Forward-Backward Search",
"sec_num": null
},
{
"text": "A variation on the above use is to use a simpler acoustic model in the forward direction. For example restricting the model to triphones within words, using simpler HMM topologies, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uses of Forward-Backward Search",
"sec_num": null
},
{
"text": "A second use is for real-time computation of the N Best sentences [1]. First we perform a normal 1-Best search forward. The best answer can be processed by NL immediately (on another processor) while we perform the N-Best search backwards. We find that the backward N-Best search is sped up by a factor of 40 when using the forward pass scores for pruning. Thus the delay until we have the remainder of the answers is usually quite short. If the delay is less than the time required to process the first answer through NL, then we have lost no time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uses of Forward-Backward Search",
"sec_num": null
},
{
"text": "Finally, we can use the Forward-Backward Search to greatly reduce the time needed for experiments. Experiments involving expensive decoding conditions can be reduced from days to hours. For example all of the experirnents with the Word-Dependent and Lattice N-Best algorithms were performed using the Forward-Backward Search. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uses of Forward-Backward Search",
"sec_num": null
},
{
"text": "We have considered several approximations to the exact Sentence-Dependent N-Best algorithm, and evaluated them thoroughly. We show that an approximation that only separates theories when the previous words are different allows a significant reduction in computation, makes the algorithm scalable to long sentences and less susceptable to pruning errors, and does not increase the search errors measurably. In contrast, the Lattice N-Best algorithm, which is still less expensive, appears to miss twice as many sentences within the N-Best choices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
},
{
"text": "We have introduced a new two-pass search strategy called the Forward-Backward Search, which is generally applicable to a wide range of problems. This strategy increases the speed of the recognition search by a factor of 40 with no additional pruning errors observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "This work was supported by the Defense Advanced Research Projects Agency and monitored by the Office of Naval Research under Contract No. N00014-89-C-0008.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Toward a Real-Time Commercial System Using Commercial Hardware",
"authors": [
{
"first": "S",
"middle": [],
"last": "Austin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Placeway",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vandergrift",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austin, S., Peterson, P., Placeway, P., Schwartz, R, and Vandergrift, J., \"Toward a Real-Time Commercial System Using Commercial Hardware\". Proceedings of the DARPA Speech and Natural Language Workshop Hidden Valley, June 1990 (1990).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The N-Best Algorithm: An Efficient Procedure for Finding Top N Sentence Hypotheses",
"authors": [
{
"first": "Y-L",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop Cape Cod",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chow, Y-L. and Schwartz, R.M., \"The N-Best Algo- rithm: An Efficient Procedure for Finding Top N Sen- tence Hypotheses\". Proceedings of the DARPA Speech and Natural Language Workshop Cape Cod, October 1989 (1989).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Simple Statistical Class Grammar for Measuring Speech Recognition Performance",
"authors": [
{
"first": "A",
"middle": [],
"last": "Derr",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop Cape Cod",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derr, A., and Schwartz, R.M., \"A Simple Statisti- cal Class Grammar for Measuring Speech Recognition Performance\". Proceedings of the DARPA Speech and Natural Language Workshop Cape Cod, October 1989 (1989).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Constructing Groups of Acoustically Confusable Words",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Bahl",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "De Souza",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Gopalakrishnan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kanevsky",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nahamoo",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the ICASSP 90",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahl, L.R., de Souza, P., Gopalakrishnan, P.S., Kanevsky, D., and Nahamoo, D. \"Constructing Groups of Acoustically Confusable Words\". Proceedings of the ICASSP 90, April, 1990.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Very Large Vocabulary Isolated Utterance Recognition: A Comparison Between One Pass and Two Pass Strategies",
"authors": [
{
"first": "L",
"middle": [],
"last": "Fissore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Micca",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the ICASSP 88",
"volume": "",
"issue": "",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fissore, L., Micca, G., and Pieraccini, R., \"Very Large Vocabulary Isolated Utterance Recognition: A Com- parison Between One Pass and Two Pass Strategies\". Proceedings of the ICASSP 88, pp. 267-270, April, 1988.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Integrating Natural Language Constraints into HMM-based Speech Recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the ICASSP 90",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murveit, H., \"Integrating Natural Language Constraints into HMM-based Speech Recognition\". Proceedings of the ICASSP 90, April, 1990.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical Language Modeling Using a Small Corpus from an Application Domain",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Rohlicek",
"suffix": ""
},
{
"first": "Y-L",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "Roucos",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop Cambridge",
"volume": "",
"issue": "",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohlicek, J.A., Chow, Y-L., and Roucos, S., \"Statis- tical Language Modeling Using a Small Corpus from an Application Domain\". Proceedings of the DARPA Speech and Natural Language Workshop Cambridge, October 1987 (1987). Also in Proceedings of the ICASSP 88, pp. 267-270, April, 1988.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Context-Dependent Modeling for Acoustic-Phonetic Recognition of Continuous Speech",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kimball",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roucos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Krasner",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 1985,
"venue": "Proceedings of the ICASSP 85",
"volume": "",
"issue": "",
"pages": "1205--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, R.M., Chow, Y., Kimball, O., Roucos, S., Krasner, M., and Makhoul, J. \"Context-Dependent Modeling for Acoustic-Phonetic Recognition of Con- tinuous Speech\". Proceedings of the ICASSP 85, pp. 1205-1208, March, 1985.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentence-Hypotheses Generation in a Continuous-Speech Recognition System",
"authors": [
{
"first": "V",
"middle": [],
"last": "Steinbiss",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc. of the European Conf. on Speech Communciation and Technology",
"volume": "2",
"issue": "",
"pages": "51--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Steinbiss (1989) \"Sentence-Hypotheses Generation in a Continuous-Speech Recognition System,\" Proc. of the European Conf. on Speech Communciation and Technology, Paris, Sept. 1989, Vol. 2, pp. 51-54",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating Multiple Solutions from Connected Word DP Recognition Algorithms",
"authors": [
{
"first": "S",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 1984,
"venue": "Proc. of the Institute of Acoustics",
"volume": "6",
"issue": "",
"pages": "351--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Young, S. (1984) \"Generating Multiple Solutions from Connected Word DP Recognition Algorithms\". Proc. of the Institute of Acoustics, 1984, Vol. 6 Part 4, pp. 351-354",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "i",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Alternate paths in the Lattice algorithm. The best path for words 2-3 overrides the best path for words 1-3.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Alternate paths in the Word-Dependent algorithm. Best path for words 1-3 is preserved along with path for words 2-3.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Comparison of the Rank of the Correct Sentence for the Sentence-Dependent, Word-Dependent, and Latlice N-Best Algorithms.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": "<table><tr><td>Full NLP</td></tr><tr><td>Semantics, etc.</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "three N-Best algorithms. In all three cases we used the Class Grammar[3], a first-order statistical grammar based on 100 word classes. All words within a class are assumed equally likely. The test set perplexity is approximately 100. The test set used was the June '88 speaker-dependent test set of 300 sentences. To enable direct comparison with previous results we did not use models of triphones across word boundaries, and the models were not smoothed. We expect all three algorithms to improve significantly when the latest modeling methods are used.",
"html": null,
"content": "<table/>"
}
}
}
}