|
{ |
|
"paper_id": "H92-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:28:58.277361Z" |
|
}, |
|
"title": "IMPROVEMENTS IN STOCHASTIC LANGUAGE MODELING", |
|
"authors": [ |
|
{ |
|
"first": "Ronald", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xuedong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe two attempt to improve our stochastic language models. In the first, we identify a systematic overestimation in the traditional backoff model, and use statisticalreasoning to correct it. Our modification results in up to 6% reduction in the perplexity of various tasks. Although the improvement is modest, it is achieved with hardly any increasein the complexity of the model. Both analysis and empirical data suggestthat the moditieation is most suitable when training data is sparse. In the second attempt, we propose a new type of adaptive language model. Existing adaptive models use a dynamic eacbe, based on the history of the document seen up to that point. But another source of information in the history, within-document word sequence correlations, has not yet been tapped. We describe a model that attempts to capture this information, using a framework where one word sequence laJggers another, eansing its estimated probability to be raised. We discuss various issues in the design of such a model, and describe our first attempt at building one. Our preliminary results include a perplexity reduction of between 10% and 32%, depending on the test set.", |
|
"pdf_parse": { |
|
"paper_id": "H92-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe two attempt to improve our stochastic language models. In the first, we identify a systematic overestimation in the traditional backoff model, and use statisticalreasoning to correct it. Our modification results in up to 6% reduction in the perplexity of various tasks. Although the improvement is modest, it is achieved with hardly any increasein the complexity of the model. Both analysis and empirical data suggestthat the moditieation is most suitable when training data is sparse. In the second attempt, we propose a new type of adaptive language model. Existing adaptive models use a dynamic eacbe, based on the history of the document seen up to that point. But another source of information in the history, within-document word sequence correlations, has not yet been tapped. We describe a model that attempts to capture this information, using a framework where one word sequence laJggers another, eansing its estimated probability to be raised. We discuss various issues in the design of such a model, and describe our first attempt at building one. Our preliminary results include a perplexity reduction of between 10% and 32%, depending on the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Linguistic constraints are an important factor in human comprehension of speech. Their effect on automatic speech recognition is similar, in that they provide both a pruning method and a means of ordering likely candidates. As vocabularies for speech recognition systems increase in size, more accurate modeling of linguistic constraints becomes essential.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Two fundamental issues in language modeling are smoothing and adaptation. Smoothing allows a model to assign reasonable probabilities to events that have never been observed before. Adaptation takes advantage of recently gained knowledge --the text seen so far --to adjust the model's expectations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In what follows, we discuss two attempts at improving our current stochastic language modeling techniques. In the first, we try to improve smoothing by correcting a deficiency in a successful and well known smoothing method, the backoff model. In the second, we propose a novel kind of adaptation, one that is based on correlation among word sequences occurring in the same document. The backoff language model is a compact yet powerful way of modeling the dependence of the current word on its immediate history. An important factor in the backoff model is its behavior on the backed-off cases, namely when a given n-gram w~ is found not to have occurred in the training data. In these cases, the model assumes that the probability is proportional to the estimate provided by the n-1-gram, Pn-l(Wn [W~-1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This last assumption is reasonable most of the time, since no other sources of information are available. But for frequent n-l-grams, there may exist sufficient statistical evidence to suggest that the backed-off probabilities should in fact be much lower. This phenomenon occurs at any value of n, but is easiest to demonstrate for the simple case of n = 2, i.e. a bigram. Consider the following fictitious but typical example: N = 1,000,000 C(\"ON\") = 10,000 CCAT') = 10,000 C(\"CALL\") = I00 C(\"ON\",\"AT\") = 0 C(\"ON\",\"CALL\") = 0 N is the total number of words in the training set, and C(wz, w i) is the number of (wi, wj) bigrams occurring in that set. The backoff model computes: P(\"Kr') = P(\"CALL\") = i 10,000 P(\"Nr'r'ON\") = ~(\"ON\") \u2022 P(\"AT\") = a(\"ON\"). ]-~ P(\"CALL\"I\"ON\") = ~(\"ON\") P(\"CALL\") = c~(\"ON\")-1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Thus, according to this model, P(\"AT\"I\"ON\") >> P(\"CALL\"[\"ON\"). But this is clearly incorrect. In the case of \"CAIJ?', the expected number of (\"ON\",\"CALL\") bigrams, assuming independence between \"ON\" and \"CALL\", is 1, so an actual count of 0 does not give much information, and may be ignored. However, in the case of \"AT\", the expected chance count of (\"ON\",\"AT\") is 100, so an actual count of 0 means that the real probability of P(\"AT\"I\"ON\") is in fact much lower than chance. The backoff model does not capture this information, and thus grossly overestimates P(\"AT\"I \"ON\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "10000", |
|
"sec_num": null |
|
}, |
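The expected-chance-count reasoning above is just the independence calculation E[C(x, y)] = C(x) · C(y) / N. A minimal check in Python, using the counts from the fictitious example above:

```python
# Expected bigram counts under independence: E[C(x, y)] = C(x) * C(y) / N
N = 1_000_000
C = {"ON": 10_000, "AT": 10_000, "CALL": 100}

expected_on_at = C["ON"] * C["AT"] / N      # 100.0 -> an observed count of 0 is strong evidence
expected_on_call = C["ON"] * C["CALL"] / N  # 1.0   -> an observed count of 0 tells us little
print(expected_on_at, expected_on_call)
```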
|
{ |
|
"text": "This deficiency of the backoff model has been pointed out before [2, p.457] , but, to the best of our knowledge, has never been corrected. We suspect the reasons are twofold. First, it only occurs during backed-off cases. For a well trained bigram or trigram, this happens in only a small fraction of the time. Second, overestimation degrades perplexity only mildly and indirectly, by affecting a slight underestimation of all the other probabilities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 75, |
|
"text": "[2, p.457]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "10000", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We therefore did not expect this phenomenon to have a strong impact on perplexity. Nevertheless, we wanted to correct the problem and to measure its effect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "10000", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let C(~1) = 0. Given a global confidence level Q, to be determined empirically, we calculate a confidence interval in which the true value of P(w~lw~ -1) should lie, using the constraint:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "[1 --P(wnmw~-l)]c(~ -') > Q", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The confidence interval is therefore [0 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
|
{ |
|
"text": "We then provide another parameter, P (0 < P < 1), and establish a ceiling, or a cap, at a point P within the confidence interval:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "CAPe,e(C(w~-I))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= P. (1 -Q1/C(~ -~))", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "We now require that the estimated P(wnlw~ -1) satisfy:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(wn I w~-1) _< CAPQ,p (C(w? -1 ))", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The backoff case of the standard model is therefore modified to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Solution: Confidence Interval Capping", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "This capping off of the estimates requires renormalization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "e(w.lw~ -1) = min [ o~(w~-l). P,~_l(w,,Iw~-l), CAPQ,p(C(w~-X)) I5)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "But renormalization would increase the a's, which would in turn cause some backed-off probabilities to exceed the cap. An iterative reestimation of the cz's is therefore required. The process was found to converge rapidly in all cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "e(w.lw~ -1) = min [ o~(w~-l). P,~_l(w,,Iw~-l), CAPQ,p(C(w~-X)) I5)", |
|
"sec_num": null |
|
}, |
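The capping and renormalization described above (Eqs. 2-5) can be sketched for a bigram model roughly as follows. This is a minimal illustration, not the authors' implementation: the flat discount d, the parameter values, and the data structures (`bigram_counts`, `unigram_probs`) are assumptions made for the sketch, and a real Katz model would use count-dependent discounts.

```python
def cap(Q, P, hist_count):
    """Ceiling on backed-off probabilities (Eq. 3): P * (1 - Q^(1/C(h)))."""
    return P * (1.0 - Q ** (1.0 / hist_count))

def capped_backoff_bigram(bigram_counts, unigram_probs, d=0.4, Q=0.5, P=0.9, iters=20):
    """Bigram backoff with confidence-interval capping (Eqs. 4-5).

    bigram_counts: {h: {w: count}}; unigram_probs: {w: prob}, summing to 1.
    d, Q, P are illustrative values only.
    """
    hist_counts = {h: sum(ws.values()) for h, ws in bigram_counts.items()}
    alpha = {h: 1.0 for h in bigram_counts}

    for _ in range(iters):  # iterative reestimation of the backoff weights
        for h, ws in bigram_counts.items():
            seen_mass = sum((1 - d) * c / hist_counts[h] for c in ws.values())
            unseen_mass = sum(
                min(alpha[h] * unigram_probs[w], cap(Q, P, hist_counts[h]))
                for w in unigram_probs if w not in ws
            )
            if unseen_mass > 0.0:
                # Rescale alpha so the capped backoff mass fills exactly 1 - seen_mass
                alpha[h] *= (1.0 - seen_mass) / unseen_mass

    def prob(w, h):
        if w in bigram_counts.get(h, {}):
            return (1 - d) * bigram_counts[h][w] / hist_counts[h]
        return min(alpha[h] * unigram_probs[w], cap(Q, P, hist_counts[h]))

    return prob
```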
|
{ |
|
"text": "Note that, although some computation is required to determine the new weights, once the model has been computed, it is no more complicated neither significantly more time consuming than the original one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "e(w.lw~ -1) = min [ o~(w~-l). P,~_l(w,,Iw~-l), CAPQ,p(C(w~-X)) I5)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The bigrarn perplexity reduction for various tasks is shown in Although the reduction is modest, as expected, it should be remembered that it is achieved with hardly any increase in the complexity of the model. As can be predicted from the statistical analysis, when the vocabulary is larger, the backoff rate is greater, and the improvement in perplexity can be expected to be greater too.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "Several adaptive language models have been proposed recently [3, 4, 5, 6] , which use caching of the partially dictated document, and interpolate a dynamic component based on the cache with the static component. These models have been successful in reducing the perplexity of the text considerably, and [5] also reports a positive effect on the word recognition rate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 64, |
|
"text": "[3,", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 67, |
|
"text": "4,", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 70, |
|
"text": "5,", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 71, |
|
"end": 73, |
|
"text": "6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 306, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "All of these models make direct use of the words in the history of the document. They take advantage of the fact that ygords, and combinations of words, once occurred in a given e document, have a higher likelihood of occurring in it again.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "But there is another source of information in the history that has not yet been tapped: within-document correlation between words or word sequences. Consider the sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\"The district attorney's office launched a comprehensive investigation into loans made by several well connected banks.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "Based on this sentence alone, a cache-based model will not be able to anticipate any of the constituent words. But a human reader might use \"DISTRICT ATTORNEY\" and/or \"LAUNCHED\" to anticipate \"INVESTIGATION\", and \"LOANS\" to anticipate \"BANKS\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "In what follows, we describe a model that attempts to capture this type of information in a systematic way, using correlation between word sequences derived from a large corpus of text. In this model, if a word sequence A is positively and significantly correlated with another word sequence B, then (A ---~ B) is considered a \"trigger pair\", with A being the trigger and B the triggered sequence. When A occurs in the document, it triggers B, causing its probability estimate to be increased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "In order for such a model to be effective, the following issues have to be addressed:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "1. How to filter all possible trigger pairs. Even if we restrict our attention to pairs where A and B are both single words, the number of such pairs is too large. Let V be the size of the vocabulary. Note that, unlike in a bigram model, where the number of different consecutive word pairs is much less than V 2, the number of word pairs where both words occurred in the same document is a significant fraction of V 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "2. How to combine evidence from multiple triggers. This is a special case of the general problem of combining evidence from several sources. We discuss several heuristics, and a plan for a more disciplined approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "3. How to combine the triggering model with the static model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "We will discuss all 3 problems and our proposed solutions to them. This is ongoing research, and not all of our ideas have been tested yet. A solution to (1) will be discussed in some detail. When combined with simple minded solutions to (2) and 3, it resulted in a perplexity reduction of between 10% and 32%, depending on the test set. We are currently working on implementing and testing some of the other solutions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 241, |
|
"text": "(2)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Analysis", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "Let \"history\" denote the part of the text already seen by the system. Let A, B be any two word sequences. Then the events B and Bo are defined as follows: B : B occurred in the history. Bo : B occurs next in the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering the Trigger-Pairs", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "Let P(Bo) be the (unconditional) probability of Bo, and let P(Bo IA) be the conditional probability assigned to Bo by the trigger pair (A ---~ B). A natural measure of the information provided by A on Bo is the mutual information between the two:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering the Trigger-Pairs", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "I(A :Bo) = log n,n~lA) (6) (o)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering the Trigger-Pairs", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "Note that, although mutual information is symmetric with regard to its arguments, it is generally not true that I(A : Bo) = l(g :At).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering the Trigger-Pairs", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "Should mutual information be our figure of merit in selecting the most promising trigger pairs? I(A : Bo) measures the average number of bits we can save by considering A in pre-dictingBo. But this savings will materialize onlyifBo is true, namely if we indeed encounter the word sequence B next in the document. Our best estimate of this, at the time filtering is carried out, is P(Bo IA). We therefore define the expected utility of the trigger pair (A ~ B):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Filtering the Trigger-Pairs", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "and suggest it as a criterion for selecting trigger pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "U(A ---~ B) d~f I(A : Bo)P(Bo IA) (7)", |
|
"sec_num": null |
|
}, |
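A sketch of this filtering criterion (Eqs. 6-7). The log base (bits) and the threshold value are illustrative assumptions, and the probabilities would come from document-level statistics such as Eq. 8 below:

```python
import math

def mutual_information(p_bo_given_a, p_bo):
    """I(A : Bo) = log [ P(Bo | A) / P(Bo) ]   (Eq. 6), in bits."""
    return math.log2(p_bo_given_a / p_bo)

def expected_utility(p_bo_given_a, p_bo):
    """U(A -> B) = I(A : Bo) * P(Bo | A)   (Eq. 7)."""
    return mutual_information(p_bo_given_a, p_bo) * p_bo_given_a

def select_trigger_pairs(candidates, threshold=1e-6):
    """Keep pairs whose expected utility exceeds a threshold.

    candidates: iterable of (A, B, P(Bo|A), P(Bo)) tuples; threshold is illustrative.
    """
    return [(a, b) for a, b, p_ba, p_b in candidates
            if expected_utility(p_ba, p_b) > threshold]
```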
|
{ |
|
"text": "The problem of combining evidence from multiple sources is a general, largely unsolved problem in modeling. The ideal solution is to model explicitly each combination of values of the predictor variables, but this leads to an exponential growth in the number of parameters, which renders the model untrainable. At the other extreme, we can assume linearity and simply sum the contribution from the different sources. This may be a reasonable approximation in some models, but it is clearly inadequate in our case: \"LOAN\" is not 3 times more likely after 3 occurrences of \"BANK\" than it is after only 1 occurrence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiply-Triggered Sequences", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "Increase the reliability of the prediction in the face of unreliable history. Since we usually rely on the speech recognizer to provide us with the history, each word has a nonnegligible chance of being erroneous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Disambiguate multiple-sense words. Compare: P(\"LOAN\"o r'BANK\") P(\"LOAN\"o I\"B ANK\",\"FINANCIAL\") P(\"LOAN\"o I\"BANK\",\"RIVER\")", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Intersect several broad semantic domains, and assign a higher weight to the intersected region.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Compare: P(\"PETE-ROSE\"o I\"BASEBALL\") P(\"PETE-ROSE\"o r'GAMBLING\") P(\"PETE-ROSE\". r'BASEBAI~I:',\"GAMBLING\")", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We plan to model multiply triggered sequences in a way that will capture at least some of the above phenomena. This requires statistical analysis of the interaction among the triggers, especially as it relates to the triggered sequence. We have just begun this analysis. One possibility, suggested by Kai-Fu Lee, is to consider the mutual information between the triggers. Triggers with high mutual information provide little additional evidence, and thus should not be added up.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the system reported below, we considered several simple heuristics: averaging the effect of the different triggers, using the most informative trigger only, and a quickly saturating sum. In the limited context of our current model we found no significant difference between the three.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple triggers have several important functions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A straightforward way to integrate the trigger model with a static model is to interpolate them linearly, using independent data to determine the weights. A somewhat fancier variant could use weights that depend on the length of the history. We expect the weight of the adaptive component to increase as the history grows. Using linear interpolation, the trigger model can be viewed as an adaptive unigram. This is the solution we used in the system reported below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
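As a sketch, the interpolation just described is a single weighted sum. The weight `lam` shown here is an arbitrary placeholder that would in practice be estimated on independent (held-out) data:

```python
def interpolated_prob(w, history, static_prob, trigger_prob, lam=0.05):
    """P(w | h) = lam * P_trigger(w | history) + (1 - lam) * P_static(w | h).

    static_prob and trigger_prob are callables supplied by the two components;
    lam is illustrative only.
    """
    return lam * trigger_prob(w, history) + (1.0 - lam) * static_prob(w, history)
```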
|
{ |
|
"text": "However, linear interpolation is not without its faults. Existing static models, such as N-grams, are excellent at using short-range information. For our adaptive component to be useful, it should complement the prediction power of the static component. But linear interpolation means that the adaptive component is blind to short-term constraints, yet the latter strongly affect the behavior of the static model. For example, in processing the sentence \"The district attorney's office launched an investigation into loans made by several well connected banks.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "\"DISTRICT-ATtORNEY\" may trigger \"INVESTIGA-TION\", causing its unigram probability to be raised to its level in documents containing the words \"DISTRICT-ATrORNEY\". But when \"INVESTIGATION\" actually occurs, it is preceded by \"LAUNCHED AN\", which causes a trigram model to predict it with an even higher probability, rendering the adaptive contribution useless.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "Thus a better method of combining the two components is to consider the information already provided by the static model. This can be done in two different ways:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "\u2022 By using a POS-based trigger model, in the spirit of [4] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 58, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "\u2022 By dynamically considering the probabilities produced by the static component, and modifying only those for which the adaptive component provided useful information. We are now experimenting with this method. Since it requires dynamic renormalization, it is only suitable for recognizers which compute the entire array of probabilities for every word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integration with the Static Model", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "We used most of the WSJ LM training corpus, 42M words in all, to train a conventional backoff trigram model[l] for the DARPA 20,000 closed-vocabulary task. We used the same data to derive the triggering list, as described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Experiment", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "The conditional probability provided by the trigger pair (A B) was estimated as: P(B, IA) = Count of B in documents containing A number of words in documents containing A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Experiment", |
|
"sec_num": "3.5." |
|
}, |
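Read as counts, Eq. 8 is a ratio over the documents that contain the trigger A. A toy sketch, assuming the corpus is available as a list of tokenized documents (names and documents are illustrative only):

```python
def trigger_conditional_prob(docs, a, b):
    """P(Bo | A) of Eq. 8: occurrences of B in documents containing A,
    divided by the total number of words in those documents."""
    docs_with_a = [doc for doc in docs if a in doc]
    count_b = sum(doc.count(b) for doc in docs_with_a)
    total_words = sum(len(doc) for doc in docs_with_a)
    return count_b / total_words if total_words else 0.0

# Toy usage:
docs = [["the", "bank", "issued", "a", "loan"], ["the", "river", "bank"]]
print(trigger_conditional_prob(docs, "bank", "loan"))  # 0.125
```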
|
{ |
|
"text": "For the unconditional probability P(Bo) we used the static unigram probability of B. We have since switched to using the average probability with which occurrences of B in the training data are predicted by the trigram model, but the results reported here do not reflect this change.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We first created an index of all but the 100 most frequent words, keeping for each word a detailed description of its occurrences. We included paragraph, sentence, and word location information, to allow consideration of different distance measures and different context levels. Excluding the top 100 words reduced the storage requirements by more than 50%. We assumed that frequently used words provide little contextual information. Using the index, we systematically searched for ordered word pairs whose expected utility, as given by Eq. 7, exceeded a given threshold. Of the 400 million possible pairs, we selected some 620,000.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For combining multiple triggering of the same word, we used MAX or AVERAGE or SUM saturating at 2*MAX, as described in section 3.3. We found no significant difference between these methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8)", |
|
"sec_num": null |
|
}, |
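The three combination heuristics mentioned here can be sketched as follows. `contribs` stands for the per-trigger probability contributions to the same triggered word (an assumed intermediate quantity, not a name from the paper):

```python
def combine_max(contribs):
    """Use only the most informative trigger."""
    return max(contribs)

def combine_average(contribs):
    """Average the effect of the different triggers."""
    return sum(contribs) / len(contribs)

def combine_saturating_sum(contribs):
    """Sum the contributions, saturating at 2 * MAX."""
    return min(sum(contribs), 2.0 * max(contribs))
```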
|
{ |
|
"text": "We combined the trigger model with the static trigram using linear interpolation. The automatically derived weights varied from task to task, but were usually in the range of 0.02 to 0.06 for the trigger component. We also tried to use weights that depend on the length of the history, but were surprised to find no improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We tested our combined model on a large collection of test sets, using perplexity reduction as our measure. A selection is given in table 2. Set WSJ-dev is the CSR development test set (70K words). Set BC-3 is the entire Brown Corpus, where the history was flushed arbitrarily every 3 sentences. Set BC-20 is the same as BC-3, but with history-flushing every 20 sentences. Set RM is the 39K words used in training the Resource Management system, with no history flushing. The last result in table 2 was derived by training the trigram on only 1.2M words of WSJ data, and testing on the WSJ development set. This was done to facilitate a more equitable comparison with the results reported in [5] Our biggest surprise was that \"self triggering\" (trigger pairs of the form (A ~ A)) was found to play a larger role than would be indicated by our utility measure. Correlations of this type are an important special case, and are already captured by the conventional cache based models. We decided to adapt our model in the face of reality, and maintained a separate self-triggering model that was added as a third interpolation component (the results in table 2 already reflect this change). This independent component, although consisting of far fewer trigger pairs, was responsible for as much as half of the overall perplexity reduction. On tasks with a vastly different unigram behavior, such as the Resource Management data set, the selftriggering component accounted for most of the improvement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 692, |
|
"end": 695, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3.6." |
|
}, |
|
{ |
|
"text": "Why do self-triggering pairs have a higher impact than anticipated? One reason could be an inadequacy in our utility measure. Another could spring from the difference between training and testing. If the test set were statistically identical to the training set, the utility of every trigger pair would be exactly as predicted by our expected utility measure. Since in reality the training and testing sets differ, the actual utility is lower than predicted. All trigger pairs suffer a degradation, except for the self-triggering ones. The latter hold their own because self correlations are robust and are better maintained across different corpora. This explains why the self-triggering component is most dominant when the statistical difference between the training and testing data is greatest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3.6." |
|
}, |
|
{ |
|
"text": "We presented two attempts to improve our stochastic language modeling. In the first, we identified a deficiency in the conventional backoff language model, and used statistical reasoning to correct it. Our modified model is about as simple as the original one, but gives a slightly lower perplexity on various tasks. Our analysis suggests that the modification is most suitable when training data is sparse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SUMMARY AND CONCLUSIONS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "In our second attempt, we extended the notion of adaptation to incorporate within-document word sequence correlation, using the framework of a trigger pair. We discussed the issues involvedin constructing such a model, and reported promising improvements in perplexity. We have only begun to explore the potential of trigger-based adaptive models. The results reported here are preliminary. We believe we can improve our performance by implementing many of the ideas suggested in sections 3.2, 3.3 and 3.4 above. Work is already under way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SUMMARY AND CONCLUSIONS", |
|
"sec_num": "4." |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are grateful to Doug Paul for providing us with the preprocessed CSR language training data in a timely manner; to Dan Julin for much help in systems issues; to Kai-Fu Lee for helpful discussions; to Fil Alleva for many helpful interactions; and to Raj Reddy for support and encouragement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ACKNOWLEDGEMENTS", |
|
"sec_num": "5." |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Katz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Speech, SignaI Processing", |
|
"volume": "35", |
|
"issue": "", |
|
"pages": "400--401", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katz, S. M., \"Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer,\" IEEE Trans.Acoust., Speech, SignaI Processing, voL ASSP-35, pp. 400-401, March 1987.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Self-Organized Language Modeling for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Readings in Speech Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, F., \"Self-Organized Language Modeling for Speech Recognition,\" in Readings in Speech Recognition, Alex Waibel and Kai-Fu Lee (Eds.), Morgan Kaufmann, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Probabilistic Models of Short and Long Distance Word Dependencies in Running Text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kupiec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the Speech and Natural LanguageDARPA Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "290--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kupiec, J., \"Probabilistic Models of Short and Long Distance Word Dependencies in Running Text,\" Proceedings of the Speech and Natural LanguageDARPA Workshop, pp.290--295, Feb. 1989.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Cache-BasedNatural Language Model for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "IEEE Trans. PatternAnalysis and Machine Intelligence", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "570--583", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuhn, R., andDe Mori, R., \"A Cache-BasedNatural Language Model for Speech Recognition,\" IEEE Trans. PatternAnalysis and Machine Intelligence, vol. PAMI-12, pp. 570-583, June 1990.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Dynamic Language Model for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Medaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Strauss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the Speech and Natural Language DARPA Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "293--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, E, Medaldo, B., Roukos, S., and Strauss, M., \"A Dynamic Language Model for Speech Recognition,\" Proceed- ings of the Speech and Natural Language DARPA Workshop, pp.293-295, Feb. 1991.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Statistical LanguageModeUing Using a Cache Memory", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Essen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Andney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the First Quantitative Linguistics Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Essen, U., andNey, H., \"Statistical LanguageModeUing Using a Cache Memory,\" Proceedings of the First Quantitative Lin- guistics Conference, University of Trier, Germany. September 1991.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "The Problem The backoff n-gram language model[l] estimates the probn--1 ability of w,, given the immediate past history w~ = (wl .... w~-0. It is defined recursively as:Pn(w\"lw~-t) = / (1 -d)C(w~) / C(~1-1) if C(w~) > 0 o~(C(w~-l)) \u2022 en_l(wnlw~ -1) if C(w~) = 0 k (1)where d, the discount ratio, is a function of C(w~), and the a's are the backoff weights, calculated to satisfy the sum-to-1 probability constraints.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "BC-48K is the brown corpus with the unabridged", |
|
"num": null, |
|
"content": "<table><tr><td>test set</td><td colspan=\"2\">backoff rate PP reduction</td></tr><tr><td>BC-48K</td><td>30%</td><td>6.3%</td></tr><tr><td>BC-5K</td><td>15%</td><td>2.5%</td></tr><tr><td>ATIS</td><td>5%</td><td>1.7%</td></tr><tr><td>WSJ-5K</td><td>2%</td><td>0.8%</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Perplexity reduction by the trigger-based adaptive model for several test sets", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |