|
{ |
|
"paper_id": "H94-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:29:32.803623Z" |
|
}, |
|
"title": "A HYBRID APPROACH TO ADAPTIVE STATISTICAL LANGUAGE MODELING", |
|
"authors": [ |
|
{ |
|
"first": "Ronald", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "12513", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We desert'be our latest attempt at adaptive language modeling. At the heart of our approach is a Maximum Entropy (ME) model which inc.orlxnates many knowledge sources in a consistent manner. The other components are a selective unigram cache, a conditional bigram cache, and a conventionalstatic trigram. We describe the knowledge sources used to build such a model with ARPA's official WSJ corpus, and report on perplexity and word error rate results obtained with it. Then, three different adaptation paradigms are discussed, and an additional experiment, based on AP wire data, is used to compare them.", |
|
"pdf_parse": { |
|
"paper_id": "H94-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We desert'be our latest attempt at adaptive language modeling. At the heart of our approach is a Maximum Entropy (ME) model which inc.orlxnates many knowledge sources in a consistent manner. The other components are a selective unigram cache, a conditional bigram cache, and a conventionalstatic trigram. We describe the knowledge sources used to build such a model with ARPA's official WSJ corpus, and report on perplexity and word error rate results obtained with it. Then, three different adaptation paradigms are discussed, and an additional experiment, based on AP wire data, is used to compare them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Using several different probability estimates to arrive at one combined estimate is a general problem that arises in many tasks. The Maximum Entropy (ME) principle has recently been demonstrated as a powerful tool for combining statistical estimates from diverse sources [l, 2, 3] . The ME principle ( [4, 5] ) proposes the following:", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 280, |
|
"text": "[l, 2, 3]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 305, |
|
"text": "[4,", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 308, |
|
"text": "5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "1. Reformulate the different estimates as constraints on the expectation of various functions, to be satisfied by the target (combined) estimate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "2. Among all probability distributions that satisfy these constraints, choose the one that has the highest entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "More specifically, for estimating a probability function P(x), each constraint i is associated with a constraintfunctionfi(x) and a desired expectation ci. The constraint is then written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "def E Eefi = P(x)fi(x) = ci.", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "X Given consistent constraints, a unique ME solutions is guaranteed to exist, and to be of the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(x) = II mf'\u00b0\u00b0,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "i where the pi's are some unknown constants, to be found. Probability functions of the form (2) are called log-linear, and the family of functions defined by holding thefi's fixed and varying the pi's is called an exponential family.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "TO search the family defined by (2) for the pi's that will make P(x) satisfy all the constraints, an iterative algorithm, \"Generalized Iterative Scaling\" (GIS), exists, which is guaranteed to converge to the solution ([6]), as long as the constraints are mut~ally consistent. GIS starts with arbitrary p~ values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 35, |
|
"text": "(2)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "At each iteration, it computes the expectations Epfi over the training data, compares them to the desired values c/s, and then adjusts the tJz's by an amount proportional to the ratio of the two.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
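
{

"text": "As an illustrative aside (not part of the original paper), the GIS update described above can be sketched in Python for a small, fully enumerable event space; the event space, constraint functions and target expectations below are hypothetical:\n\nimport numpy as np\n\ndef gis(f, c, n_iter=500):\n    # f: (num_constraints, num_events) matrix of constraint functions f_i(x)\n    # c: desired expectations c_i; returns the Maximum Entropy estimate of P(x)\n    # Textbook GIS assumes sum_i f_i(x) is constant over x; append a slack\n    # feature to f (and its expectation to c) when it is not.\n    C = f.sum(axis=0).max()              # GIS scaling constant\n    mu = np.ones(f.shape[0])             # arbitrary starting values for the mu_i\n    for _ in range(n_iter):\n        p = np.prod(mu[:, None] ** f, axis=0)   # log-linear form of equation (2)\n        p /= p.sum()                            # normalize to a distribution\n        expected = f @ p                        # current expectations E_P[f_i]\n        mu *= (c / expected) ** (1.0 / C)       # adjust each mu_i by the ratio, damped by 1/C\n    return p\n\n# toy example: three binary features over a 3-event space (column sums are constant)\nf = np.array([[1.0, 1.0, 0.0],\n              [0.0, 1.0, 1.0],\n              [1.0, 0.0, 1.0]])\nc = np.array([0.6, 0.7, 0.7])\nprint(gis(f, c))                         # approaches [0.3, 0.3, 0.4]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "OVERVIEW OF ME FRAMEWORK",

"sec_num": "1."

},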
|
{ |
|
"text": "Generalized Iterative Scaling can be used to find the ME estimate of a simple (non-conditional) probability distribution over some event space. An ~0aptation of GIS to conditional probabilities was proposed by [7] , as follows. Let P(w[h)", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 213, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "be the desired probability estimate, and let lS(h,w) be the empirical distribution of the training data. Letfi(h,w) be any constraint function, and let cl be its desired expectation. Equation 1 is now modified to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E P(h)\" E P(w[h) .fi(h, w) = ci", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "h w See also [1, 2] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 16, |
|
"text": "[1,", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 17, |
|
"end": 19, |
|
"text": "2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OVERVIEW OF ME FRAMEWORK", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The ME framework is very general, freeing the modeler to concentrate on searching for significant information sources and choosing the phenomena to be modeled. In statistical language modeling, we are interested in information about the identity of the next word, wi, given the history h, namely the part of the document that was already processed by the system. We have so far considered the following information sources, all contained within the history:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Conventional N-grams: the immediately preceding few words, say (wi-2, wi-l).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Long distance N-grams [8] : N-grams preceding wi byjpositions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 25, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "triggers [9] : the appearance in the history of words related to wi.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 12, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "class triggers: trigger relations among word clusters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "count-based cache: the number of times wi already occurred in the history.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "distance-based cache: the last time wi occurred in the history.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "linguistically defined constraints: number agreement, tense agreement, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Any potential source can be considered separately, and the amount of information in it estimated. For example, in estimating the potential of count-based caches, we might measure dependencies of the form depicted in figure 1 , and calculate the amount of information they may provide. See also [3] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 297, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 224, |
|
"text": "figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CAPTURING LONG-DISTANCE LINGUISTIC PHENOMENA", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Similarly, the constraint function for the bigram wt, w2 is 1 ffhendsinwl andw=w2 f~,,n(h,w)= 0 otherwise (6) and its associated constraint is ~P(h) ~ P(wlh)f ~,)n(h,w) = ~f ~,,a(h,w).", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 109, |
|
"text": "(6)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P( DEFAULT )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h w (7) and similarly for higher-order N-grams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 7, |
|
"text": "(7)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P( DEFAULT )", |
|
"sec_num": null |
|
}, |
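
{

"text": "A hypothetical illustration (not from the paper) of how bigram constraint functions and their empirical target expectations, as in equations (6)-(7), might be computed from a training corpus; the helper names and the toy corpus are invented:\n\nfrom collections import Counter\n\ndef f_bigram(w1, w2, history, w):\n    # 1 if the history ends in w1 and the predicted word is w2, else 0\n    return 1 if history and history[-1] == w1 and w == w2 else 0\n\ndef empirical_bigram_targets(corpus):\n    # desired expectations: the relative frequency of each bigram in the corpus\n    counts = Counter(zip(corpus, corpus[1:]))\n    total = max(len(corpus) - 1, 1)\n    return {bigram: n / total for bigram, n in counts.items()}\n\ncorpus = 'the dog saw the cat the dog ran'.split()\ntargets = empirical_bigram_targets(corpus)\nprint(targets[('the', 'dog')])                        # 2/7, the empirical target\nprint(f_bigram('the', 'dog', ['saw', 'the'], 'dog'))  # 1: the constraint function fires",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formulating N-grams as Constraints",

"sec_num": "2.1."

},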
|
{ |
|
"text": "The constraint functions for long distance N-grams are very similar to those for conventional (distance 1) N-gram. For example, the constrain function for the distance-2 trigram {wl, w2, w3} is: Perhaps the most important feature of the Maximum Entropy framework is its extreme generality. For any conceivable linguistic or statistical phenomena, appropriate constraint functions can readily be written. We will demonstrate this process for several of the knowledge sources listed above. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulating long-distance N-grams as Constraints", |
|
"sec_num": "2.2." |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "l~(h) ~ P(wlh)f ~,~a,~(h, w) = l~f ~,,a,~(h, w). h w (s)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Formulating N-grams as Constraints", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "and similarly for other long distance N-grams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulating N-grams as Constraints", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "For class triggers, let A, B be two related word clusters. Define the constraint functionfa.~ as: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulating Triggers as Constraints", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I ff3wjEA, wjEh, wEB", |
|
"eq_num": "(10" |
|
} |
|
], |
|
"section": "Formulating Triggers as Constraints", |
|
"sec_num": "2.3." |
|
}, |
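
{

"text": "A hypothetical sketch (not from the paper) of the class-trigger constraint function fA\u2192B defined above: it fires when some member of cluster A has already appeared in the history and the predicted word belongs to cluster B; the clusters below are invented for illustration:\n\n# invented clusters for illustration only\nA = {'loan', 'lend', 'borrow'}      # triggering cluster\nB = {'bank', 'banks', 'banking'}    # triggered cluster\n\ndef f_class_trigger(history, w, a=A, b=B):\n    # 1 if some word w_j in A occurred in the history and w is in B, else 0\n    return 1 if w in b and any(wj in a for wj in history) else 0\n\nhistory = ['the', 'company', 'took', 'out', 'a', 'loan']\nprint(f_class_trigger(history, 'bank'))   # 1: a member of A occurred and w is in B\nprint(f_class_trigger(history, 'dog'))    # 0: w is not in B",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formulating Triggers as Constraints",

"sec_num": "2.3."

},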
|
{ |
|
"text": "In a document-based unigram cache, all words that occurred in the history of the document are stored, and are used to dynamically generate a unigram, which is in turn combined with other language model components. N-gram caches were first reported by [10] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 255, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELECTIVE UNIGRAM CACHE", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The motivation behind a unigram cache is that, once a word occurs in a document, its probability of re-occurring is typically greatly elevated. But the extent of this phenomenon depends on the prior frequency of the word, and is most pronounced for rare words. The occurrence of a common word like \"DIE\" provides little new information. Put another way, the occurrence of a rare word is more surprising, and hence provides more information, whereas the occurrence of a more common word deviates less from the expectations of the static model, and therefore requires a smaller modification to it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELECTIVE UNIGRAM CACHE", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Bayesian analysis may be used to optimally combine the prior of a word with the new evidence provided by its occurrence. As a rough first approximation, we implemented a selective unigram cache, where only rare words are stored in the cache. A word is defined as rare relative to a threshold of static unigram frequency. The exact value of the threshold was determined by optimizing perplexity on unseen data. This scheme proved more useful for perplexity reduction than the conventional cache.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELECTIVE UNIGRAM CACHE", |
|
"sec_num": "3." |
|
}, |
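
{

"text": "A minimal hypothetical sketch of the selective unigram cache described above: only words whose static unigram probability falls below a threshold are cached, and the cache is read out as a unigram distribution; the static probabilities and threshold are illustrative:\n\nfrom collections import Counter\n\nclass SelectiveUnigramCache:\n    def __init__(self, static_unigram, threshold=0.001):\n        self.static = static_unigram      # word -> static unigram probability\n        self.threshold = threshold\n        self.counts = Counter()\n\n    def add(self, word):\n        # cache only 'rare' words, i.e. those below the static-frequency threshold\n        if self.static.get(word, 0.0) < self.threshold:\n            self.counts[word] += 1\n\n    def prob(self, word):\n        total = sum(self.counts.values())\n        return self.counts[word] / total if total else 0.0\n\nstatic = {'the': 0.06, 'merger': 0.0004, 'aardvark': 0.000001}\ncache = SelectiveUnigramCache(static)\nfor w in ['the', 'merger', 'aardvark', 'merger']:\n    cache.add(w)\nprint(cache.prob('merger'))   # 2/3: 'the' was too common to enter the cache",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SELECTIVE UNIGRAM CACHE",

"sec_num": "3."

},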
|
{ |
|
"text": "In a document-based bigram cache, all consecutive word pairs that occurred in the history of the document are stored, and are used to dynamically generate a bigram, which is in turn combined with other language model components. A trigram cache is similar but is based on all consecutive word triples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "An alternative way of viewing a bigram cache is as a set of unigram caches, one for each word in the history. At most one such unigram is consulted at any one time, depending on the identity of the last word of the history. Viewed this way, it is clear that the bigram cache should contribute to the combined model only if the last word of the history is a (nonselective) unigram \"cache hit\". In all other cases, the uniform distribution of the bigram cache would only serve to flatten, hence degrade, the combined estimate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We therefore chose to use a conditional bigram cache, which has a non-zero weight only during such a \"hit\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES", |
|
"sec_num": "4." |
|
}, |
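
{

"text": "A hypothetical sketch of the conditional bigram cache described above: consecutive word pairs from the document history are cached, but the cache is consulted (given a non-zero weight) only when the last word of the history has itself occurred before; all names are illustrative:\n\nfrom collections import Counter, defaultdict\n\nclass ConditionalBigramCache:\n    def __init__(self):\n        self.follows = defaultdict(Counter)   # w1 -> counts of words following w1\n        self.seen = Counter()                 # word occurrences in the history\n\n    def add(self, w1, w2):\n        self.follows[w1][w2] += 1\n        self.seen[w1] += 1\n        self.seen[w2] += 1\n\n    def prob(self, w, last_word):\n        # returns (probability, active); active is False outside a cache hit\n        if self.seen[last_word] == 0:\n            return 0.0, False\n        counts = self.follows[last_word]\n        total = sum(counts.values())\n        return (counts[w] / total if total else 0.0), True\n\ncache = ConditionalBigramCache()\nhistory = ['stock', 'prices', 'fell', 'as', 'stock', 'traders']\nfor w1, w2 in zip(history, history[1:]):\n    cache.add(w1, w2)\nprint(cache.prob('prices', 'stock'))   # (0.5, True): 'stock' is a cache hit\nprint(cache.prob('prices', 'bond'))    # (0.0, False): no hit, the weight stays zero",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES",

"sec_num": "4."

},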
|
{ |
|
"text": "A similar argument can be applied to the trigram cache. Such a cache should only be consulted if the last two words of the history occurred before, i.e. the trigram cache should contribute only immediately following a bigram cache hit. We experimented with such a trigram cache, constructed similarly to the conditional bigram cache. However, we found that it contributed little to perplexity reduction. This is to be expected: every bigram cache hit is also a unigram cache hit. Therefore, the trigram cache can only refine the distinctions already provided by the bigram cache. A document's history is typically small (225 words on average in the WSJ corpus). For such a modest cache, the refinement provided by the trigram is small and statistically unreliable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Another way of viewing the selective bigram and trigram caches is as regular (i.e. non-selective) caches, which are later interpolated using weights that depend on the count of their context. Then, zero context-counts force respective zero weights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONDITIONAL BIGRAM AND TRIGRAM CACHES", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "As a testbed for the above ideas, we used ARPA's CSR task. The training data was 38 million words of Wall Street Journal OVSJ) text from 1987-1989. The vocabulary used was ARPA's official \"20o.nvp\" (20,000 most common WSJ words, non-verbalized punctuation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "To measure the impact of the amount of training d,t~ on language model adaptation, we experimented with systems based on varying amounts of training d~t~= The largest model we built was based on the entire 38M words of WSJ training data, and is described below. \u2022 High cutoff, distance-1 (conventional) N-grams:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "-All trigrams that occurred 9 or more times in the training data (428,000 in all). -All bigrams that occurred 9 or more times in the training data (327,000). -all unigrams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The high cutoffs were necessary in order to reduce the heavy computational requirements of the training procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "\u2022 High cutoff, distance-2 bigrams and trigrams:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "-All distance-2 trigrams that occurred 5 or more times in the training data (795,000 in all). -All distance-2 bigrams that occurred 5 or more times in the training data (651,000).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The cutoffs used for the conventional N-grams were higher than those applied to the distance-2 N-grams. This was done because we expected that the information lost from the former knowledge source will be re-introduced, at least partially, by interpolation with the static model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "\u2022 Word Trigger Pairs: For every word in the vocabulary, the top 3 triggers were selected based on their mutual information with that word as computed from the training data [l, 2] . This resulted in some 43,000 word trigger pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 179, |
|
"text": "[l, 2]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "3. A selective unigram cache, as described earlier, using a unigram threshold of 0.001.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE WSJ SYSTEM", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "A conditional bigram cache, as described earlier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The combined model was achieved by consulting an appropriate subset of the above four models. At any one time, the four component LMs were combined linearly. But the weights used were not fixed, nor did they follow a linear pattern over time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining the LM Components", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "Since the Maximum Entropy model incorporated information from trigger pairs, its relative weight should be increased with the length of the history. But since it also incorporated new information from distance-2 N-grams, it is useful even at the very beginning of a document, and its weight should not start at zero.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining the LM Components", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "The computational bottleneck of the Generalized Iterative Scaling algorithm is in constraints which, for typical histoties h, are non-zero for a large number of words w's. This means that bigram constraints are more expensive than trigram constraints. Implicit computation can be used for unigram constraints. Therefore, the time cost of bigram and trigger constraints dominated the total time cost of the algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Costs", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "The computational burden of training the Maximum Entropy model for the large system (38MW) was quite severe. Fortunately, the training procedure is highly paralleliTable (see [1] ). Training was run in parallel on 10-25 high performance workstations, with an average of perhaps 15 machines. Even so, it took 3 weeks to complete.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 178, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Costs", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "In comparison, training the 5MW system took only a few machine-days, and training the 1MW system was trivial.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Costs", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "We used 325,000 words of unseen WSJ d~tg_ to measure perplexities of the baseline trigram model, the Maximum Entropy component, and the interpolated a0aptive model (the latter consisting of the first two together with the unigram and bigram caches). This was done for each of the three systems (38MW, 5MW and 1MW). Results are summarized in table 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity Reduction", |
|
"sec_num": "5.5." |
|
}, |
|
{ |
|
"text": "We therefore started the Maximum Entropy model with a weight of ,,.,0.3, which was gradually increased over the first 60 words of the document, to ~0.7. The conventional trigram started with a weight of,,4).7, and was decreased concurrently to ~0.3. The conditional bigram cache had a non-zero weight only during a cache hit, which allowed for a relatively high weight of ,~,0.09. The selective unigram cache had a weight proportional to the size of the cache, saturating at -,,0.05. The weights were always normalized to sum to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity Reduction", |
|
"sec_num": "5.5." |
|
}, |
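
{

"text": "A hypothetical sketch of this time-varying linear interpolation; the ramp shape is a guess, and only the endpoint values (~0.3 to ~0.7 over 60 words for the ME model, the ~0.09 bigram-cache weight and the ~0.05 unigram-cache ceiling) are taken from the description above:\n\ndef component_weights(position, bigram_cache_hit, unigram_cache_size):\n    # position: number of words of the current document processed so far\n    ramp = min(position / 60.0, 1.0)                 # fraction of the 60-word ramp\n    w_me = 0.3 + 0.4 * ramp                          # Maximum Entropy model: 0.3 -> 0.7\n    w_tri = 0.7 - 0.4 * ramp                         # static trigram: 0.7 -> 0.3\n    w_big = 0.09 if bigram_cache_hit else 0.0        # only during a bigram cache hit\n    w_uni = min(0.05, 0.0005 * unigram_cache_size)   # grows with the cache, saturates\n    total = w_me + w_tri + w_big + w_uni             # renormalize to sum to 1\n    return {'max_entropy': w_me / total, 'trigram': w_tri / total,\n            'bigram_cache': w_big / total, 'unigram_cache': w_uni / total}\n\nprint(component_weights(position=0, bigram_cache_hit=False, unigram_cache_size=0))\nprint(component_weights(position=80, bigram_cache_hit=True, unigram_cache_size=150))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Perplexity Reduction",

"sec_num": "5.5."

},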
|
{ |
|
"text": "While the general weighting scheme was chosen based on considerations discussed above, the specific values of the weights were chosen by minimizing perplexity of unseen data. It became clear later that this did not always correspond with minimizing error rate. Subsequently, further weight modifications were determined by direct trial-and-error measurements of word error rate on development data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity Reduction", |
|
"sec_num": "5.5." |
|
}, |
|
{ |
|
"text": "As mentioned before, we also experimented with systems based on less training data. We built two such systems, one based on 5 million words, and the other based on 1 million words. Both systems were identical to the larger systems described above, except that the Maximum Entropy model did not employ high cutoffs, but was instead based on the same N-gram information as the conventional trigram model. As can be observed, the Maximum Entropy model, even when used alone, was significantly better than the static model. Its relative advantage seems greater with more training data. With the large (38MW) system, practical consideration required imposing high cutoffs on the ME model, and yet its perplexity is still significantly better than that of the baseline. This is particularly notable because the ME model uses only one third the number of parameters used by the trigram model (2.26M vs. 6.72M).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Varying the Training Data", |
|
"sec_num": "5.3." |
|
}, |
|
{ |
|
"text": "When the Maximum Entropy model is supplemented with the other three components, perplexity is again reduced significantly. Here the relationship with the amount of training data is reversed: the less training data, the greater the improvement. This effect is due to the caches, and can be explained as follows: The amount of information provided by the caches is independent of the amount of training data, and is therefore fixed aCTOSS the three systems. However, the 1MW system has higher perplexity, and therefore the relative improvement provided by the caches is greater. Put another way, models based on more data are stronger, and therefore harder to improve on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Varying the Training Data", |
|
"sec_num": "5.3." |
|
}, |
|
{ |
|
"text": "To evaluate error rate reduction, we used the Nov93 ARPA S1 evaluation set [ll, 12, 13] . It consisted of 424 utterances produced in the context of complete long documents by two male and two female speakers. We used the SPHINX-II recognizer(J14, 15, 16]) with sex-dependent non-PD 10K senone acoustic models. In addition to the 20K words in the lexicon, 178 OOV words and their correct phonetic transcriptions were added in order to create closed vocabulary conditions. We first ran the forward and backward passes of SPHINX H to create word lattices, which were then used by three independent A* passes. The first such pass used the 38MW static trigram language model. The other two passes used the 38MW interpolated adaptive LM. The first of these two adaptive runs was for unsupervised word-by-word adaptation, in which the decoder output was used to update the language model. The other run used supervised adaptation, in which the decoder output was used for within-sentence adaptation, while the correct sentence transcription was used for across-sentence adaptation. which the test data comes from a source to which the language model has never been exposed. The most salient aspect of this case is the large number of out-of-vocabulary words, as well as the high proportion of new bigrams and trigrams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 87, |
|
"text": "[ll, 12, 13]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Rate Reduction", |
|
"sec_num": "5.6." |
|
}, |
|
{ |
|
"text": "Cross-domain adaptation is most important in cases where no data from the test domain is available for training the system. But in practice this rarely happens. More likely, a limited amount of LM training can be obtained. Thus a hybrid paradigm, limited-data domain, might be the most important one for real-world applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Rate Reduction", |
|
"sec_num": "5.6." |
|
}, |
|
{ |
|
"text": "The main disadvantage of the Maximum Entropy framework is the computational requirements of training the ME model. But these are not severe for modest amounts of training d~t~ (up to, say, 5M words, with current CPUs). The approach is thus particularly attractive in limited-data domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Rate Reduction", |
|
"sec_num": "5.6." |
|
}, |
|
{ |
|
"text": "We have already seen the effect of the amount of training data on perplexity reduction in the WSJ system. To test our adaptation mechanisms under both the cross-domain and limited-data paradigms, we constructed another experiment, this time using AP wire data for testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE AP WIRE EXPERIMENT", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "For measuring cross-domain aa_aptation, we used the 38MW WSJ models described above. For measuring limited-data adaptation, we used 5M words of AP wire to train a conventional compact backoff trigram, and a Maximum Entropy model, similar to the ones used by the WSJ system, except that the trigger pair list was copied from the WSJ system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE AP WIRE EXPERIMENT", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "All models were tested on 420,000 words of unseen AP a,t~:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE AP WIRE EXPERIMENT", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "We chose the same \"200\" vocabulary used in the WSJ experiments, to facilitate cross comparisons. As before, we measured perplexities ofthebaseline trigram model, the maximum Entropy component, and the interpolated adaptive model. Resuits are summarized in table 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE AP WIRE EXPERIMENT", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "To test error rate reduction under the cross.domain adaptation paradigm, we used 206 sentences, recorded by 3 male and 3 female speakers, under the same system configuration described in section. Results are reported in table 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE AP WIRE EXPERIMENT", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "The adaptation we concentrated on so far was the kind we call within-domain adaptation. In this paradigm, a heterogeneous language source (such as WSJ) is treated as a complex product of multiple domains-of-discourse (\"sublanguages\"). The goal is then to produce a continuously modified model that tracks sublangnage mixtures, sublanguage shifts, style shifts, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THREE PARADIGMS OF ADAPTATION", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "In contrast, a cross-domain adaptation paradigm is one in", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THREE PARADIGMS OF ADAPTATION", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We described our latest attempt at adaptive language modeling. At the heart of our approach is a Maximum Entropy (ME) model, which incorporates many knowledge sources in a consistent manner. We have demonstrated that the ME model significantly improves on the conventional static trigram, a challenge which has evaded many past attempts( [17, 18] ). The approach is particularly applicable in domains with a modest amount of LM training data. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 342, |
|
"text": "[17,", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 346, |
|
"text": "18]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SUMMARY", |
|
"sec_num": "8." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Adaptive Statistical Language Modeling: a Maximum Enlropy Approach", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosenfeld, R., \"Adaptive Statistical Language Modeling: a Maximum Enlropy Approach.\" Ph.D. Thesis, CarnegieMellon University, April 1994.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Trigger-Based Language Models: a Maximum Entropy Approach", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of ICASSP-93", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lan, R.o Rosenfeld, R., Roukos, S., \"Trigger-Based Language Models: a Maximum Entropy Approach.\" Proceedings of ICASSP-93, April 1993.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Adaptive Language Modeling Using the Maximum Entropy Principle", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. ARPA Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lan, R., Rosenfeld, R., Roukos, S., \"Adaptive Language Mod- eling Using the Maximum Entropy Principle\", in Proc. ARPA Human Language Technology Workshop, March 1993.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Information Theo W and Statistical Mechanics", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Jaines", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "Phys. Rev", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "620--630", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaines, E. T., \"Information Theo W and Statistical Mechanics.\" Phys. Rev. 106, pp. 620-630, 1957.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Information Theory in Statistics. W'fley", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Kullback", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1959, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kullback. S., Information Theory in Statistics. W'fley, New York. 1959.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Generalized Iterative Sealing for Log-Linear Models", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"J N" |
|
], |
|
"last": "Darroch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ratcliff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "The Annals of Mathematical Statistics", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "1470--1480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Darroch. J.N. and Ratcliff, D., \"Generalized Iterative Sealing for Log-Linear Models\", The Annals of Mathematical Statis- tics, VoL 43, pp 1470-1480,1972.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Maximum Enlropy Methods and Their Applications to Maximum Likelihood Parameter Estimation of Conditional Exponential Models", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Della Pielra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Della Pielra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nadu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roukos", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "A forthcoming IBM technicol report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, P., Della Pielra, S., Della Pielra, V., Mercer, R., Nadu, A., and Roukos, S., \"Maximum Enlropy Methods and Their Applications to Maximum Likelihood Parameter Estimation of Conditional Exponential Models,\" A forthcoming IBM techni- col report.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The SPHINX-II Speech Recognition System: An Overview", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Alleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Hen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, X.D., Alleva, F., Hen, H.W., Hwang, M.Y., Lee, K.F. and Rosenfeld, R., \"The SPHINX-II Speech Recognition Sys- tem: An Overview.\" Computer, Speech andLan&ua&e, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Improvements in Stochastic Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Prec. DARPA Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosenfeld, R., and Huang, X. D., \"Improvements in Stochas- tic Language Modeling.\" Prec. DARPA Speech and Natural Language Workshop, February 1992.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Speech Recognition and the Frequency of Recently Used Words: A Modified Marker Model for Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "12th International Conference on Computational Linguistics", |
|
"volume": "88", |
|
"issue": "", |
|
"pages": "348--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuhn, R., \"Speech Recognition and the Frequency of Re- cently Used Words: A Modified Marker Model for Natural Language.\" 12th International Conference on Computational Linguistics [COLlNG 88], pages 348-350, Budapest, August 1988.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The Hub and Spoke Paradigm for CSR Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Kubala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc.ARPA Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kubala, E et al., \"The Hub and Spoke Paradigm for CSR Evalu- ation,\" in Proc.ARPA Human Language Technology Workshop, March 1994.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "1993 Benchmark Tests for the ARPA spoken Language Program", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Pallett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Fiscus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Fisher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Garofolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Lund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ihtzbocki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Prec. ARPA Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pallett, D.S., Fiscus, J.G., Fisher, W.M., Garofolo, J.S., Lund, B., and IhTzbocki, M, \"1993 Benchmark Tests for the ARPA spoken Language Program\", in Prec. ARPA Human Language Technology Workshop, March 1994.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Language Model Adaptation in ARPA's CSR Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "ARPA Spoken Language Systems Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosenfeld, R., \"Language Model Adaptation in ARPA's CSR Evaluation\", ARPA Spoken Language Systems Workshop, March 1994.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The SPHINX-II Speech Recognition System: An Overview", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Alleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Hop", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ice", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, X.D., Alleva, E, Hop., H.W., Hwang, M.Y., Lee, ICE, and Rosenfeld, R., \"The SPHINX-II Speech Recognition Sys- tem: An Overview\", Computer, Speech and Language, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "An Overview of the SPHINX-II Speech Recognition System", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Alieva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M-Y", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Prec. ARPA Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, X., Alieva, E, Hwang, M-Y, and Rosenfeld, R., \"An Overview of the SPHINX-II Speech Recognition System\", in Prec. ARPA Human Language Technology Workshop, March 1993.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Improving Speech-Recognition Performance Via Phone-Dependent VQ Codebooks, Multiple Speaker Clusters And Adaptive Language Models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Thayex", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mosur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Chase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Alleva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "ARPA Spoken Language Systems Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwang, M., Rosenfeld, R., Thayex; E., Mosur, R., Chase, L., Weide, R., Huang, X., and Alleva, F., \"Improving Speech- Recognition Performance Via Phone-Dependent VQ Code- books, Multiple Speaker Clusters And Adaptive Language Models\", ARPA Spoken Language Systems Workshop, March 1994.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A Tree-Based Statistical Language Model for natural Language Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Bahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Desouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "IEEE Transactions on Acustics. Speech and Signal Processing", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "1001--1008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bahl, L., Brown, E, DeSouza, P., and Mercer, R., \"A Tree- Based Statistical Language Model for natural Language Speech Recognition\", IEEE Transactions on Acustics. Speech and Sig- nal Processing, 37, pp. 1001-1008, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Up From Ttigramsl", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Eurospeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek. E, \"Up From Ttigramsl\" Eurospeech 1991.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Count-basedcache information: Probabilityof'DE-FAULT' as a function of the number of times it already occurred in the document. The horizontal line is the unconditional probability.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": ") f A..~(h, w) = 0 otherwise Set CA--~ tO E[]'~-~S], the empirical expectation offA--~ (i.e, its expectation in the training data). NOW the constraint on P(h, w) is: Ee [fA-~] = i~tf~-~]", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "The Component Models The adaptive language model was based on four component language models: . . A conventional \"compact\" backoff trigram model. \"Compact\" here means that singleton trigrams (word triplets that occurred only once in the training d~ta) were excluded from the model. It consisted of 3.2 million trigrams and 3.5 million bigrams. This model also served as the baseline for comparisons, and was dubbed \"the static model\". A Maximum En~opy model trained on the same d a!8 as the trigram, and consisting of the following knowledge sources:", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Perplexity (PP) improvement of Maximum Entropy and interpolated adaptive models over a conventional trigram model, for varying amounts of training data. The 38MW ME model used far fewer parameters than the baseline, since it employed high N-gram cutoffs. See texL", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Word error rate reduction of adaptive language models over a conventional trigram model.", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>training data</td><td>38MW (WSJ)</td><td/></tr><tr><td>test data</td><td>206 sentences (AP)</td><td/></tr><tr><td>language model</td><td colspan=\"2\">word error rate 1% change</td></tr><tr><td>Irigram (baseline)</td><td>22.1%</td><td/></tr><tr><td>supervised adaptation</td><td>19.8%</td><td>-10%</td></tr></table>", |
|
"text": "fimited-data adaptation, testing on 420KW of unseen AP wire9. ACKNOWLEDGEMENTSI am grateful to the entire CMU speech group, and many other individuals at CMU, for generously allowing me to monopolize their machines for weeks on end. I am particularly grateful to Lin Chase and Ravishankar Mosur for much needed help in designing and implementing the interface to SPHINX-II, to Alex Rudnicky for conditioning tools for the AP wire data, and to Raj Reddy for his support and encouragement.The ideas for this work were developed during my 1992 summer visit with the Speech and Natural Language group at IBM Watson Research Center. I am grateful to Peter Brown, Stephen Della Pietra, Vincent Della Pietra, Raymond Lau, Bob Mercer and Salim Roukos for their very significant part/cipat/on. This research was sponsored by the Department of the Navy, Naval Research Laboratory under Grant No. N00014-93-1-2005. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Word error rate reduction of the adaptive language model over a conventional trigram model, under the crossdomain adaptation paradigm.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |