{
"paper_id": "H92-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:28:10.482008Z"
},
"title": "MAP Estimation of Continuous Density HMM : Theory and Applications",
"authors": [
{
"first": "Jean-Luc",
"middle": [],
"last": "Gauvain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"region": "NJ"
}
},
"email": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"region": "NJ"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM). The classical MLE reestimation algorithms, namely the forward-backward algorithm and the segmental k-means algorithm, are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities. Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective ~aining. New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach.",
"pdf_parse": {
"paper_id": "H92-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM). The classical MLE reestimation algorithms, namely the forward-backward algorithm and the segmental k-means algorithm, are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities. Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective ~aining. New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Estimation of hidden Marknv model (HMM) is usually obtained by the method of maximum likelihood (ML) [1, 10, 6] assuming that the size of the training data is large enough to provide robust estimates. This paper investigates maximum a posteriori (MAP) estimate of continuous density hidden Markov models (CDHMM). The MAP estimate can be seen as a Bayes estimate of the vector parameter when the loss function is not specified [2] . This estimation technique provides a way of incorporatimg prior information in the training process, which is particularly useful to deal with problems posed by sparse training data for which the ML approach gives inaccurate estimates. This approach can be applied to two classes of estimation problems, namely, parameter smoothing and model adaptation, both related to the problem of sparse training data.",
"cite_spans": [
{
"start": 101,
"end": 104,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 105,
"end": 108,
"text": "10,",
"ref_id": "BIBREF9"
},
{
"start": 109,
"end": 111,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 426,
"end": 429,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "In the following the sample x = (zl, ...,z,~) is a given set of n observations, where zl, ..., z n are either independent and identically distributed (i.i.d.), or are drawn from a probabilistic function of a Markov chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The difference between MAP and ML estimation lies in the assumption of an appropriate prior disliibution of the parameters to be estimated. If 0, assumed to be a random vector taking values in the space O, is the parameter vector to be estimated from the sample x with probability density function (p.d.f.) f(.lO), and if g is the prior p.d.f, of 0, then the MAP estimate, 0~p, is defined as the mode of the posterior p.d.f, of 0, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "If 9 is assumed to be fixed but unknown, then there is no knowledge about 8, which is equivalent to assuming a non-informative improper prior, i,e. g (8) ----constant. Equation (1) then reduces to the familiar ML formulation.",
"cite_spans": [
{
"start": 150,
"end": 153,
"text": "(8)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oma, = argmoax f(xlO)g(O) (I)",
"sec_num": null
},
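{
"text": "The following sketch is not part of the paper; it assumes a one-dimensional Gaussian observation model with known variance and a conjugate normal prior on the mean, purely to illustrate equation (1): with an informative prior the MAP estimate is pulled toward the prior mode, and with a very vague prior it essentially coincides with the ML estimate.\n\nimport random\n\nrandom.seed(0)\ntrue_mean, sigma = 1.0, 1.0\nx = [random.gauss(true_mean, sigma) for _ in range(10)]   # small sample\n\n# ML estimate: mode of f(x | theta) alone\nml = sum(x) / len(x)\n\n# MAP estimate under a conjugate normal prior N(mu0, sigma0^2) on the mean;\n# the closed-form posterior mode below is a standard result, and letting\n# sigma0 grow recovers the ML estimate (non-informative limit)\ndef map_mean(x, sigma, mu0, sigma0):\n    n = len(x)\n    prec = n / sigma ** 2 + 1.0 / sigma0 ** 2\n    return (sum(x) / sigma ** 2 + mu0 / sigma0 ** 2) / prec\n\nprint('ML estimate          :', round(ml, 3))\nprint('MAP, tight prior at 0:', round(map_mean(x, sigma, 0.0, 0.3), 3))\nprint('MAP, vague prior     :', round(map_mean(x, sigma, 0.0, 1e6), 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "θ_MAP = argmax_θ f(x|θ) g(θ) (1)",
"sec_num": null
},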
{
"text": "Given the MAP formulation two problems remain: the choice of the prior distribution family and the evaluation of the maximum a ~This work was done while Jean-Luc Gauvain was on leave from the Speech Communication Group at LIMSI/CNRS, Orsay, France.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oma, = argmoax f(xlO)g(O) (I)",
"sec_num": null
},
{
"text": "posteriori. These two problems are closely related, since the appropilate choice of the prior distribution can greatly simplify the MAP estimation. Like for ML estimation, MAP estimation is relatively easy if the famay ofp.d.f.'s {f(-10), 0 ~ O} possesses a sufficient statistic of fixed dimension t(x). In this case, the natural solution is to choose the prior density in a conjugate family, {k(.ko), ~o E ~}, which includes the kernel density of f(. lO), i.e. Vx t(x) e ~b [4, 2] . The MAP estimation is then reduced to the evaluation of the mode of k(Ol~o' ) = k(Oko)k(Olt(x)), a problem almost identical to the ML estimation problem. However, among the families of interest, only exponential families have a sufficient statistic of fixed dimension [7] . When there is no sufficient statistic of fixed dimension, MAP estimation, like ML estimation, is a much more difficult problem because the posterior density is not expressible in terms of a fixed number of parameters and cannot be maximized easily. For both finite mixture density and hidden Markov model, the lack of a sufficient statistic of fixed dimension is due to the underlying hidden process, i.e. a multinomial model for the mixture and a Markov chain for an HMM. In these cases ML estimates are usually obtained by using the expectation-maximization (EM) algorithm [3, I, 13] . This algorithm exploits the fact that the complete-data likelihood can be simpler to maximize than the likelihood of the incomplete data, as in the case where the complete-data model has sufficient statistics of fixed dimension. As noted by Dempster et al. [3] , the EM algorithm can also be applied to MAP estimation. In the next two sections the formulations of this algorithm for MAP estimation of Gaussian mixture and CDHMM with Gaussian mixture observation densities are derived.",
"cite_spans": [
{
"start": 475,
"end": 478,
"text": "[4,",
"ref_id": "BIBREF3"
},
{
"start": 479,
"end": 481,
"text": "2]",
"ref_id": "BIBREF1"
},
{
"start": 752,
"end": 755,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 1333,
"end": 1343,
"text": "[3, I, 13]",
"ref_id": null
},
{
"start": 1603,
"end": 1606,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oma, = argmoax f(xlO)g(O) (I)",
"sec_num": null
},
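{
"text": "As a concrete illustration of the conjugate-family argument (a hypothetical example, not taken from the paper): for a multinomial complete-data model the counts form a sufficient statistic of fixed dimension, a Dirichlet prior is conjugate, the posterior stays Dirichlet with the counts simply added to the prior parameters, and the MAP estimate is just the mode of that updated Dirichlet density.\n\n# Dirichlet prior over K multinomial parameters; the counts are the\n# fixed-dimension sufficient statistic t(x) of the complete data\ndef dirichlet_mode(alpha):\n    # mode of Dirichlet(alpha), valid when all alpha_k > 1\n    s, K = sum(alpha), len(alpha)\n    return [(a - 1.0) / (s - K) for a in alpha]\n\nprior_alpha = [3.0, 2.0, 2.0]      # assumed prior parameters, for illustration only\ncounts = [12, 5, 3]                # observed counts t(x)\n\nposterior_alpha = [a + c for a, c in zip(prior_alpha, counts)]   # conjugate update\nprint('prior mode  :', dirichlet_mode(prior_alpha))\nprint('MAP estimate:', dirichlet_mode(posterior_alpha))\nprint('ML estimate :', [c / sum(counts) for c in counts])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "θ_MAP = argmax_θ f(x|θ) g(θ) (1)",
"sec_num": null
},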
{
"text": "Suppose that x = (zl,...,x,) is a sample of n i.i.d. observations drawn from a mixture of K p-dimensional multivariate normal densities. Assuming independence between the parameters of the mixture components and the mixture weights, the joint prior density g(0) is taken to be a product of the prior p.d.f.'s defined in equations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},
{
"text": "(2) and (3), Le. g(0) = g(w~, ...,~K)FL:, z(m~,,-,). As will be shown later, this choice for the prior density family can also be justified by noting that the EM algorithm can be applied to the MAP estimation problem if the prior density is in the conjuguate family of the complete-data density.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},
{
"text": "The EM algorithm is an iterative procedure for approximating maximum-likelihood estimates in an incomplete-data context such as mixture density and hidden Markov model estimation problems [1, 3, 13] . This procedure consists of maximizing at each iteration the auxilliary function Q(O, ~) defined as the ex- For a mixture of K densities {f(.10~)}~=L...,g with mixture weights {wk } k= ~,...,K, the auxilliary function Q takes the following form [13] : ",
"cite_spans": [
{
"start": 188,
"end": 191,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 192,
"end": 194,
"text": "3,",
"ref_id": "BIBREF2"
},
{
"start": 195,
"end": 198,
"text": "13]",
"ref_id": "BIBREF12"
},
{
"start": 445,
"end": 449,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q(O, #)= ~ ~ ~\"'~f(zt!O~)log~f(ztlo~)",
"eq_num": "(4)"
}
],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},
{
"text": "From (2), (3) and (5) it can easily be verified that ~(.,0) belongs to the same family as g, and has parameters",
"cite_spans": [
{
"start": 18,
"end": 21,
"text": "(5)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},
{
"text": "rk,/~k, t~k, uk}k:l,...,K satisfying the following conditions: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O,L ' ' ' '",
"sec_num": null
},
{
"text": "If it is assumed &k > 0, then ckl, ck2, ...,ck, is a sequence of n i.i.d, random variables with a non-degenerate distribution and limsupn_o o ~=. ckt = co with probability one. It follows that w~ converges to ~=l Ckt/n with probability one when n ~ oo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "~L, ~k,(~, --~)(~, --rag) ~ (13)",
"sec_num": null
},
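{
"text": "Before the argument is extended to the means and precisions in the next paragraph, the weight case can be checked numerically (hypothetical numbers, not from the paper): with the Dirichlet prior counts held fixed, the MAP reestimate (ν_k - 1 + c_k)/(Σ_j(ν_j - 1) + n) and the ML reestimate c_k/n agree as n grows.\n\ndef map_weight(nu_k, c_k, nu_sum_minus_K, n):\n    # mode-based reestimate with a fixed Dirichlet prior\n    return (nu_k - 1.0 + c_k) / (nu_sum_minus_K + n)\n\nnu_k, nu_sum_minus_K = 5.0, 12.0       # assumed prior counts, for illustration\nfor n in (10, 100, 10000, 1000000):\n    c_k = 0.3 * n                      # suppose component k claims 30% of the data\n    print(n, round(map_weight(nu_k, c_k, nu_sum_minus_K, n), 5), round(c_k / n, 5))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},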
{
"text": "Applying the same reasoning to m~ and r~, it can be seen that the EM reestimation formulas for the MAP and ML approaches are asymptotically similar. Thus as long as the initial estimates are identical, the EM algorithm will provide identical estimates with probability one when n ~ c\u00a2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "~L, ~k,(~, --~)(~, --rag) ~ (13)",
"sec_num": null
},
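{
"text": "To make the reestimation machinery of this section concrete, here is a minimal one-dimensional sketch (not the paper's code; it assumes a Dirichlet prior on the mixture weights and a normal-gamma prior on each component, with hyperparameters chosen only for illustration). The E-step computes the responsibilities c_kt, and the M-step takes the mode of the resulting conjugate form, so the weights, means and precisions are all shrunk toward the prior.\n\nimport math, random\nrandom.seed(1)\n\n# toy 1-D data from two clusters\ndata = [random.gauss(-2.0, 1.0) for _ in range(40)] + [random.gauss(2.0, 1.0) for _ in range(40)]\n\nK = 2\nw = [0.5, 0.5]            # mixture weights\nm = [-1.0, 1.0]           # component means\nr = [1.0, 1.0]            # component precisions (1 / variance)\n\n# hyperparameters of the assumed priors (Dirichlet / normal-gamma), illustration only\nnu = [2.0] * K            # Dirichlet counts\ntau = [1.0] * K           # pseudo-count attached to each prior mean\nmu0 = [0.0] * K           # prior means\nalpha = [2.0] * K         # gamma shape for the precision\nbeta = [1.0] * K          # gamma rate for the precision\n\ndef normal_pdf(x, mean, prec):\n    return math.sqrt(prec / (2.0 * math.pi)) * math.exp(-0.5 * prec * (x - mean) ** 2)\n\nfor _ in range(20):\n    # E-step: responsibilities c_kt\n    c = [[0.0] * len(data) for _ in range(K)]\n    for t, x in enumerate(data):\n        mix = sum(w[k] * normal_pdf(x, m[k], r[k]) for k in range(K))\n        for k in range(K):\n            c[k][t] = w[k] * normal_pdf(x, m[k], r[k]) / mix\n    # M-step: modes of the conjugate posterior-like densities\n    n = len(data)\n    ck = [sum(c[k]) for k in range(K)]\n    xbar = [sum(c[k][t] * data[t] for t in range(n)) / ck[k] for k in range(K)]\n    Sk = [sum(c[k][t] * (data[t] - xbar[k]) ** 2 for t in range(n)) for k in range(K)]\n    for k in range(K):\n        w[k] = (nu[k] - 1.0 + ck[k]) / (sum(nu) - K + n)\n        m[k] = (tau[k] * mu0[k] + ck[k] * xbar[k]) / (tau[k] + ck[k])\n        shape = alpha[k] + 0.5 * ck[k]\n        rate = beta[k] + 0.5 * Sk[k] + 0.5 * tau[k] * ck[k] * (xbar[k] - mu0[k]) ** 2 / (tau[k] + ck[k])\n        r[k] = (shape - 1.0) / rate   # mode of the gamma density over the precision\n\nprint('weights   :', [round(v, 3) for v in w])\nprint('means     :', [round(v, 3) for v in m])\nprint('precisions:', [round(v, 3) for v in r])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR GAUSSIAN MIXTURE",
"sec_num": null
},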
{
"text": "The results obtained for a mixture of normal densities can be extended to the case of HMM with Gaussian mixture state observation densities, assuming that the observation p.d.f.'s of all the states have the same number of mixture components. We consider an N-state HMM with parameter vector A = (x, A, 0), where r is the initial probability vector, A is the transition matrix, and 0 is the p. where f(x,lOi ) K = ~k=t w~kA/'(x*lralk, rik), and the summation is over all possible state sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "In the general case where MAP estimation is to be applied not only to the observation density parameters but also to the initial and transition probabilities, a Dirichlet density can also be used for the initial probability vector ~r and for each row of the transition probability matrix A. This choice directly follows the results of the previous section: since the complete-data likelihood satisfies h(x, s,tlA ) = h(s, A)h(x, tls , A) where h(s, A) is the product of N + 1 multinomial densities with parameters {n, a't, . In the following subsections we examine two ways of approximating AMAp by local maximization of f(xl~)G(~) and f(x, sI~)G(A). These two solutions are the MAP versions of the B aura-Welch algorithm [1 ] and of the segmental k-means algorithm [12] , algorithms which were developed for ML estimation.",
"cite_spans": [
{
"start": 722,
"end": 726,
"text": "[1 ]",
"ref_id": "BIBREF0"
},
{
"start": 766,
"end": 770,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
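{
"text": "A short sketch of the Dirichlet treatment of one transition row (assumed notation, not the paper's code): given expected transition counts ξ_ij accumulated by the E-step, the MAP reestimate of row i is the mode of the updated Dirichlet, and it reduces to the usual relative-frequency (ML) estimate when all the prior parameters η_ij equal 1.\n\ndef reestimate_row(eta_row, xi_row):\n    # mode of Dirichlet(eta_ij + xi_ij): (eta_ij - 1 + xi_ij) / sum_j (eta_ij - 1 + xi_ij)\n    num = [e - 1.0 + x for e, x in zip(eta_row, xi_row)]\n    total = sum(num)\n    return [v / total for v in num]\n\neta_row = [3.0, 2.0, 1.0]        # assumed Dirichlet prior parameters for row i\nxi_row = [20.0, 5.0, 0.0]        # expected transition counts from the E-step\n\nprint('MAP row:', [round(v, 3) for v in reestimate_row(eta_row, xi_row)])\nprint('ML  row:', [round(x / sum(xi_row), 3) for x in xi_row])   # eta_ij = 1 case",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},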
{
"text": "Forward-Backward MAP Estimate From (14) it is straightforward to show that the auxilliary function of the EM algorithm applied to MLE of A, Q(A, ~) = E[log h(Yi~)lx, \u00a3], can be decomposed into a sum of three auxilliary functions: Q,~(a', X), Q~(A, X) and Qo(O, ~) [6] . These functions which can be independently maximized take the following forms:",
"cite_spans": [
{
"start": 35,
"end": 39,
"text": "(14)",
"ref_id": "BIBREF13"
},
{
"start": 264,
"end": 267,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "N Q'rOr' ~) = E ~io log ~ri (17) i=1 QA(A, ~) = fist log aij (18) i=1 t=l j=l N Qo(o,\u00a3) = ~ Qo,(od\u00a3) (19) i--1 with ~ ~ ~ikf(zt[~ik) Qo,(Oi,X)= 7\" logoJikf(xtlOik) (20) ,=, k=t f(z,l@,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "where ~i/t = Pr(st-t =i, st =jlx, ~) and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "(21) eikt = 7,t f(xt[~i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "then the reestimation formulas (11-13) can be used to maximize Ro~ (01, ~). It is straightforward to find the reesfimations formulas for ~r and A by applying the same derivations used for the mixture weights: 14. It follows that the reestimation formulas for A and 0 still hold if the summations over t are ~(q) and -(q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "replaced by summations over q and t. The values \",~jt 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "are then obtained by applying the forward-backward algorithm for each observation sequence. The reestimation formula for the initial probabilities becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": ", T/, -1 + Eq%l ,~, = (24) N Q (q) Ei:, ', -Iv + E.:, ,,o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
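{
"text": "The posteriors γ_it and ξ_ijt that feed these reestimation formulas can be computed as in the following sketch (a plain, unscaled forward-backward pass for a short toy sequence; real systems scale the recursions or work in the log domain, and the emission terms B[t][i] below are placeholder numbers standing in for the Gaussian-mixture state densities b_i(x_t)).\n\ndef forward_backward(pi, A, B):\n    # pi[i]: initial probs, A[i][j]: transitions, B[t][i]: emission likelihood of x_t in state i\n    T, N = len(B), len(pi)\n    alpha = [[0.0] * N for _ in range(T)]\n    beta = [[0.0] * N for _ in range(T)]\n    for i in range(N):\n        alpha[0][i] = pi[i] * B[0][i]\n    for t in range(1, T):\n        for j in range(N):\n            alpha[t][j] = sum(alpha[t - 1][i] * A[i][j] for i in range(N)) * B[t][j]\n    for i in range(N):\n        beta[T - 1][i] = 1.0\n    for t in range(T - 2, -1, -1):\n        for i in range(N):\n            beta[t][i] = sum(A[i][j] * B[t + 1][j] * beta[t + 1][j] for j in range(N))\n    like = sum(alpha[T - 1][i] for i in range(N))\n    gamma = [[alpha[t][i] * beta[t][i] / like for i in range(N)] for t in range(T)]\n    xi = [[[alpha[t - 1][i] * A[i][j] * B[t][j] * beta[t][j] / like\n            for j in range(N)] for i in range(N)] for t in range(1, T)]\n    return gamma, xi\n\n# toy 2-state example with made-up emission likelihoods\npi = [0.6, 0.4]\nA = [[0.7, 0.3], [0.4, 0.6]]\nB = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]]      # B[t][i] = b_i(x_t), placeholders\ngamma, xi = forward_backward(pi, A, B)\nprint('gamma[t][i]:', [[round(v, 3) for v in row] for row in gamma])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},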
{
"text": "As for the mixture Gaussian case, it can be shown that as Q ~ co, the MAP reestimation formulas approach the ML ones, exhibiting the asymptotic similarity of the two estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
{
"text": "These reestimation equations give estimates of the HMM parameters which correspond to a local maximum of the posterior density. The choice of the initial estimates is therefore essential to finding a solution close to a global maximum and to minimize the number of EM iterations needed to attain the local maximum. When using an informative prior, one natural choice for the initial estimates is the mode of the prior density, which represents all the available information about the parameters when no data has been observed. The corresponding values are simply obtained by applying the reestimation formulas with n equal to 0. When using a non-informative prior, i.e. for ML estimation, while for discrete HMMs it is possible to use uniform initial estimates, there is no trivial solution for the continuous density case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},
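{
"text": "The prior-mode initialization described above can be checked directly with the weight formula (a small sketch with assumed hyperparameters, not from the paper): with n = 0 the data counts vanish and the reestimate is exactly the mode of the Dirichlet prior.\n\ndef map_weights(nu, counts):\n    # mode-based reestimate; with all counts at zero this is the Dirichlet prior mode\n    denom = sum(nu) - len(nu) + sum(counts)\n    return [(v - 1.0 + c) / denom for v, c in zip(nu, counts)]\n\nnu = [4.0, 3.0, 2.0]                       # assumed Dirichlet prior parameters\nprint('prior mode :', [round(v, 3) for v in map_weights(nu, [0, 0, 0])])\nprint('after data :', [round(v, 3) for v in map_weights(nu, [30, 10, 5])])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP ESTIMATES FOR CDHMM",
"sec_num": null
},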
{
"text": "By analogy with the segmental k-means algorithm [12] , a different optimization criterion can be considered. 14.",
"cite_spans": [
{
"start": 48,
"end": 52,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmental MAP Estimate",
"sec_num": null
},
{
"text": "It is straightforward to show that the forward-backward reestimation equations still hold with fijt= 6ts('n)~ t-t -i)6(s~ m) -J) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmental MAP Estimate",
"sec_num": null
},
{
"text": "\"fit = ~(s~ '~) --i), where ~ denotes the Kronecker delta function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmental MAP Estimate",
"sec_num": null
},
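{
"text": "A compact sketch of the alternation in (27)-(28) (not the paper's implementation; the Viterbi decoder is standard, while the MAP reestimation step is reduced here to reestimating single-Gaussian state means with a conjugate prior, just to show the loop structure).\n\nimport math\n\ndef viterbi(pi, A, means, prec, obs):\n    # most likely state sequence for 1-D Gaussian emissions (log domain)\n    N, T = len(pi), len(obs)\n    def logb(i, x):\n        return 0.5 * math.log(prec[i] / (2 * math.pi)) - 0.5 * prec[i] * (x - means[i]) ** 2\n    delta = [[math.log(pi[i]) + logb(i, obs[0]) for i in range(N)]]\n    back = [[0] * N]\n    for t in range(1, T):\n        row, brow = [], []\n        for j in range(N):\n            best_i = max(range(N), key=lambda i: delta[t - 1][i] + math.log(A[i][j]))\n            row.append(delta[t - 1][best_i] + math.log(A[best_i][j]) + logb(j, obs[t]))\n            brow.append(best_i)\n        delta.append(row)\n        back.append(brow)\n    state = [max(range(N), key=lambda i: delta[T - 1][i])]\n    for t in range(T - 1, 0, -1):\n        state.append(back[t][state[-1]])\n    return state[::-1]\n\n# toy model: 2 states, prior means mu0 with pseudo-count tau (assumed values)\npi, A = [0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]]\nmeans, prec = [0.0, 3.0], [1.0, 1.0]\nmu0, tau = [0.0, 3.0], 2.0\nobs = [0.1, -0.2, 0.3, 2.6, 3.2, 2.9, 3.1]\n\nfor _ in range(5):                                    # alternate (27) and (28)\n    s = viterbi(pi, A, means, prec, obs)              # segmentation step\n    for i in range(len(means)):                       # MAP reestimation of the means\n        seg = [x for x, st in zip(obs, s) if st == i]\n        means[i] = (tau * mu0[i] + sum(seg)) / (tau + len(seg))\nprint('alignment:', s)\nprint('MAP means:', [round(v, 3) for v in means])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmental MAP Estimate",
"sec_num": null
},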
{
"text": "In the previous sections it was assumed that the prior density G(A) is a member of a preassigned family of prior distributions defined by (16). In a strictly Bayesian approach the vector parameter of this family ofp.d.f.'s {G(.[~), ~ E ~b} is also assumed known based on common or subjective knowledge about the stochastic process. Another solution is to adopt an empirical Bayesian approach [14] where the prior parameters are estimated directly from data. The estimation is then based on the marginal disttrbution of the data given the prior parameters.",
"cite_spans": [
{
"start": 392,
"end": 396,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "Adopting the empirical Bayes approach, it is assumed that the sequence of observations, X, is composed of multiple independent sequences associated with different unknown values of the HMM parameters. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "~p Such a procedure provides a sequence of estimates with nondecreasing values of f(X, Al~(m)). The solution of (30) is the MAP estimate of A based on the current prior parameter ~(m). It can therefore be obtained by applying the forward-backward MAP reestimation formulas to each observation sequence Xq. The solution of (31) is simply the maximum likelihood estimate of ~ based on the current values of the HMM parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
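{
"text": "A schematic sketch of the alternation in (30)-(31) (illustrative only, not from the paper; each model is reduced to a single Gaussian mean so that both steps have closed forms: step (30) is a per-sequence MAP estimate given the current prior, and step (31) refits the prior parameters to the current MAP estimates by moments).\n\nimport random\nrandom.seed(2)\n\n# several observation sequences, each generated by its own (unobserved) mean\ntrue_means = [-1.0, 0.5, 2.0, 3.0]\nseqs = [[random.gauss(mu, 1.0) for _ in range(15)] for mu in true_means]\n\nmu0, tau = 0.0, 1.0                      # prior parameters phi = (mu0, tau) to be estimated\n\nfor _ in range(10):\n    # (30) MAP estimate of each lambda_q given the current prior (unit observation variance)\n    lam = [(tau * mu0 + sum(x)) / (tau + len(x)) for x in seqs]\n    # (31) refit the prior to the current estimates (moment estimates of a normal prior)\n    mu0 = sum(lam) / len(lam)\n    var = sum((v - mu0) ** 2 for v in lam) / len(lam)\n    tau = 1.0 / max(var, 1e-6)           # equivalent sample size from the prior variance\n\nprint('prior mode mu0 :', round(mu0, 3))\nprint('prior strength :', round(tau, 3))\nprint('MAP per-model  :', [round(v, 3) for v in lam])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},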
{
"text": "Finding this estimate poses two problems. First, due to the Wishart and Dirichlet components, ML estimation for the density defined by (16) is not trivial. Second, since more parameters are needed for the prior density than for the HMM itself, there can be a problem of overparametrization when the number of pairs (xq, Aq) is small. One way to simplify the estimation problem is to use moment estimates to approximate the ML estimates. For the overparametrization problem, it is possible to reduce the size of the prior family by adding constraints on the prior parameters. For example, the prior family can be limited to the family of the kernel density of the complete-data likelihood, i.e. the posterior density family of the complete.data model when no prior information is available. Doing so, it can be verified that the following constraints hold v~k = r~k (32)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "aik = rik-t-p",
"eq_num": "(33)"
}
],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "Parameter tying can also be used to further reduce the size of the prior family. We use this approach for approach for two types of applications: parameter smoothing and adaptation learning. For parameter \"smoothing\", the goal is to estimate {Al, A2, ...}. The previous algorithm offers a direct solution to \"smooth\" these different estimates by assuming a common prior density for all the models. For adaptative learning, we observe a new sequence of observations Xq associated with the unobserved vector parameter value Aq. The MAP estimate of A, can be obtained by using for prior parameters a point estimate ~ obtained with the previous algorithm. Such a training process can be seen as an adaptation of an a priori model = argmaxx G(A[~) (when no training data is available) to more specific conditions corresponding to the new observation sequence Xq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "In the applications presented in this paper, the prior density parameters were estimated along with the estimation of the SI model parameters using the segmental k-means algorithm. Information about the variability to be modeled with the prior densities was associated with each frame of the SI training data. This information was simply represented by a class number which can be the speaker ID, the speaker sex, or the phonetic context. The HMM parameters for each class given the mixture component were then computed, and moment estimates were obtained for the tied prior parameters also subject to conditions (32-33) [5] .",
"cite_spans": [
{
"start": 621,
"end": 624,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PRIOR DENSITY ESTIMATION",
"sec_num": null
},
{
"text": "The experiments presented in this paper used various sets of context-independent (CI) and context-dependent (CD) phone models. Each model is a left-to-right HMM with Gaussian mixture state observation densities. Diagonal covariance matrices are used and the transition probabilities are assumed fixed and known. As described in [8] , a 3g-dimensional feature vector composed of LPC-derived cepstrum coefficients, and first and second order time derivatives. Results are reported for the RM task with the standard word pair grammar and for the TI/NIST connected digits. Both corpora were down-sampled to telephone bandwidth.",
"cite_spans": [
{
"start": 328,
"end": 331,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENTAL SETUP",
"sec_num": null
},
{
"text": "Last year we reported results for CD model smoothing, speaker adaptation, and sex-dependentmodeling [5] . CD model smoothing was found to reduce the word error rate by 10%. Speaker adaptation 13.9 8.7 6.9 3.4 SA (M/F) 11.5 7.5 6.0 3.5 Table 1 : Summary of SD, SA (SI), and SA (M/F) results on FEB91-SD test. Results are given as word error rate (%).",
"cite_spans": [
{
"start": 100,
"end": 103,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "MODEL SMOOTHING AND ADAPTATION",
"sec_num": null
},
{
"text": "was tested on the JUN90 data with 1 minute and 2 minutes of speaker-specific adaptation data. A 16% and 31% reduction in word error were obtained compared to the SI results [5] . On the FEB91 test, using Bayesian learning for CD model smoothing combined with sex-dependent modeling, a 21% word error reduction was obtained compared to the baseline results [5] . In order to compare speaker adaption to ML training of SD models, an experiment has been carded out on the FEB91-SD test material including data from 12 speakers (7m/5f), using a set of 47 CI phone models. Two, five and thirty minutes of the SD training data were used for training and adaptation. The SD, SA (SI) word error rates are given in the two first rows of Table 1 .",
"cite_spans": [
{
"start": 173,
"end": 176,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 356,
"end": 359,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 728,
"end": 735,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "MODEL SMOOTHING AND ADAPTATION",
"sec_num": null
},
{
"text": "The SD word error rate for 2 min of training data was 31.5%. The SI word error rate (0 minutes of adaptation data) was 13.9%, somewhat comparable to the SD results with 5 min of SD training data. The SA models are seen to perform better than SD models when relatively small amounts of data were used for training or adaptation. When all the available training data was used, the SA and SD results were comparable, consistent with the Bayesian formulation that the MAP estimate converges to the MLE. Relative to the SI results, the word error reduction was 37% with 2 rain of adaptation data, an improvement similar to that observed on the JUN90 test data with CD models [5] . As in the previous experiment, a larger improvement was observed for the female speakers (51%) than for the male speakers (22%).",
"cite_spans": [
{
"start": 670,
"end": 673,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MODEL SMOOTHING AND ADAPTATION",
"sec_num": null
},
{
"text": "Speaker adaptation was also performed starting with sexdependent models (third row of Table 1 ). The word error rate with no speaker adaptation is 11.5%. The error rate is reduced to 7.5% with 2 rain, and 6.0% with 5 rain, of adaptation data. Comparing the last 2 rows of the table it can be seen that SA is more effective when sex-dependent seed models are used. The error reduction with 2 rain of training data is 35% compared to the sex-dependent model results and 46% compared to the SI model results.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "MODEL SMOOTHING AND ADAPTATION",
"sec_num": null
},
{
"text": "We have shown that Bayesian learning can be used for CD model smoothing [5] . This approach can be seen either as a way to add extra constraints to the model parameters so as to reduce the effect of insufficient training data, or it can be seen as an \"interpolation\" between two sets of parameter estimates: one corresponding to the desired model and the other to a smaller model which can be trained using MLE on the same data. Instead of defining a reduced parameter set by removing the context dependency, we can alternatively reduce the mixture size of the observation densities and use a single Ganssian per state in the smaller model. Cast in the Bayesian learning framework, this implies that the same marginal prior density is used for all the components of a given mixture. Variance clipping can also be viewed as a MAP estimation technique with a uniform prior density constrained by a maximum (positive) value for the precision parameters [9] . However, this does not have the appealing interpolation capability of the conjugate priors.",
"cite_spans": [
{
"start": 72,
"end": 75,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 950,
"end": 953,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P.D.F. SMOOTHING",
"sec_num": null
},
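{
"text": "The interpolation reading mentioned above can be made concrete with a one-line formula (a sketch under the conjugate-prior assumption, not taken verbatim from the paper): the MAP mean is a count-weighted average of the prior mode (e.g. a mean from the smaller, well-trained model) and the ML mean from the sparse data, so a density with little data stays close to its smoothing source.\n\ndef interpolated_mean(prior_mean, tau, ml_mean, count):\n    # MAP mean under a conjugate prior: the weight shifts from the prior mode to the\n    # ML estimate as the amount of data associated with the density grows\n    return (tau * prior_mean + count * ml_mean) / (tau + count)\n\nprior_mean, tau = 0.0, 20.0          # smoothing source and assumed prior strength\nfor count in (0.0, 5.0, 50.0, 5000.0):\n    print(count, round(interpolated_mean(prior_mean, tau, 1.0, count), 4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P.D.F. SMOOTHING",
"sec_num": null
},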
{
"text": "We experimented with this p. Tables 2 and 3 . In Table 2 , word accuracy (WACC) and suing accuracy (SACC) are given for the 8578 test digit strings of the TI digit corpora. Compared to the variance clipping scheme, the MAP estimate reduces the number of string errors by 25%. Using p.d.f, smoothing, the suing accuracy of99.1% is the best result reported on this task. For the RM tests summarized in Table 3 , a consistent improvement over the variance clipping scheme (MLE+VC) is observed when p.d.f, smoothing is applied. Combined with sex-dependent modeling, the MAP(M/F) scheme gives an average word accuracy of about 95.8%.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 43,
"text": "Tables 2 and 3",
"ref_id": "TABREF4"
},
{
"start": 49,
"end": 56,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 400,
"end": 407,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "P.D.F. SMOOTHING",
"sec_num": null
},
{
"text": "Bayesian learning provides a scheme for model adaptation which can also be used for corrective training. Corrective training maximizes the recognition rate on the training data hoping that that will also improve performance on the test data. One simple way to do corrective training is to use the training sentences which were incorrectly recognized as new data. In order to do so, the state segmentation step of the segmental MAP algorithm was modified to obtain not only the frame/state association for the sentence model states but also for the states corresponding to the model of all the possible sentences (general model). In the reestimation formulas, the values cikt for each state si are evaluated using (21), such that 7it is equal to 1 in the sentence model and to -1 in the general model. While convergence is not guaranteed, in practice it was found that by using large values for rik(_ ~ 200), the number of training sentence errors decreased after each iteration until convergence. If we use the forward-backward MAP algorithm we obtain a corrective training algorithm for CDHMM's very similar to the recently proposed corrective MMIE training algorithm [11 ] .",
"cite_spans": [
{
"start": 1169,
"end": 1174,
"text": "[11 ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CORRECTIVE TRAINING",
"sec_num": null
},
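{
"text": "A schematic sketch (not the paper's code) of how the signed weighting described above changes a sufficient statistic: frames aligned with the correct-sentence model contribute with weight +1 and frames aligned with the general (competing) model with weight -1, and a large prior count keeps the corrected mean close to its current value, so the update stays small and stable.\n\ndef corrective_mean(current_mean, tau, pos_frames, neg_frames):\n    # signed accumulation: +1 for the sentence model, -1 for the general model;\n    # tau plays the role of the prior count (tau_ik in the text, large, e.g. around 200)\n    num = tau * current_mean + sum(pos_frames) - sum(neg_frames)\n    den = tau + len(pos_frames) - len(neg_frames)\n    return num / den\n\ncurrent_mean = 0.0\npos = [0.9, 1.1, 1.0]         # frames claimed by the correct transcription\nneg = [0.4, 0.5]              # frames claimed by the competing (general) model\nprint(round(corrective_mean(current_mean, 200.0, pos, neg), 4))\nprint(round(corrective_mean(current_mean, 20.0, pos, neg), 4))    # weaker prior, bigger move",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CORRECTIVE TRAINING",
"sec_num": null
},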
{
"text": "Corrective training was evaluated on both the TI/NIST SI connected digit and the RM tasks. Only the Ganssian mean vectors and the mixture weights were corrected. For the TI digits a set of 21 phonetic HMMs were ~ained on the 8565 digit strings. Results are given in Table 4 ations of corrective training while the CT-32 results were based on only 3 iterations, where one full iteration of conective training is implemented as one recognition run which produces a set of \"new\" training strings (i.e. errors and/or barely correct strings) followed by 10 iterations of Bayesian adaptation using the data of these strings. String error rates of 1.4% and 1.3% were obtained with 16 and 32 mixture components per state respectively, compared to 2.0% and 1.5% without corrective training. These represent suing error reductions of 27% and 12%. We note that corrective training helps more with smaller models, as the ratio of adaptation data to the number of parameters is larger. The corrective training procedure is also effective for continuous sentence recognition of the RM task. Table 5 gives results for the RM task, using 47 SI-CI models with 32 mixture components. The CT-32 corrective training assumes a fixed beam width. Since the number of string errors was small in the training set, the amount of data for corrective training was rather limited. To increase the amount, a smaller beam width was used to recognize the training data. It was observed that this improved corrective training (ICT-32) procedure not only reduced the error rate in training but also increased the separation between the conect string and the other competing strings. The number of training errors also increased as predicted. The regular and the improved corrective training gave an average word error rate reduction of 15% and 20% respectively on the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1077,
"end": 1084,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "CORRECTIVE TRAINING",
"sec_num": null
},
{
"text": "The theoretical framework for MAP estimation of multivariate Gaussian mixt~e density and HMM with mixture Gaussian state observation densities was presented. Two MAP training algorithms, the forward-baclovard MAP estimation and the segmental MAP estimation, were formulated. Bayesian learning serves as a unified approach for speaker adaptation, speaker group modeling, parameter smoothing and corrective training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
},
{
"text": "Tested on the RM task, encouraging results have been obtained for all four applications. For speaker adaptation, a 37% word error reduction over the SI results was obtained on the FEB91-SD test with 2 minutes of speaker-specific training data. It was also found that speaker adaptation is more effective when based on sex-dependent models than with an SI seed. Compared to speakerdependent training, speaker adaptation achieved a better performance with the same amount of training/adaptation data. Corrective training appfied to CI models reduced word errors by 15-20%. The best SI results on RM tests were obtained with p.d.L smoothing and sex-dependent modeling, an average word accuracy of about 95.8% on four test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
},
{
"text": "Only corrective training and p.d.L smoothing were applied to the TI/NIST connected digit task. It was found that corrective training is effective.for improving CI models, reducing the number of string errors by up to 27%. Corrective training was found to be more effective for models having smaller numbers of parameters. This implies that we can reduce computational requierements by using corrective training on a smaller model and achieve performance comparable to that of a larger model. Using 213 CD models, p.d.L smoothing provided a robust model that gave a 99.1% string accuracy on the test data, the best performance reported on this corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An inequality and associated maximization technique in statistical estimation for probabilisties functions of Markov processes",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
}
],
"year": 1972,
"venue": "Inequalities",
"volume": "3",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. E. Baum, \"An inequality and associated maximization technique in statistical estimation for probabilisties functions of Markov pro- cesses,\" Inequalities, vol. 3, pp. 1-8, 1972.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Optimal StatisticalDecisions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Degroot",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. DeGroot, Optimal StatisticalDecisions, McGraw-Hill, 1970.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum Likelihood from Incomplete Data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Roy. Statist. Soc. Set. B",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Dempster, N. Laird, D. Rubin, \"Maximum Likelihood from Incom- plete Data via the EM algorithm\", ./. Roy. Statist. Soc. Set. B, 39, pp. 1-38, 1977.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pattern Classification and Scene Analysis",
"authors": [
{
"first": "R",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, New York, 1973.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bayesian Learning of Ganssian Mixture Densities for Hidden Markov Models",
"authors": [
{
"first": "J.-L",
"middle": [],
"last": "Ganvain",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1991,
"venue": "Prec. DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.-L. Ganvain and C.-H. Lee, \"Bayesian Learning of Ganssian Mixture Densities for Hidden Markov Models,\" Prec. DARPA Speech and Natural Language Workshop, Pacific Grove, Feb. 1991.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Maximum-Likelihood Estimation for Mixture Multivariate Stochastic Observations of Marker Chains",
"authors": [
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
}
],
"year": 1985,
"venue": "AT&T Technical Journal",
"volume": "64",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. H. Juang, \"Maximum-Likelihood Estimation for Mixture Multi- variate Stochastic Observations of Marker Chains\", AT&T Technical Journal, Vol. 64, No. 6, July-August 1985.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On distributions admitting a sufficient statistic",
"authors": [
{
"first": "B",
"middle": [
"O"
],
"last": "Koopman",
"suffix": ""
}
],
"year": 1936,
"venue": "Trans. Ar,~ Math. See",
"volume": "39",
"issue": "",
"pages": "399--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. O. Koopman, \"On distributions admitting a sufficient statistic\", Trans. Ar,~ Math. See., vol. 39, pp. 399-409, 1936.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improved Acoustic Modeling for Continuous Speech Recognition",
"authors": [
{
"first": "C.-H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Giachin",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "A",
"middle": [
"E"
],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 1990,
"venue": "Prec. DARPA Speech and Natural l_zmguage Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.-H. Lee, E. Giachin, L. R. Rabiner, R. I'ieraccini and A. E. Rosen- berg, \"Improved Acoustic Modeling for Continuous Speech Recogni- tion\", Prec. DARPA Speech and Natural l_zmguage Workshop, Hidden Valley, June 1990.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Study on Speaker Adaptation of the Parameters of Continuous Density Hidden Markov Models",
"authors": [
{
"first": "C.-H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "B.-H",
"middle": [],
"last": "Juang",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Trans. on ASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.-H. Lee, C.-H. Lin and B.-H. Juang, \"A Study on Speaker Adaptation of the Parameters of Continuous Density Hidden Markov Models\", IEEE Trans. on ASSP, April 1991.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Maximum Likelihood Estimation for Multivariate Observations of Markov Sources",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Liporace",
"suffix": ""
}
],
"year": 1982,
"venue": "IEEE Trans. lnforr~ Theory",
"volume": "",
"issue": "5",
"pages": "729--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. R. Liporace, \"Maximum Likelihood Estimation for Multivariate Observations of Markov Sources,\" IEEE Trans. lnforr~ Theory, Vol. IT-28, no. 5, pp. 729-734, September 1982.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Improved MMIE Training Algorithm for Speaker-Independent Small Vocabulary, Continuous Speech Recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Normandin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Morgera",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "537--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Normandin and D. Morgera, \"An Improved MMIE Training Algo- rithm for Speaker-Independent Small Vocabulary, Continuous Speech Recognition\", Prec. ICASSPgl, pp. 537-540, May 1991.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A segmental K-means training procedure for connected word recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Wilpon",
"suffix": ""
},
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
}
],
"year": 1986,
"venue": "AT&T Tech. Y",
"volume": "64",
"issue": "3",
"pages": "21--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.R. Rabiner, J. G. Wilpon, and B. H. Juang, \"A segmental K-means training procedure for connected word recognition,\" AT&T Tech. Y., voL 64, no. 3, pp. 21-40, May 1986.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mixture Densities, Maximum Likelihood and the EM Algorithm",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Redner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 1984,
"venue": "SIAM Review",
"volume": "26",
"issue": "2",
"pages": "195--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.A. Redner and H. E Walker, \"Mixture Densities, Maximum Like- lihood and the EM Algorithm,\" SIAM Review, Vol. 26, No. 2, pp. 195-239, April 1984.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Empirical Bayes Approach to Statistical Decision Problems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Robbins",
"suffix": ""
}
],
"year": 1964,
"venue": "Ann. Math. Statist",
"volume": "35",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Robbins, \"The Empirical Bayes Approach to Statistical Decision Problems,\" Ann. Math. Statist., Vol. 35, pp. 1-20, 1964.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "pectation of the complete-data log-likelihood log h(y[0 ) given the incomplete data x = (~, ...,x,) and the current fit 0, i.e. Q(0, ~) = E[log h(yl0)lx, ~. For a mixture density, the completedata likelihood is the joint likelihood of x and \u00a3 = (\u00a3t, ..., \u00a3n ) the unobserved labels referring to the mixture components, i.e. y = (x, \u00a3). The EM procedure derives from the fact that log f(xl0 ) = Q(O, 0) -H(O, 0) where H(O, 0) = E(log h(ylx , 0)Ix , 0) and H(O, 0) _< H(O, ~), and whenever a value 0 satisfies Q(O, O) > Q(0, 0) then f(x[0) > f(xl0). It foUows that the same iterative procedure can be used to estimate the mode of the posterior density by maximizing the anxilliary function R( O , 0) = Q( O , 0) + log 9(0) at each iteration instead of Q(O, 0) [3]."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ",=, ~=, /(z,lO) Let tP(0, 0) = exp R(O, 0) be the function to be maximized and define the following notations cat &~f(xtl#k) ck = ~=~ ckt, c~txt/ck and S~ = ~=~ c~t(xt -\u00a3k)(Xt --\u00a3k) ~. It follows from the definition of f(x[O) and equation (T(m~ -~)%~(m~ -e~) -\u00bdtr(s~)]"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "-I-Sk + ~-~-~'tZk --\u00a3k)(#k --\u00a3k) T (I0)The considered family of distributions is therefore a conjugate family for the complete-data density.The mode of ~P(., 0), denoted J i , obtained (wk, ink, rk), may be from the modes of the Dirichlet and normal-Wishart densities: w~ = (.~ -1)/~c:t(.~ _ 1), m~ = p~, and r~ = (~ -p)u~-'.Thus, the EM iteration is as follows:"
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "d.f, parameter vector composed of the mixture parameters 0i = {Wik,mik,rik}kfl,...,K for each state i. For a sample x = (2~1, ..., zn), the complete data is y = (x, s,Q where s = (so,..., s,) is the unobserved state sequence, and l = (\u00a3h ..., l,~) are the unobserved mixture component labels, si E [1, N] and li E [1, K]. The joint p.d.f, h(.lX) of x, s, and\u00a3 is defined as [1] rl h(x, s,llA ) = a'. o Hao,_,,,w,,t,f(xtlO,,t,) the initial probabilty of state i, aij is the transition probability from state i to state j, and Oik =(mik, rik) is the parameter vector of the k-th normal p.d.f, associated to state i. It follows that the likelihood of x has the form n /(xl~,)= ~ ~,o~I :.,_,., f(=,lO.,)"
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "3'. =Pr(st =/Ix, ~) can be computed at each EM iteration by using the Forward-Backward algorithm [I]. As for the mixture Gaussian case discussed in the previous section, to estimate the mode of the posterior density the anxilliary function R(A, ~) = Q(A, ~) + log G(A) must be maximized. The form chosen for G(A) in (16) permits independent maximization of each of the following 2N + I parameter sets: {Trl .... ,a'N}, {ail,...,aiN}i=t,...,g and {0i}i=l,...,N. The MAP auxiUiary function R(A, A) can thus be written as the sum R. ( a', ~) + ~i R., ( a, , ~) + ~, Ro, ( O,, ~ ), where each term represents the MAP anxilliary function associated with the indexed parameter set.We can recognize in (20) the same form as seen for Q(0[~) in(4) for the mixture Ganssian case. It follows that if the Ckt are replaced by the cikt defined as ,~,kX(xtl,h,~, ~,k )"
},
"FIGREF5": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "aq = EjN= 1 \"'J _ N -F Eju_t E~:a ~q' For multiple independent observation sequences { xo } q= l,...,Q, t~(q) ~(q)~ with Xq = x't .... , ~., ,, we maximize G(A) lq?:l f(xqlA)' where f(.[A) is defined by (15). The EM auxilliary function is then R(A, X) = logG(A) + ~qQ=t E[l\u00b0gh(Yql~)lxq, X], where h(.lA) is defined by equation"
},
"FIGREF6": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Instead of maximizing G(AIx), the joint posterior density of A and s, G(A, slx ), is maximized. The estimation procedure becomes = argmax max G(A, six ) called the segmental MAP estimate of A. As for the segmental k-means algorithm, it is straightforward to prove that starting with any estimate A (m), alternate maximization over s and A gives a sequence of estimates with non decreasing values of G(A, slx), i.e. G(A (m+'), s(m+')]x) > G(A(m), s(m)lx) with s (m) ----argm~x f(x, slA (m)) (27) A (rn+l) = argmxax f(x, s(m)IA)G(A) (28) The most likely state sequence s (m) is decoded by the Viterbi algorithm. In fact, maximization over A can be replaced by any hill climbing procedure which replaces A ('~) by A ('~+1) subject to the constraint that f(x, s(m)[A(m+D)G(A (re+D) _> f(x, s (m) [A (m))G(A(m)). The EM algorithm is once again a good candidate to perform this maximization using A (m) as an initial estimate. The EM anxilliary function is then R(A, ~) = log G(A) + E[log h(ylA)lx, s ~), X] where h(.IA) is defined by equation"
},
"FIGREF7": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Letting (X,A) = [(xt, Ai), (x2, A2) .... ] be such a multiple sequence of observations, where each pair is independent of the others and the Aq have a common prior distribution G(.[~). Since the Aq are not directly observed, the prior parameter estimates must be obtained from the marginal density f(X[~), f(Xl~) --~ f(XIA)G(A[~) dA (29) where f(XIA ) = I~Iq f(xqlAq) and G(AIg~ ) = I~q G(AqI~)\" However, maximum likelihood estimation based on f(Xl~ ) appears rather difficult. To simplify this problem, we can choose a simpler optimization criterion by maximizing the joint p.d.f, f(X, A I~) over A and ~ instead of the marginal p.d.f, of X given ~. Starting with an initial estimate of ~o, we obtain a hill climbing procedure by alternate maximization over A and ~o, i.e. A (m) = argmAax f(X, AIr <m)) (30) (m+D = argmaxG(A(m)[~)"
},
"TABREF1": {
"text": ".., ~N} and { n, air ..... a i N } if l,...,N . The prior density for all the HMM",
"content": "<table><tr><td>parameters is thus</td><td/><td/></tr><tr><td>G(A) oc</td><td colspan=\"2\">~rT'-Ig(Oi) H a?;J-I</td><td>(16)</td></tr><tr><td>i:1</td><td>j=l</td><td>J</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"text": "TI test results for p.d.t smoothing (213 inter-word CD-32 models)",
"content": "<table><tr><td/><td>FEB89</td><td>OCT89</td><td>JUN90</td><td>FEB91</td></tr><tr><td>MLE</td><td>93.3</td><td>92.5</td><td>92.1</td><td>92.9</td></tr><tr><td>MLE+VC</td><td>95.0</td><td>95.0</td><td>94.8</td><td>95.9</td></tr><tr><td>MAP(SI)</td><td>95.0</td><td>95.5</td><td>95.0</td><td>96.2</td></tr><tr><td>MAP(M/F)</td><td>95.2</td><td>96.2</td><td>95.2</td><td>96.7</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "",
"content": "<table><tr><td>: RM test results for p.d.f, smoothing (2421 inter-word CD-16</td></tr><tr><td>models</td></tr><tr><td>digit and RM databases. A set of 213 CD phone models with 32</td></tr><tr><td>mixture components (213 CD-32) for the TI digits and a set of 2421</td></tr><tr><td>CD phone models with 16 mixture components (2421 CD-16) for</td></tr><tr><td>RM were used for evaluation. Results are given for MLE training,</td></tr><tr><td>MLE with variance clipping (MLE+VC), and MAP estimation with</td></tr><tr><td>p.d.f, smoothing in</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "using 16 and 32 mixture components for the observation p.d.L's, with and without corrective training for both test and training data. The CT-16 results were obtained with 8 iter-",
"content": "<table><tr><td>Training</td><td colspan=\"2\">Training</td><td>Test</td><td/></tr><tr><td>.'onditions</td><td>string</td><td>word</td><td>string</td><td>word</td></tr><tr><td>MLE-16</td><td>1.6 (134)</td><td>0.5</td><td>2.0 (168)</td><td>0.7</td></tr><tr><td>CT-16</td><td>0.2 (18)</td><td>0.1</td><td>1.4 (122)</td><td>0.5</td></tr><tr><td>MLE-32</td><td>0.8 (67)</td><td>0.2</td><td>1.5 (126)</td><td>0.5</td></tr><tr><td>CT-32</td><td>0.3 (29)</td><td>0.1</td><td>1.3 (111)</td><td>0.4</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Corrective training results in siring and word error rates (%) on the TI-digits for 21 CI models with 16 and 32 mixture components per stale. String error counts are given in parenthesis.",
"content": "<table><tr><td>Test Set</td><td>MLE-32</td><td colspan=\"2\">CT-32 ICT-32</td></tr><tr><td>TRAIN</td><td>7.7</td><td>1.8</td><td>3.1</td></tr><tr><td>FEB89</td><td>11.9</td><td>10.2</td><td>8.9</td></tr><tr><td>OCT89</td><td>11.5</td><td>9.8</td><td>8.9</td></tr><tr><td>JUN90</td><td>10.2</td><td>8.8</td><td>8.1</td></tr><tr><td>FEB91</td><td>11.4</td><td>10.3</td><td>10.2</td></tr><tr><td>FEB91-SD</td><td>13.9</td><td>11.3</td><td>11.0</td></tr><tr><td>Overall Test</td><td>11.8</td><td>10.1</td><td>9.4</td></tr><tr><td colspan=\"4\">Table S: Corrective eaining results on the RM task (47 CI models with 32</td></tr><tr><td>mixture components per state)</td><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}