{ "paper_id": "C98-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:29:37.604839Z" }, "title": "Learning a syntagmatic and paradigmatic structure from language data with a bi-multigram model", "authors": [ { "first": "Sabine", "middle": [], "last": "Deligne", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yoshinori", "middle": [], "last": "Sagisaka", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present a stochastic language modeling tool which aims at retrieving variable-length phrases (multigrams), assuming bigram dependencies between them. The phrase retrieval can be intermixed with a phrase clustering procedure, so that the language data are iteratively structured at both a paradigmatic and a syntagmatic level in a fully integrated way. Perplexity results on ATR travel arrangement data with a bi-multigram model (assuming bigram correlations between the phrases) come very close to the trigram scores with a reduced number of entries in the language model. Also the ability of the class version of the model to merge semantically related phrases into a common class is illustrated.", "pdf_parse": { "paper_id": "C98-1047", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present a stochastic language modeling tool which aims at retrieving variable-length phrases (multigrams), assuming bigram dependencies between them. The phrase retrieval can be intermixed with a phrase clustering procedure, so that the language data are iteratively structured at both a paradigmatic and a syntagmatic level in a fully integrated way. Perplexity results on ATR travel arrangement data with a bi-multigram model (assuming bigram correlations between the phrases) come very close to the trigram scores with a reduced number of entries in the language model. Also the ability of the class version of the model to merge semantically related phrases into a common class is illustrated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "1 Introduction There is currently an increasing interest in statistical language models, which in one way or another aim at exploiting word-dependencies spanning over a variable number of words. Though all these models commonly relax the assumption of fixed-length dependency of the conventional ngram model, they cover a wide variety of modeling assumptions and of parameter estimation frameworks. In this paper, we focus on a phrase-based approach, as opposed to a gram-based approach: sentences are structured into phrases and probabilities are assigned to phrases instead of words. Regardless of whether they are gram or phrase-based, models can be either deterministic or stochastic. In the phrase-based framework, non determinism is introduced via an ambiguity on the parse of the sentence into phrases. In practice, it means that even if phrase abe is registered as a phrase, the possibility of parsing the string as, for instance, [ab] [c] still remains. 
By contrast, in a deterministic approach, all co-occurrences of a, b and c would be systematically interpreted as an occurrence of phrase [abc].", "cite_spans": [ { "start": 939, "end": 943, "text": "[ab]", "ref_id": null }, { "start": 1099, "end": 1104, "text": "[abc]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Various criteria have been proposed to derive phrases in a purely statistical way, i.e. without using grammar rules like in Stochastic Context Free Grammars: data likelihood,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "leaving-one-out likelihood (Ries et al., 1996) , mutual information (Suhm and Waibel, 1994) , and entropy (Masataki and Sagisaka, 1996) . The use of the likelihood criterion in a stochastic framework allows principled EM optimization procedures, but it is prone to overlearning. The other criteria tend to reduce the risk of overlearning, but their optimization relies on heuristic procedures (e.g. word grouping via a greedy algorithm (Matsunaga and Sagayama, 1997) ) for which convergence and optimality are not theoretically guaranteed. The work reported in this paper is based on the multigram model, which is a stochastic phrase-based model, the parameters of which are estimated according to a likelihood criterion using an EM procedure. The multigram approach was introduced in (Bimbot et al., 1995) , and in (Deligne and Bimbot, 1995) it was used to derive variable-length phrases under the assumption of independence of the phrases. Various ways of theoretically relaxing this assumption were given in (Deligne et al., 1996) . More recently, experiments with 2-word multigrams embedded in a deterministic variable ngram scheme were reported in (Siu, 1998) . In section 2 of this paper, we further formulate a model with bigram (more generally n̄-gram) dependencies between the phrases, by including a paradigmatic aspect which enables the clustering of variable-length phrases. It results in a stochastic class-phrase model, which can be interpolated with the stochastic phrase model, in a similar way to deterministic approaches. In sections 3 and 4, the phrase and class-phrase models are evaluated in terms of perplexity values and model size.", "cite_spans": [ { "start": 33, "end": 52, "text": "(Ries et al., 1996)", "ref_id": "BIBREF11" }, { "start": 74, "end": 97, "text": "(Suhm and Waibel, 1994)", "ref_id": "BIBREF13" }, { "start": 112, "end": 141, "text": "(Masataki and Sagisaka, 1996)", "ref_id": "BIBREF9" }, { "start": 442, "end": 472, "text": "(Matsunaga and Sagayama, 1997)", "ref_id": "BIBREF10" }, { "start": 800, "end": 826, "text": "(Deligne and Bimbot, 1995)", "ref_id": "BIBREF3" }, { "start": 996, "end": 1018, "text": "(Deligne et al., 1996)", "ref_id": "BIBREF4" }, { "start": 1138, "end": 1149, "text": "(Siu, 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Theoretical formulation of the multigrams", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "In the multigram framework, the assumption is made that sentences result from the concatenation of variable-length phrases, called multigrams. The likelihood of a sentence is computed by summing the likelihood values of all possible segmentations of the sentence into phrases. The likelihood computation for any particular segmentation into phrases depends on the model assumed to describe the dependencies between the phrases. 
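To make this summation concrete, here is a minimal brute-force sketch (in Python; not part of the original paper). It enumerates every segmentation of a short word string into phrases of bounded length and sums the likelihoods of the segmentations, assuming bigram dependencies between phrases as introduced just below. The dictionary `phrase_bigram` of conditional phrase probabilities is a hypothetical input; a real implementation would rely on the forward-backward recursions given in the appendix rather than on explicit enumeration.

```python
def segmentations(words, max_len=3):
    # Yield all ways of cutting the word list into phrases of at most max_len words.
    if not words:
        yield []
        return
    for i in range(1, min(max_len, len(words)) + 1):
        head = tuple(words[:i])
        for rest in segmentations(words[i:], max_len):
            yield [head] + rest

def sentence_likelihood(words, phrase_bigram, max_len=3, start='#'):
    # L(W) = sum over segmentations S of prod_r p(s_(r) | s_(r-1)),
    # with '#' standing for the sentence start.
    total = 0.0
    for seg in segmentations(words, max_len):
        prob, prev = 1.0, start
        for phrase in seg:
            # phrase_bigram maps (previous_phrase, phrase) to a probability;
            # unseen pairs get probability 0.0 in this toy sketch (no smoothing).
            prob *= phrase_bigram.get((prev, phrase), 0.0)
            prev = phrase
        total += prob
    return total

# The string 'a b c d' with phrases of up to 3 words has 7 segmentations.
print(len(list(segmentations('a b c d'.split()))))   # -> 7
```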
We call a bi-multigram model a model in which bigram dependencies are assumed between the phrases. For instance, by limiting the maximal length of a phrase to 3 words, the bi-multigram likelihood of the string \"a b c d\" is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "p([a] | #) p([b] | [a]) p([c] | [b]) p([d] | [c]) + p([a] | #) p([b] | [a]) p([cd] | [b]) + p([a] | #) p([bc] | [a]) p([d] | [bc]) + p([a] | #) p([bcd] | [a]) + p([ab] | #) p([c] | [ab]) p([d] | [c]) + p([ab] | #) p([cd] | [ab]) + p([abc] | #) p([d] | [abc])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "To present the general formalism of the model in this section, we assume n̄-gram correlations between the phrases, and we denote by n the maximal length of a phrase (in the above example, n̄ = 2 and n = 3). Let W denote a string of words, and {S} the set of possible segmentations of W. The likelihood of W is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathcal{L}(W) = \\sum_{S \\in \\{S\\}} \\mathcal{L}(W, S)", "eq_num": "(1)" } ], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "and the likelihood of a segmentation S of W is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "\\mathcal{L}(W, S) = \\prod_{r} p(s_{(r)} \\mid s_{(r-\\bar{n}+1)} \\cdots s_{(r-1)}) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "with s_{(r)} denoting the phrase of rank r in the segmentation S. The model is thus fully defined by the set of n̄-gram probabilities on the set {s_i}_i of all the phrases which can be formed by combining 1, 2, ... up to n words of the vocabulary. Maximum likelihood (ML) estimates of these probabilities can be obtained by formulating the estimation problem as a ML estimation from incomplete data (Dempster et al., 1977) , where the unknown data is the underlying segmentation S. Let Q(k, k+1) be the following auxiliary function computed with the likelihoods of iterations k and k+1:", "cite_spans": [ { "start": 395, "end": 418, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "Q(k, k+1) = \\sum_{S \\in \\{S\\}} \\mathcal{L}^{(k)}(S \\mid W) \\log \\mathcal{L}^{(k+1)}(W, S) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "It has been shown in (Dempster et al., 1977) that if Q(k, k+1) > Q(k, k), then \\mathcal{L}^{(k+1)}(W) > \\mathcal{L}^{(k)}(W).", "cite_spans": [ { "start": 21, "end": 44, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "Therefore the reestimation equation of p(s_{i_{\\bar{n}}} \\mid s_{i_1} \\ldots s_{i_{\\bar{n}-1}}) at iteration (k+1) can be derived by maximizing Q(k, k+1) over the set of parameters of iteration (k+1), under the set of constraints \\sum_{s_{i_{\\bar{n}}}} p(s_{i_{\\bar{n}}} \\mid s_{i_1} \\ldots s_{i_{\\bar{n}-1}}) = 1, hence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase distribution", "sec_num": "2.1" }, { "text": "p^{(k+1)}(s_{i_{\\bar{n}}} \\mid s_{i_1} \\ldots s_{i_{\\bar{n}-1}}) = \\frac{\\sum_{S \\in \\{S\\}} c(s_{i_1} \\ldots s_{i_{\\bar{n}-1}} s_{i_{\\bar{n}}}, S) \\times \\mathcal{L}^{(k)}(S \\mid W)}{\\sum_{S \\in \\{S\\}} c(s_{i_1} \\ldots s_{i_{\\bar{n}-1}}, S) \\times \\mathcal{L}^{(k)}(S \\mid W)} (4) 
", "cite_spans": [ { "start": 9, "end": 17, "text": "(si, ...", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ".si.~_,si-~, S) x \u00a3(k)(S { W) (4)", "sec_num": null }, { "text": "where c (si, ... si~, co) is the number ofoccurences of the combination of phrases si~ \u2022 \u2022 \u2022 siw in the segmentation 0% Reestimation equation (4) can be implemented by means of a forward-backward algorithm, such as the one described for bi-multigrams (~ = 2) in the appendix of this paper. In a decision-oriented scheme, the reestimation equation reduces to:", "cite_spans": [ { "start": 8, "end": 25, "text": "(si, ... si~, co)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": "c(si, ... si-~_,si-~, S *(~)) p(k+~)(si-~ I si, ...~i~_,) = c(si~ ...~i-~_,, S *(k)) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": "where S *(k), the segmentation maximizing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": "\u00a3(k)(S [ W)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": ", is retrieved with a Viterbi algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": "Since each iteration improves the model in tile sense of increasing the likelihood \u00a3(k)(W), it eventually converges to a critical point (possibly a local maximum).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "si~_,, S) \u00d7 \u00a3(k)(S I W)", "sec_num": null }, { "text": "Recently, class-phrase based models have gained some attention (Ries et al., 1996) , but usually it assumes a previous clustering of the words. Typically, each word is first assigned a word-class label \"< Ck >\", then variable-length phrases [C~1Ck~...Ck~] of word-class labels are retrieved, each of which leads to define a phrase-class label which can be denoted as \"< [Ck~Ck~...Ck,] >\". But in this approach only phrases of the same length can be assigned the same phrase-class label. For instance, the phrases \"thank you for\" and \"thank you very much for\" cannot be assigned the same class label. We propose to address this limitation by directly clustering phrases instead of words. For this purpose, we assume bigram correlations between the phrases (~ = 2), and we modify the learning procedure of section 2.1, so that each iteration consists of 2 steps:", "cite_spans": [ { "start": 63, "end": 82, "text": "(Ries et al., 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Varlable-length phrase clustering", "sec_num": "2.2" }, { "text": "\u2022 Step 1. 
Phrase clustering: { p^{(k)}(s_j | s_i) } \u2192 { p^{(k)}(C_{q(s_j)} | C_{q(s_i)}), p^{(k)}(s_j | C_{q(s_j)}) }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "\u2022 Step 2. Bi-multigram reestimation: { p^{(k)}(C_{q(s_j)} | C_{q(s_i)}), p^{(k)}(s_j | C_{q(s_j)}) } \u2192 { p^{(k+1)}(s_j | s_i) }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "Step 1 takes a phrase distribution as an input, assigns each phrase s_i to a class C_{q(s_i)}, and outputs the corresponding class distribution. In our experiments, the class assignment is performed by maximizing the mutual information between adjacent phrases, following the line described in (Brown et al., 1992) , with only the modification that the candidates for clustering are phrases instead of words. The clustering process is initialized by assigning each phrase to its own class. The loss in average mutual information when merging 2 classes is computed for every pair of classes, and the 2 classes for which the loss is minimal are merged. After each merge, the loss values are updated and the process is repeated till the required number of classes is obtained.", "cite_spans": [ { "start": 292, "end": 312, "text": "(Brown et al., 1992)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "Step 2 consists in reestimating a phrase distribution using the bi-multigram reestimation equation (4) or (5), with the only difference that the likelihood of a parse, instead of being computed as in Eq. (2), is now computed with the class estimates, i.e. as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "\\mathcal{L}(W, S) = \\prod_{r} p(C_{q(s_{(r)})} \\mid C_{q(s_{(r-1)})}) \\, p(s_{(r)} \\mid C_{q(s_{(r)})}) (6) This is equivalent to reestimating p^{(k+1)}(s_j | s_i) from p^{(k)}(C_{q(s_j)} | C_{q(s_i)}) \u00d7 p^{(k)}(s_j | C_{q(s_j)}), instead of from p^{(k)}(s_j | s_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "as was the case in section 2.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "Overall, step 1 ensures that the class assignment based on the mutual information criterion is optimal with respect to the current estimates of the phrase distribution, and step 2 ensures that the phrase distribution optimizes the likelihood computed according to (6) with the current estimates of the class distribution. The training data are thus iteratively structured in a fully integrated way, at both a paradigmatic level (step 1) and a syntagmatic level (step 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable-length phrase clustering", "sec_num": "2.2" }, { "text": "With a class model, the probabilities of 2 phrases belonging to the same class are distinguished only according to their unigram probability. 
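As an illustration of step 1, the greedy merging procedure described above can be sketched as follows (a simplified Python sketch, not the authors' implementation: the average mutual information is recomputed from scratch for every candidate merge instead of incrementally updating the loss values, and the input `pair_counts`, a dictionary mapping pairs of adjacent phrases to their counts, is hypothetical).

```python
import math
from collections import defaultdict

def average_mutual_information(pair_counts, cluster_of):
    # I = sum over class pairs (C, C_prime) of p(C, C_prime) * log( p(C, C_prime) / (p(C) p(C_prime)) ),
    # where the joint distribution is read off the adjacent-phrase counts.
    total = float(sum(pair_counts.values()))
    joint = defaultdict(float)
    for (left, right), count in pair_counts.items():
        joint[(cluster_of[left], cluster_of[right])] += count / total
    left_marg, right_marg = defaultdict(float), defaultdict(float)
    for (c1, c2), p in joint.items():
        left_marg[c1] += p
        right_marg[c2] += p
    return sum(p * math.log(p / (left_marg[c1] * right_marg[c2]))
               for (c1, c2), p in joint.items())

def greedy_phrase_clustering(pair_counts, n_classes):
    # Start with one class per phrase, then repeatedly merge the two classes
    # whose merge loses the least average mutual information, until n_classes remain.
    phrases = {p for pair in pair_counts for p in pair}
    cluster_of = {p: i for i, p in enumerate(sorted(phrases))}
    while len(set(cluster_of.values())) > n_classes:
        classes = sorted(set(cluster_of.values()))
        best = None
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                trial = {p: (a if c == b else c) for p, c in cluster_of.items()}
                ami = average_mutual_information(pair_counts, trial)
                if best is None or ami > best[0]:
                    best = (ami, trial)
        cluster_of = best[1]
    return cluster_of
```

In the experiments of section 4, this clustering is run until 300 phrase-classes remain.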
As it is unlikely that this loss of precision will be compensated by the improved robustness of the estimates of the class distribution, class based models can be expected to deteriorate the likelihood of not only the training data but also the test data, with respect to non-class based models. However, the performance of non-class models can be enhanced by interpolating their estimates with the class estimates. We first recall the way linear interpolation is performed with conventional word ngram models, and then we extend it to the case of our stochastic phrase-based approach. Usually, linear interpolation weights are computed so as to maximize the likelihood of cross evaluation data (Jelinek and Mercer, 1980) . Denoting by λ and (1-λ) the interpolation weights, and by p_+ the interpolated estimate, we obtain for a word bigram model:", "cite_spans": [ { "start": 816, "end": 842, "text": "(Jelinek and Mercer, 1980)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "p_+(w_j \\mid w_i) = \\lambda \\, p(w_j \\mid w_i) + (1-\\lambda) \\, p(C_{q(w_j)} \\mid C_{q(w_i)}) \\, p(w_j \\mid C_{q(w_j)}) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "with λ having been iteratively estimated on a cross evaluation corpus W_cross, as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "\\lambda^{(k+1)} = \\frac{1}{T_{cross}} \\sum_{w_i w_j} c(w_i w_j) \\, \\frac{\\lambda^{(k)} p(w_j \\mid w_i)}{p_+^{(k)}(w_j \\mid w_i)} (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "where T_cross is the number of words in W_cross, and c(w_i w_j) the number of co-occurrences of the words w_i and w_j in W_cross.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "In the case of a stochastic phrase based model, where the segmentation into phrases is not known a priori, the above computation of the interpolation weights still applies; however, it has to be embedded in dynamic programming to solve the ambiguity on the segmentation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\lambda^{(k+1)} = \\frac{1}{c(S^{*(k)})} \\sum_{s_i s_j} c(s_i s_j, S^{*(k)}) \\, \\frac{\\lambda^{(k)} p(s_j \\mid s_i)}{p_+^{(k)}(s_j \\mid s_i)}", "eq_num": "(9)" } ], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "where S^{*(k)}, the most likely segmentation of W_cross given the current estimates p_+^{(k)}(s_j | s_i), can be retrieved with a Viterbi algorithm, and where c(S^{*(k)})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "is the number of sequences in the segmentation S^{*(k)}. 
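As a minimal sketch of this reestimation of λ (Eq. 9, with the segmentation kept fixed across iterations for simplicity), assume the best segmentation S* of W_cross has already been retrieved with the Viterbi algorithm and is given as a list of phrases; the callables `phrase_prob` and `class_prob`, returning p(s_j | s_i) and its class-based counterpart p(C_q(s_j) | C_q(s_i)) p(s_j | C_q(s_j)), are hypothetical.

```python
def reestimate_lambda(best_segmentation, phrase_prob, class_prob,
                      lam=0.5, n_iter=10, start='#'):
    # EM updates of the interpolation weight on a cross-validation corpus:
    # lambda becomes the average responsibility of the phrase-based estimate
    # over the phrase bigrams of the single best segmentation S*.
    pairs = list(zip([start] + best_segmentation[:-1], best_segmentation))
    for _ in range(n_iter):
        responsibility = 0.0
        for prev, cur in pairs:
            p_phrase = phrase_prob(cur, prev)   # p(s_j | s_i)
            p_class = class_prob(cur, prev)     # p(C_q(s_j) | C_q(s_i)) p(s_j | C_q(s_j))
            p_interp = lam * p_phrase + (1.0 - lam) * p_class
            if p_interp > 0.0:
                responsibility += lam * p_phrase / p_interp
        lam = responsibility / len(pairs)
    return lam
```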
A more accurate, but computationally more involved solution would be to compute A(k+z) as the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "expectation of t S,j Is) p~'(,s I ~.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "over the set of segmentations {S} on Wc,.o,,, using for this purpose a forward-backward algorithm. However in the experiments reported in section 4, we use Eq (9) only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "3 Experiments with phrase based models 3.1 Protocol and database Evaluation protocol A motivation to learn bigram dependencies between variable length phrases is to improve the predictive capability of conventional word bigram models, while keeping the number of parameters in the model lower than in the word trigram case. The predictive capability is usually evaluated with the perplexity measure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation of stochastic class-phrase and phrase models", "sec_num": "2.3" }, { "text": "where T is the number of words in W. The lower PP is, the more accurate the prediction of the model is. In the case of a stochastic model, there are actually 2 perplexity values PP and PP* computed respectively from ~s f-.(W,S) and \u00a3(W,S*). The difference PP* -PP is always positive or zero, and measures the average degree of ambiguity on a parse S of W, or equivalently the loss in terms of prediction accuracy, when the sentence likelihood is approximated with the likelihood of the best parse, as is done in a speech recognizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "In section 3.2, we first evaluate the loss (PP* -PP) using the forward-backward estimation procedure, and then we study the influence of the estimation procedure itself, i.e. Eq. (4) or (5), in terms of perplexity and model size (number of distinct 2-uplets of phrases in the model). Finally, we compare these results with the ones obtained with conventional ngram models (the model size is thus the number of distinct n-uplets of words observed), using for this purpose the CMU-Cambridge toolkit (Clarkson and Rosenfeld, 1997) .", "cite_spans": [ { "start": 497, "end": 527, "text": "(Clarkson and Rosenfeld, 1997)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "Training protocol Experiments are reported for phrases having at most n = 1, 2, 3 or 4 words (for n =1, bi-multigrams correspond to conventional higrams). The bi-multigram probabilities are initialized using the relative frequencies of all the 2-nplets of phrases observed in the training corpus, and they are reestimated with 6 iterations. The dictionaries of phrases are pruned by discarding all phrases occuring less than 20 times at initialization, and less than 10 times after each iteration 2, except for the 1-word phrases which are kept with a number of occurrences set to 1. 
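As a minimal sketch of this initialization-time collection and pruning of the phrase inventory (Python; `sentences` is assumed to be a list of tokenized sentences, and the threshold value mirrors the one quoted above):

```python
from collections import Counter

def initial_phrase_inventory(sentences, max_len=4, min_count=20):
    # Count every phrase of 1 up to max_len consecutive words, then discard the
    # phrases occurring fewer than min_count times; 1-word phrases are always
    # kept, so that every word of the vocabulary remains reachable.
    counts = Counter()
    for words in sentences:
        for i in range(len(words)):
            for length in range(1, max_len + 1):
                if i + length <= len(words):
                    counts[tuple(words[i:i + length])] += 1
    return {phrase: count for phrase, count in counts.items()
            if len(phrase) == 1 or count >= min_count}
```

The bi-multigram probabilities are then initialized from the relative frequencies of the pairs of adjacent phrases drawn from this inventory, as stated above, and the same kind of pruning (with a threshold of 10) is reapplied after each reestimation iteration.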
Besides, bi-multigram and n-gram probabilities are smoothed with the backoff smoothing technique (Katz, 1987) using Witten-Bell discounting (Witten and Bell, 1991) a.", "cite_spans": [ { "start": 681, "end": 693, "text": "(Katz, 1987)", "ref_id": "BIBREF7" }, { "start": 724, "end": 747, "text": "(Witten and Bell, 1991)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "Database Experiments are run on ATt~ travel arrangement data (see Tab. 1). This database consists of semi-spontaneous dialogues between a hotel clerk and a customer asking for travel/accomodation informations. All hesitation words and false starts were mapped to a single marker \"*uh*\". Ambiguity on a parse (Table 2) The difference (PP*-PP) usually remains within about 1 point of perplexity, meaning that the average ambiguity on a parse is low, so that relying on the single best parse should not decrease the accuracy of the prediction very much.", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 317, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "Influence of the estimation procedure (Table 3) As far as perplexity values arc concerned, 2Using different pruning thresholds values did not dramatically affect the results on our data, provided that the threshold at initialization is in the range 2%40, and that the threshold of the iterations is less than 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "aThe Witten-Bell discounting was chosen, because it yielded the best perplexity scores with conventional n-gTams on our test data. the estimation scheme seems to have very little influence, with only a slight advantage in using the forward-backward training. On tile other hand, the size of the model at the end of the training is about 30% less with the forward-backward training: approximately 40 000 versus 60 000, for a same test perplexity value. The bi-multigram results tend to indicate that the pruning heuristic used to discard phrases does not allow us to fully avoid overtraining, since perplexities with n =3, 4 (i.e. dependencies possibly spanning over 6 or 8 words) are higher than with n =2 (dependencies limited to 4 words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "Test perplexity values PP* Comparison with n-grams (Table 4) The lowest bi-multigram perplexity (43.9) is still higher than the trigram score, but it is much closer to the trigram value (40.4) than to the bigram one (56.0) 4 The number of entries in the bi-multigram model is much less than in the trigram model (45000 versus 75000), which illustrates the ability of the model to select most relevant phrases.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 60, "text": "(Table 4)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "Test perplexity values PP n (and n) 1 2 3 4 n-gram 314.5 56.0 40.4 39.8 bimuftigrams 56.0 43.9 44.2 45.0 Model size n (and n) ' 1 2 3 4 n-gram 3526 \"32505 ;g5511 112148 bimultigrams 3'2505 42347 >13672 4~86 Training protocol All non-class models are the same as in section 3. 
The class-phrase models are trained with 5 iterations of the algorithm described in section 2.2: each iteration consists in clustering the phrases into 300 phrase-classes (step 1), and in reestimating the phrase distribution (step 2) with Eq. 4 Database The training and test data used to train and evaluate the models are the same as the ones described in Table 1 . We use an additional set of 7350 sentences and 55000 word tokens to estimate the interpolation weights of the interpolated models.", "cite_spans": [], "ref_spans": [ { "start": 633, "end": 640, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "PP = e-XrJOgL(W)", "sec_num": null }, { "text": "The perplexity scores obtained with the non-class, class and interpolated versions of a bi-multigram model (limiting to 2 words the size of a phrase), and of the bigram and trigram models are in Table 5 . Linear interpolation with the class based models allows us to improve each model's performance by about 2 points of perplexity: the Viterbi perplexity score of the interpolated bi-multigrams (43.5) remains intermediate between the bigram (54.7) and trigram (38.6) scores. However in the trigram case, the enhancement of the performance is obtained at the expense of a great increase of the number of entries in the interpolated model (139256 entries).", "cite_spans": [], "ref_spans": [ { "start": 195, "end": 202, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "In the bi-multigram case, the augmentation of the model size is much less (63972 entries). As a re-suit, the interpolated bi-multigram model still has fewer entries than the word based trigram model (75511 entries), while its Viterbi perplexity score comes even closer to the word trigram score (43.5 versus 40.4). Further experiments studying the influence of the threshold values and of the number of classes still need to be performed to optimize the performances for all models. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Clustering variable-length phrases may provide a natural way of dealing with some of the language disfluencies which characterize spontaneous utterances, like the insertion of hesitation words for instance. To illustrate this point, examples of phrases which were merged into a common cluster during the training of a model allowing phrases of up to n = 5 words are listed in Table 6 (the phrases containing the hesitation marker \"*uh*\" are in the upper part of the table). It is often the case that phrases differing mainly because of a speaker hesitation are merged together. Table 6 also illustrates another motivation for phrase retrieval and clustering, apart from word prediction, which is to address issues related to topic identification, dialogue modeling and language understanding (Kawahara et al., 1997) . Indeed, though the clustered phrases in our experiments were derived fully blindly, i.e. with no semantic/pragmatic information, intra-class phrases often display a strong semantic correlation. 
To make this approach effectively usable for speech understanding, constraints derived from semantic or pragmatic knowledge (like speech act tag of the utterance for instance) could be placed on the phrase clustering process.", "cite_spans": [ { "start": 792, "end": 815, "text": "(Kawahara et al., 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 376, "end": 383, "text": "Table 6", "ref_id": null }, { "start": 578, "end": 585, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Examples", "sec_num": "4.3" }, { "text": "An algorithm to derive variable-length phrases assuming bigram dependencies between the phrases has been proposed for a language modeling task. It has been shown how a paradigmatic element could p(k+')(sj [si) = ~T=, a(t, li) p(k)(sj Isi) 13(t+lj, lj) 6i(t-li+ l) 6j(t+ l) /3(t, -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "li and/3\" refer respectively to the lengths of the sequences si and sj, and where the Kronecker function 6k (t) equals 1 if the word sequence starting at rank t is sk, and equals 0 if not. In the case where the likelihood of a parse is computed with the class assumption, i.e. according to (6), the term p(k) (sj [si) in the reestimation equation shown in Table 7 should be replaced by its class equivalent, i.e. by ", "cite_spans": [ { "start": 108, "end": 111, "text": "(t)", "ref_id": null }, { "start": 309, "end": 317, "text": "(sj [si)", "ref_id": null } ], "ref_spans": [ { "start": 356, "end": 363, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "4Besides, tile trigram score depends on the discounted scheme: with a linear discounting, the trigram perplexity on our test data was 48.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "{ yes_that_will ; *uh*_that_would } { yes_that_will_be ; *uh*_yes_that's } { *uh*_by_the ; and_by_the } { yes_*uh*_i ; i_see_i } { okay_i_understand ; *uh*_yes_please } { could_you_recommend ; *uh*_is_there } { *uh*_could_you_tell ; and_could_you_tell } { so_that_will ; yes_that_will ; yes_that_would ; uh*_that_would } { if_possible_i'd_like; we_would_like ; *uh*_i_want } { that_sounds_good ; *uh*_i_understand } { *uh*_i_really ; *uh*_i_don't } { *uh*_i'm_staying ; and_i'm_staying } { all_right_we ; *uh*_yes_i } { good_morning ; good_afternoon ;hello } { sorry_to_keep_you_waiting ; hello_front_desk ; thank_you_very_much ;thank_you_for_calling ; you're_very_welcome ;yes_that's_correct ; yes_that's_right } { non_smoking ; western~style ; first_class ; japanese_style } { familiar_with ; in_charge_of } { could_you_tellJne ; do_you_know } { how_long ; how.much ; what_time ; uh*_what_time ; *uh*_how_nmch ; and_how_much ; and_what_time } { explain ; tell_us ; tell_me ; tell_me_about ; tell_me_what ; tell_me_how ; tell_me_how_much ; tell_me_the ; givemm ; give_me_the ; give_me_your ; please_tell_me } { are_there ; are_there_any ; if_there_are ; if_there_is ;if_you_have ; if_there's ; do_you_have ; do_you_have_a ; do_yon_have_any ; we_have_two ; is_there ; is_there_any ; is_there_a ;is_there_anything ; *uh*_is_there ; uh*_do_you_have } { tomorrow_morning ; nine_o'clock ; eight_o'clock ; seven_o'clock ; three_p.m. ; august_tenth ; in_the_morning ; six_p.m. 
; six_o'clock } { we'd_like ;i'd_like ; i_wouldAike } { that'll_be_fine ; that's_fine ; i_understand ) { kazuko_suzuki ; mary ; mary_phillips ; thomas_nelson ; suzuki ; amy_harris ; john ;john_phillips } { fine ; no_problem ; anything_else } { return_the_car ; pick_it_up } { todaiji ; kofukuji ; brooklyn ; enryakuji ; hiroshima ; las_vegas ; salt_lake_city ; chicago ; kinkakuji ; manhattan ; miami ; kyoto.station ; this_hotel ; our_hotel ; your_hotel ; the_airport ; the_hotel ) Table 6 : Example of phrases assigned to a common cluster, with a model allowing up to 5-word phrases (clusters are delimited with curly brackets) be integrated within this framework, allowing to assign common labels to phrases having a different length. Experiments on a task oriented corpus have shown that structuring sentences into phrases results in large reductions in the bigram perplexity value, while still keeping the number of entries in the language model much lower than in a trigram model, especially when these models are interpolated with class based models. These results might be fitrther improved by finding a more efficient pruning strategy, allowing the learning of even longer dependencies without over-training, and by further experimenting with the class version of the pbrase-based model. Additionally, the semantic relevance of the clusters of phrases motivates the use of this approach in the areas of dialogue modeling and language nnderstanding. In that case, semantic/pragmatic informations could be used to constrain the clustering of the phrases.", "cite_spans": [], "ref_spans": [ { "start": 1954, "end": 1961, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "Forward-backward algorithm for the estimation of the bi-multigram parameters Equation (4) can be implemented at a complexity of O(n2T), with n the maximal length of a sequence and 7' the number of words in the corpus, using a forward-backward algorithm. Basically, it consists in re-arranging the order of the summations of the numerator and denominator of Eq. 4: the likelihood values of all the segmentations where sequence sj occurs after sequence si, with sequence si ending at the word at rank (t), are summed up first; and then the summation is completed by summing over t. The cumulated likelihood of all the segmentations where sj follows sl, and si ends at (t), can be directly computed as a product of a forward and of a backward variable. The forward variable represents the likelihood of the first t words, where the last I i words are constrained to form a sequence: Assuming that the likelihood of a parse is computed according to Eq. (2), then the reestimation equation (4) can be rewritten as shown in Tab. 7. The variables a and fl can be calculated according to the following recursion equations (assuming a start and an end symbol at rank t := 0 and t = T+I):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Variable-length sequence modeling: Multigrams", "authors": [ { "first": "F", "middle": [], "last": "Bimbot", "suffix": "" }, { "first": "R", "middle": [], "last": "Pieraccini", "suffix": "" }, { "first": "E", "middle": [], "last": "Levin", "suffix": "" }, { "first": "B", "middle": [], "last": "Atat", "suffix": "" } ], "year": 1995, "venue": "IEEE Signal Processing Letters", "volume": "2", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Bimbot, R. 
Pieraccini, E. Levin, and B. Atat. 1995. Variable-length sequence modeling: Multi- grams. IEEE Signal Processing Letters, 2(6), June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Class-based n-grain models of natural language", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Souza", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Lai", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.F. Brown, V.J. Della Pietra, P.V. de Souza, J.C. Lai, and R.L. Mercer. 1992. Class-based n-grain models of natural language. Computational Lin- guistics, 18(4):467-479.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical language modeling using the cmu-cambridge toolkit", "authors": [ { "first": "P", "middle": [], "last": "Clarkson", "suffix": "" }, { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1997, "venue": "Proceedings of EUROSPEECH 97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Clarkson and R. Rosenfeld. 1997. Statistical lan- guage modeling using the cmu-cambridge toolkit. Proceedings of EUROSPEECH 97.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language modeling by variable length sequences: theoretical formulation and evaluation of multigrams", "authors": [ { "first": "S", "middle": [], "last": "Deligne", "suffix": "" }, { "first": "F", "middle": [], "last": "Bimbot", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ICASSP 95", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Deligne and F. Bimbot. 1995. Language modeling by variable length sequences: theoretical formula- tion and evaluation of multigrams. Proceedings of ICASSP 95.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introducing statistical dependencies and structural constraints in variable-length sequence models", "authors": [ { "first": "S", "middle": [], "last": "Deligne", "suffix": "" }, { "first": "F", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "F", "middle": [], "last": "Bimbot", "suffix": "" } ], "year": 1996, "venue": "Grammatical Inference : Learning Syntax from Sentences", "volume": "1147", "issue": "", "pages": "156--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Deligne, F. Yvon, and F. Bimbot. 1996. In- troducing statistical dependencies and structural constraints in variable-length sequence models. In Grammatical Inference : Learning Syntax from Sentences, Lecture Notes in Artificial Intelligence 1147, pages 156-167. Springer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Maximum-likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistics Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum-likelihood from incomplete data via the EM algorithm. 
Journal of the Royal Statistics So- ciety, 39(1):1-38.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Interpolated estimation of markov source parameters from sparse data", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1980, "venue": "Proceedings of the workshop on Pattern Recognition in Practice", "volume": "", "issue": "", "pages": "381--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and R.L. Mercer. 1980. Interpolated esti- mation of markov source parameters from sparse data. Proceedings of the workshop on Pattern Recognition in Practice, pages 381-397.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Katz", "suffix": "" } ], "year": 1987, "venue": "IEEE Trans. on Acoustic, Speech, and Signal Processing", "volume": "35", "issue": "3", "pages": "400--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Trans. on Acous- tic, Speech, and Signal Processing, 35(3):400-401, March.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Phrase language models for detection and verification-based speech understanding", "authors": [ { "first": "T", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "S", "middle": [], "last": "Doshita", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 1997 IEEE workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kawahara, S. Doshita, and C. H. Lee. 1997. Phrase language models for detection and verification-based speech understanding. Proceed- ings of the 1997 IEEE workshop on Automatic Speech Recognition and Understanding, pages 49- 56, December.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Variableorder n-gram generation by word-class splitting and consecutive word grouping", "authors": [ { "first": "H", "middle": [], "last": "Masataki", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sagisaka", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICASSP 96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Masataki and Y. Sagisaka. 1996. Variable- order n-gram generation by word-class splitting and consecutive word grouping. Proceedings of ICASSP 96.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Variablelength language modeling integrating global constraints", "authors": [ { "first": "S", "middle": [], "last": "Matsunaga", "suffix": "" }, { "first": "S", "middle": [], "last": "Sagayama", "suffix": "" } ], "year": 1997, "venue": "Proceedings of EUROSPEECH 97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Matsunaga and S. Sagayama. 1997. Variable- length language modeling integrating global con- straints. 
Proceedings of EUROSPEECH 97.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Class phrase models for language modeling", "authors": [ { "first": "K", "middle": [], "last": "Ries", "suffix": "" }, { "first": "F", "middle": [ "D" ], "last": "Buo", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICSLP 96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Ries, F. D. Buo, and A. Waibel. 1996. Class phrase models for language modeling. Proceedings of ICSLP 96.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning local lezical structure in spontaneous speech language modeling", "authors": [ { "first": "M", "middle": [], "last": "Siu", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Siu. 1998. Learning local lezical structure in spontaneous speech language modeling. Ph.D. the- sis, Boston University.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards better language models for spontaneous speech", "authors": [ { "first": "B", "middle": [], "last": "Suhm", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 1994, "venue": "Proceedings of ICSLP 94", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Suhm and A. Waibel. 1994. Towards better lan- guage models for spontaneous speech. Proceedings of ICSLP 94.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The zero-frequency problem: estimating the probabilities of novel events in adaptative text compression", "authors": [ { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "T", "middle": [ "C" ], "last": "Bell", "suffix": "" } ], "year": 1991, "venue": "IEEE Trans. on Information Theory", "volume": "37", "issue": "4", "pages": "1085--1094", "other_ids": {}, "num": null, "urls": [], "raw_text": "I.H. Witten and T.C. Bell. 1991. The zero-frequency problem: estimating the probabilities of novel events in adaptative text compression. IEEE Trans. on Information Theory, 37(4):1085-1094, July.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "parameters of iteration (k + 1), under the set of constraints ~,~_ p(siz [ sq ... si~_,) = 1, hence: P(~+I)(si7 I Si, ... Si-~_t) = ~s~ts} c(sil..", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "B. [___~.0 [ 45.1 I 45.--7T-~ Viterbi [----5-~.0 [ 45.7 I 45.~9 ~ Model size >-n. a o5--Q7.423.471 436-7g-~ 6725g-", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": ". The bigrams and trigrams of classes are estimated based on 300 word-classes derived with the same clustering algorithm as the one used to cluster the phrases. The estimates of all the class ditributions are smoothed with the backoff technique like in section 3. Linear interpolation weights between the class and non-class models are estimated based on Eq. (8) in the case of the bigram or trigram models, and on Eq.(9) in the case of the bi-multigram model.", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "T + 1, 1) = 1,/3(T + 1,2) ..... /3(T + 1, n) = O.", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "(k)(CqOj ) ]Cq(~,)) P(k)(sj I Cq(~D). 
In the recursion of a, the term p([W~:)_t,+U]l[W~'_7.~:)t+l])'\" \" --, -equation is replaced by the corresponding class bigram probability multiplied by the class conditional probof the sequence [W(tt_)t,+l)].,_ A similar ability change affects the recursion equation of /3, with (t+t) p([Wit+l)]][W~:),j+D]) being replaced by the corresponding class bigram probability multiplied by the class conditional probability of the sequence re(t+01 -(t+l)J\"", "num": null, "type_str": "figure" }, "TABREF1": { "type_str": "table", "html": null, "num": null, "text": "", "content": "
: ATR Travel Arrangement Data
3.2 Results
" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "text": "Influence of the estimation procedure: forward-backward (F.-B.) or Viterbi.", "content": "" }, "TABREF3": { "type_str": "table", "html": null, "num": null, "text": "Comparison with n-grams: Test perplexity values and model size.", "content": "
4 Experiments with class-phrase based models
4.1 Protocol and database
Evaluation protocol In section 4.2, we compare class versions and interpolated versions of the bigram, trigram and bi-multigram models, in terms of perplexity values and of model size. For bigrams (resp. trigrams) of classes, the size of the model is the number of distinct 2-uplets (resp. 3-uplets) of word-classes observed, plus the size of the vocabulary. For the class version of the bi-multigrams, the size of the model is the number of distinct 2-uplets of phrase-classes, plus the number of distinct phrases maintained. In section 4.3, we show samples from classes of up to 5-word phrases, to illustrate the potential benefit of clustering relatively long and variable-length phrases for issues related to language understanding.
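For clarity, the two quantities used in this comparison can be computed as in the following small sketch (hypothetical helper functions; the perplexity definition is the one of section 3.1, PP = exp(-(1/T) log L(W))):

```python
import math

def perplexity(total_log_likelihood, n_words):
    # PP = exp(-(1/T) log L(W)); using the likelihood of the single best
    # parse, L(W, S*), instead of L(W) gives the Viterbi perplexity PP*.
    return math.exp(-total_log_likelihood / n_words)

def class_ngram_model_size(observed_class_ngrams, vocabulary):
    # Distinct n-uplets of word-classes observed, plus the vocabulary size.
    return len(set(observed_class_ngrams)) + len(set(vocabulary))

def class_bimultigram_model_size(observed_class_pairs, phrase_inventory):
    # Distinct 2-uplets of phrase-classes, plus the number of distinct phrases kept.
    return len(set(observed_class_pairs)) + len(set(phrase_inventory))
```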
" }, "TABREF5": { "type_str": "table", "html": null, "num": null, "text": "", "content": "
: Comparison of class-phrase bi-multigrams
and of class-word bigrams and trigrams: Test perplexity values and model size.
" }, "TABREF6": { "type_str": "table", "html": null, "num": null, "text": "Forward-backward reestimation for 1 " } } } }