{ "paper_id": "P02-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:31:22.827128Z" }, "title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen -University of Technology", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen -University of Technology", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language sentence, the target language sentence and possible hidden variables. This approach allows a baseline machine translation system to be extended easily by adding new feature functions. We show that a baseline statistical machine translation system is significantly improved using this approach.", "pdf_parse": { "paper_id": "P02-1038", "_pdf_hash": "", "abstract": [ { "text": "We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language sentence, the target language sentence and possible hidden variables. This approach allows a baseline machine translation system to be extended easily by adding new feature functions. We show that a baseline statistical machine translation system is significantly improved using this approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We are given a source ('French') sentence f J 1 = f 1 , . . . , f j , . . . , f J , which is to be translated into a target ('English') sentence e I 1 = e 1 , . . . , e i , . . . , e I . Among all possible target sentences, we will choose the sentence with the highest probability: 1 e I 1 = argmax", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e I 1 {P r(e I 1 |f J 1 )}", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. 1 The notational convention will be as follows. We use the symbol P r(\u2022) to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol p(\u2022).", "cite_spans": [ { "start": 116, "end": 117, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "According to Bayes' decision rule, we can equivalently to Eq. 
1 perform the following maximization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "e I 1 = argmax e I 1 {P r(e I 1 ) \u2022 P r(f J 1 |e I 1 )} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "This approach is referred to as source-channel approach to statistical MT. Sometimes, it is also referred to as the 'fundamental equation of statistical MT' (Brown et al., 1993) . Here, P r(e I 1 ) is the language model of the target language, whereas P r(f J 1 |e I 1 ) is the translation model. Typically, Eq. 2 is favored over the direct translation model of Eq. 1 with the argument that it yields a modular approach. Instead of modeling one probability distribution, we obtain two different knowledge sources that are trained independently.", "cite_spans": [ { "start": 157, "end": 177, "text": "(Brown et al., 1993)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "The overall architecture of the source-channel approach is summarized in Figure 1 . In general, as shown in this figure, there may be additional transformations to make the translation task simpler for the algorithm. Typically, training is performed by applying a maximum likelihood approach. If the language model P r(e I 1 ) = p \u03b3 (e I 1 ) depends on parameters \u03b3 and the translation model P r(f J 1 |e I 1 ) = p \u03b8 (f J 1 |e I 1 ) depends on parameters \u03b8, then the optimal parameter values are obtained by maximizing the likelihood on a parallel training corpus f S 1 , e S 1 (Brown et al., 1993) : Global Searc\u0125 We obtain the following decision rule:", "cite_spans": [ { "start": 578, "end": 598, "text": "(Brown et al., 1993)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 73, "end": 81, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 = argmax \u03b8 S s=1 p \u03b8 (f s |e s )", "eq_num": "(3)" } ], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b3 = argmax \u03b3 S s=1 p \u03b3 (e s )", "eq_num": "(4)" } ], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "e I 1 = argmax e I 1 {P r(e I 1 ) \u2022 P r(f J 1 |e I 1 )} P r(f J 1 |e I 1 ): Translation Model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e I 1 = argmax e I 1 {p\u03b3(e I 1 ) \u2022 p\u03b8(f J 1 |e I 1 )}", "eq_num": "(5)" } ], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "State-of-the-art statistical MT systems are based on this approach. Yet, the use of this decision rule has various problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "1. The combination of the language model p\u03b3(e I 1 ) and the translation model p\u03b8(f J 1 |e I 1 ) as shown in Eq. 
5 can only be shown to be optimal if the true probability distributions p\u03b3(e I 1 ) = P r(e I 1 ) and p\u03b8(f J 1 |e I 1 ) = P r(f J 1 |e I 1 ) are used. Yet, we know that the used models and training methods provide only poor approximations of the true probability distributions. Therefore, a different combination of language model and translation model might yield better results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "2. There is no straightforward way to extend a baseline statistical MT model by including additional dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "3. Often, we observe that comparable results are obtained by using the following decision rule instead of Eq. 5 (Och et al., 1999) :", "cite_spans": [ { "start": 112, "end": 130, "text": "(Och et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "e I 1 = argmax e I 1 {p\u03b3(e I 1 ) \u2022 p\u03b8(e I 1 |f J 1 )} (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "Here, we replaced p\u03b8(f J 1 |e I 1 ) by p\u03b8(e I 1 |f J 1 ). From a theoretical framework of the sourcechannel approach, this approach is hard to justify. Yet, if both decision rules yield the same translation quality, we can use that decision rule which is better suited for efficient search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Channel Model", "sec_num": "1.1" }, { "text": "As alternative to the source-channel approach, we directly model the posterior probability P r(e I 1 |f J 1 ). An especially well-founded framework for doing this is maximum entropy (Berger et al., 1996) . In this framework, we have a set of M feature functions h m (e I 1 , f J 1 ), m = 1, . . . , M . For each feature function, there exists a model parameter \u03bb m , m = 1, . . . , M . The direct translation probability is given ", "cite_spans": [ { "start": 182, "end": 203, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "Source Language Text Preprocessing \u03bb 1 \u2022 h 1 (e I 1 , f J 1 ) o o Global Search argmax e I 1 M m=1 \u03bb m h m (e I 1 , f J 1 ) \u03bb 2 \u2022 h 2 (e I 1 , f J 1 ) o o . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P r(e I 1 |f J 1 ) = p \u03bb M 1 (e I 1 |f J 1 ) (7) = exp[ M m=1 \u03bb m h m (e I 1 , f J 1 )] e I 1 exp[ M m=1 \u03bb m h m (e I 1 , f J 1 )]", "eq_num": "(8)" } ], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "This approach has been suggested by (Papineni et al., 1997; Papineni et al., 1998) for a natural language understanding task. 
We obtain the following decision rule:", "cite_spans": [ { "start": 36, "end": 59, "text": "(Papineni et al., 1997;", "ref_id": "BIBREF9" }, { "start": 60, "end": 82, "text": "Papineni et al., 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "e I 1 = argmax e I 1 P r(e I 1 |f J 1 ) = argmax e I 1 M m=1 \u03bb m h m (e I 1 , f J 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "Hence, the time-consuming renormalization in Eq. 8 is not needed in search. The overall architecture of the direct maximum entropy models is summarized in Figure 2 . Interestingly, this framework contains as special case the source channel approach (Eq. 5) if we use the following two feature functions:", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 163, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h 1 (e I 1 , f J 1 ) = log p\u03b3(e I 1 ) (9) h 2 (e I 1 , f J 1 ) = log p\u03b8(f J 1 |e I 1 )", "eq_num": "(10)" } ], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "and set \u03bb 1 = \u03bb 2 = 1. Optimizing the corresponding parameters \u03bb 1 and \u03bb 2 of the model in Eq. 8 is equivalent to the optimization of model scaling factors, which is a standard approach in other areas such as speech recognition or pattern recognition. The use of an 'inverted' translation model in the unconventional decision rule of Eq. 6 results if we use the feature function log P r(e I 1 |f J 1 ) instead of log P r(f J 1 |e I 1 ). In this framework, this feature can be as good as log P r(f J 1 |e I 1 ). It has to be empirically verified, which of the two features yields better results. We even can use both features log P r(e I 1 |f J 1 ) and log P r(f J 1 |e I 1 ), obtaining a more symmetric translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "As training criterion, we use the maximum class posterior probability criterion:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "\u03bb M 1 = argmax \u03bb M 1 S s=1 log p \u03bb M 1 (e s |f s ) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "This corresponds to maximizing the equivocation or maximizing the likelihood of the direct translation model. This direct optimization of the posterior probability in Bayes decision rule is referred to as discriminative training (Ney, 1995) because we directly take into account the overlap in the probability distributions. The optimization problem has one global optimum and the optimization criterion is convex.", "cite_spans": [ { "start": 229, "end": 240, "text": "(Ney, 1995)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Direct Maximum Entropy Translation Model", "sec_num": "1.2" }, { "text": "Typically, the probability P r(f J 1 |e I 1 ) is decomposed via additional hidden variables. 
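Before turning to the hidden variables, the plain decision rule of the previous section can be made concrete with a minimal Python sketch (hypothetical names and made-up feature values; not the system used in this work). Candidate translations of a fixed source sentence are rescored by the weighted feature sum; with the two log-probability features of Eq. 9 and 10 and \u03bb 1 = \u03bb 2 = 1 this reduces to the source-channel rule of Eq. 5, and the renormalization of Eq. 8 cancels in the argmax.

def best_translation(candidates, feature_funcs, lambdas):
    # argmax over candidate target sentences e of sum_m lambda_m * h_m(e, f);
    # the normalization of Eq. 8 is constant over e and can be dropped.
    def score(e):
        return sum(lam * h(e) for lam, h in zip(lambdas, feature_funcs))
    return max(candidates, key=score)

# Made-up two-candidate example for one fixed source sentence f.
lm_logprob = {'good morning': -2.1, 'well morning': -5.3}   # plays the role of log p(e)
tm_logprob = {'good morning': -4.0, 'well morning': -3.5}   # plays the role of log p(f|e)
features = [lambda e: lm_logprob[e], lambda e: tm_logprob[e]]
print(best_translation(['good morning', 'well morning'], features, lambdas=[1.0, 1.0]))
# -> 'good morning'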
In statistical alignment models P r(f J 1 , a J 1 |e I 1 ), the alignment a J 1 is introduced as a hidden variable:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "P r(f J 1 |e I 1 ) = a J 1 P r(f J 1 , a J 1 |e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "The alignment mapping is j \u2192 i = a j from source position j to target position i = a j . Search is performed using the so-called maximum approximation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "e I 1 = argmax e I 1 \uf8f1 \uf8f2 \uf8f3 P r(e I 1 ) \u2022 a J 1 P r(f J 1 , a J 1 |e I 1 ) \uf8fc \uf8fd \uf8fe \u2248 argmax e I 1 P r(e I 1 ) \u2022 max a J 1 P r(f J 1 , a J 1 |e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "Hence, the search space consists of the set of all possible target language sentences e I 1 and all possible alignments a J 1 . Generalizing this approach to direct translation models, we extend the feature functions to include the dependence on the additional hidden variable. Using M feature functions of the form h m (e I 1 , f J 1 , a J 1 ), m = 1, . . . , M , we obtain the following model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "P r(e I 1 , a J 1 |f J 1 ) = = exp M m=1 \u03bb m h m (e I 1 , f J 1 , a J 1 ) e I 1 ,a J 1 exp M m=1 \u03bb m h m (e I 1 , f J 1 , a J 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "Obviously, we can perform the same step for translation models with an even richer structure of hidden variables than only the alignment a J 1 . To simplify the notation, we shall omit in the following the dependence on the hidden variables of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Models and Maximum Approximation", "sec_num": "1.3" }, { "text": "As specific MT method, we use the alignment template approach (Och et al., 1999) . The key elements of this approach are the alignment templates, which are pairs of source and target language phrases together with an alignment between the words within the phrases. 
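As a rough illustration only (hypothetical field names; the actual implementation is more elaborate), such an alignment template can be represented as a source phrase, a target phrase and the set of word-alignment links within the phrase pair:

from dataclasses import dataclass

@dataclass(frozen=True)
class AlignmentTemplate:
    # A pair of source/target phrases plus an internal word alignment;
    # links are (source_position, target_position) pairs, 0-based
    # within the phrase pair.
    source_phrase: tuple
    target_phrase: tuple
    links: frozenset

# Hypothetical German-English template for a two-word phrase pair.
template = AlignmentTemplate(
    source_phrase=('guten', 'Morgen'),
    target_phrase=('good', 'morning'),
    links=frozenset({(0, 0), (1, 1)}),
)
print(template.target_phrase)   # ('good', 'morning')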
The advantage of the alignment template approach compared to single word-based statistical translation models is that word context and local changes in word order are explicitly considered.", "cite_spans": [ { "start": 62, "end": 80, "text": "(Och et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "The alignment template model refines the translation probability P r(f J 1 |e I 1 ) by introducing two hidden variables z K 1 and a K 1 for the K alignment templates and the alignment of the alignment templates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "P r(f J 1 |e I 1 ) = z K 1 ,a K 1 P r(a K 1 |e I 1 ) \u2022 P r(z K 1 |a K 1 , e I 1 ) \u2022 P r(f J 1 |z K 1 , a K 1 , e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "Hence, we obtain three different probability distributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "P r(a K 1 |e I 1 ), P r(z K 1 |a K 1 , e I 1 ) and P r(f J 1 |z K 1 , a K 1 , e I 1 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "Here, we omit a detailed description of modeling, training and search, as this is not relevant for the subsequent exposition. For further details, see (Och et al., 1999) .", "cite_spans": [ { "start": 151, "end": 169, "text": "(Och et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "To use these three component models in a direct maximum entropy approach, we define three different feature functions for each component of the translation model instead of one feature function for the whole translation model p(f J 1 |e I 1 ). The feature functions have then not only a dependence on f J 1 and e I 1 but also on z K 1 , a K 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Templates", "sec_num": "2" }, { "text": "So far, we use the logarithm of the components of a translation model as feature functions. This is a very convenient approach to improve the quality of a baseline system. Yet, we are not limited to train only model scaling factors, but we have many possibilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u2022 We could add a sentence length feature:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "h(f J 1 , e I 1 ) = I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "This corresponds to a word penalty for each produced target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u2022 We could use additional language models by using features of the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "h(f J 1 , e I 1 ) = h(e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u2022 We could use a feature that counts how many entries of a conventional lexicon co-occur in the given sentence pair. 
Therefore, the weight for the provided conventional dictionary can be learned. The intuition is that the conventional dictionary is expected to be more reliable than the automatically trained lexicon and therefore should get a larger weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u2022 We could use lexical features, which fire if a certain lexical relationship (f, e) occurs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "h(f J 1 , e I 1 ) = \uf8eb \uf8ed J j=1 \u03b4(f, f j ) \uf8f6 \uf8f8 \u2022 I i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u03b4(e, e i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "\u2022 We could use grammatical features that relate certain grammatical dependencies of source and target language. For example, using a function k(\u2022) that counts how many verb groups exist in the source or the target sentence, we can define the following feature, which is 1 if each of the two sentences contains the same number of verb groups:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h(f J 1 , e I 1 ) = \u03b4(k(f J 1 ), k(e I 1 ))", "eq_num": "(12)" } ], "section": "Feature functions", "sec_num": "3" }, { "text": "In the same way, we can introduce semantic features or pragmatic features such as the dialogue act classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "We can use numerous additional features that deal with specific problems of the baseline statistical MT system. In this paper, we shall use the first three of these features. As additional language model, we use a class-based five-gram language model. This feature and the word penalty feature allow a straightforward integration into the used dynamic programming search algorithm (Och et al., 1999) . As this is not possible for the conventional dictionary feature, we use n-best rescoring for this feature.", "cite_spans": [ { "start": 381, "end": 399, "text": "(Och et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Feature functions", "sec_num": "3" }, { "text": "To train the model parameters \u03bb M 1 of the direct translation model according to Eq. 11, we use the GIS (Generalized Iterative Scaling) algorithm (Darroch and Ratcliff, 1972) . It should be noted that, as was already shown by (Darroch and Ratcliff, 1972) , by applying suitable transformations, the GIS algorithm is able to handle any type of real-valued features. To apply this algorithm, we have to solve various practical problems.", "cite_spans": [ { "start": 146, "end": 174, "text": "(Darroch and Ratcliff, 1972)", "ref_id": "BIBREF4" }, { "start": 226, "end": 254, "text": "(Darroch and Ratcliff, 1972)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "The renormalization needed in Eq. 8 requires a sum over a large number of possible sentences, for which we do not know an efficient algorithm. 
Hence, we approximate this sum by sampling the space of all possible sentences by a large set of highly probable sentences. The set of considered sentences is computed by an appropriately extended version of the used search algorithm (Och et al., 1999) computing an approximate n-best list of translations.", "cite_spans": [ { "start": 377, "end": 395, "text": "(Och et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "Unlike automatic speech recognition, we do not have one reference sentence, but there exists a number of reference sentences. Yet, the criterion as it is described in Eq. 11 allows for only one reference translation. Hence, we change the criterion to allow R s reference translations e s,1 , . . . , e s,R s for the sentence e s :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "\u03bb M 1 = argmax \u03bb M 1 S s=1 1 R s R s r=1 log p \u03bb M 1 (e s,r |f s )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "We use this optimization criterion instead of the optimization criterion shown in Eq. 11.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "In addition, we might have the problem that no single of the reference translations is part of the nbest list because the search algorithm performs pruning, which in principle limits the possible translations that can be produced given a certain input sentence. To solve this problem, we define for maximum entropy training each sentence as reference translation that has the minimal number of word errors with respect to any of the reference translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "We present results on the VERBMOBIL task, which is a speech translation task in the domain of appointment scheduling, travel planning, and hotel reser-vation (Wahlster, 1993) . Table 1 shows the corpus statistics of this task. We use a training corpus, which is used to train the alignment template model and the language models, a development corpus, which is used to estimate the model scaling factors, and a test corpus. So far, in machine translation research does not exist one generally accepted criterion for the evaluation of the experimental results. Therefore, we use a large variety of different criteria and show that the obtained results improve on most or all of these criteria. 
In all experiments, we use the following six error criteria:", "cite_spans": [ { "start": 158, "end": 174, "text": "(Wahlster, 1993)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 SER (sentence error rate): The SER is computed as the number of times that the generated sentence corresponds exactly to one of the reference translations used for the maximum entropy training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 WER (word error rate): The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 PER (position-independent WER): A shortcoming of the WER is the fact that it requires a perfect word order. The word order of an acceptable sentence can be different from that of the target sentence, so that the WER measure alone could be misleading. To overcome this problem, we introduce as additional measure the position-independent word error rate (PER). This measure compares the words in the two sentences ignoring the word order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 mWER (multi-reference word error rate): For each test sentence, there is not only used a single reference translation, as for the WER, but a whole set of reference translations. For each translation hypothesis, the edit distance to the most similar sentence is calculated (Nie\u00dfen et al., 2000) .", "cite_spans": [ { "start": 274, "end": 295, "text": "(Nie\u00dfen et al., 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 BLEU score: This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a whole set of reference translations with a penalty for too short sentences (Papineni et al., 2001) . Unlike all other evaluation criteria used here, BLEU measures accuracy, i.e. the opposite of error rate. Hence, large BLEU scores are better.", "cite_spans": [ { "start": 186, "end": 209, "text": "(Papineni et al., 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 SSER (subjective sentence error rate): For a more detailed analysis, subjective judgments by test persons are necessary. Each translated sentence was judged by a human examiner according to an error scale from 0.0 to 1.0 (Nie\u00dfen et al., 2000) .", "cite_spans": [ { "start": 223, "end": 244, "text": "(Nie\u00dfen et al., 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 IER (information item error rate): The test sentences are segmented into information items. For each of them, if the intended information is conveyed and there are no syntactic errors, the sentence is counted as correct (Nie\u00dfen et al., 2000) .", "cite_spans": [ { "start": 222, "end": 243, "text": "(Nie\u00dfen et al., 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the following, we present the results of this approach. 
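For concreteness, the automatic criteria defined above can be sketched in a few lines of Python (a simplified illustration with made-up example sentences, not the evaluation tool used in the experiments):

from collections import Counter

def wer(hyp, ref):
    # Word error rate: word-level Levenshtein distance divided by the reference length.
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            sub = d[i - 1][j - 1] + (h[i - 1] != r[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(h)][len(r)] / len(r)

def per(hyp, ref):
    # Position-independent error rate: like WER but on word multisets,
    # i.e. ignoring word order (one common formulation).
    h, r = Counter(hyp.split()), Counter(ref.split())
    missing = sum((r - h).values())
    surplus = max(0, sum(h.values()) - sum(r.values()))
    return (missing + surplus) / sum(r.values())

def mwer(hyp, refs):
    # Multi-reference WER: edit distance to the most similar reference.
    return min(wer(hyp, ref) for ref in refs)

# Made-up example with two reference translations.
refs = ['thursday suits me fine', 'thursday is fine for me']
print(round(mwer('thursday is fine', refs), 2))   # 0.4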
Table 2 shows the results if we use a direct translation model (Eq. 6).", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "As baseline features, we use a normal word trigram language model and the three component models of the alignment templates. The first row shows the results using only the four baseline features with \u03bb 1 = \u2022 \u2022 \u2022 = \u03bb 4 = 1. The second row shows the result if we train the model scaling factors. We see a systematic improvement on all error rates. The following three rows show the results if we add the word penalty, an additional class-based five-gram Figure 3 : Test error rate over the iterations of the GIS algorithm for maximum entropy training of alignment templates. language model and the conventional dictionary features. We observe improved error rates for using the word penalty and the class-based language model as additional features. Figure 3 show how the sentence error rate (SER) on the test corpus improves during the iterations of the GIS algorithm. We see that the sentence error rates converges after about 4000 iterations. We do not observe significant overfitting. Table 3 shows the resulting normalized model scaling factors. Multiplying each model scaling factor by a constant positive value does not affect the decision rule. We see that adding new features also has an effect on the other model scaling factors.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 3", "ref_id": null }, { "start": 748, "end": 756, "text": "Figure 3", "ref_id": null }, { "start": 987, "end": 994, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The use of direct maximum entropy translation models for statistical machine translation has been sug- (Papineni et al., 1997; Papineni et al., 1998) . They train models for natural language understanding rather than natural language translation. In contrast to their approach, we include a dependence on the hidden variable of the translation model in the direct translation model. Therefore, we are able to use statistical alignment models, which have been shown to be a very powerful component for statistical machine translation systems. In speech recognition, training the parameters of the acoustic model by optimizing the (average) mutual information and conditional entropy as they are defined in information theory is a standard approach (Bahl et al., 1986; Ney, 1995) . Combining various probabilistic models for speech and language modeling has been suggested in (Beyerlein, 1997; Peters and Klakow, 1999) .", "cite_spans": [ { "start": 103, "end": 126, "text": "(Papineni et al., 1997;", "ref_id": "BIBREF9" }, { "start": 127, "end": 149, "text": "Papineni et al., 1998)", "ref_id": "BIBREF10" }, { "start": 747, "end": 766, "text": "(Bahl et al., 1986;", "ref_id": "BIBREF0" }, { "start": 767, "end": 777, "text": "Ney, 1995)", "ref_id": "BIBREF6" }, { "start": 874, "end": 891, "text": "(Beyerlein, 1997;", "ref_id": "BIBREF2" }, { "start": 892, "end": 916, "text": "Peters and Klakow, 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have presented a framework for statistical MT for natural languages, which is more general than the widely used source-channel approach. 
It allows a baseline MT system to be extended easily by adding new feature functions. We have shown that a baseline statistical MT system can be significantly improved using this framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "There are two possible interpretations for a statistical MT system structured according to the sourcechannel approach, hence including a model for P r(e I 1 ) and a model for P r(f J 1 |e I 1 ). We can interpret it as an approximation to the Bayes decision rule in Eq. 2 or as an instance of a direct maximum entropy model with feature functions log P r(e I 1 ) and log P r(f J 1 |e I 1 ). As soon as we want to use model scaling factors, we can only do this in a theoretically justified way using the second interpretation. Yet, the main advantage comes from the large number of additional possibilities that we obtain by using the second interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "An important open problem of this approach is the handling of complex features in search. An interesting question is to come up with features that allow an efficient handling using conventional dynamic programming search algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In addition, it might be promising to optimize the parameters directly with respect to the error rate of the MT system as is suggested in the field of pattern and speech recognition (Juang et al., 1995; Schl\u00fcter and Ney, 2001 ).", "cite_spans": [ { "start": 182, "end": 202, "text": "(Juang et al., 1995;", "ref_id": "BIBREF5" }, { "start": 203, "end": 225, "text": "Schl\u00fcter and Ney, 2001", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Maximum mutual information estimation of hidden markov model parameters", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Bahl", "suffix": "" }, { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "De Souza", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1986, "venue": "Proc. Int. Conf. on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "49--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. R. Bahl, P. F. Brown, P. V. de Souza, and R. L. Mer- cer. 1986. Maximum mutual information estimation of hidden markov model parameters. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 49-52, Tokyo, Japan, April.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. L. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to nat- ural language processing. 
Computational Linguistics, 22(1):39-72, March.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative model combination", "authors": [ { "first": "P", "middle": [], "last": "Beyerlein", "suffix": "" } ], "year": 1997, "venue": "Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "238--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Beyerlein. 1997. Discriminative model combina- tion. In Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding, pages 238- 245, Santa Barbara, CA, December.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computa- tional Linguistics, 19(2):263-311.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generalized iterative scaling for log-linear models", "authors": [ { "first": "J", "middle": [ "N" ], "last": "Darroch", "suffix": "" }, { "first": "D", "middle": [], "last": "Ratcliff", "suffix": "" } ], "year": 1972, "venue": "Annals of Mathematical Statistics", "volume": "43", "issue": "", "pages": "1470--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. N. Darroch and D. Ratcliff. 1972. Generalized itera- tive scaling for log-linear models. Annals of Mathe- matical Statistics, 43:1470-1480.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical and discriminative methods for speech recognition", "authors": [ { "first": "B", "middle": [ "H" ], "last": "Juang", "suffix": "" }, { "first": "W", "middle": [], "last": "Chou", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 1995, "venue": "Speech Recognition and Coding -New Advances and Trends", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. H. Juang, W. Chou, and C. H. Lee. 1995. Statisti- cal and discriminative methods for speech recognition. In A. J. R. Ayuso and J. M. L. Soler, editors, Speech Recognition and Coding -New Advances and Trends. Springer Verlag, Berlin, Germany.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the probabilistic-interpretation of neural-network classifiers and discriminative training criteria", "authors": [ { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "IEEE Trans. on Pattern Analysis and Machine Intelligence", "volume": "17", "issue": "2", "pages": "107--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Ney. 1995. On the probabilistic-interpretation of neural-network classifiers and discriminative training criteria. IEEE Trans. 
on Pattern Analysis and Machine Intelligence, 17(2):107-119, February.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An evaluation tool for machine translation: Fast evaluation for MT research", "authors": [ { "first": "S", "middle": [], "last": "Nie\u00dfen", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "G", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the Second Int. Conf. on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "39--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Nie\u00dfen, F. J. Och, G. Leusch, and H. Ney. 2000. An evaluation tool for machine translation: Fast eval- uation for MT research. In Proc. of the Second Int. Conf. on Language Resources and Evaluation (LREC), pages 39-45, Athens, Greece, May.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Meth- ods in Natural Language Processing and Very Large Corpora, pages 20-28, University of Maryland, Col- lege Park, MD, June.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Feature-based language understanding", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "R", "middle": [ "T" ], "last": "Ward", "suffix": "" } ], "year": 1997, "venue": "European Conf. on Speech Communication and Technology", "volume": "", "issue": "", "pages": "1435--1438", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. A. Papineni, S. Roukos, and R. T. Ward. 1997. Feature-based language understanding. In European Conf. on Speech Communication and Technology, pages 1435-1438, Rhodes, Greece, September.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Maximum likelihood and discriminative training of direct translation models", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "R", "middle": [ "T" ], "last": "Ward", "suffix": "" } ], "year": 1998, "venue": "Proc. Int. Conf. on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "189--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. A. Papineni, S. Roukos, and R. T. Ward. 1998. Max- imum likelihood and discriminative training of direct translation models. In Proc. Int. Conf. 
on Acoustics, Speech, and Signal Processing, pages 189-192, Seat- tle, WA, May.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2001, "venue": "IBM Research Division", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. A. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY, September.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Compact maximum entropy language models", "authors": [ { "first": "J", "middle": [], "last": "Peters", "suffix": "" }, { "first": "D", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 1999, "venue": "Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Peters and D. Klakow. 1999. Compact maximum en- tropy language models. In Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding, Keystone, CO, December.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Model-based MCE bound to the true Bayes' error", "authors": [ { "first": "R", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2001, "venue": "IEEE Signal Processing Letters", "volume": "8", "issue": "5", "pages": "131--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Schl\u00fcter and H. Ney. 2001. Model-based MCE bound to the true Bayes' error. IEEE Signal Processing Let- ters, 8(5):131-133, May.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Verbmobil: Translation of face-toface dialogs", "authors": [ { "first": "W", "middle": [], "last": "Wahlster", "suffix": "" } ], "year": 1993, "venue": "Proc. of MT Summit IV", "volume": "", "issue": "", "pages": "127--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Wahlster. 1993. Verbmobil: Translation of face-to- face dialogs. In Proc. of MT Summit IV, pages 127- 135, Kobe, Japan, July.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Architecture of the translation approach based on source-channel models.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Architecture of the translation approach based on direct maximum entropy models.by:", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Table 1: Characteristics of training corpus (Train), manual lexicon (Lex), development corpus (Dev), test corpus (Test).
      |                | German  | English
Train | Sentences      | 58 073  |
      | Words          | 519 523 | 549 921
      | Singletons     | 3 453   | 1 698
      | Vocabulary     | 7 939   | 4 672
Lex   | Entries        | 12 779  |
      | Ext. Vocab.    | 11 501  | 6 867
Dev   | Sentences      | 276     |
      | Words          | 3 159   | 3 438
      | PP (trigr. LM) | -       | 28.1
Test  | Sentences      | 251     |
      | Words          | 2 628   | 2 871
      | PP (trigr. LM) | -       | 30.5
" }, "TABREF2": { "type_str": "table", "html": null, "text": "Effect of maximum entropy training for alignment template approach (WP: word penalty feature, CLM: class-based language model (five-gram), MX: conventional dictionary).", "num": null, "content": "
objective criteria [%]: SER, WER, PER, mWER, BLEU; subjective criteria [%]: SSER, IER
                      | SER  | WER  | PER  | mWER | BLEU | SSER | IER
Baseline (\u03bb m = 1) | 86.9 | 42.8 | 33.0 | 37.7 | 43.9 | 35.9 | 39.0
ME                    | 81.7 | 40.2 | 28.7 | 34.6 | 49.7 | 32.5 | 34.8
ME+WP                 | 80.5 | 38.6 | 26.9 | 32.4 | 54.1 | 29.9 | 32.2
ME+WP+CLM             | 78.1 | 38.3 | 26.9 | 32.1 | 55.0 | 29.1 | 30.9
ME+WP+CLM+MX          | 77.8 | 38.4 | 26.8 | 31.9 | 55.2 | 28.8 | 30.9
" }, "TABREF3": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Table 3: Resulting model scaling factors of maximum entropy training for alignment templates; \u03bb 1: trigram language model; \u03bb 2: alignment template model; \u03bb 3: lexicon model; \u03bb 4: alignment model (normalized such that \u03bb 1 + \u03bb 2 + \u03bb 3 + \u03bb 4 = 4).
         | ME     | +WP    | +CLM | +MX
\u03bb 1  | 0.86   | 0.98   | 0.75 | 0.77
\u03bb 2  | 2.33   | 2.05   | 2.24 | 2.24
\u03bb 3  | 0.58   | 0.72   | 0.79 | 0.75
\u03bb 4  | 0.22   | 0.25   | 0.23 | 0.24
WP       | \u2022 | 2.6    | 3.03 | 2.78
CLM      | \u2022 | \u2022 | 0.33 | 0.34
MX       | \u2022 | \u2022 | \u2022 | 2.92
" } } } }