{ "paper_id": "C04-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:20:46.898625Z" }, "title": "Improved Word Alignment Using a Symmetric Lexicon Model", "authors": [ { "first": "Richard", "middle": [], "last": "Zens", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "zens@cs.rwth-aachen.de" }, { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "matusov@cs.rwth-aachen.de" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "ney@cs.rwth-aachen.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word-aligned bilingual corpora are an important knowledge source for many tasks in natural language processing. We improve the well-known IBM alignment models, as well as the Hidden-Markov alignment model using a symmetric lexicon model. This symmetrization takes not only the standard translation direction from source to target into account, but also the inverse translation direction from target to source. We present a theoretically sound derivation of these techniques. In addition to the symmetrization, we introduce a smoothed lexicon model. The standard lexicon model is based on full-form words only. We propose a lexicon smoothing method that takes the word base forms explicitly into account. Therefore, it is especially useful for highly inflected languages such as German. We evaluate these methods on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best system reported so far. For the Canadian Hansards task, we achieve an improvement of more than 30% relative.", "pdf_parse": { "paper_id": "C04-1006", "_pdf_hash": "", "abstract": [ { "text": "Word-aligned bilingual corpora are an important knowledge source for many tasks in natural language processing. We improve the well-known IBM alignment models, as well as the Hidden-Markov alignment model using a symmetric lexicon model. This symmetrization takes not only the standard translation direction from source to target into account, but also the inverse translation direction from target to source. We present a theoretically sound derivation of these techniques. In addition to the symmetrization, we introduce a smoothed lexicon model. The standard lexicon model is based on full-form words only. We propose a lexicon smoothing method that takes the word base forms explicitly into account. Therefore, it is especially useful for highly inflected languages such as German. We evaluate these methods on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best system reported so far. 
For the Canadian Hansards task, we achieve an improvement of more than 30% relative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word-aligned bilingual corpora are an important knowledge source for many tasks in natural language processing. Obvious applications are the extraction of bilingual word or phrase lexica (Melamed, 2000; Och and Ney, 2000) . These applications depend heavily on the quality of the word alignment (Och and Ney, 2000) . Word alignment models were first introduced in statistical machine translation (Brown et al., 1993) . The alignment describes the mapping from source sentence words to target sentence words.", "cite_spans": [ { "start": 187, "end": 202, "text": "(Melamed, 2000;", "ref_id": "BIBREF3" }, { "start": 203, "end": 221, "text": "Och and Ney, 2000)", "ref_id": "BIBREF5" }, { "start": 295, "end": 314, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF5" }, { "start": 396, "end": 416, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using the IBM translation models IBM-1 to IBM-5 (Brown et al., 1993) , as well as the Hidden-Markov alignment model (Vogel et al., 1996) , we can produce alignments of good quality. In (Och and Ney, 2003) , it is shown that the statistical approach performs very well compared to alternative approaches, e.g. based on the Dice coefficient or the competitive linking algorithm (Melamed, 2000) .", "cite_spans": [ { "start": 48, "end": 68, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" }, { "start": 116, "end": 136, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF8" }, { "start": 185, "end": 204, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 376, "end": 391, "text": "(Melamed, 2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A central component of the statistical translation models is the lexicon. It models the word translation probabilities. The standard training procedure of the statistical models uses the EM algorithm. Typically, the models are trained for one translation direction only. Here, we will perform a simultaneous training of both translation directions, source-to-target and target-to-source. After each iteration of the EM algorithm, we combine the two lexica to a symmetric lexicon. This symmetric lexicon is then used in the next iteration of the EM algorithm for both translation directions. We will propose and justify linear and loglinear interpolation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical methods often suffer from the data sparseness problem. In our case, many words in the bilingual sentence-aligned texts are singletons, i.e. they occur only once. This is especially true for the highly inflected languages such as German. It is hard to obtain reliable estimations of the translation probabilities for these rarely occurring words. To overcome this problem (at least partially), we will smooth the lexicon probabilities of the full-form words using a probability distribution that is estimated using the word base forms. 
Thus, we exploit that multiple fullform words share the same base form and have similar meanings and translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will evaluate these methods on the German-English Verbmobil task and the French-English Canadian Hansards task. We will show statistically significant improvements compared to state-of-the-art results in (Och and Ney, 2003) . On the Canadian Hansards task, the symmetrization methods will result in an improvement of more than 30% relative.", "cite_spans": [ { "start": 207, "end": 226, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we will give a short description of the commonly used statistical word alignment models. These alignment models stem from the source-channel approach to statistical machine translation (Brown et al., 1993) . We are given a source language sentence f J 1 := f 1 ...f j ...f J which has to be translated into a target language sentence e I 1 := e 1 ...e i ...e I . Among all possible target language sentences, we will choose the sentence with the highest probability:", "cite_spans": [ { "start": 202, "end": 222, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "e I 1 = argmax e I 1 P r(e I 1 |f J 1 ) = argmax e I 1 P r(e I 1 ) \u2022 P r(f J 1 |e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "This decomposition into two knowledge sources allows for an independent modeling of target language model P r(e I 1 ) and translation model P r(f J 1 |e I 1 ). Into the translation model, the word alignment A is introduced as a hidden variable:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "P r(f J 1 |e I 1 ) = A P r(f J 1 , A|e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "Usually, we use restricted alignments in the sense that each source word is aligned to at most one target word, i.e. A = a J 1 . A detailed description of the popular translation models IBM-1 to IBM-5 (Brown et al., 1993) , as well as the Hidden-Markov alignment model (HMM) (Vogel et al., 1996) can be found in (Och and Ney, 2003) . All these models include parameters p(f |e) for the single-word based lexicon. 
They differ in the alignment model.", "cite_spans": [ { "start": 201, "end": 221, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" }, { "start": 275, "end": 295, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF8" }, { "start": 312, "end": 331, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "A Viterbi alignment\u00c2 of a specific model is an alignment for which the following equation holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "\u00c2 = argmax A P r(f J 1 , A|e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "We measure the quality of an alignment model using the quality of the Viterbi alignment compared to a manually produced reference alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "In Section 3, we will apply the lexicon symmetrization methods to the models described previously. Therefore, we will now sketch the standard training procedure for the lexicon model. The EM algorithm is used to train the free lexicon parameters p(f |e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "In the E-step, the lexical counts for each sentence pair (f J 1 , e I 1 ) are calculated and then summed over all sentence pairs in the training corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "N (f, e) = (f J 1 ,e I 1 ) a J 1 p(a J 1 |f J 1 , e I 1 ) i,j \u03b4(f, f j )\u03b4(e, e i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "In the M-step the lexicon probabilities are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "p(f |e) = N (f, e) f N (f , e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "3 Symmetrized Lexicon Model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "During the standard training procedure, the lexicon parameters p(f |e) and p(e|f ) were estimated independent of each other in strictly separate trainings. In this section, we present two symmetrization methods for the lexicon model. As a starting point, we use the joint lexicon probability p(f, e) and determine the conditional probabilities for the sourceto-target direction p(f |e) and the target-tosource direction p(e|f ) as the corresponding marginal distribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(f |e) = p(f, e) f p(f , e) (1) p(e|f ) = p(f, e) \u1ebd p(f,\u1ebd)", "eq_num": "(2)" } ], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "The nonsymmetric auxiliary Q-functions for reestimating the lexicon probabilities during the EM algorithm can be represented as follows. 
Here, N ST (f, e) and N T S (f, e) denote the lexicon counts for the source-to-target (ST ) direction and the target-to-source (T S) direction, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "Q ST ({p(f |e)}) = f,e N ST (f, e) \u2022 log p(f, e) f p(f , e) Q T S ({p(e|f )}) = f,e N T S (f, e) \u2022 log p(f, e) \u1ebd p(f,\u1ebd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment Models", "sec_num": "2" }, { "text": "To estimate the joint probability using the EM algorithm, we define the auxiliary Q-function as a linear interpolation of the Q-functions for the source-to-target and the target-to-source direction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "Q \u03b1 ({p(f, e)}) = \u03b1 \u2022 Q ST ({p(f |e)}) +(1 \u2212 \u03b1) \u2022 Q T S ({p(e|f )}) = \u03b1 \u2022 f,e N ST (f, e) \u2022 log p(f, e) +(1 \u2212 \u03b1) \u2022 f,e N T S (f, e) \u2022 log p(f, e) \u2212\u03b1 \u2022 e N ST (e) \u2022 log f p(f , e) \u2212(1 \u2212 \u03b1) \u2022 f N T S (f ) \u2022 log \u1ebd p(f,\u1ebd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "The unigram counts N (e) and N (f ) are determined, for each of the two translation directions, by taking a sum of N (f, e) over f and over e, respectively. We define the combined lexicon count N \u03b1 (f, e):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "N \u03b1 (f, e) := \u03b1 \u2022 N ST (f, e) + (1 \u2212 \u03b1) \u2022 N T S (f, e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "Now, we derive the symmetrized Q-function over p(f, e) for a certain word pair (f, e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "Then, we set this derivative to zero to determine the reestimation formula for p(f, e) and obtain the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "N \u03b1 (f, e) p(f, e) = \u03b1 \u2022 N ST (e) f p(f , e) + (1 \u2212 \u03b1) \u2022 N T S (f ) \u1ebd p(f,\u1ebd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "We do not know a closed form solution for this equation. As an approximation, we use the following term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "p(f, e) = N \u03b1 (f, e) f ,\u1ebd N \u03b1 (f ,\u1ebd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "This estimate is an exact solution, if the unigram counts for f and e are independent of the translation direction, i. e. N ST (f ) = N T S (f ) and N ST (e) = N T S (e). We make this approximation and thus we interpolate the lexicon counts linear after each iteration of the EM algorithm. 
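As a concrete illustration, the following minimal sketch (in Python) performs this count symmetrization between two EM iterations, including the renormalization described in the next sentence; the dictionary-based count tables, the function names, and the fixed weight alpha = 0.5 are assumptions made purely for illustration.

from collections import defaultdict

def symmetrize_counts(counts_st, counts_ts, alpha=0.5):
    # Linear interpolation of the lexicon counts of both translation directions:
    # N_alpha(f, e) = alpha * N_ST(f, e) + (1 - alpha) * N_TS(f, e).
    # Both inputs map word pairs (f, e) to the fractional counts of the E-step.
    # (The loglinear variant of Section 3.2 would instead combine the counts as
    #  N_ST(f, e)**alpha * N_TS(f, e)**(1 - alpha).)
    combined = defaultdict(float)
    for (f, e), n in counts_st.items():
        combined[(f, e)] += alpha * n
    for (f, e), n in counts_ts.items():
        combined[(f, e)] += (1.0 - alpha) * n
    return combined

def renormalize(combined):
    # Marginal counts N(e) = sum_f N_alpha(f, e) and N(f) = sum_e N_alpha(f, e).
    n_e = defaultdict(float)
    n_f = defaultdict(float)
    for (f, e), n in combined.items():
        n_e[e] += n
        n_f[f] += n
    # Conditional lexica for both translation directions, as in Equations 1 and 2.
    p_f_given_e = {(f, e): n / n_e[e] for (f, e), n in combined.items()}
    p_e_given_f = {(f, e): n / n_f[f] for (f, e), n in combined.items()}
    return p_f_given_e, p_e_given_f
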
Then, we normalize these counts (according to Equations 1 and 2) to determine the lexicon probabilities for each of the two translation directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation", "sec_num": "3.1" }, { "text": "We will show in Section 5 that the linear interpolation results in significant improvements over the nonsymmetric system. Motivated by these experiments, we investigated also the loglinear interpolation of the lexicon counts of the two translation directions. The combined lexicon count N \u03b1 (f, e) is now defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loglinear Interpolation", "sec_num": "3.2" }, { "text": "N \u03b1 (f, e) = N ST (f, e) \u03b1 \u2022 N T S (f, e) 1\u2212\u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loglinear Interpolation", "sec_num": "3.2" }, { "text": "The normalization is done in the same way as for the linear interpolation. The linear interpolation resembles more a union of the two lexica whereas the loglinear interpolation is more similar to an intersection of both lexica. Thus for the linear interpolation, a word pair (f, e) obtains a large combined count, if the count in at least one direction is large. For the loglinear interpolation, the combined count is large only if both lexicon counts are large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loglinear Interpolation", "sec_num": "3.2" }, { "text": "In the experiments, we will use the interpolation weight \u03b1 = 0.5 for both the linear and the loglinear interpolation, i. e. both translation directions are weighted equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loglinear Interpolation", "sec_num": "3.2" }, { "text": "Initially, the lexicon contains all word pairs that cooccur in the bilingual training corpus. The majority of these word pairs are not translations of each other. Therefore, we would like to remove those lexicon entries. Evidence trimming is one way to do this. The evidence of a word pair (f, e) is the estimated count N (f, e). Now, we discard a word pair if its evidence is below a certain threshold \u03c4 . 1 In the case of the symmetric lexicon, we can further refine this method. For estimating the lexicon in the source-to-target directionp(f |e), the idea is to keep all entries from this direction and to boost the entries that have a high evidence in the target-to-source direction N T S (f, e). We obtain the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Trimming", "sec_num": "3.3" }, { "text": "N ST (f, e) = \uf8f1 \uf8f2 \uf8f3 \u03b1N ST (f, e) + (1 \u2212 \u03b1)N T S (f, e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Trimming", "sec_num": "3.3" }, { "text": "if N ST (f, e) > \u03c4 0 else", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Trimming", "sec_num": "3.3" }, { "text": "The countN ST (f, e) is now used to estimate the source-to-target lexiconp(f |e). With this method, we do not keep entries in the sourceto-target lexiconp(f |e) if their evidence is low, even if their evidence in the target-to-source direction N T S (f, e) is high. For the target-tosource direction, we apply this method in a similar way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Trimming", "sec_num": "3.3" }, { "text": "The lexicon model described so far is based on full-form words. 
For highly inflected languages such as German this might cause problems, because many full-form words occur only a few times in the training corpus. Compared to English, the token/type ratio for German is usually much lower (e.g. Verbmobil: English 99.4, German 56.3). The information that multiple full-form words share the same base form is not used in the lexicon model. To take this information into account, we smooth the lexicon model with a backing-off lexicon that is based on word base forms. The smoothing method we apply is absolute discounting with interpolation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "p(f |e) = max {N (f, e) \u2212 d, 0} N (e) + \u03b1(e) \u2022 \u03b2(f,\u0113)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "This method is well known from language modeling (Ney et al., 1997) . Here,\u0113 denotes the generalization, i.e. the base form, of the word e. The nonnegative value d is the discounting parameter, \u03b1(e) is a normalization constant and \u03b2(f,\u0113) is the normalized backing-off distribution. The formula for \u03b1(e) is:", "cite_spans": [ { "start": 49, "end": 67, "text": "(Ney et al., 1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "\u03b1(e) = 1 N (e) \uf8eb \uf8ed f :N (f,e)>d d + f :N (f,e)\u2264d N (f, e) \uf8f6 \uf8f8 = 1 N (e) f min{d, N (f, e)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "This formula is a generalization of the one typically used in publications on language modeling. This generalization is necessary, because the lexicon counts may be fractional whereas in language modeling typically integer counts are used. Additionally, we want to allow for discounting values d greater than one. The backing-off distribution \u03b2(f,\u0113) is estimated using relative frequencies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "\u03b2(f,\u0113) = N (f,\u0113) f N (f ,\u0113)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "Here, N (f,\u0113) denotes the count of the event that the source language word f and the target language base form\u0113 occur together. These counts are computed by summing the lexicon counts N (f, e) over all full-form words e which share the same base form\u0113.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "4" }, { "text": "We use the same evaluation criterion as described in (Och and Ney, 2000) . The generated word alignment is compared to a reference alignment which is produced by human experts. The annotation scheme explicitly takes the ambiguity of the word alignment into account. There are two different kinds of alignments: sure alignments (S) which are used for alignments that are unambiguous and possible alignments (P ) which are used for alignments that might or might not exist. The P relation is used especially to align words within idiomatic expressions, free translations, and missing function words. It is guaranteed that the sure alignments are a subset of the possible alignments (S \u2286 P ). The obtained reference alignment may contain many-to-one and one-to-many relationships. 
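As a concrete sketch of the evaluation measures introduced next, the following Python fragment computes precision, recall, and the alignment error rate from such a reference annotation, following the definitions in (Och and Ney, 2000); representing the alignments A, S, and P as sets of (source position, target position) pairs is an assumption made only for illustration.

def alignment_quality(A, S, P):
    # A: hypothesized alignment, S: sure reference links, P: possible reference
    # links, with S a subset of P; all are sets of (source, target) index pairs.
    precision = len(A & P) / len(A)   # precision error: a proposed link that is not even possible
    recall = len(A & S) / len(S)      # recall error: a sure link that was not found
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, aer

For example, with S = {(1, 1)}, P = {(1, 1), (2, 3)} and a hypothesis A = {(1, 1), (2, 3), (4, 2)}, the sketch yields a precision of 2/3, a recall of 1, and an AER of 1 - 3/4 = 0.25.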
The quality of an alignment A is computed as appropriately redefined precision and recall measures. Additionally, we use the alignment error rate (AER), which is derived from the well-known F-measure. With these definitions a recall error can only occur if a S(ure) alignment is not found and a precision error can only occur if a found alignment is not even P (ossible).", "cite_spans": [ { "start": 53, "end": 72, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria", "sec_num": "5.1" }, { "text": "We evaluated the presented lexicon symmetrization methods on the Verbmobil and the Canadian Hansards task. The German-English Verbmobil task (Wahlster, 2000) is a speech translation task in the domain of appointment scheduling, travel planning and hotel reservation. The French-English Canadian Hansards task consists of the debates in the Canadian Parliament. The corpus statistics are shown in Table 1 and Table 2 . The number of running words and the vocabularies are based on full-form words including punctuation marks. As in (Och and Ney, 2003) , the first 100 sentences of the test corpus are used as a development corpus to optimize model parameters that are not trained via the EM algorithm, e.g. the discounting parameter for lexicon smoothing. The remaining part of the test corpus is used to evaluate the models. We use the same training schemes (model sequences) as presented in (Och and Ney, 2003) . As we use the same training and testing conditions as (Och and Ney, 2003) , we will refer to the results presented in that article as the baseline results. In (Och and Ney, 2003) , the alignment quality of statistical models is compared to alternative approaches, e.g. using the Dice coefficient or the competitive linking algorithm. The statistical approach showed the best performance and therefore we report only the results for the statistical systems.", "cite_spans": [ { "start": 532, "end": 551, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 893, "end": 912, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 969, "end": 988, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 1074, "end": 1093, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 396, "end": 416, "text": "Table 1 and Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "In Table 3 and Table 4 , we present the following experiments performed for both the Verbmobil and the Canadian Hansards task:", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 22, "text": "Table 3 and Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "\u2022 Base: the system taken from (Och and Ney, 2003) that we use as baseline system.", "cite_spans": [ { "start": 30, "end": 49, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "\u2022 Lin.: symmetrized lexicon using a linear interpolation of the lexicon counts after each training iteration as described in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "\u2022 Log.: symmetrized lexicon using a loglinear interpolation of the lexicon counts after each training iteration as described in Section 3.2. 
In Table 3 , we compare both interpolation variants for the Verbmobil task to (Och and Ney, 2003) . We observe notable improvements in the alignment error rate using the linear interpolation. For the translation direction from German to English (S\u2192T), an improvement of about 25% relative is achieved from an alignment error rate of 5.7% for the baseline system to 4.3% using the linear interpolation. Performing the loglinear interpolation, we observe a substantial reduction of the alignment error rate as well. The two symmetrization methods improve both precision and recall of the resulting Viterbi alignment in both translation directions for the Verbmobil task. The improvements with the linear interpolation is for both translation directions statistically significant at the 99% level. For the loglinear interpolation, the target-to-source translation direction is statistically significant at the 99% level. The statistical significance test were done using boostrap resampling.", "cite_spans": [ { "start": 219, "end": 238, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 144, "end": 151, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "We also performed experiments on subcorpora of different sizes. For the Verbmobil task, the results are illustrated in Figure 1 . We observe that both symmetrization variants result in improvements for all corpus sizes. With increasing training corpus size the performance of the linear interpolation becomes superior to the performance of the loglinear interpolation.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "In Table 4 , we compare the symmetrization methods with the baseline system for the Canadian Hansards task. Here, the loglinear interpolation performs best. We achieve a relative improvement over the baseline of more than 30% for both translation directions. For instance, the alignment error rate for the translation direction from French to English (S\u2192T) improves from 12.6% for the baseline system to 8.6% for the symmetrized system with loglinear interpolation. Again, the two symmetrization methods improve both precision and recall of the Viterbi alignment. For the Canadian Hansards task, all the improvements of the alignment error rate are statistically significant at the 99% level.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Lexicon Symmetrization", "sec_num": "5.3" }, { "text": "In (Och and Ney, 2003) generalized alignments are used, thus the final Viterbi alignments of both translation directions are combined using some heuristic. Experimentally, the best heuristic for the Canadian Hansards task is the intersection. For the Verbmobil task, the refined method of (Och and Ney, 2003) is used. The results are summarized in Table 5 . We see that both the linear and the loglinear lexicon symmetrization methods yield an improvement with respect to the alignment error rate. For the Verbmobil task, the improvement with the loglinear interpolation is statistically significant at the 99% level. For the Canadian Hansards task, both lexicon symmetrization methods result in statistically significant improvements at the 95% level. 
Additionally, we observe that precision and recall are more balanced for the symmetrized lexicon variants, especially for the Canadian Hansards Table 6 : Effect of smoothing the lexicon probabilities on the alignment performance for the Verbmobil task (S\u2192T: source-to-target direction, smooth English; T\u2192S: target-to-source direction, smooth German; all numbers in percent). S\u2192T T\u2192S Pre. Rec. AER Pre. Rec. AER Base 93.5 95.3 5.7 91.4 88.7 9.9 smooth 94.8 94.8 5.2 93.4 88.2 9.1 task.", "cite_spans": [ { "start": 3, "end": 22, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 289, "end": 308, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 897, "end": 904, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Generalized Alignments", "sec_num": "5.4" }, { "text": "In Table 6 , we present the results for the lexicon smoothing as described in Section 4 on the Verbmobil corpus 2 . As expected, a notable improvement in the AER is reached if the lexicon smoothing is performed for German (i.e. for the target-to-source direction), because many full-form words with the same base form are present in this language. These improvements are statistically significant at the 95% level.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Lexicon Smoothing", "sec_num": "5.5" }, { "text": "The popular IBM models for statistical machine translation are described in (Brown et al., 1993) . The HMM-based alignment model was introduced in (Vogel et al., 1996) . A good overview of these models is given in (Och and Ney, 2003) . In that article Model 6 is introduced as the loglinear interpolation of the other models. Additionally, state-ofthe-art results are presented for the Verbmobil task and the Canadian Hansards task for various configurations. Therefore, we chose them as baseline. Compared to our work, these publications kept the training of the two translation directions strictly separate whereas we integrate both directions into one symmetrized training. Additional linguistic knowledge sources such as dependency trees or parse trees were used in (Cherry and Lin, 2003) and (Gildea, 2003) . In (Cherry and Lin, 2003) a probability model P r(a J 1 |f J 1 , e I 1 ) is used, which is symmetric per definition. Bilingual bracketing methods were used to produce a word alignment in (Wu, 1997) . (Melamed, 2000) uses an alignment model that enforces one-to-one alignments for nonempty words. 
In (Toutanova et al., 2002) , extensions to the HMM-based alignment model are presented.", "cite_spans": [ { "start": 76, "end": 96, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" }, { "start": 147, "end": 167, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF8" }, { "start": 214, "end": 233, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" }, { "start": 770, "end": 792, "text": "(Cherry and Lin, 2003)", "ref_id": "BIBREF1" }, { "start": 797, "end": 811, "text": "(Gildea, 2003)", "ref_id": "BIBREF2" }, { "start": 817, "end": 839, "text": "(Cherry and Lin, 2003)", "ref_id": "BIBREF1" }, { "start": 1001, "end": 1011, "text": "(Wu, 1997)", "ref_id": "BIBREF10" }, { "start": 1014, "end": 1029, "text": "(Melamed, 2000)", "ref_id": "BIBREF3" }, { "start": 1113, "end": 1137, "text": "(Toutanova et al., 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have addressed the task of automatically generating word alignments for bilingual corpora. This problem is of great importance for many tasks in natural language processing, especially in the field of machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We have presented lexicon symmetrization methods for statistical alignment models that are trained using the EM algorithm, in particular the five IBM models, the HMM and Model 6. We have evaluated these methods on the Verbmobil task and the Canadian Hansards task and compared our results to the state-of-the-art system of (Och and Ney, 2003) . We have shown that both the linear and the loglinear interpolation of lexicon counts after each iteration of the EM algorithm result in statistically significant improvements of the alignment quality. For the Canadian Hansards task, the AER improved by about 30% relative; for the Verbmobil task the improvement was about 25% relative. Additionally, we have described lexicon smoothing using the word base forms. Especially for highly inflected languages such as German, this smoothing resulted in statistically significant improvements.", "cite_spans": [ { "start": 323, "end": 342, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In the future, we plan to optimize the interpolation weights to balance the two translation directions. We will also investigate the possibility of generating directly an unconstrained alignment based on the symmetrized lexicon probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Actually, there is always implicit evidence trimming caused by the limited machine precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The base forms were determined using LingSoft tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been partially funded by the EU project LC-Star, IST-2001-32216. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter esti- mation. Computational Linguistics, 19(2):263- 311, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A probability model to improve word alignment", "authors": [ { "first": "C", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cherry and D. Lin. 2003. A probability model to improve word alignment. In Proc. of the 41th Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 88-95, Sap- poro, Japan, July.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Loosely tree-based alignment for machine translation", "authors": [ { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "80--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Gildea. 2003. Loosely tree-based alignment for machine translation. In Proc. of the 41th An- nual Meeting of the Association for Computa- tional Linguistics (ACL), pages 80-87, Sapporo, Japan, July.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Models of translational equivalence among words", "authors": [ { "first": "D", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Melamed. 2000. Models of translational equivalence among words. Computational Lin- guistics, 26(2):221-249.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical language modeling using leaving-one-out", "authors": [ { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "S", "middle": [], "last": "Martin", "suffix": "" }, { "first": "F", "middle": [], "last": "Wessel", "suffix": "" } ], "year": 1997, "venue": "Corpus-Based Methods in Language and Speech Processing", "volume": "", "issue": "", "pages": "174--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Ney, S. Martin, and F. Wessel. 1997. Statisti- cal language modeling using leaving-one-out. In S. Young and G. Bloothooft, editors, Corpus- Based Methods in Language and Speech Process- ing, pages 174-207. 
Kluwer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improved statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-447, Hong Kong, October.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "J", "middle": [], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Och and H. Ney. 2003. A systematic com- parison of various statistical alignment models. Computational Linguistics, 29(1):19-51, March.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Extensions to hmm-based statistical word alignment models", "authors": [ { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ilhan", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2002, "venue": "Proc. Conf. on Empirical Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "87--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Toutanova, H. T. Ilhan, and C. D. Manning. 2002. Extensions to hmm-based statistical word alignment models. In Proc. Conf. on Empirical Methods for Natural Language Processing, pages 87-94, Philadelphia, PA, July.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "HMMbased word alignment in statistical translation", "authors": [ { "first": "H", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "C", "middle": [], "last": "Ney", "suffix": "" }, { "first": "", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "COLING '96: The 16th Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vogel, H. Ney, and C. Tillmann. 1996. HMM- based word alignment in statistical translation. In COLING '96: The 16th Int. Conf. on Com- putational Linguistics, pages 836-841, Copen- hagen, Denmark, August.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Verbmobil: Foundations of speech-to-speech translations", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Wahlster, editor. 2000. Verbmobil: Founda- tions of speech-to-speech translations. Springer Verlag, Berlin, Germany, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel cor- pora. 
Computational Linguistics, 23(3):377- 403, September.", "links": null } }, "ref_entries": { "FIGREF0": { "text": ", P ; A) = 1 \u2212 |A \u2229 S| + |A \u2229 P | |A| + |S|", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "AER[%] of different alignment methods as a function of the training corpus size for the Verbmobil task (source-to-target direction).", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "content": "
                  German     English
Train Sentences        34K
      Words        329 625     343 076
      Vocabulary     5 936       3 505
      Singletons     2 600       1 305
Test  Sentences        354
      Words          3 233       3 109
", "type_str": "table", "html": null, "text": "Verbmobil: Corpus statistics.", "num": null }, "TABREF1": { "content": "
                  French     English
Train Sentences       128K
      Words          2.12M       1.93M
      Vocabulary    37 542      29 414
      Singletons    12 986       9 572
Test  Sentences        500
      Words          8 749       7 946
", "type_str": "table", "html": null, "text": "Canadian Hansards: Corpus statistics.", "num": null }, "TABREF2": { "content": "
       S\u2192T                     T\u2192S
       Pre.   Rec.   AER         Pre.   Rec.   AER
Base   93.5   95.3   5.7         91.4   88.7   9.9
Lin.   96.0   95.4   4.3         93.7   89.6   8.2
Log.   93.6   95.6   5.5         94.5   89.4   7.9
[Figure 1: AER (4-18%) as a function of the training corpus size (100 to 100 000 sentences) for the baseline, linear, and loglinear systems; Verbmobil task, source-to-target direction.]
", "type_str": "table", "html": null, "text": "Comparison of alignment performance for the Verbmobil task (S\u2192T: sourceto-target direction, T\u2192S: target-to-source direction; all numbers in percent).", "num": null }, "TABREF3": { "content": "
       S\u2192T                     T\u2192S
       Pre.   Rec.   AER         Pre.   Rec.   AER
Base   85.4   90.6   12.6        85.6   90.9   12.4
Lin.   89.3   91.4    9.9        89.0   92.0    9.8
Log.   91.0   92.0    8.6        91.2   92.1    8.4
", "type_str": "table", "html": null, "text": "Comparison of alignment performance for the Canadian Hansards task (S\u2192T: source-to-target direction, T\u2192S: target-tosource direction; all numbers in percent).", "num": null }, "TABREF4": { "content": "
task:      Verbmobil                                 Canadian Hansards
           Precision[%]   Recall[%]   AER[%]         Precision[%]   Recall[%]   AER[%]
Base       93.3           96.0        5.5            96.6           86.0        8.2
Lin.       96.1           94.0        4.9            95.2           88.5        7.7
Loglin.    95.2           95.3        4.7            93.6           90.8        7.5
", "type_str": "table", "html": null, "text": "Effect of different lexicon symmetrization methods on alignment performance for the generalized alignments for the Verbmobil task and the Canadian Hansards task.", "num": null } } } }