{ "paper_id": "P04-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:43:37.405383Z" }, "title": "Statistical Machine Translation with Word-and Sentence-Aligned Parallel Corpora", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "", "affiliation": {}, "email": "callison-burch@ed.ac.uk" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The parameters of statistical translation models are typically estimated from sentence-aligned parallel corpora. We show that significant improvements in the alignment and translation quality of such models can be achieved by additionally including wordaligned data during training. Incorporating wordlevel alignments into the parameter estimation of the IBM models reduces alignment error rate and increases the Bleu score when compared to training the same models only on sentence-aligned data. On the Verbmobil data set, we attain a 38% reduction in the alignment error rate and a higher Bleu score with half as many training examples. We discuss how varying the ratio of word-aligned to sentencealigned data affects the expected performance gain.", "pdf_parse": { "paper_id": "P04-1023", "_pdf_hash": "", "abstract": [ { "text": "The parameters of statistical translation models are typically estimated from sentence-aligned parallel corpora. We show that significant improvements in the alignment and translation quality of such models can be achieved by additionally including wordaligned data during training. Incorporating wordlevel alignments into the parameter estimation of the IBM models reduces alignment error rate and increases the Bleu score when compared to training the same models only on sentence-aligned data. 
On the Verbmobil data set, we attain a 38% reduction in the alignment error rate and a higher Bleu score with half as many training examples. We discuss how varying the ratio of word-aligned to sentence-aligned data affects the expected performance gain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation systems based on probabilistic translation models (Brown et al., 1993) are generally trained using sentence-aligned parallel corpora. For many language pairs these exist in abundant quantities. However, for new domains or uncommon language pairs extensive parallel corpora are often hard to come by.", "cite_spans": [ { "start": 70, "end": 90, "text": "(Brown et al., 1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Two factors could increase the performance of statistical machine translation for new language pairs and domains: a reduction in the cost of creating new training data, and the development of more efficient methods for exploiting existing training data. Approaches such as harvesting parallel corpora from the web (Resnik and Smith, 2003) address the creation of data. We take the second, complementary approach. We address the problem of efficiently exploiting existing parallel corpora by adding explicit word-level alignments between a number of the sentence pairs in the training corpus. We modify the standard parameter estimation procedure for the IBM Models and HMM variants so that they can exploit these additional word-level alignments. 
Our approach uses both word- and sentence-level alignments as training material.", "cite_spans": [ { "start": 314, "end": 338, "text": "(Resnik and Smith, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Describe how the parameter estimation framework of Brown et al. (1993) can be adapted to incorporate word-level alignments;", "cite_spans": [ { "start": 54, "end": 73, "text": "Brown et al. (1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Report significant improvements in alignment error rate and translation quality when training on data with word-level alignments;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Demonstrate that the inclusion of word-level alignments is more effective than using a bilingual dictionary; 4. Show the importance of amplifying the contribution of word-aligned data during parameter estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper shows that word-level alignments improve the parameter estimates for translation models, which in turn results in improved statistical translation for languages that do not have large sentence-aligned parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of statistical machine translation is to choose the source sentence, e, that is the most probable translation of a given sentence, f, in a foreign language. Rather than choosing e* that directly maximizes p(e|f), Brown et al. (1993) apply Bayes' rule and select the source sentence:", "cite_spans": [ { "start": 225, "end": 244, "text": "Brown et al. 
(1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "e * = arg max e p(e)p(f |e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "In this equation p(e) is a language model probability and is p(f |e) a translation model probability. A series of increasingly sophisticated translation models, referred to as the IBM Models, was defined in Brown et al. (1993) . The translation model, p(f |e) defined as a marginal probability obtained by summing over word-level alignments, a, between the source and target sentences:", "cite_spans": [ { "start": 207, "end": 226, "text": "Brown et al. (1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "p(f |e) = a p(f , a|e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "While word-level alignments are a crucial component of the IBM models, the model parameters are generally estimated from sentence-aligned parallel corpora without explicit word-level alignment information. The reason for this is that word-aligned parallel corpora do not generally exist. Consequently, word level alignments are treated as hidden variables. 
To estimate the values of these hidden variables, the expectation maximization (EM) framework for maximum likelihood estimation from incomplete data is used (Dempster et al., 1977) .", "cite_spans": [ { "start": 514, "end": 537, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "The previous section describes how the translation probability of a given sentence pair is obtained by summing over all alignments, p(f |e) = \u2211_a p(f , a|e). EM seeks to maximize the marginal log likelihood, log p(f |e), indirectly by iteratively maximizing a bound on this term known as the expected complete log likelihood, \u27e8log p(f , a|e)\u27e9_{q(a)}: 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "log p(f |e) = log \u2211_a p(f , a|e) (3) = log \u2211_a q(a) [p(f , a|e) / q(a)] (4) \u2265 \u2211_a q(a) log [p(f , a|e) / q(a)] (5) = \u27e8log p(f , a|e)\u27e9_{q(a)} + H(q(a))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "where the bound in (5) is given by Jensen's inequality. By choosing q(a) = p(a|f , e), this bound becomes an equality. 
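The chain from (3) to (5) can be checked numerically. The following is a small self-contained sketch, not taken from the paper; the toy joint probabilities are invented. It verifies that any distribution q yields a lower bound on log p(f|e), and that choosing q(a) = p(a|f, e) makes the bound tight:

```python
import math

# Toy joint probabilities p(f, a|e) over three possible alignments a
# for a single sentence pair (values invented for illustration).
p_joint = [0.02, 0.05, 0.01]

log_marginal = math.log(sum(p_joint))  # log p(f|e), as in (3)

def bound(q):
    # sum_a q(a) log [p(f, a|e) / q(a)], the right-hand side of (5)
    return sum(qa * math.log(pa / qa) for qa, pa in zip(q, p_joint) if qa > 0)

# Jensen's inequality: any q gives a lower bound ...
assert bound([1/3, 1/3, 1/3]) <= log_marginal

# ... and q(a) = p(a|f, e), the posterior, makes the bound exact.
posterior = [pa / sum(p_joint) for pa in p_joint]
assert abs(bound(posterior) - log_marginal) < 1e-12
```

The second assertion holds because, with q equal to the posterior, each ratio p(f, a|e)/q(a) collapses to the constant p(f|e).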
This maximization consists of two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "\u2022 E-step: calculate the posterior probability under the current model of every permissible alignment for each sentence pair in the sentence-aligned training corpus;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "\u2022 M-step: maximize the expected log likelihood under this posterior distribution, \u27e8log p(f , a|e)\u27e9_{q(a)}, with respect to the model's parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "While in standard maximum likelihood estimation events are counted directly to estimate parameter settings, in EM we effectively collect fractional counts of events (here permissible alignments weighted by their posterior probability), and use these to iteratively update the parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "Since only some of the permissible alignments make sense linguistically, we would like EM to use the posterior alignment probabilities calculated in the E-step to weight plausible alignments higher than the large number of bogus alignments which are included in the expected complete log likelihood. 
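The fractional-count bookkeeping described above can be illustrated with a toy, Model 1-style example. This is an illustrative sketch, not GIZA++'s implementation; the uniform translation table and the two-word sentence pair are invented:

```python
from collections import defaultdict

# Illustrative Model 1-style E-step (a sketch, not GIZA++'s code):
# t[(f, e)] holds current word-translation probabilities, and each
# fractional count is a link weighted by its posterior probability.
def collect_fractional_counts(sentence_pairs, t):
    counts = defaultdict(float)
    for f_sent, e_sent in sentence_pairs:
        for f in f_sent:
            z = sum(t[(f, e)] for e in e_sent)  # normalizer over links for f
            for e in e_sent:
                counts[(f, e)] += t[(f, e)] / z  # posterior weight of link f-e
    return counts

# With a uniform translation table, each English word receives half of
# the count mass contributed by each German word.
t = defaultdict(lambda: 0.25)
pairs = [(["das", "haus"], ["the", "house"])]
counts = collect_fractional_counts(pairs, t)
```

In the M-step these fractional counts would be renormalized into updated translation probabilities, and the cycle repeats.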
This in turn should encourage the parameter adjustments made in the M-step to converge to linguistically plausible values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "Since the number of permissible alignments for a sentence grows exponentially in the length of the sentences for the later IBM Models, a large number of informative example sentence pairs are required to distinguish between plausible and implausible alignments. Given sufficient data, the distinction occurs because words which are mutual translations appear together more frequently in aligned sentences in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "Given the high number of model parameters and permissible alignments, however, huge amounts of data will be required to estimate reasonable translation models from sentence-aligned data alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Sentence-Aligned Corpora", "sec_num": "2" }, { "text": "As an alternative to collecting a huge amount of sentence-aligned training data, by annotating some of our sentence pairs with word-level alignments we can explicitly provide information to highlight plausible alignments and thereby help parameters converge upon reasonable settings with less training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Word-and Sentence-Aligned Corpora", "sec_num": "3" }, { "text": "Since word alignments are inherent in the IBM translation models, it is straightforward to incorporate this information into the parameter estimation procedure. For sentence pairs with explicit word-level alignments marked, fractional counts over all permissible alignments need not be collected. 
Instead, whole counts are collected for the single hand-annotated alignment for each sentence pair which has been word-aligned. By doing this, the expected complete log likelihood collapses to a single term, the complete log likelihood, log p(f , a|e), and the E-step is circumvented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Word-and Sentence-Aligned Corpora", "sec_num": "3" }, { "text": "The parameter estimation procedure now involves maximizing the likelihood of data aligned only at the sentence level and also of data aligned at the word level. The mixed likelihood function, M, combines the expected information contained in the sentence-aligned data with the complete information contained in the word-aligned data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Word-and Sentence-Aligned Corpora", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M = \u2211_{s=1}^{N_s} (1 \u2212 \u03bb) \u27e8log p(f_s , a_s |e_s )\u27e9_{q(a_s)} + \u2211_{w=1}^{N_w} \u03bb log p(f_w , a_w |e_w )", "eq_num": "(6)" } ], "section": "Parameter Estimation Using Word-and Sentence-Aligned Corpora", "sec_num": "3" }, { "text": "Here s and w index the N_s sentence-aligned sentences and N_w word-aligned sentences in our corpora respectively. Thus M combines the expected complete log likelihood and the complete log likelihood. In order to control the relative contributions of the sentence-aligned and word-aligned data in the parameter estimation procedure, we introduce a mixing weight \u03bb that can take values between 0 and 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Using Word-and Sentence-Aligned Corpora", "sec_num": "3" }, { "text": "The impact of word-level alignments on parameter estimation is closely tied to the structure of the IBM Models. 
Since translation and word alignment parameters are shared between all sentences, the posterior alignment probability of a source-target word pair in the sentence-aligned section of the corpus that was aligned in the word-aligned section will tend to be relatively high.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The impact of word-level alignments", "sec_num": "3.1" }, { "text": "In this way, the alignments from the word-aligned data effectively percolate through to the sentence-aligned data, indirectly constraining the E-step of EM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The impact of word-level alignments", "sec_num": "3.1" }, { "text": "By incorporating \u03bb, Equation 6 becomes an interpolation of the expected complete log likelihood provided by the sentence-aligned data and the complete log likelihood provided by the word-aligned data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting the contribution of", "sec_num": "3.2" }, { "text": "The use of a weight to balance the contributions of unlabeled and labeled data in maximum likelihood estimation was proposed by Nigam et al. (2000) . \u03bb quantifies our relative confidence in the expected statistics and observed statistics estimated from the sentence- and word-aligned data respectively.", "cite_spans": [ { "start": 128, "end": 147, "text": "Nigam et al. (2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Weighting the contribution of", "sec_num": "3.2" }, { "text": "Standard maximum likelihood estimation (MLE), which weighs all training samples equally, corresponds to an implicit value of \u03bb equal to the proportion of word-aligned data in the whole of the training set: \u03bb = N_w / (N_w + N_s). However, having the total amount of sentence-aligned data be much larger than the amount of word-aligned data implies a value of \u03bb close to zero. 
This means that M can be maximized while essentially ignoring the likelihood of the word-aligned data. Since we believe that the explicit word-alignment information will be highly effective in distinguishing plausible alignments in the corpus as a whole, we expect to see benefits by setting \u03bb to amplify the contribution of the word-aligned data set, particularly when this is a relatively small portion of the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting the contribution of", "sec_num": "3.2" }, { "text": "To perform our experiments with word-level alignments, we modified GIZA++, an existing and freely available implementation of the IBM models and HMM variants (Och and Ney, 2003) . Our modifications involved circumventing the E-step for sentences which had word-level alignments and incorporating these observed alignment statistics in the M-step. The observed and expected statistics were weighted accordingly by \u03bb and (1 \u2212 \u03bb) respectively, as were their contributions to the mixed log likelihood.", "cite_spans": [ { "start": 158, "end": 177, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "In order to measure the accuracy of the predictions that the statistical translation models make under our various experimental settings, we chose the alignment error rate (AER) metric, which is defined in Och and Ney (2003) . We also investigated whether improved AER leads to improved translation quality. We used the alignments created during our AER experiments as the input to a phrase-based decoder. 
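As a reference point, the AER metric of Och and Ney (2003) compares a set of predicted links A against sure gold links S and possible gold links P (with S a subset of P): AER = 1 \u2212 (|A \u2229 S| + |A \u2229 P|) / (|A| + |S|). A minimal sketch follows; the example link sets are invented for illustration:

```python
# AER as defined by Och and Ney (2003): S = sure gold links,
# P = possible gold links (S is a subset of P), A = predicted links.
def aer(A, S, P):
    A, S, P = set(A), set(S), set(P)
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

gold_sure = {(1, 1), (2, 2)}
gold_possible = gold_sure | {(3, 2)}
predicted = {(1, 1), (2, 2), (3, 3)}
print(aer(predicted, gold_sure, gold_possible))  # 0.2
```

A prediction that hits every sure link and stays inside the possible set scores an AER of zero.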
We translated a test set of 350 sentences, and used the Bleu metric (Papineni et al., 2001) to automatically evaluate machine translation quality.", "cite_spans": [ { "start": 207, "end": 225, "text": "Och and Ney (2003)", "ref_id": "BIBREF8" }, { "start": 475, "end": 498, "text": "(Papineni et al., 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "We used the Verbmobil German-English parallel corpus as a source of training data because it has been used extensively in evaluating statistical translation and alignment accuracy. This data set comes with a manually word-aligned set of 350 sentences, which we used as our test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "Our experiments additionally required a very large set of word-aligned sentence pairs to be incorporated in the training set. Since previous work has shown that when training on the complete set of 34,000 sentence pairs an alignment error rate as low as 6% can be achieved for the Verbmobil data, we automatically generated a set of alignments for the entire training data set using the unmodified version of GIZA++. We wanted to use automatic alignments in lieu of actual hand alignments so that we would be able to perform experiments using large data sets. We ran a pilot experiment to test whether our automatic alignments would produce results similar to manual alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "We divided our manual word alignments into training and test sets and compared the performance of models trained on human-aligned data against models trained on automatically aligned data. 
(Table 1: Alignment error rates for the various IBM Models trained with sentence-aligned data.) 100-fold cross-validation showed that manual and automatic alignments produced AER results within 0.1% of each other. 2 Having satisfied ourselves that automatic alignments were a sufficient stand-in for manual alignments, we performed our main experiments, which fell into the following categories:", "cite_spans": [ { "start": 422, "end": 423, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "1. Verifying that the use of word-aligned data has an impact on the quality of alignments predicted by the IBM Models, and comparing the quality increase to that gained by using a bilingual dictionary in the estimation stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "2. Evaluating whether improved parameter estimates of alignment quality lead to improved translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "3. Experimenting with how increasing the ratio of word-aligned to sentence-aligned data affected the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "4. Experimenting with our \u03bb parameter, which allows us to weight the relative contributions of the word-aligned and sentence-aligned data, and relating it to the ratio experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "5. 
Showing that improvements to AER and translation quality held for another corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "As a starting point for comparison, we trained GIZA++ using four different-sized portions of the Verbmobil corpus. For each of those portions we output the most probable alignments of the testing data for Model 1, the HMM, Model 3, and Model 4. (Note that we stripped out probable alignments from our manually produced alignments; probable alignments are large blocks of words which the annotator was uncertain how to align. The many possible word-to-word translations implied by the manual alignments led to lower results than with the automatic alignments, which contained fewer word-to-word translation possibilities.) Table 1 gives alignment error rates when training on 500, 2000, 8000, and 16000 sentence pairs from the Verbmobil corpus without using any word-aligned training data. We obtained much better results when incorporating word alignments with our mixed likelihood function. Table 2 shows the results for the different corpus sizes, when all of the sentence pairs have been word-aligned. The best performing model in the unmodified GIZA++ code was the HMM trained on 16,000 sentence pairs, which had an alignment error rate of 12.04%. In our modified code, the best performing model was Model 4 trained on 16,000 sentence pairs (where all the sentence pairs are word-aligned) with an alignment error rate of 7.52%. The difference in the best performing models represents a 38% relative reduction in AER. Interestingly, we achieve a lower AER than the best performing unmodified models using a corpus that is one-eighth the size of the sentence-aligned data. Figure 1 shows an example of the improved alignments that are achieved when using the word-aligned data. The example alignments were held-out sentence pairs that were aligned after training on 500 sentence pairs. 
The alignments produced when training on word-aligned data are dramatically better than when training on sentence-aligned data.", "cite_spans": [], "ref_spans": [ { "start": 620, "end": 627, "text": "Table 1", "ref_id": null }, { "start": 886, "end": 893, "text": "Table 2", "ref_id": null }, { "start": 1567, "end": 1575, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Improved alignment quality", "sec_num": "5.1" }, { "text": "We contrasted these improvements with the improvements that are to be had from incorporating a bilingual dictionary into the estimation process. For this experiment we allowed a bilingual dictionary to constrain which words can act as translations of each other during the initial estimates of translation probabilities (as described in Och and Ney (2003) ). As can be seen in Table 3 , using a dictionary reduces the AER when compared to using GIZA++ without a dictionary, but not as dramatically as integrating the word alignments. We further tried combining a dictionary with our word alignments, but found that the dictionary results in only very minimal improvements over using word alignments alone. (Table 4: Improved AER leads to improved translation quality.)", "cite_spans": [ { "start": 337, "end": 355, "text": "Och and Ney (2003)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 377, "end": 384, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 705, "end": 712, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Improved alignment quality", "sec_num": "5.1" }, { "text": "The fact that using word-aligned data in estimating the parameters for machine translation leads to better alignments is predictable. A more significant result is whether it leads to improved translation quality. In order to test that our improved parameter estimates lead to better translation quality, we used a state-of-the-art phrase-based decoder to translate a held-out set of German sentences into English. 
The phrase-based decoder extracts phrases from the word alignments produced by GIZA++, and computes translation probabilities based on the frequency of one phrase being aligned with another (Koehn et al., 2003) . We trained a language model using the 34,000 English sentences from the training set. (Table 5: The effect of weighting word-aligned data more heavily than its proportion in the training data, corpus size 16,000 sentence pairs. Columns give the AER when \u03bb = standard MLE versus when \u03bb = .9, for each ratio of word-aligned data: 0.1: 11.73 vs. 9.40; 0.2: 10.89 vs. 8.66; 0.3: 10.23 vs. 8.13; 0.5: 8.65 vs. 8.19; 0.7: 8.29 vs. 8.03; 0.9: 7.78 vs. 7.78.) Table 4 shows that using word-aligned data leads to better translation quality than using sentence-aligned data. In particular, significantly less data is needed to achieve a high Bleu score when using word alignments. Training on a corpus of 8,000 sentence pairs with word alignments results in a higher Bleu score than when training on a corpus of 16,000 sentence pairs without word alignments.", "cite_spans": [ { "start": 604, "end": 624, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 786, "end": 793, "text": "Table 5", "ref_id": null }, { "start": 985, "end": 992, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Improved translation quality", "sec_num": "5.2" }, { "text": "We have seen that using training data consisting of entirely word-aligned sentence pairs leads to better alignment accuracy and translation quality. However, because manually word-aligning sentence pairs costs more than just using sentence-aligned data, it is unlikely that we will ever want to label an entire corpus. Instead, we will likely have a relatively small portion of the corpus word-aligned. We want to be sure that this small amount of data labeled with word alignments does not get overwhelmed by a larger amount of unlabeled data. Thus we introduced the \u03bb weight into our mixed likelihood function. 
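As a toy illustration of how \u03bb rebalances the two terms of the mixed likelihood function (Equation 6), the sketch below interpolates per-sentence log-likelihood terms that are assumed to have been computed elsewhere; the numeric values are invented:

```python
# Toy interpolation in the spirit of Equation 6. The per-sentence terms
# are assumed precomputed elsewhere; the numbers are made up.
expected_cll_sent = [-12.3, -9.8, -15.1]  # <log p(f_s, a_s|e_s)>_q, sentence-aligned
cll_word = [-8.7]                         # log p(f_w, a_w|e_w), word-aligned

def mixed_likelihood(lam):
    return (1 - lam) * sum(expected_cll_sent) + lam * sum(cll_word)

# Standard MLE corresponds to lambda equal to the proportion of
# word-aligned data; amplifying it (e.g. 0.9) upweights that term.
n_s, n_w = len(expected_cll_sent), len(cll_word)
natural_lam = n_w / (n_w + n_s)
print(natural_lam, mixed_likelihood(natural_lam), mixed_likelihood(0.9))
```

With realistic corpus sizes the natural \u03bb is close to zero, which is exactly why amplification matters.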
Table 5 compares the natural setting of \u03bb (where it is proportional to the amount of labeled data in the corpus) to a value that amplifies the contribution of the word-aligned data. Figure 2 shows a variety of values for \u03bb. It shows that as \u03bb increases, AER decreases. Placing nearly all the weight onto the word-aligned data seems to be most effective. 4 (Note this did not vary the training data size, only the relative contributions of sentence- and word-aligned training material.)", "cite_spans": [], "ref_spans": [ { "start": 612, "end": 619, "text": "Table 5", "ref_id": null }, { "start": 794, "end": 802, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Weighting the word-aligned data", "sec_num": "5.3" }, { "text": "We also varied the ratio of word-aligned to sentence-aligned data, evaluated the AER and Bleu scores, and assigned a high value (0.9) to \u03bb. Figure 3 shows how AER improves as more word-aligned data is added. Each curve on the graph represents a corpus size and shows its reduction in error rate as more word-aligned data is added. For example, the bottom curve shows the performance of a corpus of 16,000 sentence pairs, which starts with an AER of just over 12% with no word-aligned training data and decreases to an AER of 7.5% when all 16,000 sentence pairs are word-aligned. This curve essentially levels off after 30% of the data is word-aligned. This shows that a small amount of word-aligned data is very useful, and if we wanted to achieve a low AER, we would only have to label 4,800 examples with their word alignments rather than the entire corpus. Figure 4 (the effect on Bleu of varying the ratio of word-aligned to sentence-aligned data) shows how the Bleu score improves as more word-aligned data is added. This graph also reinforces the fact that a small amount of word-aligned data is useful. 
A corpus of 8,000 sentence pairs with only 800 of them labeled with word alignments achieves a higher Bleu score than a corpus of 16,000 sentence pairs with no word alignments.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 863, "end": 871, "text": "Figure 4", "ref_id": null }, { "start": 958, "end": 966, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Ratio of word-to sentence-aligned data", "sec_num": "5.4" }, { "text": "We additionally tested whether incorporating word-level alignments into the estimation improved results for a larger corpus. We repeated our experiments using the Canadian Hansards French-English parallel corpus. Table 6 gives a summary of the improvements in AER and Bleu score for that corpus, when testing on a held-out set of 484 hand-aligned sentences.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 220, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation using a larger training corpus", "sec_num": "5.5" }, { "text": "On the whole, alignment error rates are higher and Bleu scores are considerably lower for the Hansards corpus. This is probably due to the differences in the corpora. Whereas the Verbmobil corpus has a small vocabulary (<10,000 per language), the Hansards has ten times that many vocabulary items and has a much longer average sentence length. (Table 6: Summary results for AER and translation quality experiments on Hansards data.) This made it more difficult for us to create a simulated set of hand alignments; we measured the AER of our simulated alignments at 11.3% (which compares to 6.5% for our simulated alignments for the Verbmobil corpus). Nevertheless, the trend of decreased AER and increased Bleu score still holds. 
For each size of training corpus we tested, we found better results using the word-aligned data.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 244, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation using a larger training corpus", "sec_num": "5.5" }, { "text": "Och and Ney (2003) is the most extensive analysis to date of how different factors contribute toward improved alignment error rates, but the inclusion of word alignments is not considered. Och and Ney do not give any direct analysis of how improved word alignment accuracy contributes toward better translation quality, as we do here. Mihalcea and Pedersen (2003) described a shared task where the goal was to achieve the best AER. A number of different methods were tried, but none of them used word-level alignments. Since the best performing system used an unmodified version of Giza++, we would expect that our modified version would show enhanced performance. Naturally this would need to be tested in future work. Melamed (1998) describes the process of manually creating a large set of word-level alignments of sentences in a parallel text. Nigam et al. (2000) described the use of a weight to balance the respective contributions of labeled and unlabeled data to a mixed likelihood function. Corduneanu (2002) provides a detailed discussion of the instability of maximum likelihood solutions estimated from a mixture of labeled and unlabeled data.", "cite_spans": [ { "start": 342, "end": 370, "text": "Mihalcea and Pedersen (2003)", "ref_id": "BIBREF6" }, { "start": 728, "end": 742, "text": "Melamed (1998)", "ref_id": "BIBREF5" }, { "start": 856, "end": 875, "text": "Nigam et al. 
(2000)", "ref_id": "BIBREF7" }, { "start": 1006, "end": 1023, "text": "Corduneanu (2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper we show that, with an appropriate modification of EM, significant gains can be achieved by labeling word alignments in a bilingual corpus. As a result, significantly less data is required to achieve a low alignment error rate or a high Bleu score. This holds even when using noisy word alignments such as our automatically created set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "7" }, { "text": "One should take our research into account when trying to efficiently create a statistical machine translation system for a language pair for which a parallel corpus is not available. Germann (2001) describes the cost of building a Tamil-English parallel corpus from scratch, and finds that relying on professional translations is prohibitively expensive. In our experience it is quicker to manually word-align translated sentence pairs than to translate a sentence, and word-level alignment can be done by someone who might not be fluent enough to produce translations. It might therefore be possible to achieve higher performance at a fraction of the cost by hiring a non-professional to produce word alignments after a limited set of sentences have been translated.", "cite_spans": [ { "start": 183, "end": 197, "text": "Germann (2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "7" }, { "text": "We plan to investigate whether it is feasible to use active learning to select which examples will be most useful when aligned at the word level. Section 5.4 shows that word-aligning a fraction of the sentence pairs in a training corpus, rather than the entire corpus, can still yield most of the benefits described in this paper.
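The modification of EM referred to in this paper amounts to interpolating the expected counts collected from sentence-aligned data with empirical counts read off the word-aligned data, under a mixing weight λ. A simplified pure-Python sketch of that interpolation step; the count tables, the helper name mix_counts, and the toy vocabulary are hypothetical, and this is not the actual GIZA++ implementation:

```python
def mix_counts(expected, labeled, lam):
    """Mix expected counts (EM on sentence-aligned pairs) with empirical
    counts (manual word alignments), then renormalise to t(f|e).

    `expected` and `labeled` map (target_word, source_word) pairs to counts.
    lam = 0 ignores the word-aligned data; lam = 1 ignores the data that is
    only sentence-aligned (cf. the footnote on lambda = 1).
    Assumes every source word has at least one nonzero mixed count.
    """
    keys = set(expected) | set(labeled)
    mixed = {k: (1.0 - lam) * expected.get(k, 0.0) + lam * labeled.get(k, 0.0)
             for k in keys}
    totals = {}
    for (f, e), c in mixed.items():
        totals[e] = totals.get(e, 0.0) + c
    return {(f, e): c / totals[e] for (f, e), c in mixed.items()}

# Toy counts for a single source word, purely illustrative.
expected = {("maison", "house"): 2.0, ("bleue", "house"): 1.0}
labeled = {("maison", "house"): 3.0, ("bleue", "house"): 1.0}
t_table = mix_counts(expected, labeled, lam=0.5)
```

In this sketch the labeled counts sharpen the distribution toward the manually confirmed links while the unlabeled counts keep coverage of pairs the annotators never saw, which is the intuition behind the reduced data requirements reported above.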
One would hope that by selectively sampling which sentences are to be manually word-aligned we would achieve nearly the same performance as word-aligning the entire corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "7" }, { "text": "Here \u27e8\u00b7\u27e9_{q(\u00b7)} denotes an expectation with respect to q(\u00b7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the default training schemes for GIZA++, and left model smoothing parameters at their default settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "At \u03bb = 1 (not shown in Figure 2) the data that is only sentence-aligned is ignored, and the AER is therefore higher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Franz Och, Hermann Ney, and Richard Zens for providing the Verbmobil data, and Linear B for providing its phrase-based decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The mathematics of machine translation: Parameter estimation", "authors": [ { "first": "Peter", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Stephen", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. 
Computational Linguistics, 19(2):263-311, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stable mixing of complete and incomplete information", "authors": [ { "first": "Adrian", "middle": [], "last": "Corduneanu", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrian Corduneanu. 2002. Stable mixing of complete and incomplete information. Master's thesis, Massachusetts Institute of Technology, February.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38, Nov.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Building a statistical machine translation system from scratch: How much bang for the buck can we expect", "authors": [ { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" } ], "year": 2001, "venue": "ACL 2001 Workshop on Data-Driven Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulrich Germann. 2001. Building a statistical machine translation system from scratch: How much bang for the buck can we expect? 
In ACL 2001 Workshop on Data-Driven Machine Translation, Toulouse, France, July 7.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the HLT/NAACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Manual annotation of translational equivalence: The blinker project", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1998, "venue": "Cognitive Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed. 1998. Manual annotation of translational equivalence: The blinker project. Cognitive Science Technical Report 98/07, University of Pennsylvania.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An evaluation exercise for word alignment", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL 2003 Workshop: Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. 
In Rada Mihalcea and Ted Pedersen, editors, HLT-NAACL 2003 Workshop: Building and Using Parallel Texts.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Text classification from labeled and unlabeled documents using EM", "authors": [ { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "Andrew", "middle": [ "K" ], "last": "McCallum", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Machine Learning", "volume": "39", "issue": "", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103-134.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51, March. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. 
IBM Research Report RC22176(W0109-022), IBM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The web as a parallel corpus", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "3", "pages": "349--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Noah Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349-380, September.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Example alignments using sentence-aligned training data (a), using word-aligned data (b), and a reference manual alignment (c)", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "The effect on AER of varying \u03bb for a training corpus of 16K sentence pairs with various proportions of word-alignments", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "The effect on AER of varying the ratio of word-aligned to sentence-aligned data", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "29.64 24.66 22.64 21.68 HMM 18.74 15.63 12.39 12.04 Model 3 26.07 18.64 14.39 13.87 Model 4 20.59 16.05 12.63 12.17", "type_str": "table", "html": null, "num": null, "content": "
Size of training corpus
Model      .5k      2k       8k       16k
Model 1    29.64    24.66    22.64    21.68
HMM        18.74    15.63    12.39    12.04
Model 3    26.07    18.64    14.39    13.87
Model 4    20.59    16.05    12.63    12.17
" }, "TABREF3": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
The improved alignment error rates when using a dictionary instead of word-aligned data to constrain word translations

           Sentence-aligned      Word-aligned
Size       AER      Bleu        AER      Bleu
500        20.59    0.211       14.19    0.233
2000       16.05    0.247       10.13    0.260
8000       12.63    0.265       7.87     0.278
16000      12.17    0.270       7.52     0.282
" } } } }