{ "paper_id": "P10-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:23:00.688057Z" }, "title": "Training Phrase Translation Models with Leaving-One-Out", "authors": [ { "first": "Joern", "middle": [], "last": "Wuebker", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Arne", "middle": [], "last": "Mauser", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Several attempts have been made to learn phrase translation probabilities for phrasebased statistical machine translation that go beyond pure counting of phrases in word-aligned training data. Most approaches report problems with overfitting. We describe a novel leavingone-out approach to prevent over-fitting that allows us to train phrase models that show improved translation performance on the WMT08 Europarl German-English task. In contrast to most previous work where phrase models were trained separately from other models used in translation, we include all components such as single word lexica and reordering models in training. Using this consistent training of phrase models we are able to achieve improvements of up to 1.4 points in BLEU. As a side effect, the phrase table size is reduced by more than 80%.", "pdf_parse": { "paper_id": "P10-1049", "_pdf_hash": "", "abstract": [ { "text": "Several attempts have been made to learn phrase translation probabilities for phrasebased statistical machine translation that go beyond pure counting of phrases in word-aligned training data. Most approaches report problems with overfitting. We describe a novel leavingone-out approach to prevent over-fitting that allows us to train phrase models that show improved translation performance on the WMT08 Europarl German-English task. In contrast to most previous work where phrase models were trained separately from other models used in translation, we include all components such as single word lexica and reordering models in training. Using this consistent training of phrase models we are able to achieve improvements of up to 1.4 points in BLEU. As a side effect, the phrase table size is reduced by more than 80%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A phrase-based SMT system takes a source sentence and produces a translation by segmenting the sentence into phrases and translating those phrases separately (Koehn et al., 2003) . The phrase translation table, which contains the bilingual phrase pairs and the corresponding translation probabilities, is one of the main components of an SMT system. The most common method for obtaining the phrase table is heuristic extraction from automatically word-aligned bilingual training data (Och et al., 1999) . In this method, all phrases of the sentence pair that match constraints given by the alignment are extracted. This includes overlapping phrases. 
At extraction time, it does not matter whether the phrases are extracted from a highly probable phrase alignment or from an unlikely one.", "cite_spans": [ { "start": 158, "end": 178, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF9" }, { "start": 484, "end": 502, "text": "(Och et al., 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Phrase model probabilities are typically defined as relative frequencies of phrases extracted from word-aligned parallel training data. The joint counts $C(\\tilde{f},\\tilde{e})$ of the source phrase $\\tilde{f}$ and the target phrase $\\tilde{e}$ in the entire training data are normalized by the marginal counts of source and target phrase to obtain a conditional probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_H(\\tilde{f}|\\tilde{e}) = \\frac{C(\\tilde{f},\\tilde{e})}{C(\\tilde{e})}.", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "The translation process is implemented as a weighted log-linear combination of several models $h_m(e_1^I, s_1^K, f_1^J)$, including the logarithm of the phrase probability in source-to-target as well as in target-to-source direction. The phrase model is combined with a language model, word lexicon models, word and phrase penalty, and many others (Och and Ney, 2004). The best translation $\\hat{e}_1^{\\hat{I}}$ as defined by the models can then be written as", "cite_spans": [ { "start": 350, "end": 369, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{e}_1^{\\hat{I}} = \\operatorname*{argmax}_{I, e_1^I} \\sum_{m=1}^{M} \\lambda_m h_m(e_1^I, s_1^K, f_1^J)", "eq_num": "(2)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose to directly train our phrase models by applying a forced alignment procedure, where we use the decoder to find a phrase alignment between source and target sentences of the training data and then update phrase translation probabilities based on this alignment. In contrast to heuristic extraction, the proposed method provides a way of consistently training and using phrase models in translation. We use a modified version of a phrase-based decoder to perform the forced alignment. This way we ensure that all models used in training are identical to the ones used at decoding time. An illustration of the basic idea can be seen in Figure 1. In the literature this method by itself has been shown to be problematic, because it suffers from over-fitting (DeNero et al., 2006), (Liang et al., 2006). Since our initial phrases are extracted from the same training data that we want to align, very long phrases can be found for segmentation. As these long phrases tend to occur in only a few training sentences, the EM algorithm generally overestimates their probability and neglects shorter phrases, which generalize better to unseen data and are thus more useful for translation. In order to counteract these effects, our training procedure applies leaving-one-out on the sentence level.", "cite_spans": [ { "start": 780, "end": 801, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" }, { "start": 804, "end": 824, "text": "(Liang et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 659, "end": 667, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
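To make the overall procedure concrete, the following is a minimal sketch, in Python, of the training cycle just described: forced alignment of the training data followed by relative-frequency re-estimation. The `force_align` function stands in for the constrained decoder and is hypothetical, as is the plain-dictionary phrase table; the actual system uses the full log-linear model and the leaving-one-out modification described below.

```python
from collections import Counter

def train_phrase_model(bitext, phrase_table, iterations=1):
    """Re-estimate phrase translation probabilities from forced alignments."""
    for _ in range(iterations):
        joint = Counter()     # joint phrase pair counts C(f~, e~)
        marginal = Counter()  # target phrase marginal counts C(e~)
        for source, target in bitext:
            # Decode `source` constrained to produce exactly `target`;
            # returns the best phrase segmentation/alignment, or None if
            # no alignment can be found (roughly 5% of sentence pairs).
            alignment = force_align(source, target, phrase_table)
            if alignment is None:
                continue
            for f_phrase, e_phrase in alignment:
                joint[(f_phrase, e_phrase)] += 1
                marginal[e_phrase] += 1
        # Relative-frequency re-estimation, analogous to Equation 1,
        # but with counts from phrase alignments instead of heuristics.
        phrase_table = {pair: n / marginal[pair[1]] for pair, n in joint.items()}
    return phrase_table
```
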
{ "text": "Our results show that leaving-one-out leads to better translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Ideally, we would produce all possible segmentations and alignments during training. However, this has been shown to be infeasible for real-world data (DeNero and Klein, 2008). As training uses a modified version of the translation decoder, it is straightforward to apply pruning as in regular decoding. Additionally, we consider three ways of approximating the full search space:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. the single-best Viterbi alignment, 2. the n-best alignments, 3. all alignments remaining in the search space after pruning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The performance of the different approaches is measured and compared on the German-English Europarl task from the ACL 2008 Workshop on Statistical Machine Translation (WMT08). Our results show that the proposed phrase model training improves translation quality on the test set by 0.9 BLEU points over our baseline. We find that by interpolation with the heuristically extracted phrases, translation performance can reach up to 1.4 BLEU points of improvement over the baseline on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After reviewing the related work in the following section, we give a detailed description of phrasal alignment and leaving-one-out in Section 3. Section 4 explains the estimation of phrase models. The empirical evaluation of the different approaches is done in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It has been pointed out in the literature that training phrase models poses some difficulties. For a generative model, (DeNero et al., 2006) gave a detailed analysis of the challenges and arising problems. They introduce a model similar to the one we propose in Section 4.2 and train it with the EM algorithm. Their results show that it cannot reach performance competitive with extracting a phrase table from word alignment by heuristics (Och et al., 1999).", "cite_spans": [ { "start": 116, "end": 136, "text": "(DeNero et al., 2006", "ref_id": "BIBREF4" }, { "start": 438, "end": 456, "text": "(Och et al., 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Several reasons for this are revealed in (DeNero et al., 2006). When given a bilingual sentence pair, we can usually assume there are a number of equally correct phrase segmentations and corresponding alignments. For example, it may be possible to transform one valid segmentation into another by splitting some of its phrases into sub-phrases or by shifting phrase boundaries. This is different from word-based translation models, where a typical assumption is that each target word corresponds to only one source word. As a result of this ambiguity, different segmentations are recruited for different examples during training. 
This in turn leads to over-fitting, which manifests in overly deterministic estimates of the phrase translation probabilities. In addition, (DeNero et al., 2006) found that the trained phrase table shows a highly peaked distribution, in contrast to the flatter distribution resulting from heuristic extraction, leaving the decoder only a few translation options at decoding time.", "cite_spans": [ { "start": 32, "end": 53, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" }, { "start": 756, "end": 777, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work differs from (DeNero et al., 2006) in a number of ways, addressing those problems.", "cite_spans": [ { "start": 22, "end": 43, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To limit the effects of over-fitting, we apply the leaving-one-out and cross-validation methods in training. In addition, we do not restrict the training to phrases consistent with the word alignment, as was done in (DeNero et al., 2006). This allows us to recover from flawed word alignments.", "cite_spans": [ { "start": 216, "end": 237, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In (Liang et al., 2006) a discriminative translation system is described. To train the parameters of the discriminative features, they propose a strategy they call bold updating, which is similar to our forced alignment training procedure described in Section 3.", "cite_spans": [ { "start": 3, "end": 23, "text": "(Liang et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For the hierarchical phrase-based approach, (Blunsom et al., 2008) present a discriminative rule model and show the difference between using only the Viterbi alignment in training and using the full sum over all possible derivations.", "cite_spans": [ { "start": 44, "end": 66, "text": "(Blunsom et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Forced alignment can also be utilized to train a phrase segmentation model, as is shown in (Shen et al., 2008). They report small but consistent improvements from incorporating this segmentation model, which works as an additional prior probability on the monolingual target phrase.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Shen et al., 2008)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In (Ferrer and Juan, 2009), phrase models are trained by a semi-hidden Markov model. They train a conditional \"inverse\" phrase model of the target phrase given the source phrase. In addition to the phrases, they model the segmentation sequence that is used to produce a phrase alignment between the source and the target sentence. They used a phrase length limit of 4 words, as longer phrases did not result in further improvements. To counteract over-fitting, they interpolate the phrase model with IBM Model 1 probabilities that are computed on the phrase level. 
We also include these word lexica, as they are standard components of the phrase-based system.", "cite_spans": [ { "start": 3, "end": 26, "text": "(Ferrer and Juan, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "It is shown in (Ferrer and Juan, 2009) that Viterbi training produces almost the same results as full Baum-Welch training. They report improvements over a phrase-based model that uses an inverse phrase model and a language model. Experiments are carried out on a custom subset of the English-Spanish Europarl corpus.", "cite_spans": [ { "start": 15, "end": 38, "text": "(Ferrer and Juan, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach is similar to the one presented in (Ferrer and Juan, 2009) in that we compare Viterbi training and a training method based on the Forward-Backward algorithm. However, instead of focusing on the statistical model and relaxing the translation task by using monotone translation only, we use a full and competitive translation system as a starting point, with reordering and all models included.", "cite_spans": [ { "start": 48, "end": 70, "text": "(Ferrer and Juan, 2009", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In (Marcu and Wong, 2002), a joint probability phrase model is presented. The learned phrases are restricted to the most frequent n-grams up to length 6 and all unigrams. Monolingual phrases have to occur at least 5 times to be considered in training. Smoothing is applied to the learned models so that probabilities for rare phrases are non-zero. In training, they use a greedy algorithm to produce the Viterbi phrase alignment and then apply a hill-climbing technique that modifies the Viterbi alignment by merge, move, split, and swap operations to find an alignment with a better probability in each iteration. The model shows improvements in translation quality over the single-word-based IBM Model 4 (Brown et al., 1993) on a subset of the Canadian Hansards corpus.", "cite_spans": [ { "start": 3, "end": 25, "text": "(Marcu and Wong, 2002)", "ref_id": "BIBREF11" }, { "start": 706, "end": 726, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The joint model by (Marcu and Wong, 2002) is refined by (Birch et al., 2006), who use high-confidence word alignments to constrain the search space in training. They observe that due to several constraints and pruning steps, the trained phrase table is much smaller than the heuristically extracted one, while preserving translation quality.", "cite_spans": [ { "start": 19, "end": 41, "text": "(Marcu and Wong, 2002)", "ref_id": "BIBREF11" }, { "start": 56, "end": 76, "text": "(Birch et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The work by DeNero et al. (2008) describes a method to train the joint model described in (Marcu and Wong, 2002) with a Gibbs sampler. They show that by applying a prior distribution over the phrase translation probabilities they can prevent over-fitting. The prior is composed of IBM1 lexical probabilities and a geometric distribution over phrase lengths, which penalizes long phrases. 
The two approaches differ in that we apply the leaving-one-out procedure to avoid over-fitting, as opposed to explicitly defining a prior distribution.", "cite_spans": [ { "start": 69, "end": 91, "text": "(Marcu and Wong, 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The training process is divided into three parts. First, we obtain all models needed for a normal translation system. We perform minimum error rate training with the downhill simplex algorithm (Nelder and Mead, 1965) on the development data to obtain a set of scaling factors that achieve a good BLEU score. We then use these models and scaling factors to do a forced alignment, in which we compute a phrase alignment for the training data. From this alignment we then estimate new phrase models, while keeping all other models unchanged. In this section we describe our forced alignment procedure, which is the basic training procedure for the models proposed here.", "cite_spans": [ { "start": 193, "end": 216, "text": "(Nelder and Mead, 1965)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment", "sec_num": "3" }, { "text": "The idea of forced alignment is to perform a phrase segmentation and alignment of each sentence pair of the training data using the full translation system as in decoding. What we call segmentation and alignment here corresponds to the \"concepts\" used by (Marcu and Wong, 2002). We apply our normal phrase-based decoder on the source side of the training data and constrain the translations to the corresponding target sentences from the training data.", "cite_spans": [ { "start": 255, "end": 277, "text": "(Marcu and Wong, 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "Given a source sentence $f_1^J$ and target sentence $e_1^I$, we search for the best phrase segmentation and alignment that covers both sentences. A segmentation of a sentence into $K$ phrases is defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "$k \\rightarrow s_k := (i_k, b_k, j_k), \\quad \\text{for } k = 1, \\ldots, K$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "where for each segment $i_k$ is the last position of the $k$th target phrase, and $(b_k, j_k)$ are the start and end positions of the source phrase aligned to the $k$th target phrase. Consequently, we can modify Equation 2 to define the best segmentation of a sentence pair as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{s}_1^{\\hat{K}} = \\operatorname*{argmax}_{K, s_1^K} \\sum_{m=1}^{M} \\lambda_m h_m(e_1^I, s_1^K, f_1^J)", "eq_num": "(3)" } ], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "The identical models as in search are used: the conditional phrase probabilities $p(\\tilde{f}_k|\\tilde{e}_k)$ and $p(\\tilde{e}_k|\\tilde{f}_k)$, within-phrase lexical probabilities, the distance-based reordering model, as well as the word and phrase penalties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" },
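To make the segmentation notation concrete, the following is a minimal sketch of how such a hypothesis could be represented and scored log-linearly as in Equation 3; the `Segment` container and the two toy feature functions are illustrative assumptions, not the system's actual models.

```python
from typing import Callable, List, NamedTuple, Tuple

class Segment(NamedTuple):
    """One segment s_k = (i_k, b_k, j_k) of a phrase segmentation."""
    i: int  # last position of the k-th target phrase
    b: int  # start position of the source phrase aligned to it
    j: int  # end position of that source phrase

def score_segmentation(
    segmentation: List[Segment],
    source: List[str],
    target: List[str],
    models: List[Tuple[float, Callable]],
) -> float:
    """Weighted log-linear score of one segmentation, as in Equation 3."""
    return sum(weight * h(segmentation, source, target) for weight, h in models)

def phrase_penalty(seg, f, e):
    """Toy feature: one penalty per phrase used."""
    return -float(len(seg))

def reordering(seg, f, e):
    """Toy distance-based feature: jump size between consecutive source spans."""
    return -float(sum(abs(seg[k].b - (seg[k - 1].j + 1)) for k in range(1, len(seg))))
```

Forced alignment then amounts to maximizing this score over all segmentations that reproduce the given target sentence.
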
{ "text": "A language model is not used during forced alignment, as the system is constrained to the given target sentence and thus the language model score has no effect on the alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "In addition to the phrase matching on the source sentence, we also discard all phrase translation candidates that do not match any sequence in the given target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "Sentences for which the decoder cannot find an alignment are discarded from the phrase model training. In our experiments, this is the case for roughly 5% of the training sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forced Alignment", "sec_num": "3.1" }, { "text": "As was mentioned in Section 2, previous approaches found over-fitting to be a problem in phrase model training. In this section, we describe a leaving-one-out method that can improve the phrase alignment in situations where the probability of rare phrases and alignments might be overestimated. The training data, which consists of $N$ parallel sentence pairs $f_n$ and $e_n$ for $n = 1, \\ldots, N$, is used for both the initialization of the translation model $p(\\tilde{f}|\\tilde{e})$ and the phrase model training. While this way we can make full use of the available data and avoid unknown words during training, it has the drawback that it can lead to over-fitting. All phrases extracted from a specific sentence pair $f_n, e_n$ can be used for the alignment of this sentence pair. This includes longer phrases, which only match in very few sentences in the data. Therefore, those long phrases are trained to fit only a few sentence pairs, strongly overestimating their translation probabilities and failing to generalize. In the extreme case, whole sentences will be learned as phrasal translations. The average length of the used phrases is an indicator of this kind of over-fitting, as the number of matching training sentences decreases with increasing phrase length. We can see an example in Figure 2. Without leaving-one-out, the sentence is segmented into a few long phrases, which are unlikely to occur in data to be translated. Phrase boundaries seem unintuitive and based on some hidden structure. With leaving-one-out, the phrases are shorter and therefore better suited for generalization to unseen data.", "cite_spans": [], "ref_spans": [ { "start": 1271, "end": 1280, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "Previous attempts have dealt with the over-fitting problem by limiting the maximum phrase length (DeNero et al., 2006; Marcu and Wong, 2002) and by smoothing the phrase probabilities with lexical models on the phrase level (Ferrer and Juan, 2009). However, (DeNero et al., 2006) experienced similar over-fitting with short phrases, due to the fact that the same word sequence can be segmented in different ways, leading to specific segmentations being learned for specific training sentence pairs. Our results confirm these findings. 
To deal with this problem, instead of a simple phrase length restriction, we propose to apply the leaving-one-out method, which is also used in language modeling (Kneser and Ney, 1995).", "cite_spans": [ { "start": 96, "end": 117, "text": "(DeNero et al., 2006;", "ref_id": "BIBREF4" }, { "start": 118, "end": 139, "text": "Marcu and Wong, 2002)", "ref_id": "BIBREF11" }, { "start": 220, "end": 243, "text": "(Ferrer and Juan, 2009)", "ref_id": "BIBREF7" }, { "start": 255, "end": 276, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" }, { "start": 702, "end": 724, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "When using leaving-one-out, we modify the phrase translation probabilities for each sentence pair. For a training example $f_n, e_n$, we have to remove all phrase counts $C_n(\\tilde{f},\\tilde{e})$ that were extracted from this sentence pair from the phrase counts that we used to construct our phrase translation table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "Figure 2: Segmentation example from forced alignment. Top: without leaving-one-out. Bottom: with leaving-one-out.", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 255, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "The same holds for the marginal counts $C_n(\\tilde{e})$ and $C_n(\\tilde{f})$. Starting from Equation 1, the leaving-one-out phrase probability for training sentence pair $n$ is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_{\\mathrm{l1o},n}(\\tilde{f}|\\tilde{e}) = \\frac{C(\\tilde{f},\\tilde{e}) - C_n(\\tilde{f},\\tilde{e})}{C(\\tilde{e}) - C_n(\\tilde{e})}", "eq_num": "(4)" } ], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "To be able to perform the re-computation in an efficient way, we store the source and target phrase marginal counts for each phrase in the phrase table. A phrase extraction is performed for each training sentence pair separately, using the same word alignment as for the initialization. It is then straightforward to compute the phrase counts after leaving-one-out using the phrase probabilities and marginal counts stored in the phrase table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "While this works well for more frequent observations, singleton phrases are assigned a probability of zero. We refer to singleton phrases as phrase pairs that occur in only one sentence. For these sentences, the decoder needs the singleton phrase pairs to produce an alignment. Therefore, we retain those phrases by assigning them a positive probability close to zero. We evaluated two different strategies for this, which we call standard and length-based leaving-one-out. Standard leaving-one-out assigns a fixed probability $\\alpha$ to singleton phrase pairs. This way the decoder will prefer using more frequent phrases for the alignment, but is able to resort to singletons if necessary. However, we found that with this method longer singleton phrases are preferred over shorter ones, because fewer of them are needed to produce the target sentence. In order to generalize better to unseen data, we would like to give preference to shorter phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" },
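As a minimal, runnable sketch of Equation 4, assuming the global and per-sentence counts are kept as plain dictionaries; the function and variable names are illustrative, not the actual implementation:

```python
import math

ALPHA = math.exp(-20)  # fixed singleton probability used in our experiments

def l1o_probability(pair, joint, marginal, joint_n, marginal_n):
    """Leaving-one-out probability p_l1o,n(f~|e~) of Equation 4.

    `joint` and `marginal` hold the full-data counts C(f~,e~) and C(e~);
    `joint_n` and `marginal_n` hold the counts C_n extracted from training
    sentence pair n alone, which are subtracted out.
    """
    target = pair[1]
    num = joint.get(pair, 0) - joint_n.get(pair, 0)
    den = marginal.get(target, 0) - marginal_n.get(target, 0)
    if num <= 0 or den <= 0:
        # Singleton phrase pair: retain it with a probability close to
        # zero (standard leaving-one-out).
        return ALPHA
    return num / den
```

The length-based variant, described next, replaces the fixed α with a value that decays with the combined length of the phrase pair.
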
{ "text": "This is done by length-based leaving-one-out, where singleton phrases are assigned the probability $\\beta^{(|\\tilde{f}|+|\\tilde{e}|)}$, with the source and target phrase lengths $|\\tilde{f}|$ and $|\\tilde{e}|$ and a fixed $\\beta < 1$. In our experiments we set $\\alpha = e^{-20}$ and $\\beta = e^{-5}$. Table 1 shows the decrease in average source phrase length by application of leaving-one-out.", "cite_spans": [], "ref_spans": [ { "start": 1093, "end": 1100, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "Table 1: Avg. source phrase lengths in forced alignment without leaving-one-out and with standard and length-based leaving-one-out. Avg. source phrase length: without l1o 2.5; standard l1o 1.9; length-based l1o 1.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leaving-one-out", "sec_num": "3.2" }, { "text": "For the first iteration of the phrase training, leaving-one-out can be implemented efficiently as described in Section 3.2. For higher iterations, phrase counts obtained in the previous iterations would have to be stored on disk separately for each sentence and accessed during the forced alignment process. To simplify this procedure, we propose a cross-validation strategy on larger batches of data. Instead of recomputing the phrase counts for each sentence individually, this is done for a whole batch of sentences at a time. In our experiments, we set this batch size to 10,000 sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-validation", "sec_num": "3.3" }, { "text": "To cope with the runtime and memory requirements of phrase model training that were pointed out by previous work (Marcu and Wong, 2002; Birch et al., 2006), we parallelized the forced alignment by splitting the training corpus into blocks of 10k sentence pairs. From the initial phrase table, each of these blocks only loads the phrases that are required for alignment. The alignment and the counting of phrases are done separately for each block and then accumulated to build the updated phrase model.", "cite_spans": [ { "start": 112, "end": 134, "text": "(Marcu and Wong, 2002;", "ref_id": "BIBREF11" }, { "start": 135, "end": 154, "text": "Birch et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Parallelization", "sec_num": "3.4" }, { "text": "The produced phrase alignment can be given as a single best alignment, as the n-best alignments, or as an alignment graph representing all alignments considered by the decoder. We have developed two different models for phrase translation probabilities which make use of the force-aligned training data. Additionally, we consider smoothing by different kinds of interpolation of the generative model with the state-of-the-art heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Model Training", "sec_num": "4" }, { "text": "The simplest of our generative phrase models estimates phrase translation probabilities by their relative frequencies in the Viterbi alignment of the data, similar to the heuristic model, but with counts from the phrase-aligned data produced in training rather than computed on the basis of a word alignment. 
The translation probability of a phrase pair $(\\tilde{f},\\tilde{e})$ is estimated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Viterbi", "sec_num": "4.1" }, { "text": "$p_{FA}(\\tilde{f}|\\tilde{e}) = \\frac{C_{FA}(\\tilde{f},\\tilde{e})}{\\sum_{\\tilde{f}'} C_{FA}(\\tilde{f}',\\tilde{e})}$ (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Viterbi", "sec_num": "4.1" }, { "text": "where $C_{FA}(\\tilde{f},\\tilde{e})$ is the count of the phrase pair $(\\tilde{f},\\tilde{e})$ in the phrase-aligned training data. This can be applied to either the Viterbi phrase alignment or an n-best list. For the simplest model, each hypothesis in the n-best list is weighted equally. We will refer to this model as the count model, as we simply count the number of occurrences of a phrase pair. We also experimented with weighting the counts with the estimated likelihood of the corresponding entry in the n-best list. The sum of the likelihoods of all entries in an n-best list is normalized to 1. We will refer to this model as the weighted count model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Viterbi", "sec_num": "4.1" }, { "text": "Ideally, the training procedure would consider all possible alignment and segmentation hypotheses, with alternatives weighted by their posterior probability. As discussed earlier, the run-time requirements for computing all possible alignments are prohibitive for large data tasks. However, we can approximate the space of all possible hypotheses by the search space that was used for the alignment. While this might not cover all phrase translation probabilities, it keeps the search space and translation times feasible and still contains the most probable alignments. This search space can be represented as a graph of partial hypotheses (Ueffing et al., 2002) on which we can compute expectations using the Forward-Backward algorithm. We will refer to this alignment as the full alignment. In contrast to the method described in Section 4.1, phrases are weighted by their posterior probability in the word graph. As suggested in work on minimum Bayes-risk decoding for SMT (Tromble et al., 2008; Ehling et al., 2007), we use a global factor to scale the posterior probabilities.", "cite_spans": [ { "start": 651, "end": 672, "text": "(Ueffing et al., 2002", "ref_id": "BIBREF20" }, { "start": 988, "end": 1010, "text": "(Tromble et al., 2008;", "ref_id": "BIBREF19" }, { "start": 1011, "end": 1031, "text": "Ehling et al., 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Forward-backward", "sec_num": "4.2" }, { "text": "As (DeNero et al., 2006) have reported improvements in translation quality by interpolation of phrase tables produced by the generative and the heuristic model, we adopt this method and also report results using log-linear interpolation of the estimated model with the original model. The log-linear interpolation $p_{\\mathrm{int}}(\\tilde{f}|\\tilde{e})$ of the phrase translation probabilities is estimated as", "cite_spans": [ { "start": 3, "end": 24, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" }, { "text": "$p_{\\mathrm{int}}(\\tilde{f}|\\tilde{e}) = p_H(\\tilde{f}|\\tilde{e})^{1-\\omega} \\cdot p_{\\mathrm{gen}}(\\tilde{f}|\\tilde{e})^{\\omega}$ (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" }, { "text": "where $\\omega$ is the interpolation weight, $p_H$ the heuristically estimated phrase model, and $p_{\\mathrm{gen}}$ the count model. The interpolation weight $\\omega$ is adjusted on the development corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" },
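The following is a compact, runnable sketch of how the count model of Equation 5 and the fixed interpolation of Equation 6 could be computed; the n-best input format, the helper names, and the default weight are illustrative assumptions rather than the actual implementation.

```python
from collections import defaultdict

def count_model(nbest_alignments):
    """Estimate p_FA(f~|e~) from phrase pairs seen in n-best alignments.

    `nbest_alignments` holds, per sentence pair, a list of alignments (the
    n-best), each being a list of (source_phrase, target_phrase) tuples.
    Every hypothesis is weighted equally, as in the count model; the
    weighted count model would instead add the normalized hypothesis
    likelihood for each pair.
    """
    joint, marginal = defaultdict(float), defaultdict(float)
    for nbest in nbest_alignments:
        for alignment in nbest:
            for f_phrase, e_phrase in alignment:
                joint[(f_phrase, e_phrase)] += 1.0
                marginal[e_phrase] += 1.0
    return {pair: c / marginal[pair[1]] for pair, c in joint.items()}

def interpolate(p_heuristic, p_gen, omega=0.6):
    """Equation 6: p_int = p_H^(1-omega) * p_gen^omega on shared pairs."""
    shared = p_heuristic.keys() & p_gen.keys()
    return {
        pair: (p_heuristic[pair] ** (1 - omega)) * (p_gen[pair] ** omega)
        for pair in shared
    }
```

Restricting the interpolated table to phrase pairs present in both models matches the intersection constraint stated next.
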
{ "text": "When interpolating phrase tables containing different sets of phrase pairs, we retain the intersection of the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" }, { "text": "As a generalization of the fixed interpolation of the two phrase tables, we also experimented with adding the two trained phrase probabilities as additional features to the log-linear framework. This way we allow different interpolation weights for the two translation directions and can optimize them automatically along with the other feature weights. We will refer to this method as feature-wise combination. Again, we retain the intersection of the two phrase tables. With good log-linear feature weights, feature-wise combination should perform at least as well as fixed interpolation. However, the results presented in Table 5 show a slightly lower performance. This illustrates that a higher number of features results in a less reliable optimization of the log-linear parameters.", "cite_spans": [], "ref_spans": [ { "start": 622, "end": 629, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" }, { "text": "5 Experimental Evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table Interpolation", "sec_num": "4.3" }, { "text": "We conducted our experiments on the German-English data published for the ACL 2008 Workshop on Statistical Machine Translation (WMT08). Statistics for the Europarl data are given in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We are given the three data sets TRAIN, DEV and TEST. For the heuristic phrase model, we first use GIZA++ (Och and Ney, 2003) to compute the word alignment on TRAIN. Next, we obtain a phrase table by extraction of phrases from the word alignment. The scaling factors of the translation models have been optimized for BLEU on the DEV data.", "cite_spans": [ { "start": 110, "end": 129, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "The phrase table obtained by heuristic extraction is also used to initialize the training. The forced alignment is run on the training data TRAIN, from which we obtain the phrase alignments. Those are used to build a phrase table according to the proposed generative phrase models. Afterward, the scaling factors are trained on DEV for the new phrase table. By feeding back the new phrase table into forced alignment, we can reiterate the training procedure. When training is finished, the resulting phrase model is evaluated on DEV and TEST. Additionally, we can apply smoothing by interpolation of the new phrase table with the original one estimated heuristically, retrain the scaling factors, and evaluate afterwards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "The baseline system is a standard phrase-based SMT system with eight features: phrase translation and word lexicon probabilities in both translation directions, phrase penalty, word penalty, language model score, and a simple distance-based reordering model. The features are combined in a log-linear way. 
To investigate the generative models, we replace the two phrase translation probabilities and keep the other features identical to the baseline. For the feature-wise combination, the two generative phrase probabilities are added to the features, resulting in a total of 10 features. We used a 4-gram language model with modified Kneser-Ney discounting for all experiments. The metrics used for evaluation are the case-sensitive BLEU (Papineni et al., 2002) score and the translation edit rate (TER) (Snover et al., 2006) with one reference translation.", "cite_spans": [ { "start": 737, "end": 760, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF16" }, { "start": 803, "end": 824, "text": "(Snover et al., 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "In this section, we investigate the different aspects of the models and methods presented before. We will focus on the proposed leaving-one-out technique and show that it helps in finding good phrasal alignments on the training data that lead to improved translation models. Our final results show an improvement of 1.4 BLEU over the heuristically extracted phrase model on the test data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In Section 3.2 we have discussed several methods which aim to overcome the over-fitting problem described in (DeNero et al., 2006). Table 3 shows translation scores of the count model on the development data after the first training iteration, for both leaving-one-out strategies we have introduced and for training without leaving-one-out with different restrictions on phrase length. We can see that by restricting the source phrase length to a maximum of 3 words, the trained model is close to the performance of the heuristic phrase model. With the application of leaving-one-out, the trained model is superior to the baseline, with the length-based strategy performing slightly better than standard leaving-one-out. For these experiments the count model was estimated with a 100-best list.", "cite_spans": [ { "start": 231, "end": 252, "text": "(DeNero et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 94, "end": 102, "text": "Figure 3", "ref_id": null }, { "start": 255, "end": 262, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Figure 3: Performance on DEV in BLEU of the count model plotted against the size n of the n-best list on a logarithmic scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The count model we describe in Section 4.1 estimates phrase translation probabilities using counts from the n-best phrase alignments. For smaller n, the resulting phrase table contains fewer phrases and is more deterministic. For higher values of n, more competing alignments are taken into account, resulting in a bigger phrase table and a smoother distribution. We can see in Figure 3 that translation performance improves by moving from the Viterbi alignment to n-best alignments. The variations in performance with sizes between n = 10 and n = 10000 are less than 0.2 BLEU. The maximum is reached for n = 100, which we used in all subsequent experiments. An additional benefit of the count model is the smaller phrase table size compared to the heuristic phrase extraction. This is consistent with the findings of (Birch et al., 2006). Table 4 shows the phrase table sizes for different n. With n = 100 we retain only 17% of the original phrases. 
Even for the full model, the phrase table remains smaller than the heuristically extracted one. In Table 5 we can see that the performance of the heuristic phrase model can be increased by 0.6 BLEU on TEST by filtering the phrase table to contain the same phrases as the count model and reoptimizing the log-linear model weights. The experiments on the number of different alignments taken into account were done with standard leaving-one-out.", "cite_spans": [ { "start": 816, "end": 836, "text": "(Birch et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 3", "ref_id": null }, { "start": 839, "end": 846, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 978, "end": 985, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The final results are given in Table 5. We can see that the count model outperforms the baseline by 0.8 BLEU on DEV and 0.9 BLEU on TEST after the first training iteration. The performance of the filtered baseline phrase table shows that part of that improvement derives from the smaller phrase table size. Application of cross-validation (cv) in the first iteration yields a performance close to training with leaving-one-out (l1o), which indicates that cross-validation can be safely applied in higher training iterations as an alternative to leaving-one-out. The weighted count model clearly underperforms the simpler count model. A second iteration of the training algorithm shows nearly no changes in BLEU score, but a small improvement in TER. Here, we used the phrase table trained with leaving-one-out in the first iteration and applied cross-validation in the second iteration. Log-linear interpolation of the count model with the heuristic yields a further increase, showing an improvement of 1.3 BLEU on DEV and 1.4 BLEU on TEST over the baseline. The interpolation weight is adjusted on the development set and was set to $\\omega = 0.6$. Integrating both models into the log-linear framework (feat. comb.) yields a BLEU score slightly lower than with fixed interpolation on both DEV and TEST. This might be attributed to deficiencies in the tuning procedure. The full model, where we extract all phrases from the search graph, weighted with their posterior probability, performs comparably to the count model, with a slightly worse BLEU and a slightly better TER.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 5", "ref_id": null }, { "start": 1075, "end": 1082, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Table 5: Final results for the heuristic phrase table filtered to contain the same phrases as the count model (baseline filt.), the count model trained with leaving-one-out (l1o) and cross-validation (cv), the weighted count model, and the full model. Further, scores for fixed log-linear interpolation of the count model trained with leaving-one-out with the heuristic as well as a feature-wise combination are shown. The results of the second training iteration are given in the bottom row.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We have shown that training phrase models can improve translation performance on a state-of-the-art phrase-based translation model. This is achieved by training phrase translation probabilities in such a way that they are consistent with their use in translation. A crucial aspect here is the use of leaving-one-out to avoid over-fitting. 
We have shown that the technique is superior to limiting phrase lengths and smoothing with lexical probabilities alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "While models trained from Viterbi alignments already lead to good results, we have demonstrated that considering the 100-best alignments allows us to better model the ambiguities in phrase segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The proposed techniques are shown to be superior to previous approaches that only used lexical probabilities to smooth phrase tables or imposed limits on the phrase lengths. On the WMT08 Europarl task we show improvements of 0.9 BLEU points with the trained phrase table and 1.4 BLEU points when interpolating the newly trained model with the original, heuristically extracted phrase table. In TER, improvements are 0.4 and 1.7 points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In addition to the improved performance, the trained models are smaller, leading to faster and smaller translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation, and was also partly based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Constraining the phrase-based, joint probability statistical translation model", "authors": [ { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "154--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Birch, Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Constraining the phrase-based, joint probability statistical translation model. In Proceedings of the Workshop on Statistical Machine Translation, pages 154-157, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A discriminative latent variable model for statistical machine translation", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "200--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proceedings of ACL-08: HLT, pages 200-208, Columbus, Ohio, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, V. J. Della Pietra, S. A. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-312, June.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The complexity of phrase alignment problems", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers", "volume": "", "issue": "", "pages": "25--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 25-28, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Why Generative Phrase Models Underperform Surface Heuristics", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "James", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why Generative Phrase Models Underperform Surface Heuristics. In Proceedings of the Workshop on Statistical Machine Translation, pages 31-38, New York City, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Sampling Alignment Structure under a Bayesian Translation Model", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "314--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero, Alexandre Bouchard-C\u00f4t\u00e9, and Dan Klein. 2008. Sampling Alignment Structure under a Bayesian Translation Model. 
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 314-323, Honolulu, October.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Minimum Bayes risk decoding for BLEU", "authors": [ { "first": "Nicola", "middle": [], "last": "Ehling", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2007, "venue": "ACL '07: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "101--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola Ehling, Richard Zens, and Hermann Ney. 2007. Minimum Bayes risk decoding for BLEU. In ACL '07: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 101-104, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A phrase-based hidden semi-Markov approach to machine translation", "authors": [ { "first": "Jes\u00fas-Andr\u00e9s", "middle": [], "last": "Ferrer", "suffix": "" }, { "first": "Alfons", "middle": [], "last": "Juan", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the European Association for Machine Translation (EAMT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jes\u00fas-Andr\u00e9s Ferrer and Alfons Juan. 2009. A phrase-based hidden semi-Markov approach to machine translation. In Proceedings of the European Association for Machine Translation (EAMT), Barcelona, Spain, May. European Association for Machine Translation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved Backing-Off for M-gram Language Modelling", "authors": [ { "first": "Reinhard", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved Backing-Off for M-gram Language Modelling. In IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 181-184, Detroit, MI, May.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 48-54, Morristown, NJ, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An End-to-End Discriminative Approach to Machine Translation", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "761--768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An End-to-End Discriminative Approach to Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 761-768, Sydney, Australia.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "William", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Simplex Method for Function Minimization", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Nelder", "suffix": "" }, { "first": "R", "middle": [], "last": "Mead", "suffix": "" } ], "year": 1965, "venue": "The Computer Journal", "volume": "7", "issue": "", "pages": "308--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.A. Nelder and R. Mead. 1965. A Simplex Method for Function Minimization. The Computer Journal, 7:308-313.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51, March.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. 
The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449, December.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP99)", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP99), pages 20-28, University of Maryland, College Park, MD, USA, June.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The MIT-LL/AFRL IWSLT-2008 MT System", "authors": [ { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Delaney", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Slyh", "suffix": "" } ], "year": 2008, "venue": "Proceedings of IWSLT 2008", "volume": "", "issue": "", "pages": "69--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wade Shen, Brian Delaney, Tim Anderson, and Ray Slyh. 2008. The MIT-LL/AFRL IWSLT-2008 MT System. In Proceedings of IWSLT 2008, pages 69-76, Hawaii, U.S.A., October.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc.
of AMTA, pages 223-231, Aug.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Lattice Minimum Bayes-Risk decoding for statistical machine translation", "authors": [ { "first": "Roy", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "620--629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice Minimum Bayes-Risk decoding for statistical machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 620-629, Honolulu, Hawaii, October. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Generation of word graphs in statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of the Conference on Empirical Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ueffing, F.J. Och, and H. Ney. 2002. Generation of word graphs in statistical machine translation. In Proc. of the Conference on Empirical Methods for Natural Language Processing, pages 156-163, Philadelphia, PA, USA, July.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Illustration of phrase training with forced alignment." }, "TABREF0": { "content": "
                    German       English
TRAIN  Sentences          1 311 815
       Run. Words    34 398 651   36 090 085
       Vocabulary       336 347      118 112
       Singletons       168 686       47 507
DEV    Sentences              2 000
       Run. Words        55 118       58 761
       Vocabulary         9 211        6 549
       OOVs                 284           77
TEST   Sentences              2 000
       Run. Words        56 635       60 188
       Vocabulary         9 254        6 497
       OOVs                 266           89
", "text": "Statistics for the Europarl German-English data.", "type_str": "table", "num": null, "html": null }, "TABREF1": { "content": "
leaving-one-out   max phr.len.   BLEU   TER   [data rows not recoverable from the source]
", "text": "Comparison of different training setups for the count model on DEV.", "type_str": "table", "num": null, "html": null }, "TABREF2": { "content": "
N           # phrases   % of full table
1           4.9M        5.3
10          8.4M        9.1
100         15.9M       17.2
1000        27.1M       29.2
10000       40.1M       43.2
full        59.6M       64.2
heuristic   92.7M       100.0
do not retain all phrase table entries. Due to pruning in the forced alignment step, not all translation options are considered. As a result, experiments can be run more rapidly and with fewer resources than with the heuristically extracted phrase table. Also, our experiments show that the improved performance of the count model partly derives from the smaller phrase table size (a schematic sketch of this counting effect is given below). In
", "text": "Phrase table size of the count model for different n-best list sizes, the full model and for heuristic phrase extraction.", "type_str": "table", "num": null, "html": null } } } }