{ "paper_id": "D07-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:19:58.695374Z" }, "title": "Syntactic Re-Alignment Models for Machine Translation", "authors": [ { "first": "Jonathan", "middle": [], "last": "May", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Marina del Rey", "location": { "postCode": "90292", "region": "CA" } }, "email": "jonmay@isi.edu" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": {}, "email": "knight@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.", "pdf_parse": { "paper_id": "D07-1038", "_pdf_hash": "", "abstract": [ { "text": "We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Roughly speaking, there are two paths commonly taken in statistical machine translation ( Figure 1 ). The idealistic path uses an unsupervised learning algorithm such as EM (Demptser et al., 1977) to learn parameters for some proposed translation model from a bitext training corpus, and then directly translates using the weighted model. Some examples of the idealistic approach are the direct IBM word model (Berger et al., 1994; Germann et al., 2001) , the phrase-based approach of Marcu and Wong (2002) , and the syntax approaches of Wu (1996) and . Idealistic approaches are conceptually simple and thus easy to relate to observed phenomena. However, as more parameters are added to the model the idealistic approach has not scaled well, for it is increasingly difficult to incorporate large amounts of training data efficiently over an increasingly large search space. Additionally, the EM procedure has a tendency to overfit its training data when the input units have varying explanatory powers, such as variable-size phrases or variable-height trees.", "cite_spans": [ { "start": 173, "end": 196, "text": "(Demptser et al., 1977)", "ref_id": "BIBREF4" }, { "start": 410, "end": 431, "text": "(Berger et al., 1994;", "ref_id": "BIBREF0" }, { "start": 432, "end": 453, "text": "Germann et al., 2001)", "ref_id": "BIBREF9" }, { "start": 485, "end": 506, "text": "Marcu and Wong (2002)", "ref_id": "BIBREF11" }, { "start": 538, "end": 547, "text": "Wu (1996)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methods of statistical MT", "sec_num": "1" }, { "text": "The realistic path also learns a model of translation, but uses that model only to obtain Viterbi wordfor-word alignments for the training corpus. 
The bitext and corresponding alignments are then used as input to a pattern extraction algorithm, which yields a set of patterns or rules for a second translation model (which often has a wider parameter space than that used to obtain the word-for-word alignments). Weights for the second model are then set, typically by counting and smoothing, and this weighted model is used for translation. Realistic approaches scale to large data sets and have yielded better BLEU performance than their idealistic counterparts, but there is a disconnect between the first model (hereafter, the alignment model) and the second (the translation model). Examples of realistic systems are the phrase-based ATS system of Och and Ney (2004) , the phrasal-syntax hybrid system Hiero (Chiang, 2005) , and the GHKM syntax system (Galley et al., 2004; Galley et al., 2006) . For an alignment model, most of these use the Aachen HMM approach (Vogel et al., 1996) , the implementation of IBM Model 4 in GIZA++ (Och and Ney, 2000) or, more recently, the semi-supervised EMD algorithm (Fraser and Marcu, 2006) . The two-model approach of the realistic path has undeniable empirical advantages and scales to large data sets, but new research tends to focus on development of higher order translation models that are informed only by low-order alignments. We would like to add the analytic power gained from modern translation models to the underlying alignment model without sacrificing the efficiency and empirical gains of the two-model approach. By adding the u n s u p e r v i s e d l e a r n i n g t a r g e t s e n t e n c e s s o u r c e s e n t e n c e s u n w e i g h t e d m o d e l w e i g Figure 1 : General approach to idealistic and realistic statistical MT systems syntactic information used in the translation model to our alignment model we may improve alignment quality such that rule quality and, in turn, system quality are improved. In the remainder of this work we show how a touch of idealism can improve an existing realistic syntax-based translation system.", "cite_spans": [ { "start": 853, "end": 871, "text": "Och and Ney (2004)", "ref_id": "BIBREF13" }, { "start": 913, "end": 927, "text": "(Chiang, 2005)", "ref_id": "BIBREF3" }, { "start": 957, "end": 978, "text": "(Galley et al., 2004;", "ref_id": "BIBREF7" }, { "start": 979, "end": 999, "text": "Galley et al., 2006)", "ref_id": "BIBREF8" }, { "start": 1068, "end": 1088, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF15" }, { "start": 1135, "end": 1154, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF12" }, { "start": 1208, "end": 1232, "text": "(Fraser and Marcu, 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 1671, "end": 1910, "text": "By adding the u n s u p e r v i s e d l e a r n i n g t a r g e t s e n t e n c e s s o u r c e s e n t e n c e s u n w e i g h t e d m o d e l w e i g", "ref_id": null }, { "start": 1911, "end": 1919, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methods of statistical MT", "sec_num": "1" }, { "text": "2 Multi-level syntactic rules for syntax MT Galley et al. (2004) and Galley et al. (2006) describe a syntactic translation model that relates English trees to foreign strings. The model describes joint production of a (tree, string) pair via a nondeterministic selection of weighted rules. Each rule has an English tree fragment with variables and a corresponding foreign string fragment with the same variables. A series of rules forms an explanation (or derivation) of the complete pair. 
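To make the shape of these rules concrete, the following minimal sketch (an illustration of our own; the class and field names are hypothetical and not part of any released implementation) represents a rule as a multi-level English tree fragment whose leaves may be typed variables, a foreign string fragment over words and the same variables, and a weight:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass(frozen=True)
class Var:
    index: int       # shared variable id, e.g. x0
    syn_type: str    # syntactic type the variable must expand to, e.g. 'NPB'

@dataclass
class Tree:
    label: str                          # node label (nonterminal or word)
    children: List[Union['Tree', Var]]  # subtrees or variables

@dataclass
class Rule:
    root_type: str                        # type of variable this rule replaces
    etree_frag: Tree                      # English tree fragment with variables
    fstring_frag: List[Union[str, Var]]   # foreign words plus the same variables
    weight: float                         # p_rule(r | root_type)

# a hypothetical reordering rule: NP(x0:NPB x1:PP) <-> x1 x0
x0, x1 = Var(0, 'NPB'), Var(1, 'PP')
reorder = Rule('NP', Tree('NP', [x0, x1]), [x1, x0], weight=0.1)
```

A derivation is built by recursively substituting rules of the appropriate type for the variables until none remain.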
As an example, consider the parsed English and corresponding Chinese at the top of Figure 2 . The three columns underneath the example are different rule sequences that can explain this pair; there are many other possibilities. Note how rules specify rotation (e.g. R10, R5), direct translation (R12, R8), insertion and deletion (R11, R1), and tree traversal (R7, R15). Note too that the rules explain variable-size fragments (e.g. R6 vs. R14) and thus the possible derivation trees of rules that explain a sentence pair have varying sizes. The smallest such derivation tree has a single large rule (which does not appear in Figure 2 ; we leave the description of such a rule as an exercise for the reader). A string-totree decoder constructs a derivation forest of derivation trees where the right sides of the rules in a tree, taken together, explain a candidate source sentence. It then outputs the English tree corresponding to the highest-scoring derivation in the forest.", "cite_spans": [ { "start": 44, "end": 64, "text": "Galley et al. (2004)", "ref_id": "BIBREF7" }, { "start": 69, "end": 89, "text": "Galley et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 2", "ref_id": null }, { "start": 1115, "end": 1123, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Methods of statistical MT", "sec_num": "1" }, { "text": "We now lay the ground for a syntactically motivated alignment model. We begin by reviewing an alignment model commonly seen in realistic MT systems and compare it to a syntactically-aware alignment model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introducing syntax into the alignment model", "sec_num": "3" }, { "text": "IBM Model 4 (Brown et al., 1993) learns a set of 4 probability tables to compute p(f |e) given a foreign sentence f and its target translation e via the following (greatly simplified) generative story: The re-alignment procedure described in Section 3.2 learns to prefer the rule set at bottom, which omits the bad link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The traditional IBM alignment model", "sec_num": "3.1" }, { "text": "1. A fertility y for each word e i in e is chosen with probability p f ert (y|e i ). 2. A null word is inserted next to each fertility-expanded word with probability p null . 3. Each token e i in the fertility-expanded word and null string is translated into some foreign word f i in f with probability p trans (f i |e i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The traditional IBM alignment model", "sec_num": "3.1" }, { "text": "f i that was translated from e i is changed by \u2206 (which may be positive, negative, or zero) with probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The position of each foreign word", "sec_num": "4." }, { "text": "p distortion (\u2206|A(e i ), B(f i ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The position of each foreign word", "sec_num": "4." }, { "text": ", where A and B are functions over the source and target vocabularies, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The position of each foreign word", "sec_num": "4." }, { "text": "Brown et al. (1993) describes an EM algorithm for estimating values for the four tables in the generative story. 
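Schematically, the generative story can be written as the following sketch (our own illustrative code, not the GIZA++ implementation; the dictionary arguments stand in for the four learned tables, and A and B for the word-class functions):

```python
import random

def sample(dist):
    # dist: {outcome: probability}
    outcomes, probs = zip(*dist.items())
    return random.choices(outcomes, weights=probs)[0]

def generate(e_words, p_fert, p_null, p_trans, p_dist, A, B):
    # 1. fertility: copy each English word y times, y ~ p_fert(y | e)
    expanded = [e for e in e_words for _ in range(sample(p_fert[e]))]
    # 2. optionally insert a null word next to each fertility-expanded word
    with_null = []
    for e in expanded:
        with_null.append(e)
        if random.random() < p_null:
            with_null.append('NULL')
    # 3. translate each token, f ~ p_trans(f | e)
    f_words = [sample(p_trans[e]) for e in with_null]
    # 4. move each foreign word by delta ~ p_dist(delta | A(e), B(f))
    pos = [i + sample(p_dist[(A(e), B(f))])
           for i, (e, f) in enumerate(zip(with_null, f_words))]
    return [f for _, f in sorted(zip(pos, f_words))]
```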
However, searching the space of all possible alignments is intractable for EM, so in practice the procedure is bootstrapped by models with narrower search space such as IBM Model 1 (Brown et al., 1993) or Aachen HMM (Vogel et al., 1996) .", "cite_spans": [ { "start": 294, "end": 314, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" }, { "start": 329, "end": 349, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "The position of each foreign word", "sec_num": "4." }, { "text": "Now let us contrast this commonly used model for obtaining alignments with a syntactically motivated alternative. We recall the rules described in Section 2. Our model learns a single probability table to compute p(etree, f ) given a foreign sentence f and a parsed target translation etree. In the following generative story we assume a starting variable with syntactic type v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A syntax re-alignment model", "sec_num": "3.2" }, { "text": "1. Choose a rule r to replace v, with probability p rule (r|v). 2. For each variable with syntactic type v i in the partially completed (tree, string) pair, continue to choose rules r i with probability p rule (r i |v i ) to replace these variables until there are no variables remaining. In Section 5.1 we discuss an EM learning procedure for estimating these rule probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A syntax re-alignment model", "sec_num": "3.2" }, { "text": "As in the IBM approach, we must mitigate intractability by limiting the parameter space searched, which is potentially much wider than in the word-to-word case. We would like to supply to EM all possible rules that explain the training data, but this implies a rule relating each possible tree fragment to each possible string fragment, which is infeasible. We follow the approach of bootstrapping from a model with a narrower parameter space as is done in, e.g. Och and Ney (2000) and Fraser and Marcu (2006) .", "cite_spans": [ { "start": 463, "end": 481, "text": "Och and Ney (2000)", "ref_id": "BIBREF12" }, { "start": 486, "end": 509, "text": "Fraser and Marcu (2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "A syntax re-alignment model", "sec_num": "3.2" }, { "text": "To reduce the model space we employ the rule acquisition technique of Galley et al. (2004) , which obtains rules given a (tree, string) pair as well as an initial alignment between them. We are agnostic about the source of this bootstrap alignment and in Section 5 present results based on several different bootstrap alignment qualities. We require an initial set of alignments, which we obtain from a wordfor-word alignment procedure such as GIZA++ or EMD. Thus, we are not aligning input data, but rather re-aligning it with a syntax model.", "cite_spans": [ { "start": 70, "end": 90, "text": "Galley et al. (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "A syntax re-alignment model", "sec_num": "3.2" }, { "text": "Consider the example of Figure 2 again. The leftmost derivation is obtained from the bootstrap alignment set. This derivation is reasonable but there are some poorly motivated rules, from a linguistic standpoint. 
The Chinese word \u00e7 \u00e7 \u00e7 roughly means \"the SENTENCE PAIRS DESCRIPTION CHINESE ARABIC TUNE NIST 2002 short 925 696 TEST NIST 2003 919 663 Table 1 : Tuning and testing data sets for the MT system described in Section 5.2.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 2", "ref_id": null }, { "start": 255, "end": 369, "text": "SENTENCE PAIRS DESCRIPTION CHINESE ARABIC TUNE NIST 2002 short 925 696 TEST NIST 2003 919 663 Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The appeal of a syntax alignment model", "sec_num": "4" }, { "text": "two shores\" in this context, but the rule R6 learned from the alignment incorrectly includes \"between\". However, other sentences in the training corpus have the correct alignment, which yields rule R16. Meanwhile, rules R13 and R14, learned from yet other sentences in the training corpus, handle the ... structure (which roughly translates to \"in between\"), thus allowing the middle derivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The appeal of a syntax alignment model", "sec_num": "4" }, { "text": "EM distributes rule probabilities in such a way as to maximize the probability of the training corpus. It thus prefers to use one rule many times instead of several different rules for the same situation over several sentences, if possible. R6 is a possible rule in 46 of the 329,031 sentence pairs in the training corpus, while R16 is a possible rule in 100 sentence pairs. Well-formed rules are more usable than illformed rules and the partial alignments behind these rules, generally also well-formed, become favored as well. The top row of Figure 3 contains an example of an alignment learned by the bootstrap alignment model that includes an incorrect link. Rule R24, which is extracted from this alignment, is a poor rule. A set of commonly seen rules learned from other training sentences provide a more likely explanation of the data, and the consequent alignment omits the spurious link.", "cite_spans": [], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The appeal of a syntax alignment model", "sec_num": "4" }, { "text": "In this section, we describe the implementation of our semi-idealistic model and our means of evaluating the resulting re-alignments in an MT task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We begin with a training corpus of Chinese-English and Arabic-English bitexts, the English side parsed by a reimplementation of the standard Collins model (Bikel, 2004) . In order to acquire a syntactic rule set, we also need a bootstrap alignment of each training sentence. We use an implementation of the GHKM Table 2 : A comparison of Chinese BLEU performance between the GIZA baseline (no re-alignment), realignment as proposed in Section 3.2, and re-alignment as modified in Section 5.4 algorithm (Galley et al., 2004) to obtain a rule set for each bootstrap alignment. Now we need an EM algorithm for learning the parameters of the rule set that maximize corpus p(tree, string). Such an algorithm is presented by Graehl and Knight (2004) . 
The algorithm consists of two components: DERIV, which is a procedure for constructing a packed forest of derivation trees of rules that explain a (tree, string) bitext corpus given that corpus and a rule set, and TRAIN, which is an iterative parameter-setting procedure.", "cite_spans": [ { "start": 155, "end": 168, "text": "(Bikel, 2004)", "ref_id": "BIBREF1" }, { "start": 502, "end": 523, "text": "(Galley et al., 2004)", "ref_id": "BIBREF7" }, { "start": 719, "end": 743, "text": "Graehl and Knight (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 312, "end": 319, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The re-alignment setup", "sec_num": "5.1" }, { "text": "We initially attempted to use the top-down DE-RIV algorithm of Graehl and Knight (2004) , but as the constraints of the derivation forests are largely lexical, too much time was spent on exploring deadends. Instead we build derivation forests using the following sequence of operations:", "cite_spans": [ { "start": 63, "end": 87, "text": "Graehl and Knight (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The re-alignment setup", "sec_num": "5.1" }, { "text": "1. Binarize rules using the synchronous binarization algorithm for tree-to-string transducers described in Zhang et al. (2006) . 2. Construct a parse chart with a CKY parser simultaneously constrained on the foreign string and English tree, similar to the bilingual parsing of Wu (1997) 1 . 3. Recover all reachable edges by traversing the chart, starting from the topmost entry. Since the chart is constructed bottom-up, leaf lexical constraints are encountered immediately, resulting in a narrower search space and faster running time than the top-down DERIV algorithm for this application. Derivation forest construction takes around 400 hours of cumulative machine time (4processor machines) for Chinese. The actual running of EM iterations (which directly implements the TRAIN algorithm of Graehl and Knight (2004)) takes about 10 minutes, after which the Viterbi derivation trees are directly recoverable. The Viterbi derivation tree tells us which English words produce which Chinese words, so we can extract a wordto-word alignment from it. We summarize the approach described in this paper as:", "cite_spans": [ { "start": 107, "end": 126, "text": "Zhang et al. (2006)", "ref_id": "BIBREF19" }, { "start": 795, "end": 820, "text": "Graehl and Knight (2004))", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The re-alignment setup", "sec_num": "5.1" }, { "text": "1. Obtain bootstrap alignments for a training corpus using GIZA++. 2. Extract rules from the corpus and alignments using GHKM, noting the partial alignment that is used to extract each rule. 3. Construct derivation forests for each (tree, string) pair, ignoring the alignments, and run EM to obtain Viterbi derivation trees, then use the annotated partial alignments to obtain Viterbi alignments. 4. Use the new alignments as input to the MT system described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The re-alignment setup", "sec_num": "5.1" }, { "text": "A truly idealistic MT system would directly apply the rule weight parameters learned via EM to a machine translation task. As mentioned in Section 1, we maintain the two-model, or realistic approach. Below we briefly describe the translation model, focusing on comparison with the previously described alignment model. Galley et al. 
(2006) provides a more complete description of the translation model and DeNeefe et al. (2007) provides a more complete description of the end-to-end translation pipeline. Although in principle the re-alignment model and translation model learn parameter weights over the same rule space, in practice we limit the rules used for re-alignment to the set of smallest rules that explain the training corpus and are consistent with the bootstrap alignments. This is a compromise made to reduce the search space for EM. The translation model learns multiple derivations of rules consistent with the re-alignments for each sentence, and learns weights for these by counting and smoothing. A dozen other features are also added to the rules. We obtain weights for the combinations of the features by performing minimum error rate training (Och, 2003) on held-out data. We then use a CKY decoder to translate unseen test data using the rules and tuned weights. Table 1 summarizes the data used in tuning and testing.", "cite_spans": [ { "start": 319, "end": 339, "text": "Galley et al. (2006)", "ref_id": "BIBREF8" }, { "start": 1165, "end": 1176, "text": "(Och, 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1286, "end": 1293, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "The MT system setup", "sec_num": "5.2" }, { "text": "An initial re-alignment experiment shows a reasonable rise in BLEU scores from the baseline (Table 2) , but closer inspection of the rules favored by EM implies we can do even better. EM has a tendency to favor few large rules over many small rules, even when the small rules are more useful. Referring to the rules in Figure 2 , note that possible derivations for (taiwan 's, \u00d0 \u00d0 \u00d0) 2 are R2, R11-R12, and R17-R18. Clearly the third derivation is not desirable, and we do not discuss it further. Between the first two derivations, R11-R12 is preferred over R2, as the conditioning for possessive insertion is not related to the specific Chinese word being inserted. Of the 1,902 sentences in the training corpus where this pair is seen, the bootstrap alignments yield the R2 derivation 1,649 times and the R11-R12 derivation 0 times. Re-alignment does not change the result much; the new alignments yield the R2 derivation 1,613 times and again never choose R11-R12. The rules in the second derivation themselves are 2 The Chinese gloss is simply \"taiwan\". not rarely seen -R11 is in 13,311 forests other than those where R2 is seen, and R12 is in 2,500 additional forests. EM gives R11 a probability of e \u22127.72 -better than 98.7% of rules, and R12 a probability of e \u22122.96 . But R2 receives a probability of e \u22126.32 and is preferred over the R11-R12 derivation, which has a combined probability of e \u221210.68 .", "cite_spans": [ { "start": 1019, "end": 1020, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 92, "end": 102, "text": "(Table 2)", "ref_id": null }, { "start": 320, "end": 328, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Initial results", "sec_num": "5.3" }, { "text": "The preference for shorter derivations containing large rules over longer derivations containing small rules is due to a general tendency for EM to prefer derivations with few atoms. Marcu and Wong (2002) note this preference but consider the phenomenon a feature, rather than a bug. Zollmann and Sima'an (2005) combat the overfitting aspect for parsing by using a held-out corpus and a straight maximum likelihood estimate, rather than EM. 
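Stating the bias from Section 5.3 compactly, with the probabilities reported there:

$$p(\mathrm{R2}) = e^{-6.32} \;>\; p(\mathrm{R11}) \cdot p(\mathrm{R12}) = e^{-7.72} \cdot e^{-2.96} = e^{-10.68},$$

so the single large rule wins the comparison even though both of the smaller rules are individually common.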
We take a modeling approach to the phenomenon.", "cite_spans": [ { "start": 183, "end": 204, "text": "Marcu and Wong (2002)", "ref_id": "BIBREF11" }, { "start": 284, "end": 311, "text": "Zollmann and Sima'an (2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Making EM fair", "sec_num": "5.4" }, { "text": "As the probability of a derivation is determined by the product of its atom probabilities, longer derivations with more probabilities to multiply have an inherent disadvantage against shorter derivations, all else being equal. EM is an iterative procedure and thus such a bias can lead the procedure to converge with artificially raised probabilities for short derivations and the large rules that comprise them. The relatively rare applicability of large rules (and thus lower observed partial counts) does not overcome the inherent advantage of large coverage. To combat this, we introduce size terms into our generative story, ensuring that all competing derivations for the Table 4 : Re-alignment performance with semi-supervised EMD bootstrap alignments same sentence contain the same number of atoms:", "cite_spans": [], "ref_spans": [ { "start": 678, "end": 685, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Making EM fair", "sec_num": "5.4" }, { "text": "1. Choose a rule size s with cost c size (s) s\u22121 . 2. Choose a rule r (of size s) to replace the start symbol with probability p rule (r|s, v). 3. For each variable in the partially completed (tree, string) pair, continue to choose sizes followed by rules, recursively to replace these variables until there are no variables remaining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making EM fair", "sec_num": "5.4" }, { "text": "This generative story changes the derivation comparison from R2 vs R11-R12 to S2-R2 vs R11-R12, where S2 is the atom that represents the choice of size 2 (the size of a rule in this context is the number of non-leaf and non-root nodes in its tree fragment). Note that the variable number of inclusions implied by the exponent in the generative story above ensures that all derivations have the same size. For example, a derivation with one size-3 rule, a derivation with one size-2 and one size-1 rule, and a derivation with three size-1 rules would each have three atoms. With this revised model that allows for fair comparison of derivations, the R11-R12 derivation is chosen 1636 times, and S2-R2 is not chosen. R2 does, however, appear in the translation model, as the expanded rule extraction described in Section 5.2 creates R2 by joining R11 and R12.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making EM fair", "sec_num": "5.4" }, { "text": "The probability of size atoms, like that of rule atoms, is decided by EM. The revised generative story tends to encourage smaller sizes by virtue of the exponent. This does not, however, simply ensure the largest number of rules per derivation is used in all cases. Ill-fitting and poorly-motivated rules such as R22, R23, and R24 in Figure 2 are not preferred over R16, even though they are smaller. However, R14 and R16 are preferred over R6, as the former are useful rules. 
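The effect of the size atoms can be sketched as follows (an illustrative fragment of our own; the log-probabilities and size costs shown are placeholders rather than values estimated by EM):

```python
import math

def atom_count(derivation):
    # each size-s rule contributes one rule atom plus (s - 1) size atoms
    return sum(1 + (size - 1) for _, size in derivation)

def derivation_logprob(derivation, log_c_size):
    # log p = sum of log p_rule(r | s, v) + (s - 1) * log c_size(s) over rules
    return sum(lp + (size - 1) * log_c_size[size] for lp, size in derivation)

log_c_size  = {1: math.log(0.6), 2: math.log(0.3), 3: math.log(0.1)}  # placeholders
one_size3   = [(-5.0, 3)]                         # one size-3 rule
size2_size1 = [(-4.0, 2), (-1.5, 1)]              # one size-2 and one size-1 rule
three_size1 = [(-2.0, 1), (-2.0, 1), (-2.0, 1)]   # three size-1 rules

# all competing derivations of the same fragment now carry the same atom count,
# so none gains an advantage merely from having fewer probabilities to multiply
assert atom_count(one_size3) == atom_count(size2_size1) == atom_count(three_size1) == 3
print(derivation_logprob(size2_size1, log_c_size))
```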
Although the modified model does not sum to 1, it leads to an improvement in BLEU score, as can be seen in the last row of Table 2.", "cite_spans": [], "ref_spans": [ { "start": 334, "end": 342, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Making EM fair", "sec_num": "5.4" }, { "text": "We performed primary experiments on two different bootstrap setups in two languages: the initial experiment uses the same data set for the GIZA++ initial alignment as is used in the re-alignment, while an experiment on better quality bootstrap alignments uses a much larger data set. For each bootstrapping in each language we compared the baseline of using these alignments directly in an MT system with the experiment of using the alignments obtained from the re-alignment procedure described in Section 5.4. For each experiment we report: the number of rules extracted by the expanded GHKM algorithm of Galley et al. (2006) for the translation model, converged BLEU scores on the tuning set, and finally BLEU performance on the held-out test set. Data set specifics for the GIZA++ bootstrapping and BLEU results are summarized in Table 3 .", "cite_spans": [ { "start": 606, "end": 626, "text": "Galley et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 833, "end": 840, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "The results presented demonstrate we are able to improve on unsupervised GIZA++ alignments by about 1 BLEU point for Chinese and around 0.4 BLEU point for Arabic using an additional unsupervised algorithm that requires no human aligned data. If human-aligned data is available, the EMD algorithm provides higher baseline alignments than GIZA++ that have led to better MT performance (Fraser and Marcu, 2006) . As a further experiment we repeated the experimental conditions from Table 3 , this time bootstrapped with the semisupervised EMD method, which uses the larger bootstrap GIZA corpora described in Table 3 and an additional 64,469/48,650 words of hand-aligned English-Chinese and 43,782/31,457 words of handaligned English-Arabic. The results of this advanced experiment are in Table 4 . We show a 0.42 gain in BLEU for Arabic, but no movement for Chinese. We believe increasing the size of the re-alignment corpora will increase BLEU gains in this experimental condition, but leave those results for future work.", "cite_spans": [ { "start": 383, "end": 407, "text": "(Fraser and Marcu, 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 479, "end": 486, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 606, "end": 613, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 786, "end": 793, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.6" }, { "text": "We can see from the results presented that the impact of the syntax-aware re-alignment procedure of Section 3.2, coupled with the addition of size parameters to the generative story from Section 5.4 serves to remove links from the bootstrap alignments that cause less useful rules to be extracted, and thus increase the overall quality of the rules, and hence the system performance. 
We thus see the benefit to including syntax in an alignment model, bringing the two models of the realistic machine translation path somewhat closer together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.6" }, { "text": "In the cases where a rule is not synchronous-binarizable standard left-right binarization is performed and proper permutation of the disjoint English tree spans must be verified when building the part of the chart that uses this rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank David Chiang, Steve DeNeefe, Alex Fraser, Victoria Fossum, Jonathan Graehl, Liang Huang, Daniel Marcu, Michael Pust, Oana Postolache, Michael Pust, Jason Riesa, Jens V\u00f6ckler, and Wei Wang for help and discussion. This research was supported by NSF (grant IIS-0428020) and DARPA (contract HR0011-06-C-0022).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The candide system for machine translation", "authors": [ { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Stephen", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "John", "middle": [], "last": "Gillett", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "Harry", "middle": [], "last": "Printz", "suffix": "" }, { "first": "Lubo\u0161", "middle": [], "last": "Ure\u0161", "suffix": "" } ], "year": 1994, "venue": "Proc. HLT", "volume": "", "issue": "", "pages": "157--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Berger, Peter Brown, Stephen Della Pietra, Vin- cent Della Pietra, John Gillett, John Lafferty, Robert Mercer, Harry Printz, and Lubo\u0161 Ure\u0161. 1994. The candide system for machine translation. In Proc. HLT, pages 157-162, Plainsboro, New Jersey, March.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Intricacies of Collins' parsing model", "authors": [ { "first": "Daniel", "middle": [], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "479--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Bikel. 2004. Intricacies of Collins' parsing model. Computational Linguistics, 30(4):479-511.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathemat- ics of statistical machine translation: parameter esti- mation. 
Computational Linguistics, 19(2):263-311.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263-270, Ann Arbor, Michigan, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "Arthur", "middle": [ "P" ], "last": "Demptser", "suffix": "" }, { "first": "Nan", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "Donald", "middle": [ "B" ], "last": "", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society, Series B", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur P. Demptser, Nan M. Laird, and Donald B. Ru- bin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What can syntax-based MT learn from phrase-based MT?", "authors": [ { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP/CONLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proc. EMNLP/CONLL, Prague, June.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semisupervised training for statistical word alignment", "authors": [ { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "769--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fraser and Daniel Marcu. 2006. Semi- supervised training for statistical word alignment. In Proc. COLING-ACL, pages 769-776, Sydney, July.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What's in a translation rule?", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proc. 
HLT-NAACL, pages 273-280, Boston, May.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Scalable inference and training of context-rich syntactic models", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steven DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic models. In Proc. COLING- ACL, pages 961-968, Sydney, July.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fast decoding and optimal decoding for machine translation", "authors": [ { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jahr", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2001, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "228--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proc. ACL, pages 228-235, Toulouse, France, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Training tree transducers", "authors": [ { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2004, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In Proc. HLT-NAACL, pages 105-112, Boston, May.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "William", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "133--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine transla- tion. In Proc. EMNLP, pages 133-139, Philadelphia, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improved statistical alignment models", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Och and Hermann Ney. 2000. Improved statisti- cal alignment models. In Proc. 
ACL, pages 440-447, Hong Kong, October.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Och and Hermann Ney. 2004. The alignment tem- plate approach to statistical machine translation. Com- putational Linguistics, 30(4):417-449.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Minimum error rate training for statistical machine translation", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Och. 2003. Minimum error rate training for sta- tistical machine translation. In Proc. ACL, pages 160- 167, Sapporo, Japan, July.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. In Proc. COLING, pages 836-841, Copen- hagen, August.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A polynomial-time algorithm for statistical machine translation", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1996, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1996. A polynomial-time algorithm for sta- tistical machine translation. In Proc. ACL, pages 152- 158, Santa Cruz, California, June.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A syntax-based statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. 
ACL, pages 523- 530, Toulouse, France, July.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proc. HLT-NAACL, pages 256-263, New York City, June.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A consistent and efficient estimator for data-oriented parsing", "authors": [ { "first": "Andreas", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima", "suffix": "" } ], "year": 2005, "venue": "Journal of Automata, Languages and Combinatorics", "volume": "10", "issue": "2/3", "pages": "367--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Zollmann and Khalil Sima'an. 2005. A consis- tent and efficient estimator for data-oriented parsing. Journal of Automata, Languages and Combinatorics, 10(2/3):367-388.", "links": null } }, "ref_entries": { "FIGREF1": { "text": ": A (English tree, Chinese string) pair and three different sets of multilevel tree-to-string rules that can explain it; the first set is obtained from bootstrap alignments, the second from this paper's re-alignment procedure, and the third is a viable, if poor quality, alternative that is not learned. The impact of a bad alignment on rule extraction. Including the alignment link indicated by the dotted line in the example leads to the rule set in the second row.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
BOOTSTRAP GIZA CORPUS         | RE-ALIGNMENT EXPERIMENT
ENGLISH WORDS | CHINESE WORDS | TYPE     | RULES      | TUNE  | TEST
9,864,294     | 7,520,779     | baseline | 19,138,252 | 39.08 | 37.77
              |               | initial  | 18,698,549 | 39.49 | 38.39
              |               | adjusted | 26,053,341 |       |
" }, "TABREF2": { "type_str": "table", "text": "Machine Translation experimental results evaluated with case-insensitive BLEU4.", "num": null, "html": null, "content": "" } } } }