{ "paper_id": "N09-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:42:27.775622Z" }, "title": "1,001 New Features for Statistical Machine Translation *", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "USC Information Sciences Institute", "location": { "addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", "postCode": "90292", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": { "laboratory": "", "institution": "USC Information Sciences Institute", "location": { "addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", "postCode": "90292", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We use the Margin Infused Relaxed Algorithm of Crammer et al. to add a large number of new features to two machine translation systems: the Hiero hierarchical phrasebased translation system and our syntax-based translation system. On a large-scale Chinese-English translation task, we obtain statistically significant improvements of +1.5 B\uf76c\uf765\uf775 and +1.1 B\uf76c\uf765\uf775, respectively. We analyze the impact of the new features and the performance of the learning algorithm.", "pdf_parse": { "paper_id": "N09-1025", "_pdf_hash": "", "abstract": [ { "text": "We use the Margin Infused Relaxed Algorithm of Crammer et al. to add a large number of new features to two machine translation systems: the Hiero hierarchical phrasebased translation system and our syntax-based translation system. On a large-scale Chinese-English translation task, we obtain statistically significant improvements of +1.5 B\uf76c\uf765\uf775 and +1.1 B\uf76c\uf765\uf775, respectively. We analyze the impact of the new features and the performance of the learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "What linguistic features can improve statistical machine translation (MT)? This is a fundamental question for the discipline, particularly as it pertains to improving the best systems we have. Further:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Do syntax-based translation systems have unique and effective levers to pull when designing new features?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Can large numbers of feature weights be learned efficiently and stably on modest amounts of data?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we address these questions by experimenting with a large number of new features. We add more than 250 features to improve a syntaxbased MT system-already the highest-scoring single system in the NIST 2008 Chinese-English common-data track-by +1.1 B\uf76c\uf765\uf775. 
We also add more than 10,000 features to Hiero (Chiang, 2005) and obtain a +1.5 B\uf76c\uf765\uf775 improvement.", "cite_spans": [ { "start": 315, "end": 329, "text": "(Chiang, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many of the new features use syntactic information, and in particular depend on information that is available only inside a syntax-based translation model. Thus they widen the advantage that syntaxbased models have over other types of models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The models are trained using the Margin Infused Relaxed Algorithm or MIRA (Crammer et al., 2006) instead of the standard minimum-error-rate training or MERT algorithm (Och, 2003) . Our results add to a growing body of evidence (Watanabe et al., 2007; Chiang et al., 2008) that MIRA is preferable to MERT across languages and systems, even for very large-scale tasks.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF8" }, { "start": 167, "end": 178, "text": "(Och, 2003)", "ref_id": "BIBREF21" }, { "start": 227, "end": 250, "text": "(Watanabe et al., 2007;", "ref_id": "BIBREF27" }, { "start": 251, "end": 271, "text": "Chiang et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The work of Och et al (2004) is perhaps the bestknown study of new features and their impact on translation quality. However, it had a few shortcomings. First, it used the features for reranking n-best lists of translations, rather than for decoding or forest reranking (Huang, 2008) . Second, it attempted to incorporate syntax by applying off-the-shelf part-ofspeech taggers and parsers to MT output, a task these tools were never designed for. By contrast, we incorporate features directly into hierarchical and syntaxbased decoders.", "cite_spans": [ { "start": 12, "end": 28, "text": "Och et al (2004)", "ref_id": "BIBREF20" }, { "start": 270, "end": 283, "text": "(Huang, 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A third difficulty with Och et al.'s study was that it used MERT, which is not an ideal vehicle for feature exploration because it is observed not to perform well with large feature sets. Others have introduced alternative discriminative training methods (Tillmann and Zhang, 2006; Turian et al., 2007; Blunsom et al., 2008; Macherey et al., 2008) , in which a recurring challenge is scalability: to train many features, we need many train-ing examples, and to train discriminatively, we need to search through all possible translations of each training example. Another line of research (Watanabe et al., 2007; Chiang et al., 2008) tries to squeeze as many features as possible from a relatively small dataset. 
We follow this approach here.", "cite_spans": [ { "start": 255, "end": 281, "text": "(Tillmann and Zhang, 2006;", "ref_id": "BIBREF24" }, { "start": 282, "end": 302, "text": "Turian et al., 2007;", "ref_id": "BIBREF25" }, { "start": 303, "end": 324, "text": "Blunsom et al., 2008;", "ref_id": "BIBREF0" }, { "start": 325, "end": 347, "text": "Macherey et al., 2008)", "ref_id": "BIBREF17" }, { "start": 588, "end": 611, "text": "(Watanabe et al., 2007;", "ref_id": "BIBREF27" }, { "start": 612, "end": 632, "text": "Chiang et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Systems Used", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Hiero (Chiang, 2005 ) is a hierarchical, string-tostring translation system. Its rules, which are extracted from unparsed, word-aligned parallel text, are synchronous CFG productions, for example:", "cite_spans": [ { "start": 6, "end": 19, "text": "(Chiang, 2005", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3.1" }, { "text": "X \u2192 X 1 de X 2 , X 2 of X 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3.1" }, { "text": "As the number of nonterminals is limited to two, the grammar is equivalent to an inversion transduction grammar (Wu, 1997) .", "cite_spans": [ { "start": 112, "end": 122, "text": "(Wu, 1997)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3.1" }, { "text": "The baseline model includes 12 features whose weights are optimized using MERT. Two of the features are n-gram language models, which require intersecting the synchronous CFG with finite-state automata representing the language models. This grammar can be parsed efficiently using cube pruning (Chiang, 2007) .", "cite_spans": [ { "start": 294, "end": 308, "text": "(Chiang, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3.1" }, { "text": "Our syntax-based system transforms source Chinese strings into target English syntax trees. Following previous work in statistical MT (Brown et al., 1993) , we envision a noisy-channel model in which a language model generates English, and then a translation model transforms English trees into Chinese. We represent the translation model as a tree transducer (Knight and Graehl, 2005) . It is obtained from bilingual text that has been word-aligned and whose English side has been syntactically parsed. From this data, we use the the GHKM minimal-rule extraction algorithm of (Galley et al., 2004) to yield rules like:", "cite_spans": [ { "start": 131, "end": 154, "text": "MT (Brown et al., 1993)", "ref_id": null }, { "start": 360, "end": 385, "text": "(Knight and Graehl, 2005)", "ref_id": "BIBREF14" }, { "start": 577, "end": 598, "text": "(Galley et al., 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "NP-C(x 0 :NPB PP(IN(of x 1 :NPB)) \u2194 x 1 de x 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "Though this rule can be used in either direction, here we use it right-to-left (Chinese to English). We follow Galley et al. (2006) in allowing unaligned Chinese words to participate in multiple translation rules, and in collecting larger rules composed of minimal rules. 
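For concreteness, a rule of this kind can be sketched as follows (an assumed encoding for illustration, not the extraction code of either system); the composed rules discussed next simply carry larger English fragments.

```python
# Illustrative sketch only: a transducer rule pairs an English tree fragment
# containing variables with a Chinese string; counts come from the word-aligned,
# parsed bitext.
from collections import namedtuple

Rule = namedtuple("Rule", ["english_tree", "chinese", "count"])

# The minimal rule shown above, with a hypothetical count:
r = Rule("NP-C(x0:NPB PP(IN(of) x1:NPB))", ("x1", "de", "x0"), count=3)

# A composed (larger) rule plugs one minimal rule into a variable slot of
# another, so its English fragment spans both.
```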
These larger rules have been shown to substantially improve translation accuracy (Galley et al., 2006; DeNeefe et al., 2007) .", "cite_spans": [ { "start": 111, "end": 131, "text": "Galley et al. (2006)", "ref_id": "BIBREF12" }, { "start": 353, "end": 374, "text": "(Galley et al., 2006;", "ref_id": "BIBREF12" }, { "start": 375, "end": 396, "text": "DeNeefe et al., 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "We apply Good-Turing discounting to the transducer rule counts and obtain probability estimates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "P(rule) = count(rule) count(LHS-root(rule))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "When we apply these probabilities to derive an English sentence e and a corresponding Chinese sentence c, we wind up with the joint probability P(e, c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "The baseline model includes log P(e, c), the two n-gram language models log P(e), and other features for a total of 25. For example, there is a pair of features to punish rules that drop Chinese content words or introduce spurious English content words. All features are linearly combined and their weights are optimized using MERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "For efficient decoding with integrated n-gram language models, all transducer rules must be binarized into rules that contain at most two variables and can be incrementally scored by the language model . Then we use a CKY-style parser (Yamada and Knight, 2002; Galley et al., 2006) with cube pruning to decode new sentences.", "cite_spans": [ { "start": 235, "end": 260, "text": "(Yamada and Knight, 2002;", "ref_id": "BIBREF29" }, { "start": 261, "end": 281, "text": "Galley et al., 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "We include two other techniques in our baseline. To get more general translation rules, we restructure our English training trees using expectationmaximization , and to get more specific translation rules, we relabel the trees with up to 4 specialized versions of each nonterminal symbol, again using expectation-maximization and the split/merge technique of Petrov et al. (2006) .", "cite_spans": [ { "start": 359, "end": 379, "text": "Petrov et al. (2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based system", "sec_num": "3.2" }, { "text": "We incorporate all our new features into a linear model (Och and Ney, 2002) and train them using MIRA (Crammer et al., 2006) , following previous work (Watanabe et al., 2007; Chiang et al., 2008) .", "cite_spans": [ { "start": 56, "end": 75, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF19" }, { "start": 102, "end": 124, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF8" }, { "start": 151, "end": 174, "text": "(Watanabe et al., 2007;", "ref_id": "BIBREF27" }, { "start": 175, "end": 195, "text": "Chiang et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "Let e stand for output strings or their derivations, and let h(e) stand for the feature vector for e. 
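The steps below manipulate only sparse feature vectors and their dot products with the weight vector. A minimal sketch of these quantities, assuming a dictionary representation (an illustration, not the authors' implementation):

```python
# h(e) as a sparse dict from feature names to values; the model score is h(e) . w.
def dot(h_e, w):
    return sum(value * w.get(name, 0.0) for name, value in h_e.items())

# The three selection criteria in (1) below differ from the model score only by
# adding or subtracting the sentence-level Bleu of the hypothesis.
def selection_scores(h_e, w, bleu_e):
    base = dot(h_e, w)
    return base, bleu_e + base, -bleu_e + base
```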
Initialize the feature weights w. Then, repeatedly:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "\u2022 Select a batch of input sentences f 1 , . . . , f m and decode each f i to obtain a forest of translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "\u2022 For each i, select from the forest a set of hypothesis translations e i1 , . . . , e in , which are the 10-best translations according to each of:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "h(e) \u2022 w B\uf76c\uf765\uf775(e) + h(e) \u2022 w \u2212B\uf76c\uf765\uf775(e) + h(e) \u2022 w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "\u2022 For each i, select an oracle translation: e * = arg max e (B\uf76c\uf765\uf775(e) + h(e) \u2022 w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "Let \u2206h i j = h(e * i ) \u2212 h(e i j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "\u2022 For each e i j , compute the loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i j = B\uf76c\uf765\uf775(e * i ) \u2212 B\uf76c\uf765\uf775(e i j )", "eq_num": "(3)" } ], "section": "MIRA training", "sec_num": "3.3" }, { "text": "\u2022 Update w to the value of w that minimizes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "1 2 w \u2212 w 2 + C m i=1 max 1\u2264 j\u2264n ( i j \u2212 \u2206h i j \u2022 w ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "where C = 0.01. This minimization is performed by a variant of sequential minimal optimization (Platt, 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "Following Chiang et al. (2008) , we calculate the sentence B\uf76c\uf765\uf775 scores in (1), (2), and (3) in the context of some previous 1-best translations. We run 20 of these learners in parallel, and when training is finished, the weight vectors from all iterations of all learners are averaged together.", "cite_spans": [ { "start": 10, "end": 30, "text": "Chiang et al. 
(2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "Since the interface between the trainer and the decoder is fairly simple-for each sentence, the decoder sends the trainer a forest, and the trainer returns a weight update-it is easy to use this algorithm with a variety of CKY-based decoders: here, we are using it in conjunction with both the Hiero decoder and our syntax-based decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA training", "sec_num": "3.3" }, { "text": "In this section, we describe the new features introduced on top of our baseline systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Discount features Both of our systems calculate several features based on observed counts of rules in the training data. Though the syntax-based system uses Good-Turing discounting when computing the P(e, c) feature, we find, as noted above, that it uses quite a few one-count rules, suggesting that their probabilities have been overestimated. We can directly attack this problem by adding features count i that reward or punish rules seen i times, or features count [i, j] for rules seen between i and j times.", "cite_spans": [ { "start": 468, "end": 474, "text": "[i, j]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "String-to-tree MT offers some unique levers to pull, in terms of target-side features. Because the system outputs English trees, we can analyze output trees on the tuning set and design new features to encourage the decoder to produce more grammatical trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "Rule overlap features While individual rules observed in decoder output are often quite reasonable, two adjacent rules can create problems. For example, a rule that has a variable of type IN (preposition) needs another rule rooted with IN to fill the position. If the second rule supplies the wrong preposition, a bad translation results. The IN node here is an overlap point between rules. Considering that certain nonterminal symbols may be more reliable overlap points than others, we create a binary feature for each nonterminal. A rule like:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "IN(at) \u2194 zai", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "will have feature rule-root-IN set to 1 and all other rule-root features set to 0. Our rule root features range over the original (non-split) nonterminal set; we have 105 in total. 
Even though the rule root features are locally attached to individual rules-and therefore cause no additional problems for the decoder search-they are aimed at problematic rule/rule interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "Bad single-level rewrites Sometimes the decoder uses questionable rules, for example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "PP(x 0 :VBN x 1 :NP-C) \u2194 x 0 x 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "This rule is learned from 62 cases in our training data, where the VBN is almost always the word given. However, the decoder misuses this rule with other VBNs. So we can add a feature that penalizes any rule in which a PP dominates a VBN and NP-C. The feature class bad-rewrite comprises penalties for the following configurations based on our analysis of the tuning set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "PP \u2192 VBN NP-C PP-BAR \u2192 NP-C IN VP \u2192 NP-C PP CONJP \u2192 RB IN", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "Node count features It is possible that the decoder creates English trees with too many or too few nodes of a particular syntactic category. For example, there may be an tendency to generate too many determiners or past-tense verbs. We therefore add a count feature for each of the 109 (non-split) English nonterminal symbols. For a rule like NPB(NNP(us) NNP(president) x 0 :NNP) \u2194 meiguo zongtong x 0 the feature node-count-NPB gets value 1, nodecount-NNP gets value 2, and all others get 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "Insertion features Among the rules we extract from bilingual corpora are target-language insertion rules, which have a word on the English side, but no words on the source Chinese side. Sample syntaxbased insertion rules are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "NPB(DT(the) x 0 :NN) \u2194 x 0 S(x 0 :NP-C VP(VBZ(is) x 1 :VP-C)) \u2194 x 0 x 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "We notice that our decoder, however, frequently fails to insert words like is and are, which often have no equivalent in the Chinese source. We also notice that the-insertion rules sometimes have a good effect, as in the translation \"in the bloom of youth,\" but other times have a bad effect, as in \"people seek areas of the conspiracy.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "Each time the decoder uses (or fails to use) an insertion rule, it incurs some risk. There is no guarantee that the interaction of the rule probabilities and the language model provides the best way to manage this risk. We therefore provide MIRA with a feature for each of the most common English words appearing in insertion rules, e.g., insert-the and insert-is. 
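A sketch of how the insertion features might be computed from a rule's retained word alignments (the word list and rule representation here are illustrative assumptions):

```python
# Fire insert-<word> for English terminals that align to no Chinese word,
# restricted to a fixed set of common insertable words.
COMMON_INSERTIONS = {"the", "a", "is", "are", "was", "of", ",", "."}  # illustrative

def insertion_features(english_terminals, aligned_english):
    feats = {}
    for word in english_terminals:
        if word not in aligned_english and word in COMMON_INSERTIONS:
            key = "insert-" + word
            feats[key] = feats.get(key, 0.0) + 1.0
    return feats

# NPB(DT(the) x0:NN) <-> x0: "the" has no Chinese counterpart, so insert-the fires.
print(insertion_features(["the"], aligned_english=set()))  # {'insert-the': 1.0}
```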
There are 35 such features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side features", "sec_num": "4.1" }, { "text": "We now turn to features that make use of source-side context. Although these features capture dependencies that cross boundaries between rules, they are still local in the sense that no new states need to be added to the decoder. This is because the entire source sentence, being fixed, is always available to every feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side features", "sec_num": "4.2" }, { "text": "Soft syntactic constraints Neither of our systems uses source-side syntactic information; hence, both could potentially benefit from soft syntactic constraints as described by . In brief, these features use the output of an independent syntactic parser on the source sentence, rewarding decoder constituents that match syntactic constituents and punishing decoder constituents that cross syntactic constituents. We use separatelytunable features for each syntactic category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side features", "sec_num": "4.2" }, { "text": "Structural distortion features Both of our systems have rules with variables that generalize over possible fillers, but neither system's basic model conditions a rule application on the size of a filler, making it difficult to distinguish long-distance reorderings from short-distance reorderings. To remedy this problem, Chiang et al. (2008) introduce a structural distortion model, which we include in our experiment. Our syntax-based baseline includes the generative version of this model already.", "cite_spans": [ { "start": 322, "end": 342, "text": "Chiang et al. (2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Source-side features", "sec_num": "4.2" }, { "text": "Word context During rule extraction, we retain word alignments from the training data in the extracted rules. (If a rule is observed with more than one set of word alignments, we keep only the most frequent one.) We then define, for each triple ( f, e, f +1 ), a feature that counts the number of times that f is aligned to e and f +1 occurs to the right of f ; and similarly for triples ( f, e, f \u22121 ) with f \u22121 occurring to the left of f . In order to limit the size of the model, we restrict words to be among the 100 most frequently occurring words from the training data; all other words are replaced with a token .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side features", "sec_num": "4.2" }, { "text": "These features are somewhat similar to features used by Watanabe et al. (2007) , but more in the spirit of features used in the word sense disambiguation model introduced by Lee and Ng (2002) and incorporated as a submodel of a translation system by Chan et al. (2007) ; here, we are incorporating some of its features directly into the translation model.", "cite_spans": [ { "start": 56, "end": 78, "text": "Watanabe et al. (2007)", "ref_id": "BIBREF27" }, { "start": 174, "end": 191, "text": "Lee and Ng (2002)", "ref_id": "BIBREF15" }, { "start": 250, "end": 268, "text": "Chan et al. (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Source-side features", "sec_num": "4.2" }, { "text": "For our experiments, we used a 260 million word Chinese/English bitext. 
We ran GIZA++ on the entire bitext to produce IBM Model 4 word alignments, and then the link deletion algorithm (Fossum et al., 2008) the syntax-based system, we ran a reimplementation of the Collins parser (Collins, 1997) on the English half of the bitext to produce parse trees, then restructured and relabeled them as described in Section 3.2. Syntax-based rule extraction was performed on a 65 million word subset of the training data. For Hiero, rules with up to two nonterminals were extracted from a 38 million word subset and phrasal rules were extracted from the remainder of the training data.", "cite_spans": [ { "start": 184, "end": 205, "text": "(Fossum et al., 2008)", "ref_id": "BIBREF10" }, { "start": 279, "end": 294, "text": "(Collins, 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We trained three 5-gram language models: one on the English half of the bitext, used by both systems, one on one billion words of English, used by the syntax-based system, and one on two billion words of English, used by Hiero. Modified Kneser-Ney smoothing (Chen and Goodman, 1998) was applied to all language models. The language models are represented using randomized data structures similar to those of Talbot et al. (2007) .", "cite_spans": [ { "start": 408, "end": 428, "text": "Talbot et al. (2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our tuning set (2010 sentences) and test set (1994 sentences) were drawn from newswire data from the NIST 2004 and 2005 evaluations and the GALE program (with no overlap at either the segment or document level). For the source-side syntax features, we used the Berkeley parser (Petrov et al., 2006) to parse the Chinese side of both sets.", "cite_spans": [ { "start": 277, "end": 298, "text": "(Petrov et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We implemented the source-side context features for Hiero and the target-side syntax features for the syntax-based system, and the discount features for both. We then ran MIRA on the tuning set with 20 parallel learners for Hiero and 73 parallel learners for the syntax-based system. We chose a stopping iteration based on the B\uf76c\uf765\uf775 score on the tuning set, and used the averaged feature weights from all iter- ations of all learners to decode the test set. The results (Table 1) show significant improvements in both systems (p < 0.01) over already very strong MERT baselines. Adding the source-side and discount features to Hiero yields a +1.5 B\uf76c\uf765\uf775 improvement, and adding the target-side syntax and discount features to the syntax-based system yields a +1.1 B\uf76c\uf765\uf775 improvement. The results also show that for Hiero, the various classes of features contributed roughly equally; for the syntax-based system, we see that two of the feature classes make small contributions but time constraints unfortunately did not permit isolated testing of all feature classes.", "cite_spans": [], "ref_spans": [ { "start": 469, "end": 478, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "How did the various new features improve the translation quality of our two systems? We begin by examining the discount features. 
For these features, we used slightly different schemes for the two systems, shown in Table 2 with their learned feature weights. We see in both cases that one-count rules are strongly penalized, as expected. Table 3 shows word-insertion feature weights. The system rewards insertion of forms of be; examples 1-3 in Figure 1 show typical improved translations that result. Among determiners, inserting a is rewarded, while inserting the is punished. This seems to be because the is often part of a fixed phrase, such as the White House, and therefore comes naturally as part of larger phrasal rules. Inserting the outside these fixed phrases is a risk that the generative model is too inclined to take. We also note that the system learns to punish unmotivated insertions of commas and periods, which get into our grammar via quirks in the MT training data. Table 4 shows weights for rule-overlap features. MIRA punishes the case where rules overlap with an IN (preposition) node. This makes sense: if a rule has a variable that can be filled by any English preposition, there is a risk that an incorrect preposition will fill it. On the other hand, splitting at a period is a safe bet, and frees the model to use rules that dig deeper into NP and VP trees when constructing a top-level S. Table 5 shows weights for generated English nonterminals: SBAR-C nodes are rewarded and commas are punished.", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 338, "end": 345, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 445, "end": 453, "text": "Figure 1", "ref_id": "FIGREF2" }, { "start": 987, "end": 994, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1419, "end": 1426, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "The combined effect of all weights is subtle. To interpret them further, it helps to look at gross changes in the system's behavior. For example, a major error in the baseline system is to move \"X said\" or \"X asked\" from the beginning of the Chinese input to the middle or end of the English trans- lation. The error occurs with many speaking verbs, and each time, we trace it to a different rule. The problematic rules can even be non-lexical, e.g.:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax features", "sec_num": "6.1" }, { "text": "Bonus \u22120.50 period \u22120.39 VP-C \u22120.36 VB \u22120.31 SG-C \u22120.30 MD \u22120.26 VBG \u22120.25 ADJP \u22120.22 -LRB- \u22120.21 VP-BAR \u22120.20 NPB-BAR \u22120.16 FRAG \u22120.16 PRN \u22120.15 NPB \u22120.13 RB \u22120.12 SBAR-C \u22120.12 VP-C-BAR \u22120.11 -RRB- . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax features", "sec_num": "6.1" }, { "text": "S(x 0 :NP-C x 1 :VP x 2 :, x 3 :NP-C x 4 :VP x 5 :.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Penalty", "sec_num": null }, { "text": "\u2194 x 3 x 4 x 2 x 0 x 1 x 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Penalty", "sec_num": null }, { "text": "It is therefore difficult to come up with a straightforward feature to address the problem. However, when we apply MIRA with the features already listed, these translation errors all disappear, as demonstrated by examples 4-5 in Figure 1 . Why does this happen? 
It turns out that in translation hypotheses that move \"X said\" or \"X asked\" away from the beginning of the sentence, more commas appear, and fewer S-C and SBAR-C nodes appear. Therefore, the new features work to discourage these hypotheses. Example 6 shows additionally that commas next to speaking verbs are now correctly deleted. Examples 7-8 in Figure 1 show other kinds of unanticipated improvements. We do not have space for a fuller analysis, but we note that the specific effects we describe above account for only part of the overall B\uf76c\uf765\uf775 improvement.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 1", "ref_id": "FIGREF2" }, { "start": 610, "end": 618, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Penalty", "sec_num": null }, { "text": "In Table 6 are shown feature weights learned for the word-context features. A surprising number of the highest-weighted features have to do with translations of dates and bylines. Many of the penalties seem to discourage spurious insertion or deletion of frequent words (for, 's, said, parentheses, and quotes). Finally, we note that several of the features (the third-and eighth-ranked reward and twelfthranked penalty) shape the translation of shuo 'said', preferring translations with an overt complementizer that and without a comma. Thus these features work together to attack a frequent problem that our targetsyntax features also addressed. Figure 2 shows the performance of Hiero with all of its features on the tuning and test sets over time. The scores on the tuning set rise rapidly, and the scores on the test set also rise, but much more slowly, and there appears to be slight degradation after the 18th pass through the tuning data. This seems in line with the finding of Watanabe et al. (2007) that with on the order of 10,000 features, overfitting is possible, but we can still improve accuracy on new data. Figure 2 : Using over 10,000 word-context features leads to overfitting, but its detrimental effects are modest. Scores on the tuning set were obtained from the 1-best output of the online learning algorithm, whereas scores on the test set were obtained using averaged weights.", "cite_spans": [ { "start": 986, "end": 1008, "text": "Watanabe et al. (2007)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 6", "ref_id": null }, { "start": 648, "end": 656, "text": "Figure 2", "ref_id": null }, { "start": 1124, "end": 1132, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Word context features", "sec_num": "6.2" }, { "text": "Early stopping would have given +0.2 B\uf76c\uf765\uf775 over the results reported in Table 1 . 1", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word context features", "sec_num": "6.2" }, { "text": "We have described a variety of features for statistical machine translation and applied them to syntaxbased and hierarchical systems. We saw that these features, discriminatively trained using MIRA, led to significant improvements, and took a closer look at the results to see how the new features qualitatively improved translation quality. We draw three conclusions from this study. First, we have shown that these new features can improve the performance even of top-scoring MT systems. Second, these results add to a growing body of evidence that MIRA is preferable to MERT for discriminative training. 
When training over 10,000 features on a modest amount of data, we, like Watanabe et al. (2007) , did observe overfitting, yet saw improvements on new data. Third, we have shown that syntax-based machine translation offers possibilities for features not available in other models, making syntax-based MT and MIRA an especially strong combination for future work.", "cite_spans": [ { "start": 679, "end": 701, "text": "Watanabe et al. (2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "1 MERT: the united states pending israeli clarification on golan settlement plan MIRA: the united states is waiting for israeli clarification on golan settlement plan 2 MERT: . . . the average life expectancy of only 18 months , canada 's minority goverment will . . . MIRA: . . . the average life expectancy of canada's previous minority government is only 18 months . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "3 MERT: . . . since un inspectors expelled by north korea . . . MIRA: . . . since un inspectors were expelled by north korea . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "4 MERT: another thing is . . . , \" he said , \" obviously , the first thing we need to do . . . . MIRA: he said : \" obviously , the first thing we need to do . . . , and another thing is . . . . \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "5 MERT: the actual timing . . . reopened in january , yoon said . MIRA: yoon said the issue of the timing . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "6 MERT: . . . us -led coalition forces , said today that the crash . . . MIRA: . . . us -led coalition forces said today that a us military . . . 7 MERT: . . . and others will feel the danger . MIRA: . . . and others will not feel the danger . Bonus f e context \u22121.19 f \u22121 = ri 'day' \u22121.01 f \u22121 = ( \u22120.84 , that f \u22121 = shuo 'say' \u22120.82 yue 'month' f +1 = \u22120.78 \" \" f \u22121 = \u22120.76 \" \" f +1 = \u22120.66 f +1 = nian 'year' \u22120.65 , that f +1 = . . . Table 6 : Weights learned for word-context features, which fire when English word e is generated aligned to Chinese word f , with Chinese word f \u22121 to the left or f +1 to the right. Glosses for Chinese words are not part of features.", "cite_spans": [], "ref_spans": [ { "start": 506, "end": 513, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "It was this iteration, in fact, which was used to derive the combined feature count used in the title of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A discriminative latent variable model for statistical machine translation", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2008, "venue": "Proc. ACL-08: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. 
A discriminative latent variable model for statistical ma- chine translation. In Proc. ACL-08: HLT.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Della Pietra", "suffix": "" }, { "first": "J", "middle": [], "last": "Della", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent Della J. Pietra, and Robert L. Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational Linguistics, 19(2):263-312.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word sense disambiguation improves statistical machine translation", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word sense disambiguation improves statistical ma- chine translation. In Proc. ACL 2007.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [ "T" ], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua T. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Computer Sci- ence Group, Harvard University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Online large-margin training of syntactic and structural translation features", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and struc- tural translation features. In Proc. EMNLP 2008.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. 
ACL 2005.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Three generative, lexicalized models for statistical parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In Proc. ACL 1997.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Shai Shalev-Shwartz, and Yoram Singer", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Keshet", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "551--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7:551-585.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What can syntax-based MT learn from phrase-based MT?", "authors": [ { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proc. EMNLP-CoNLL-2007.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using syntax to improve word alignment for syntaxbased statistical machine translation", "authors": [ { "first": "Victoria", "middle": [], "last": "Fossum", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 2008, "venue": "Proc. Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victoria Fossum, Kevin Knight, and Steven Abney. 2008. Using syntax to improve word alignment for syntax- based statistical machine translation. In Proc. Third Workshop on Statistical Machine Translation.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What's in a translation rule?", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. 
What's in a translation rule? In Proc. HLT-NAACL 2004, Boston, Massachusetts.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scalable inference and training of context-rich syntactic models", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic models. In Proc. ACL 2006.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Forest reranking: Discriminative parsing with non-local features", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proc. ACL 2008.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An overview of probabilistic tree transducers for natural language processing", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Sixth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 2005. An overview of probabilistic tree transducers for natural language processing. In Proceedings of the Sixth International Conference on Intelligent Text Processing and Compu- tational Linguistics (CICLing).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation", "authors": [ { "first": "Yoong Keok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proc. EMNLP 2002", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoong Keok Lee and Hwee Tou Ng. 2002. An em- pirical evaluation of knowledge sources and learn- ing algorithms for word sense disambiguation. In Proc. EMNLP 2002, pages 41-48.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An end-to-end discriminative approach to machine translation", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. 
An end-to-end discriminative ap- proach to machine translation. In Proc. COLING-ACL 2006.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lattice-based minimum error rate training for statistical machine translation", "authors": [ { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uskoreit", "suffix": "" } ], "year": 2008, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wolfgang Macherey, Franz Josef Och, Ignacio Thayer, and Jakob Uskoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Proc. EMNLP 2008.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Soft syntactic constraints for hierarchical phrased-based translation", "authors": [ { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proc. ACL-08: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proc. ACL-08: HLT.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for statis- tical machine translation. In Proc. ACL 2002.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A smorgasbord of features for statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Eng", "suffix": "" }, { "first": "Viren", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Proc. HLT-NAACL 2004", "volume": "", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. A smorgasbord of features for statistical machine trans- lation. In Proc. 
HLT-NAACL 2004, pages 161-168.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. ACL 2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. ACL 2003. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. In Proc. ACL 2006.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fast training of support vector machines using sequential minimal optimization", "authors": [ { "first": "C", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Platt", "suffix": "" } ], "year": 1998, "venue": "Advances in Kernel Methods: Support Vector Learning", "volume": "", "issue": "", "pages": "195--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "John C. Platt. 1998. Fast training of support vector machines using sequential minimal optimization. In B. Sch\u00f6lkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: Support Vector Learn- ing, pages 195-208. MIT Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Randomised language modelling for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "512--519", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Talbot and Miles Osborne. 2007. Randomised language modelling for statistical machine translation. In Proc. ACL 2007, pages 512-519.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A discriminative global training algorithm for statistical MT", "authors": [ { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Tillmann and Tong Zhang. 2006. A discrimi- native global training algorithm for statistical MT. In Proc. COLING-ACL 2006.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Scalable discriminative learning for natural language parsing and translation", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Wellington", "suffix": "" }, { "first": "I", "middle": [ "Dan" ], "last": "Melamed", "suffix": "" } ], "year": 2006, "venue": "Proc. NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Benjamin Wellington, and I. Dan Melamed. 2007. Scalable discriminative learn- ing for natural language parsing and translation. In Proc. NIPS 2006.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Binarizing syntax trees to improve syntax-based machine translation accuracy", "authors": [ { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2007, "venue": "Proc. 
EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Bi- narizing syntax trees to improve syntax-based machine translation accuracy. In Proc. EMNLP-CoNLL 2007.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Online large-margin training for statistical machine translation", "authors": [ { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hajime", "middle": [], "last": "Tsukuda", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukuda, and Hideki Isozaki. 2007. Online large-margin training for statis- tical machine translation. In Proc. EMNLP 2007.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377-404.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A decoder for syntax-based statistical MT", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2002. A decoder for syntax-based statistical MT. In Proc. ACL 2002.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proc. HLT-NAACL 2006.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "MERT: in residential or public activities within 200 meters of the region , . . . MIRA: within 200 m of residential or public activities area , . . .", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Improved syntax-based translations due to MIRA-trained weights.", "uris": null, "num": null, "type_str": "figure" }, "TABREF2": { "num": null, "content": "", "text": "Weights learned for discount features. Negative weights indicate bonuses; positive weights indicate penalties.", "html": null, "type_str": "table" }, "TABREF4": { "num": null, "content": "
", "text": "Weights learned for inserting target English words with rules that lack Chinese words.", "html": null, "type_str": "table" }, "TABREF6": { "num": null, "content": "
BonusPenalty
\u22120.73 SBAR-C \u22120.54 VBZ \u22120.54 IN \u22120.52 NN \u22120.51 PP-C \u22120.47 right double quote \u22120.39 ADJP \u22120.34 POS \u22120.31 ADVP \u22120.30 RP \u22120.29 PRT \u22120.27 SG-C \u22120.22 S-C \u22120.21 NNPS \u22120.21 VP-BAR \u22120.20 PRP \u22120.20 NPB-BAR+1.30 comma +0.80 DT +0.58 PP +0.44 TO +0.33 NNP +0.30 NNS +0.30 NML +0.22 CD +0.18 PRN +0.16 SYM +0.15 ADJP-BAR +0.15 NP +0.15 MD +0.15 HYPH +0.14 PRN-BAR +0.14 NP-C +0.11 ADJP-C
. . .. . .
", "text": "Weights learned for employing rules whose English sides are rooted at particular syntactic categories.", "html": null, "type_str": "table" }, "TABREF7": { "num": null, "content": "", "text": "Weights learned for generating syntactic nodes of various types anywhere in the English translation.", "html": null, "type_str": "table" } } } }