{ "paper_id": "P09-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:54:03.306274Z" }, "title": "A Syntax-Driven Bracketing Model for Phrase-Based Translation", "authors": [ { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Institute for Infocomm Research", "location": { "addrLine": "1 Fusionopolis Way", "postCode": "138632", "settlement": "#21-01 South Connexis", "country": "Singapore" } }, "email": "dyxiong@i2r.a-star.edu.sg" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Institute for Infocomm Research", "location": { "addrLine": "1 Fusionopolis Way", "postCode": "138632", "settlement": "#21-01 South Connexis", "country": "Singapore" } }, "email": "mzhang@i2r.a-star.edu.sg" }, { "first": "Aiti", "middle": [], "last": "Aw", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Institute for Infocomm Research", "location": { "addrLine": "1 Fusionopolis Way", "postCode": "138632", "settlement": "#21-01 South Connexis", "country": "Singapore" } }, "email": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Institute for Infocomm Research", "location": { "addrLine": "1 Fusionopolis Way", "postCode": "138632", "settlement": "#21-01 South Connexis", "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Syntactic analysis influences the way in which the source sentence is translated. Previous efforts add syntactic constraints to phrase-based translation by directly rewarding/punishing a hypothesis whenever it matches/violates source-side constituents. We present a new model that automatically learns syntactic constraints, including but not limited to constituent matching/violation, from training corpus. The model brackets a source phrase as to whether it satisfies the learnt syntactic constraints. The bracketed phrases are then translated as a whole unit by the decoder. Experimental results and analysis show that the new model outperforms other previous methods and achieves a substantial improvement over the baseline which is not syntactically informed.", "pdf_parse": { "paper_id": "P09-1036", "_pdf_hash": "", "abstract": [ { "text": "Syntactic analysis influences the way in which the source sentence is translated. Previous efforts add syntactic constraints to phrase-based translation by directly rewarding/punishing a hypothesis whenever it matches/violates source-side constituents. We present a new model that automatically learns syntactic constraints, including but not limited to constituent matching/violation, from training corpus. The model brackets a source phrase as to whether it satisfies the learnt syntactic constraints. The bracketed phrases are then translated as a whole unit by the decoder. Experimental results and analysis show that the new model outperforms other previous methods and achieves a substantial improvement over the baseline which is not syntactically informed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The phrase-based approach is widely adopted in statistical machine translation (SMT). 
It segments a source sentence into a sequence of phrases, then translates and reorders these phrases in the target. In such a process, original phrase-based decoding (Koehn et al., 2003) does not take advantage of any linguistic analysis, which, however, is broadly used in rule-based approaches. Since it is not linguistically motivated, original phrase-based decoding might produce ungrammatical or even wrong translations. Consider the following Chinese fragment with its parse tree: The output is generated from a phrase-based system which does not involve any syntactic analysis. Here we use \"[]\" (straight orientation) and \" \" (inverted orientation) to denote the common structure of the source fragment and its translation found by the decoder. We can observe that the decoder inadequately breaks up the second NP phrase and translates the two words \" \" and \" \" separately. However, the parse tree of the source fragment constrains the phrase \" \" to be translated as a unit.", "cite_spans": [ { "start": 251, "end": 271, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Without considering syntactic constraints from the parse tree, the decoder makes wrong decisions not only on phrase movement but also on the lexical selection for the multi-meaning word \" \" 1 . To avert such errors, the decoder can fully respect linguistic structures by allowing only syntactic constituent translations and reorderings. This, unfortunately, significantly jeopardizes performance (Koehn et al., 2003; Xiong et al., 2008) because integrating syntactic constraints into decoding as hard constraints simply prohibits any useful non-syntactic translations that violate constituent boundaries.", "cite_spans": [ { "start": 396, "end": 416, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF4" }, { "start": 417, "end": 436, "text": "Xiong et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To better leverage syntactic constraints yet still allow non-syntactic translations, Chiang (2005) introduces a count for each hypothesis and accumulates it whenever the hypothesis exactly matches syntactic boundaries on the source side. On the contrary, Marton and Resnik (2008) and Cherry (2008) accumulate a count whenever hypotheses violate constituent boundaries. These constituent matching/violation counts are used as a feature in the decoder's log-linear model and their weights are tuned via minimum error rate training (MERT) (Och, 2003). In this way, syntactic constraints are integrated into decoding as soft constraints, enabling the decoder to reward hypotheses that respect syntactic analyses or to penalize hypotheses that violate syntactic structures.", "cite_spans": [ { "start": 84, "end": 97, "text": "Chiang (2005)", "ref_id": "BIBREF1" }, { "start": 258, "end": 271, "text": "Cherry (2008)", "ref_id": "BIBREF0" }, { "start": 510, "end": 521, "text": "(Och, 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although experiments show that this constituent matching/violation counting feature achieves significant improvements on various language pairs, one issue is that matching the syntactic analysis cannot always guarantee a good translation, and violating the syntactic structure does not always induce a bad translation. Marton and Resnik (2008)
find that some constituency types favor matching the source parse while others encourage violations. Therefore it is necessary to integrate more syntactic constraints into phrase translation, not just the constraint of constituent matching/violation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The other issue is that during decoding we are more concerned with the question of phrase cohesion, i.e. whether the current phrase can be translated as a unit or not within particular syntactic contexts (Fox, 2002) 2 , than with that of constituent matching/violation. Phrase cohesion is one of the main reasons that we introduce syntactic constraints (Cherry, 2008). If a source phrase remains contiguous after translation, we refer to this type of phrase as bracketable, otherwise unbracketable. It is more desirable to translate a bracketable phrase than an unbracketable one.", "cite_spans": [ { "start": 204, "end": 215, "text": "(Fox, 2002)", "ref_id": "BIBREF3" }, { "start": 348, "end": 362, "text": "(Cherry, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a syntax-driven bracketing (SDB) model to predict whether a phrase (a sequence of contiguous words) is bracketable or not using rich syntactic constraints. We parse the source language sentences in the word-aligned training corpus. According to the word alignments, we define bracketable and unbracketable instances. For each of these instances, we automatically extract relevant syntactic features from the source parse tree as bracketing evidence. Then we tune the weights of these features using a maximum entropy (ME) trainer. In this way, we build two bracketing models: 1) a unary SDB model (UniSDB) which predicts whether an independent phrase is bracketable or not; and 2) a binary SDB model (BiSDB) which predicts whether two neighboring phrases are bracketable. Similar to previous methods, our SDB model is integrated into the decoder's log-linear model as a feature so that we can inherit the idea of soft constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to the constituent matching/violation counting (CMVC) (Chiang, 2005; Cherry, 2008), our SDB model has the following advantages. 2 Here we expand the definition of phrase to include both syntactic and non-syntactic phrases.", "cite_spans": [ { "start": 66, "end": 80, "text": "(Chiang, 2005;", "ref_id": "BIBREF1" }, { "start": 81, "end": 93, "text": "Cherry, 2008", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The SDB model automatically learns syntactic constraints from training data while the CMVC uses manually defined syntactic constraints: constituency matching/violation. In our SDB model, each syntactic feature learned from bracketing instances can be considered a syntactic constraint. Therefore we can use thousands of syntactic constraints to guide phrase translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The SDB model maintains and protects the strength of the phrase-based approach in a better way than the CMVC does. 
It is able to reward non-syntactic translations by assigning an adequate probability to them when these translations are appropriate to particular syntactic contexts on the source side, rather than always punishing them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We test our SDB model against the baseline, which does not use any syntactic constraints, on Chinese-to-English translation. To compare with the CMVC, we also conduct experiments using Marton and Resnik (2008)'s XP+. The XP+ accumulates a count for each hypothesis whenever it violates the boundaries of a constituent with a label from {NP, VP, CP, IP, PP, ADVP, QP, LCP, DNP}. The XP+ is the best feature among all features that Marton and Resnik use for Chinese-to-English translation. Our experimental results show that our SDB model achieves a substantial improvement over the baseline and significantly outperforms XP+ according to the BLEU metric (Papineni et al., 2002). In addition, our analysis provides further evidence of the performance gain from a different perspective than that of BLEU. The paper proceeds as follows. In section 2 we describe how to learn bracketing instances from a training corpus. In section 3 we elaborate on the syntax-driven bracketing model, including feature generation and the integration of the SDB model into phrase-based SMT. In sections 4 and 5, we present our experiments and analysis. We finally conclude in section 6.", "cite_spans": [ { "start": 631, "end": 654, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we formally define bracketing instances, which come in two types: binary bracketing instances and unary bracketing instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "We present an algorithm to automatically extract these bracketing instances from a word-aligned bilingual corpus in which the source language sentences are parsed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "Let c and e be the source sentence and the target sentence, W be the word alignment between them, and T be the parse tree of c. We define a binary bracketing instance as a tuple", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "\u27e8b, \u03c4 (c i..j ), \u03c4 (c j+1..k ), \u03c4 (c i..k )\u27e9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "where b \u2208 {bracketable, unbracketable}, c i..j and c j+1..k are two neighboring source phrases, and \u03c4 (T, s) (\u03c4 (s) for short) is a subtree function which returns the minimal subtree covering the source sequence s from the source parse tree T . Note that \u03c4 (c i..k ) includes both \u03c4 (c i..j ) and \u03c4 (c j+1..k ). For the two neighboring source phrases, the following conditions are satisfied:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "\u2203e u..v , e p..q \u2208 e s.t. 
\u2200(m, n) \u2208 W, i \u2264 m \u2264 j \u2194 u \u2264 n \u2264 v (1) \u2200(m, n) \u2208 W, j + 1 \u2264 m \u2264 k \u2194 p \u2264 n \u2264 q (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "The above (1) means that there exists a target phrase e u..v aligned to c i..j and (2) denotes a target phrase e p..q aligned to c j+1..k . If e u..v and e p..q are neighboring to each other or all words between the two phrases are aligned to null, we set b = bracketable, otherwise b = unbracketable. From a binary bracketing instance, we derive a unary bracketing instance b, \u03c4 (c i..k ) , ignoring the subtrees \u03c4 (c i..j ) and \u03c4 (c j+1..k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "Let n be the number of words of c. If we extract all potential bracketing instances, there will be o(n 2 ) unary instances and o(n 3 ) binary instances. To keep the number of bracketing instances tractable, we only record 4 representative bracketing instances for each index j: 1) the bracketable instance with the minimal \u03c4 (c i..k ), 2) the bracketable instance with the maximal \u03c4 (c i..k ), 3) the unbracketable instance with the minimal \u03c4 (c i..k ), and 4) the unbracketable instance with the maximal \u03c4 (c i..k ). Figure 1 shows the algorithm to extract bracketing instances. Line 3-11 find all potential bracketing instances for each (i, j, k) \u2208 c but only keep 4 bracketing instances for each index j: two minimal and two maximal instances. This algorithm learns binary bracketing instances, from which we can derive unary bracketing instances. 1: Input: sentence pair (c, e), the parse tree T of c and the word alignment W between c and e 2:", "cite_spans": [], "ref_spans": [ { "start": 518, "end": 526, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": ":= \u2205 3: for each (i, j, k) \u2208 c do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "if There exist a target phrase eu..v aligned to ci..j and ep..q aligned to c j+1..k then 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "Get \u03c4 (ci..j), \u03c4 (c j+1..k ), and \u03c4 (c i..k ) 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "Determine b according to the relationship between eu..v and ep..q 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "if \u03c4 (c i..k ) is currently maximal or minimal then 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": "Update bracketing instances for index j 9: end if 10: end if 11: end for 12: for each j \u2208 c do 13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", "sec_num": "2" }, { "text": ":= \u222a {bracketing instances from j} 14: end for 15: Output: bracketing instances 3 The Syntax-Driven Bracketing Model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances", 
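"sec_num": "2" }, { "text": "To make the procedure concrete, here is a minimal Python sketch of the extraction loop in Figure 1. This is an illustration under our own assumptions rather than the original implementation: we assume W is a set of (m, n) source-target alignment links and tau(i, k) is a user-supplied function returning the minimal subtree covering c_i..c_k.

# Sketch of the bracketing-instance extraction of Figure 1 (illustrative).
# Assumed inputs: c, a list of source words; W, a set of (m, n) alignment
# links; tau, a function mapping a source span (i, k) to the minimal
# covering subtree in the source parse tree T.

def target_span(W, i, j):
    # Return (u, v) if c[i..j] aligns to a consistent target span, else None.
    links = [n for (m, n) in W if i <= m <= j]
    if not links:
        return None
    u, v = min(links), max(links)
    # Consistency: no target word inside [u, v] may align outside [i, j].
    if any(u <= n <= v and not (i <= m <= j) for (m, n) in W):
        return None
    return u, v

def extract_instances(c, W, tau):
    aligned = {n for (_, n) in W}
    best = {}  # (j, b, which) -> (ranking key, instance)
    for i in range(len(c)):
        for j in range(i, len(c) - 1):
            for k in range(j + 1, len(c)):
                s1 = target_span(W, i, j)
                s2 = target_span(W, j + 1, k)
                if s1 is None or s2 is None:
                    continue  # conditions (1)-(2) cannot be satisfied
                (u, v), (p, q) = sorted([s1, s2])
                # b = bracketable iff the two target phrases are adjacent
                # or separated only by null-aligned target words.
                b = all(n not in aligned for n in range(v + 1, p))
                inst = (b, tau(i, j), tau(j + 1, k), tau(i, k))
                size = k - i + 1  # proxy for the size of tau(c_i..k)
                # Keep only the minimal and maximal instance per (j, b).
                for which, key in (('min', -size), ('max', size)):
                    cur = best.get((j, b, which))
                    if cur is None or key > cur[0]:
                        best[(j, b, which)] = (key, inst)
    return [inst for (_, inst) in best.values()]

A unary instance is then derived from a binary one by dropping the two inner subtrees from each extracted tuple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Acquisition of Bracketing Instances",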
"sec_num": "2" }, { "text": "Our interest is to automatically detect phrase bracketing using rich contextual information. We consider this task as a binary-class classification problem: whether the current source phrase s is bracketable (b) within particular syntactic contexts (\u03c4 (s)). If two neighboring sub-phrases s 1 and s 2 are given, we can use more inner syntactic contexts to complete this binary classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "We construct the syntax-driven bracketing model within the maximum entropy framework. A unary SDB model is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "P U niSDB (b|\u03c4 (s), T ) = exp( i \u03b8 i h i (b, \u03c4 (s), T ) b exp( i \u03b8 i h i (b, \u03c4 (s), T ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "where h i \u2208 {0, 1} is a binary feature function which we will describe in the next subsection, and \u03b8 i is the weight of h i . Similarly, a binary SDB model is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "P BiSDB (b|\u03c4 (s 1 ), \u03c4 (s 2 ), \u03c4 (s), T ) = exp( i \u03b8 i h i (b, \u03c4 (s 1 ), \u03c4 (s 2 ), \u03c4 (s), T ) b exp( i \u03b8 i h i (b, \u03c4 (s 1 ), \u03c4 (s 2 ), \u03c4 (s), T ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "The most important advantage of ME-based SDB model is its capacity of incorporating more fine-grained contextual features besides the binary feature that detects constituent boundary violation or matching. By employing these features, we can investigate the value of various syntactic constraints in phrase translation. x i a n c h a n g s c e n e N N N N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "N P V P A S V V A D N N A D V P V P N P I P s s 1 s 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "Figure 2: Illustration of syntax-driven features used in SDB. Here we only show the features for the source phrase s. The triangle, rounded rectangle and rectangle denote the rule feature, path feature and constituent boundary matching feature respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.1" }, { "text": "Let s be the source phrase in question, s 1 and s 2 be the two neighboring sub-phrases. \u03c3(.) is the root node of \u03c4 (.). The SDB model exploits various syntactic features as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Driven Features", "sec_num": "3.2" }, { "text": "We use the CFG rules of \u03c3(s), \u03c3(s 1 ) and \u03c3(s 2 ) as features. These features capture syntactic \"horizontal context\" which demonstrates the expansion trend of the source phrase s, s 1 and s 2 on the parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Rule Features (RF)", "sec_num": null }, { "text": "In figure 2, the CFG rule \"ADVP\u2192AD\", \"VP\u2192VV AS NP\", and \"VP\u2192ADVP VP\" are used as features for s 1 , s 2 and s respectively. Let's revisit the Figure 2 . The source phrase s 1 exactly matches the constituent ADVP, therefore CBMF is \"ADVP-M\". 
The source phrase s 2 exactly spans two sub-trees VV and AS of VP, therefore CBMF is \"VP-I\". Finally, the source phrase s crosses the boundaries of the lower VP on the right, therefore CBMF is \"VP-RC\".", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Rule Features (RF)", "sec_num": null }, { "text": "We integrate the SDB model into phrase-based SMT to help the decoder perform syntax-driven phrase translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Integration of the SDB Model into Phrase-Based SMT", "sec_num": "3.3" }, { "text": "In particular, we add a new feature into the log-linear translation model: P SDB (b|T, \u03c4 (.)). This feature is computed by the SDB model described in equation (3) or equation (4), which estimates the probability that a source span is translated as a unit within particular syntactic contexts. If a source span can be translated as a unit, the feature will give a higher probability even if this span violates the boundaries of a constituent. Otherwise, a lower probability is given. Through this additional feature, we want the decoder to prefer hypotheses that translate source spans which can be translated as a unit, and to avoid translating spans which become discontinuous after translation. The weight of this new feature is tuned via MERT, which measures the extent to which this feature should be trusted. In this paper, we implement the SDB model in a state-of-the-art phrase-based system which adapts a binary bracketing transduction grammar (BTG) (Wu, 1997) to phrase translation and reordering, as described in (Xiong et al., 2006). Whenever a BTG merging rule (", "cite_spans": [ { "start": 952, "end": 962, "text": "(Wu, 1997)", "ref_id": "BIBREF13" }, { "start": 1014, "end": 1034, "text": "(Xiong et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "The Integration of the SDB Model into Phrase-Based SMT", "sec_num": "3.3" }, { "text": "s \u2192 [s 1 s 2 ] or s \u2192 \u27e8s 1 s 2 \u27e9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Integration of the SDB Model into Phrase-Based SMT", "sec_num": "3.3" }, { "text": "is used, the SDB model gives a probability to the span s covered by the rule, which estimates the extent to which the span is bracketable. For the unary SDB model, we only consider the features from \u03c4 (s). For the binary SDB model, we use all features from \u03c4 (s 1 ), \u03c4 (s 2 ) and \u03c4 (s) since the binary SDB model is naturally suited to the binary BTG rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Integration of the SDB Model into Phrase-Based SMT", "sec_num": "3.3" }, { "text": "The SDB model, however, is not limited to phrase-based SMT using BTG rules. Since it is applied to one source span at a time, any other hierarchical phrase-based or syntax-based system that translates source spans recursively or linearly can adopt the SDB model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Integration of the SDB Model into Phrase-Based SMT", "sec_num": "3.3" }, { "text": "We carried out the MT experiments on Chinese-to-English translation, using (Xiong et al., 2006)'s system as our baseline system. We modified the baseline decoder to incorporate our SDB models as described in section 3.3. 
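The modification itself is small: one extra log-linear feature fires each time the decoder applies a BTG merge rule. The following is a minimal sketch of this hook (our illustration only, with an assumed weight dictionary theta produced by the ME trainer; it is not the actual decoder code):

import math

def sdb_prob(theta, features, b='bracketable'):
    # P(b | context) under the binary MaxEnt model of equations (3)-(4);
    # theta maps (feature, class) pairs to trained weights.
    classes = ('bracketable', 'unbracketable')
    scores = {c: sum(theta.get((f, c), 0.0) for f in features) for c in classes}
    z = sum(math.exp(s) for s in scores.values())
    return math.exp(scores[b]) / z

def score_merge(hypo_score, lambda_sdb, theta, features):
    # Fired once per merge rule s -> [s1 s2] or s -> <s1 s2>: add the
    # weighted log of P_SDB for the merged span s to the running score.
    return hypo_score + lambda_sdb * math.log(sdb_prob(theta, features))

Here features stands for the syntax-driven feature strings of section 3.2, extracted from \u03c4 (s 1 ), \u03c4 (s 2 ) and \u03c4 (s). 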
In order to compare with Marton and Resnik's approach, we also adapted the baseline decoder to their XP+ feature.", "cite_spans": [ { "start": 74, "end": 94, "text": "(Xiong et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In order to obtain syntactic trees for SDB models and XP+, we parsed source sentences using a lexicalized PCFG parser (Xiong et al., 2005) . The parser was trained on the Penn Chinese Treebank with an F1 score of 79.4%.", "cite_spans": [ { "start": 118, "end": 138, "text": "(Xiong et al., 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "All translation models were trained on the FBIS corpus. We removed 15,250 sentences, for which the Chinese parser failed to produce syntactic parse trees. To obtain word-level alignments, we ran GIZA++ (Och and Ney, 2000) on the remaining corpus in both directions, and applied the \"grow-diag-final\" refinement rule (Koehn et al., 2005) to produce the final many-to-many word alignments. We built our four-gram language model using Xinhua section of the English Gigaword corpus (181.1M words) with the SRILM toolkit (Stolcke, 2002) .", "cite_spans": [ { "start": 202, "end": 221, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF9" }, { "start": 316, "end": 336, "text": "(Koehn et al., 2005)", "ref_id": "BIBREF6" }, { "start": 516, "end": 531, "text": "(Stolcke, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "For the efficiency of MERT, we built our development set (580 sentences) using sentences not exceeding 50 characters from the NIST MT-02 set. We evaluated all models on the NIST MT-05 set using case-sensitive BLEU-4. Statistical significance in BLEU score differences was tested by paired bootstrap re-sampling (Koehn, 2004) .", "cite_spans": [ { "start": 311, "end": 324, "text": "(Koehn, 2004)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We extracted 6.55M bracketing instances from our training corpus using the algorithm shown in figure 1, which contains 4.67M bracketable instances and 1.89M unbracketable instances. From extracted bracketing instances we generated syntaxdriven features, which include 73,480 rule features, 153,614 path features and 336 constituent boundary matching features. To tune weights of features, we ran the MaxEnt toolkit (Zhang, 2004) with iteration number being set to 100 and Gaussian prior to 1 to avoid overfitting.", "cite_spans": [ { "start": 415, "end": 428, "text": "(Zhang, 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "SDB Training", "sec_num": "4.2" }, { "text": "We ran the MERT module with our decoders to tune the feature weights. The values are shown in Table 1 . The P SDB receives the largest feature weight, 0.29 for UniSDB and 0.38 for BiSDB, indicating that the SDB models exert a nontrivial impact on decoder.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In Table 2 , we present our results. Like (Marton and Resnik, 2008), we find that the XP+ feature obtains a significant improvement of 1.08 BLEU over the baseline. 
However, using all syntax-driven features described in section 3.2, our SDB models achieve larger improvements of up to 1.67 BLEU. The binary SDB (BiSDB) model significantly outperforms Marton and Resnik's XP+ by an absolute improvement of 0.59 BLEU (2% relative). It is also marginally better than the unary SDB model.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Table 2: Results on the test set. **: significantly better than baseline (p < 0.01). + or ++: significantly better than Marton and Resnik's XP+ (p < 0.05 or p < 0.01, respectively).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "In this section, we present an analysis of how the SDB model influences phrase translation by studying the effects of syntax-driven features and differences in 1-best translation outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "We conducted further experiments using individual syntax-driven features and their combinations. Table 3 shows the results, from which we have the following key observations.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effects of Syntax-Driven Features", "sec_num": "5.1" }, { "text": "\u2022 The constituent boundary matching feature (CBMF) is a very important feature, which by itself achieves a significant improvement over the baseline (up to 1.13 BLEU). Both our CBMF and Marton and Resnik's XP+ feature focus on the relationship between a source phrase and a constituent. Their significant contribution to the improvement implies that this relationship is an important syntactic constraint for phrase translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects of Syntax-Driven Features", "sec_num": "5.1" }, { "text": "\u2022 Adding more features, such as the path features and rule features, achieves further improvements. This demonstrates the advantage of using more syntactic constraints in the SDB model, compared with Marton and Resnik's XP+.
Table 3: Results of different feature sets. * or **: significantly better than baseline (p < 0.05 or p < 0.01, respectively). + or ++: significantly better than XP+ (p < 0.05 or p < 0.01, respectively). @-: almost significantly better than its UniSDB counterpart (p < 0.075). 
@ or @@: significantly better than its UniSDB counterpart (p < 0.05 or p < 0.01, respectively).", "cite_spans": [], "ref_spans": [ { "start": 219, "end": 226, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effects of Syntax-Driven Features", "sec_num": "5.1" }, { "text": "\u2022 In most cases, the binary SDB is consistently significantly better than the unary SDB, suggesting that inner contexts are useful in predicting phrase bracketing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects of Syntax-Driven Features", "sec_num": "5.1" }, { "text": "We want to further study what happens after we integrate the constraint features (our SDB model and Marton and Resnik's XP+) into decoding. To provide such insights, we introduce a new statistical metric which measures the proportion of syntactic constituents 4 whose boundaries are consistently matched by the decoder during translation. This proportion, which we call the consistent constituent matching (CCM) rate, reflects the extent to which the translation output respects the source parse tree. In order to calculate this rate, we output translation results as well as the phrase alignments found by the decoders. Then for each multi-branch constituent c_i^j spanning from i to j on the source side, we check the following conditions.", "cite_spans": [ { "start": 101, "end": 125, "text": "Marton and Resnik's XP+)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "\u2022 If its boundaries i and j are aligned to phrase segmentation boundaries found by the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "\u2022 If all target phrases inside c_i^j 's target span 5 are aligned to the source phrases within c_i^j and not to the phrases outside c_i^j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "If both conditions are satisfied, the constituent c_i^j is consistently matched by the decoder. Table 4 shows the consistent constituent matching rates. Without using any source-side syntactic information, the baseline obtains a low CCM rate of 43.53%, indicating that the baseline decoder violates the source parse tree more than it respects the source structure. The translation output described in section 1 is actually generated by the baseline decoder, where the second NP phrase boundaries are violated.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "By integrating syntactic constraints into decoding, we can see that both Marton and Resnik's XP+ and our SDB model achieve a significantly higher constituent matching rate, suggesting that they are more likely to respect the source structure. The examples in Table 5 show that the decoder is able to generate better translations if it is faithful to the source parse tree by using syntactic constraints. 4 We only consider multi-branch constituents. 
5 Given a phrase alignment P = {c_f^g \u2194 e_p^q }: if the segmentation within c_i^j defined by P is c_i^j = c_{i_1}^{j_1} ... c_{i_k}^{j_k} and c_{i_r}^{j_r} \u2194 e_{u_r}^{v_r} \u2208 P, 1 \u2264 r \u2264 k, we define the target span of c_i^j as a pair where the first element is min(u_1 ... u_k ) and the second element is max(v_1 ... v_k ), similar to (Fox, 2002).", "cite_spans": [ { "start": 338, "end": 339, "text": "4", "ref_id": null }, { "start": 177, "end": 188, "text": "(Fox, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "CCM Rates (%)
System <6 6-10 11-15 16-20 >20
XP+ 75.2 70.9 71.0 76.2 82.2
BiSDB 69.3 74.7 74.2 80.0 85.6
Table 6: Consistent constituent matching rates for structures with different spans.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "We further conducted a deeper comparison of the translation outputs of BiSDB vs. XP+ with regard to constituent matching and violation. We found two significant differences that may explain why our BiSDB outperforms XP+. First, although the overall CCM rate of XP+ is higher than that of BiSDB, BiSDB obtains higher CCM rates for long-span structures than XP+ does, as shown in Table 6. Generally speaking, violations of long-span constituents have a more negative impact on performance than short-span violations if these violations are toxic. This explains why BiSDB achieves relatively higher precision improvements for higher n-grams over XP+, as shown in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 445, "end": 452, "text": "Table 6", "ref_id": null }, { "start": 728, "end": 735, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Beyond BLEU", "sec_num": "5.2" }, { "text": "Second, compared with XP+, which only punishes constituent boundary violations, our SDB model is able to encourage violations if these violations are made on bracketable phrases. We observed in many cases that by violating constituent boundaries BiSDB produces better translations than XP+, which on the contrary matches these boundaries. Still consider the example shown in section 1. The following translations are found by XP+ and BiSDB respectively. XP+ here matches all constituent boundaries while BiSDB violates the PP constituent to translate the non-syntactic phrase \" \". Table 7 shows more examples. From these examples, we clearly see that appropriate violations are helpful and even necessary for generating better translations. 
By allowing appropriate violations to translate nonsyntactic phrases according to particular syntactic contexts, our SDB model better inherits the strength of phrase-based approach than XP+.", "cite_spans": [], "ref_spans": [ { "start": 584, "end": 591, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "and c jr", "sec_num": null }, { "text": "This word can be translated into \"section\", \"festival\", and \"knot\" in different contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The three scenarios that we define here are similar to those in(L\u00fc et al., 2002).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In this paper, we presented a syntax-driven bracketing model that automatically learns bracketing knowledge from training corpus. With this knowledge, the model is able to predict whether source phrases can be translated together, regardless of matching or crossing syntactic constituents. We integrate this model into phrase-based SMT to increase its capacity of linguistically motivated translation without undermining its strengths. Experiments show that our model achieves substantial improvements over baseline and significantly outperforms 's XP+.Compared with previous constituency feature, our SDB model is capable of incorporating more syntactic constraints, and rewarding necessary violations of the source parse tree. find that their constituent constraints are sensitive to language pairs. In the future work, we will use other language pairs to test our models so that we could know whether our method is language-independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cohesive Phrase-based Decoding for Statistical Machine Translation", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry. 2008. Cohesive Phrase-based Decoding for Statistical Machine Translation. In Proceedings of ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Hierarchical Phrase-based Model for Statistical Machine Translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Pro- ceedings of ACL, pages 263-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Online Large-Margin Training of Syntactic and Structural Translation Features", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Yuval Marton and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features. 
In Proceedings of EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Phrasal Cohesion and Statistical Machine Translation", "authors": [ { "first": "Heidi", "middle": [ "J" ], "last": "Fox", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidi J. Fox 2002. Phrasal Cohesion and Statistical Machine Translation. In Proceedings of EMNLP, pages 304-311.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical Phrase-based Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Joseph" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. In Pro- ceedings of HLT-NAACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical Significance Tests for Machine Translation Evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of EMNLP.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Alexandra", "middle": [ "Birch" ], "last": "Mayne", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" } ], "year": 2005, "venue": "International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne and David Talbot. 2005. Edinburgh System Descrip- tion for the 2005 IWSLT Speech Translation Eval- uation. In International Workshop on Spoken Lan- guage Translation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning Chinese Bracketing Knowledge Based on a Bilingual Language Model", "authors": [ { "first": "Yajuan", "middle": [], "last": "L\u00fc", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tiezhun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Muyun", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yajuan L\u00fc, Sheng Li, Tiezhun Zhao and Muyun Yang. 2002. Learning Chinese Bracketing Knowledge Based on a Bilingual Language Model. 
In Proceed- ings of COLING.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Soft Syntactic Constraints for Hierarchical Phrase-Based Translation", "authors": [ { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuval Marton and Philip Resnik. 2008. Soft Syntactic Constraints for Hierarchical Phrase-Based Transla- tion. In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of ACL 2000.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Minimum Error Rate Training in Statistical Machine Translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL 2003.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a Method for Automatically Evaluation of Machine Translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a Method for Automatically Evaluation of Machine Translation. In Proceedings of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SRILM -an Extensible Language Modeling Toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of International Conference on Spoken Language Processing", "volume": "2", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM -an Extensible Lan- guage Modeling Toolkit. In Proceedings of Interna- tional Conference on Spoken Language Processing, volume 2, pages 901-904.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Cor- pora. 
Computational Linguistics, 23(3):377-403.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Parsing the Penn Chinese Treebank with Semantic Knowledge", "authors": [ { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Shuanglong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yueliang", "middle": [], "last": "Qian", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, Yueliang Qian. 2005. Parsing the Penn Chinese Treebank with Semantic Knowledge. In Proceed- ings of IJCNLP, Jeju Island, Korea.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation", "authors": [ { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL-COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyi Xiong, Qun Liu and Shouxun Lin. 2006. Max- imum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of ACL-COLING 2006.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Linguistically Annotated BTG for Statistical Machine Translation", "authors": [ { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Aiti", "middle": [], "last": "Aw", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2008. Linguistically Annotated BTG for Statistical Machine Translation. In Proceedings of COLING 2008.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Maximum Entropy Modeling Tooklkit for Python and C++", "authors": [ { "first": "Le", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Zhang. 2004. Maximum Entropy Model- ing Tooklkit for Python and C++. Available at http://homepages.inf.ed.ac.uk/s0450736 /maxent toolkit.html.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Bracketing Instances Extraction Algorithm.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Path Features (PF) The tree path \u03c3(s 1 )..\u03c3(s) connecting \u03c3(s 1 ) and \u03c3(s), \u03c3(s 2 )..\u03c3(s) connecting \u03c3(s 2 ) and \u03c3(s), and \u03c3(s)..\u03c1 connecting \u03c3(s) and the root node \u03c1 of the whole parse tree are used as features. These features provide syntactic \"vertical context\" which shows the generation history of the source phrases on the parse tree. Three scenarios of the relationship between phrase boundaries and constituent boundaries. The gray circles are constituent boundaries while the black circles are phrase boundaries.In figure 2, the path features are \"ADVP VP\", \"VP VP\" and \"VP IP\" for s1 , s 2 and s respectively. 
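As a concrete illustration, the rule and path feature strings can be read off the source parse tree roughly as follows (a sketch under our own assumptions about the tree interface; node.label, node.children and node.parent are not from the paper):

def rule_feature(node):
    # Horizontal context: the CFG rule expanding the node, e.g. VP->ADVP VP.
    return node.label + '->' + ' '.join(ch.label for ch in node.children)

def path_feature(node, ancestor):
    # Vertical context: the chain of labels from the node up to the given
    # ancestor, e.g. 'ADVP VP' for the path sigma(s1)..sigma(s).
    labels = []
    while node is not None:
        labels.append(node.label)
        if node is ancestor:
            break
        node = node.parent
    return ' '.join(labels)
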
Constituent Boundary Matching Features (CBMF) These features capture the relationship between a source phrase s and \u03c4 (s) or \u03c4 (s)'s subtrees. There are three different scenarios 3 : 1) exact match, where s exactly matches the boundaries of \u03c4 (s) (figure 3(a)), 2) inside match, where s exactly spans a sequence of \u03c4 (s)'s subtrees (figure 3(b)), and 3) crossing, where s crosses the boundaries of one or two subtrees of \u03c4 (s) (figure 3(c)). In the case of 1) or 2), we set the value of this feature to \u03c3(s)-M or \u03c3(s)-I respectively. When s crosses the boundaries of the sub-constituent l on s's left, we set the value to \u03c3( l )-LC; if s crosses the boundaries of the sub-constituent r on s's right, we set the value to \u03c3( r )-RC; if both, we set the value to \u03c3( l )-LC-\u03c3( r )-RC.", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "content": "
System   | P(c|e) | P(e|c) | Pw(c|e) | Pw(e|c) | Plm(e) | Pr(e) | Word | Phr   | XP+   | PSDB
Baseline | 0.041  | 0.030  | 0.006   | 0.065   | 0.20   | 0.35  | 0.19 | -0.12 | -     | -
XP+      | 0.002  | 0.049  | 0.046   | 0.044   | 0.17   | 0.29  | 0.16 | 0.12  | -0.12 | -
UniSDB   | 0.023  | 0.051  | 0.055   | 0.012   | 0.21   | 0.20  | 0.12 | 0.04  | -     | 0.29
BiSDB    | 0.016  | 0.032  | 0.027   | 0.013   | 0.13   | 0.23  | 0.08 | 0.09  | -     | 0.38

BLEU-4 and n-gram precisions (Table 2)
System   | BLEU-4     | 1    | 2    | 3    | 4    | 5     | 6     | 7     | 8
Baseline | 0.2612     | 0.71 | 0.36 | 0.18 | 0.10 | 0.054 | 0.030 | 0.016 | 0.009
XP+      | 0.2720**   | 0.72 | 0.37 | 0.19 | 0.11 | 0.060 | 0.035 | 0.021 | 0.012
UniSDB   | 0.2762**+  | 0.72 | 0.37 | 0.20 | 0.11 | 0.062 | 0.035 | 0.020 | 0.011
BiSDB    | 0.2779**++ | 0.72 | 0.37 | 0.20 | 0.11 | 0.065 | 0.038 | 0.022 | 0.014
", "text": "Feature weights obtained by MERT on the development set. The first 4 features are the phrase translation probabilities in both directions and the lexical translation probabilities in both directions. P lm = language model; P r = MaxEnt-based reordering model; Word = word bonus; Phr = phrase bonus.", "num": null, "html": null } } } }