{
"paper_id": "N10-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:50:31.963675Z"
},
"title": "Unsupervised Syntactic Alignment with Inversion Transduction Grammars",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California at Berkeley",
"location": {}
},
"email": "adpauls@cs.berkeley.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California at Berkeley",
"location": {}
},
"email": "klein@cs.berkeley.edu"
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": "",
"affiliation": {},
"email": "chiang@isi.edu"
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": "",
"affiliation": {},
"email": "knight@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Syntactic machine translation systems currently use word alignments to infer syntactic correspondences between the source and target languages. Instead, we propose an unsupervised ITG alignment model that directly aligns syntactic structures. Our model aligns spans in a source sentence to nodes in a target parse tree. We show that our model produces syntactically consistent analyses where possible, while being robust in the face of syntactic divergence. Alignment quality and end-to-end translation experiments demonstrate that this consistency yields higher quality alignments than our baseline.",
"pdf_parse": {
"paper_id": "N10-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Syntactic machine translation systems currently use word alignments to infer syntactic correspondences between the source and target languages. Instead, we propose an unsupervised ITG alignment model that directly aligns syntactic structures. Our model aligns spans in a source sentence to nodes in a target parse tree. We show that our model produces syntactically consistent analyses where possible, while being robust in the face of syntactic divergence. Alignment quality and end-to-end translation experiments demonstrate that this consistency yields higher quality alignments than our baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Syntactic machine translation has advanced significantly in recent years, and multiple variants currently achieve state-of-the-art translation quality. Many of these systems exploit linguistically-derived syntactic information either on the target side (Galley et al., 2006) , the source side , or both . Still others induce their syntax from the data (Chiang, 2005) . Despite differences in detail, the vast majority of syntactic methods share a critical dependence on word alignments. In particular, they infer syntactic correspondences between the source and target languages through word alignment patterns, sometimes in combination with constraints from parser outputs.",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Galley et al., 2006)",
"ref_id": "BIBREF13"
},
{
"start": 352,
"end": 366,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, word alignments are not perfect indicators of syntactic alignment, and syntactic systems are very sensitive to word alignment behavior. Even a single spurious word alignment can invalidate a large number of otherwise extractable rules, while unaligned words can result in an exponentially large set of extractable rules to choose from. Researchers have worked to incorporate syntactic information into word alignments, resulting in improvements to both alignment quality (Cherry and Lin, 2006; DeNero and Klein, 2007) , and translation quality (May and Knight, 2007; Fossum et al., 2008) .",
"cite_spans": [
{
"start": 480,
"end": 502,
"text": "(Cherry and Lin, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 503,
"end": 526,
"text": "DeNero and Klein, 2007)",
"ref_id": "BIBREF8"
},
{
"start": 553,
"end": 575,
"text": "(May and Knight, 2007;",
"ref_id": "BIBREF25"
},
{
"start": 576,
"end": 596,
"text": "Fossum et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we remove the dependence on word alignments and instead directly model the syntactic correspondences in the data, in a manner broadly similar to Yamada and Knight (2001) . In particular, we propose an unsupervised model that aligns nodes of a parse tree (or forest) in one language to spans of a sentence in another. Our model is an instance of the inversion transduction grammar (ITG) formalism (Wu, 1997) , constrained in such a way that one side of the synchronous derivation respects a syntactic parse. Our model is best suited to systems which use source-or target-side trees only.",
"cite_spans": [
{
"start": 160,
"end": 184,
"text": "Yamada and Knight (2001)",
"ref_id": "BIBREF31"
},
{
"start": 411,
"end": 421,
"text": "(Wu, 1997)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The design of our model is such that, for divergent structures, a structurally integrated backoff to flatter word-level (or null) analyses is available. Therefore, our model is empirically robust to the case where syntactic divergence between languages prevents syntactically accurate ITG derivations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that, with appropriate pruning, our model can be efficiently trained on large parallel corpora. When compared to standard word-alignmentbacked baselines, our model produces more consistent analyses of parallel sentences, leading to high-count, high-quality transfer rules. End-toend translation experiments demonstrate that these higher quality rules improve translation quality by 1.0 BLEU over a word-alignment-backed baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model is intended for use in syntactic translation models which make use of syntactic parses on either the target (Galley et al., 2006) or source side Liu et al., 2006) . Our model's chief purpose is to align nodes in the syntactic parse in one language to spans in the other -an alignment we will refer to as a \"syntactic\" alignment. These alignments are employed by standard syntactic rule extraction algorithms, for example, the GHKM algorithm of Galley et al. (2004) . Following that work, we will assume parses are present in the target language, though our model applies in either direction. Currently, although syntactic systems make use of syntactic alignments, these alignments must be induced indirectly from word-level alignments. Previous work has discussed at length the poor interaction of word-alignments with syntactic rule extraction (DeNero and Klein, 2007; Fossum et al., 2008) . For completeness, we provide a brief example of this interaction, but for a more detailed discussion we refer the reader to these presentations.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Galley et al., 2006)",
"ref_id": "BIBREF13"
},
{
"start": 155,
"end": 172,
"text": "Liu et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 454,
"end": 474,
"text": "Galley et al. (2004)",
"ref_id": "BIBREF12"
},
{
"start": 855,
"end": 879,
"text": "(DeNero and Klein, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 880,
"end": 900,
"text": "Fossum et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Rule Extraction",
"sec_num": "2"
},
{
"text": "Syntactic systems begin rule extraction by first identifying, for each node in the target parse tree, a span of the foreign sentence which (1) contains every source word that aligns to a target word in the yield of the node and (2) contains no source words that align outside that yield. Only nodes for which a non-empty span satisfying (1) and (2) exists may form the root or leaf of a translation rule; for that reason, we will refer to these nodes as extractable nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Word Alignments",
"sec_num": "2.1"
},
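{
"text": "To make conditions (1) and (2) concrete, the following Python sketch (our own illustration, not code from the paper; the data layout and function name are assumptions) computes the source span of a tree node from a set of word alignment links, returning None when the node is not extractable.

# Sketch: decide whether a node is extractable given word alignment links.
# 'links' is a set of (src_idx, tgt_idx) pairs; the node's yield is the
# half-open target index range [t_lo, t_hi).
def source_span(links, t_lo, t_hi):
    inside = [s for (s, t) in links if t_lo <= t < t_hi]
    if not inside:
        return None  # empty source span: the node is not extractable
    s_lo, s_hi = min(inside), max(inside) + 1
    # Condition (2): no source word inside [s_lo, s_hi) may align outside the yield.
    for (s, t) in links:
        if s_lo <= s < s_hi and not (t_lo <= t < t_hi):
            return None
    return (s_lo, s_hi)

if __name__ == '__main__':
    links = {(0, 0), (1, 1), (2, 2), (1, 3)}  # (1, 3) is a spurious link
    print(source_span(links, 0, 2))  # None: source word 1 also aligns outside the yield
    print(source_span(links, 0, 4))  # (0, 3): extractable
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Word Alignments",
"sec_num": "2.1"
},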
{
"text": "Since extractable nodes are inferred based on word alignments, spurious word alignments can rule out otherwise desirable extraction points. For exam-ple, consider the alignment in Figure 1 . This alignment, produced by GIZA++ (Och and Ney, 2003) , contains 4 correct alignments (the filled circles), but incorrectly aligns the to the Chinese past tense marker \u4e86 (the hollow circle). This mistaken alignment produces the incorrect rule (DT \u2192 the ; \u4e86), and also blocks the extraction of (VBN \u2192 fallen ; \u51cf\u5c11 \u4e86).",
"cite_spans": [
{
"start": 226,
"end": 245,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Interaction with Word Alignments",
"sec_num": "2.1"
},
{
"text": "More high-level syntactic transfer rules are also ruled out, for example, the \"the insertion rule\" (NP \u2192 the NN 1 NN 2 ; NN 1 NN 2 ) and the high-level (S \u2192 NP 1 VP 2 ; NP 1 VP 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Word Alignments",
"sec_num": "2.1"
},
{
"text": "The most common approach to avoiding these problems is to inject knowledge about syntactic constraints into a word alignment model (Cherry and Lin, 2006; DeNero and Klein, 2007; Fossum et al., 2008) . 1 While syntactically aware, these models remain limited by the word alignment models that underly them.",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Cherry and Lin, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 154,
"end": 177,
"text": "DeNero and Klein, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 178,
"end": 198,
"text": "Fossum et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "Here, we describe a model which directly infers alignments of nodes in the target-language parse tree to spans of the source sentence. Formally, our model is an instance of a Synchronous Context-Free Grammar (see Chiang (2004) for a review), or SCFG, which generates an English (target) parse tree T and foreign (source) sentence f given a target sentence e. The generative process underlying this model produces a derivation d of SCFG rules, from which T and f can be read off; because we condition on e, the derivations produce e with probability 1. This model places a distribution over T and f given by",
"cite_spans": [
{
"start": 213,
"end": 226,
"text": "Chiang (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "p(T, f | e) = d p(d | e) = d r\u2208d p(r | e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "where the sum is over derivations d which yield T and f . The SCFG rules r come from one of 4 types, pictured in Table 1 . In general, because our model can generate English trees, it permits inference over forests. Although we will restrict ourselves to a single parse tree for our experiments, in this section, we discuss the more general case. The first rule type is the TERMINAL production, which rewrites a terminal symbol 2 E as its English word e and a (possibly empty) sequence of foreign words f t . Generally speaking, the majority of foreign words are generated using this rule. It is only when a straightforward word-to-word correspondence cannot be found that our model resorts to generating foreign words elsewhere.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "; \u56db UNARY A B f l B f r CD \u2192 FOUR ; FOUR \u4e2a BINARYMONO A B C f l B f m C f r NP \u2192 NN NN ; NN \u7684 NN BINARYINV A B C f l C f m B f r PP \u2192 IN NP ; \u5728 NP IN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "We can also rewrite a non-terminal symbol A using a UNARY production, which on the English side produces a single symbol B, and on the foreign side produces the symbol B, with sequences of words f l to its left and f r to its right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "Finally, there are two binary productions: BINA-RYMONO rewrites A with two non-terminals B and C on the English side, and the same non-terminals B and C in monotonic order on the foreign side, with sequences of words f l , f r , and f m to the left, right, and the middle. BINARYINV inverts the order in which the non-terminals B and C are written on the source side, allowing our model to capture a large subset of possible reorderings (Wu, 1997) .",
"cite_spans": [
{
"start": 437,
"end": 447,
"text": "(Wu, 1997)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "Derivations from this model have two key properties: first, the English side of a derivation is constrained to form a valid constituency parse, as is required in a syntax system with target-side syntax; and second, for each parse node in the English projection, there is exactly one (possibly empty) contiguous span of the foreign side which was generated from that non-terminal or one of its descendants. Identifying extractable nodes from a derivation is thus trivial: any node aligned to a non-empty foreign span is extractable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "In Figure 2 , we show a sample sentence pair frag-2 For notational convenience, we imagine that for each particular English word e, there is a special preterminal symbol E which produces it. These symbols E act like any other nonterminal in the grammar with respect to the parameterization in Section 3.1. To denote standard non-terminals, we will use A, B, and C. We connect each foreign terminal with a dashed line to the node in the English side of the synchronous derivation at which it is generated. The foreign span assigned to each English node is indicated with indices. All nodes with non-empty spans, shown in boldface, are extractable nodes. Bottom: The SCFG rules used in the derivation.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "ment as generated by our model. Our model correctly identifies that the English the aligns to nothing on the foreign side. Our model also effectively captures the one-to-many alignment 3 of elections to \u8bae \u4f1a \u9009\u4e3e. Finally, our model correctly analyzes the Chinese circumposition \u5728 . . . \u4e4b\u524d (before . . . ). In this construction, \u4e4b\u524d carries the meaning of \"before\", and thus correctly aligns to before, while \u5728 functions as a generic preposition, which our model handles by attaching it to the PP. This analysis permits the extraction of the general rule (PP \u2192 IN 1 NP 2 ; \u5728 NP 2 IN 1 ), and the more lexicalized (PP \u2192 before NP ; \u5728 NP \u4e4b\u524d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Syntactic Alignment Model",
"sec_num": "3"
},
{
"text": "In principle, our model could have one parameter for each instantiation r of a rule type. This model would have an unmanageable number of parameters, producing both computational and modeling issues -it is well known that unsupervised models with large numbers of parameters are prone to degenerate analyses of the data (DeNero et al., 2006) . One solution might be to apply an informed prior with a computationally tractable inference procedure (e.g. Cohn and Blunsom (2009) or Liu and Gildea (2009) ). We opt here for the simpler, statistically more robust solution of making independence assumptions to keep the number of parameters at a reasonable level. Concretely, we define the probability of the BI-NARYMONO rule, 4",
"cite_spans": [
{
"start": 320,
"end": 341,
"text": "(DeNero et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 452,
"end": 475,
"text": "Cohn and Blunsom (2009)",
"ref_id": "BIBREF7"
},
{
"start": 479,
"end": 500,
"text": "Liu and Gildea (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "p(r = A \u2192 B C; f l B f m C f r |A, e A )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "which conditions on the root of the rule A and the English yield e A , as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "p g (A \u2192 B C | A, e A ) \u2022 p inv (I | B, C)\u2022 p lef t (f l | A, e A )\u2022p mid (f m | A, e A )\u2022p right (f r | A, e A )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "In words, we assume that the rule probability decomposes into a monolingual PCFG grammar probability p g , an inversion probability p inv , and a probability of left, middle, and right word sequences p lef t , p mid , and p right . 5 Because we condition on e, the monolingual grammar probability p g must form a distribution which produces e with probability 1. 6 NARYMONO rule. For a parameterization of all rules, we refer the reader to Table 2 . 5 All parameters in our model are multinomial distributions. 6 A simple case of such a distribution is one which places all of its mass on a single tree. More complex distributions can be obtained by conditioning an arbitrary PCFG on e (Goodman, 1998) .",
"cite_spans": [
{
"start": 450,
"end": 451,
"text": "5",
"ref_id": null
},
{
"start": 511,
"end": 512,
"text": "6",
"ref_id": null
},
{
"start": 686,
"end": 701,
"text": "(Goodman, 1998)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 440,
"end": 447,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "We further assume that the probability of producing a foreign word sequence f l decomposes as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "p lef t (f l | A, e A ) = p l (|f l | = m | A) m j=1 p(f j | A, e A )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "where m is the length of the sequence f l . The parameter p l is a left length distribution. The probabilities p mid , p right , decompose in the same way, except substituting a separate length distribution p m and p r for p l . For the TERMINAL rule, we emit f t with a similarly decomposed distribution p term using length distribution p w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "We define the probability of generating a foreign word f j as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "p(f j | A, e A ) = i\u2208e A 1 | e A | p t (f j | e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
{
"text": "with i \u2208 e A denoting an index ranging over the indices of the English words contained in e A . The reader may recognize the above expressions as the probability assigned by IBM Model 1 (Brown et al., 1993) of generating the words f l given the words e A , with one important difference -the length m of the foreign sentence is often not modeled, so the term p l (|f l | = m | A) is set to a constant and ignored. Parameterizing this length allows our model to effectively control the number of words produced at different levels of the derivation. It is worth noting how each parameter affects the model's behavior. The p t distribution is a standard \"translation\" table, familiar from the IBM Models. The p inv distribution is a \"distortion\" parameter, and models the likelihood of inverting non-terminals B and C. This parameter can capture, for example, the high likelihood that prepositions IN and noun phrases NP often invert in Chinese due to its use of postpositions. The non-terminal length distributions p l , p m , and p r model the probability of \"backing off\" and emitting foreign words at non-terminals when a more refined analysis cannot be found. If these parameters place high mass on 0 length word sequences, this heavily penalizes this backoff behaviour. For the TERMINAL rule, the length distribution p w parameterizes the number of words produced for a particular English word e, functioning similarly to the \"fertilities\" employed by IBM Models 3 and 4 (Brown et al., 1993) . This allows us to model, for example, the tendency of English determiners the and a translate to nothing in the Chinese, and of English names to align to multiple Chinese words. In general, we expect an English word to usually align to one Chinese word, and so we place a weak Dirichlet prior on on the p e distribution which puts extra mass on 1-length word sequences. This is helpful for avoiding the \"garbage collection\" (Moore, 2004) problem for rare words.",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 1475,
"end": 1495,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 1922,
"end": 1935,
"text": "(Moore, 2004)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},
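{
"text": "As a concrete rendering of the factored parameterization above, the sketch below (our own illustration; the parameter tables, smoothing constant, and function names are assumptions, not the authors' code) evaluates p_left(f_l | A, e_A) as a length probability p_l times a product of Model-1-style per-word emissions p(f_j | A, e_A).

# Sketch of p_left(f_l | A, e_A) = p_l(|f_l| = m | A) * prod_j p(f_j | A, e_A),
# where p(f_j | A, e_A) = (1 / |e_A|) * sum_i p_t(f_j | e_i).
def word_prob(f_word, e_yield, p_t):
    return sum(p_t.get((f_word, e_word), 1e-9) for e_word in e_yield) / len(e_yield)

def p_left(f_seq, label, e_yield, p_len, p_t):
    prob = p_len.get((label, len(f_seq)), 1e-9)  # left length distribution p_l
    for f_word in f_seq:
        prob *= word_prob(f_word, e_yield, p_t)
    return prob

if __name__ == '__main__':
    # Hypothetical toy parameters.
    p_t = {('zai', 'before'): 0.4}
    p_len = {('PP', 0): 0.7, ('PP', 1): 0.2}
    print(p_left(['zai'], 'PP', ['before', 'elections'], p_len, p_t))  # about 0.2 * 0.2 = 0.04
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.1"
},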
{
"text": "There are often foreign words that do not correspond well to any English word, which our model must also handle. We elected for a simple augmentation to our model to account for these words. When generating foreign word sequences f at a non-terminal (i.e. via the UNARY or BINARY productions), we also allow for the production of foreign words from the non-terminal symbol A. We modify p(f j | e A ) from the previous section to allow production of f j directly from the non-terminal 7 A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Non-Terminal Labels",
"sec_num": "3.2"
},
{
"text": "p(f j | e A ) = p nt \u2022 p(f j | A) + (1 \u2212 p nt ) \u2022 i\u2208e A 1 |e A | p t (f j | e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Non-Terminal Labels",
"sec_num": "3.2"
},
{
"text": "where p nt is a global binomial parameter which controls how often such alignments are made. This necessitates the inclusion of parameters like p t ( \u7684 | NP) into our translation table. Generally, these parameters do not contain much information, but rather function like a traditional NULL rooted at some position in the tree. However, in some cases, the particular annotation used by the Penn Treebank (Marcus et al., 1993 ) (and hence most parsers) allows for some interesting parameters to be learned. For example, we found that our aligner often matched the Chinese word \u4e86, which marks the past tense (among other things), to the preterminals VBD and VBN, which denote the English simple past and perfect tense. Additionally, Chinese measure words like \u4e2a and \u540d often align to the CD (numeral) preterminal. These generalizations can be quite useful -where a particular number might predict a measure word quite poorly, the generalization that measure words co-occur with the CD tag is very robust. 7 For terminal symbols E, this production is not possible.",
"cite_spans": [
{
"start": 404,
"end": 424,
"text": "(Marcus et al., 1993",
"ref_id": "BIBREF24"
},
{
"start": 1002,
"end": 1003,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Non-Terminal Labels",
"sec_num": "3.2"
},
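{
"text": "A minimal sketch (our own, with hypothetical parameter tables) of the modified emission above, mixing generation directly from the non-terminal label with the Model-1-style average over the English yield.

# Sketch of p(f_j | e_A) = p_nt * p(f_j | A) + (1 - p_nt) * (1/|e_A|) * sum_i p_t(f_j | e_i).
def mixed_word_prob(f_word, label, e_yield, p_nt, p_label, p_t):
    direct = p_label.get((f_word, label), 1e-9)  # p(f_j | A): emission from the label
    model1 = sum(p_t.get((f_word, e), 1e-9) for e in e_yield) / len(e_yield)
    return p_nt * direct + (1.0 - p_nt) * model1

if __name__ == '__main__':
    p_label = {('le', 'VBN'): 0.3}  # e.g. a past tense marker generated from VBN
    p_t = {('le', 'fallen'): 0.05}
    print(mixed_word_prob('le', 'VBN', ['fallen'], 0.2, p_label, p_t))  # 0.2*0.3 + 0.8*0.05 = 0.1
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Non-Terminal Labels",
"sec_num": "3.2"
},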
{
"text": "The generative process which describes our model contains a class of grammars larger than the computationally efficient class of ITG grammars. Fortunately, the parameterization described above not only reduces the number of parameters to a manageable level, but also introduces independence assumptions which permit synchronous binarization of our grammar. Any SCFG that can be synchronously binarized is an ITG, meaning that our parameterization permits efficient inference algorithms which we will make use of in the next section. Although several binarizations are possible, we give one such binarization and its associated probabilities in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 644,
"end": 651,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Membership in ITG",
"sec_num": "3.3"
},
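{
"text": "For intuition, the sketch below (our own rendering, not necessarily identical to the paper's Table 2) shows one way a BINARYMONO rule A \u2192 B C ; f_l B f_m C f_r can be broken into binary and unary steps via virtual symbols, so that each step emits at most one foreign word sequence; the virtual symbol names are our own.

# Sketch: one possible synchronous binarization of a BINARYMONO rule.
# Each step is (parent, children, emitted foreign sequences).
def binarize_binarymono(A, B, C, f_l, f_m, f_r):
    A1, C1, C2 = A + '~1', C + '~1', C + '~2'
    return [
        (A,  [A1],    {'left': f_l}),   # emit f_l to the left, then expand A~1
        (A1, [B, C1], {}),              # the core binary split (monotone order)
        (C1, [C2],    {'mid': f_m}),    # emit f_m before C
        (C2, [C],     {'right': f_r}),  # emit f_r after C
    ]

if __name__ == '__main__':
    for step in binarize_binarymono('NP', 'NN', 'NN', [], ['de'], []):
        print(step)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Membership in ITG",
"sec_num": "3.3"
},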
{
"text": "Generally speaking, ITG grammars have proven more useful without the monolingual syntactic constraints imposed by a target parse tree. When derivations are restricted to respect a target-side parse tree, many desirable alignments are ruled out when the syntax of the two languages diverges, and alignment quality drops precipitously (Zhang and Gildea, 2004) , though attempts have been made to address this issue (Gildea, 2003) .",
"cite_spans": [
{
"start": 333,
"end": 357,
"text": "(Zhang and Gildea, 2004)",
"ref_id": "BIBREF32"
},
{
"start": 413,
"end": 427,
"text": "(Gildea, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "Our model is designed to degrade gracefully in the case of syntactic divergence. Because it can produce foreign words at any level of the derivation, our model can effectively back off to a variant of Model 1 in the case where an ITG derivation that both respects the target parse tree and the desired word-level alignments cannot be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "For example, consider the sentence pair fragment in Figure 3 . It is not possible to produce an ITG derivation of this fragment that both respects the English tree and also aligns all foreign words to their obvious English counterparts. Our model handles this case by attaching the troublesome \u660e\u5929 at the uppermost VP. This analysis captures 3 of the 4 word-level correspondences, and also permits extraction of abstract rules like (S \u2192 NP VP ; NP VP) and (NP \u2192 the NN ; NN) .",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 461,
"end": 473,
"text": "the NN ; NN)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "Unfortunately, this analysis leaves the English word tomorrow with an empty foreign span, permitting extraction of the incorrect translation (VP \u2192 announced tomorrow ; \u516c\u5e03), among others. Our point here is not that our model's analysis is \"correct\", but \"good enough\" without resorting to more computationally complicated models. In general, our model follows an \"extract as much as possible\" approach. We hypothesize that this approach will capture important syntactic generalizations, but it also risks including low-quality rules. It is an empirical question whether this approach is effective, and we investigate this issue further in Section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "| E) UNARY A B u w l B u pg(A \u2192 B | A)p lef t (w l | A, eA) B u B B wr p right (wr | A, eA) BINARY A A 1 w l A 1 p lef t (w l | A, eA) A 1 B C 1 B C 1 pg(A \u2192 B C | A)pinv(I=false | B, C) A 1 B C 1 C 1 B pg(A \u2192 B C | A)pinv(I=true | B, C) C 1 C 2 fm C 2 p mid (fm | A, eA) C 2 C C fr p right (fr | A, eA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "There are possibilities for improving our model's treatment of syntactic divergence. One option is to allow the model to select trees which are more consistent with the alignment (Burkett et al., 2010) , which our model can do since it permits efficient inference over forests. The second is to modify the generative process slightly, perhaps by including the \"clone\" operator of Gildea (2003) .",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "(Burkett et al., 2010)",
"ref_id": "BIBREF2"
},
{
"start": 380,
"end": 393,
"text": "Gildea (2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Syntactic Divergence",
"sec_num": "3.4"
},
{
"text": "The parameters of our model can be efficiently estimated in an unsupervised fashion using the Expectation-Maximization (EM) algorithm. The Estep requires the computation of expected counts under our model for each multinomial parameter. We omit the details of obtaining expected counts for each distribution, since they can be obtained using simple arithmetic from a single quantity, namely, the expected count of a particular instantiation of a synchronous rule r. This expectation is a standard quantity that can be computed in O(n 6 ) time using the bitext Inside-Outside dynamic program (Wu, 1997) .",
"cite_spans": [
{
"start": 591,
"end": 601,
"text": "(Wu, 1997)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "4.1"
},
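{
"text": "The training procedure can be summarized by the following EM skeleton (our own schematic, not the authors' implementation); the expected_rule_counts argument stands in for the O(n^6) bitext inside-outside computation, and the key layout (distribution name, conditioning context, outcome) is an assumption.

from collections import defaultdict

def m_step(expected_counts):
    # Renormalize each multinomial: keys are (distribution, context, outcome).
    totals = defaultdict(float)
    for (dist, context, outcome), count in expected_counts.items():
        totals[(dist, context)] += count
    return {key: count / totals[key[:2]] for key, count in expected_counts.items()}

def train(corpus, params, expected_rule_counts, iterations=5):
    for _ in range(iterations):
        counts = defaultdict(float)
        for english_tree, foreign_sentence in corpus:      # E-step
            sent_counts = expected_rule_counts(english_tree, foreign_sentence, params)
            for key, count in sent_counts.items():
                counts[key] += count
        params = m_step(counts)                            # M-step
    return params
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "4.1"
},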
{
"text": "While our model permits O(n 6 ) inference over a forest of English trees, inference over a full forest would be very slow, and so we fix a single n-ary English tree obtained from a monolingual parser. However, it is worth noting that the English side of the ITG derivation is not completely fixed. Where our English trees are more than binary branching, we permit any binarization in our dynamic program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Program Pruning",
"sec_num": "4.2"
},
{
"text": "For efficiency, we also ruled out span alignments that are extremely lopsided, for example, a 1-word English span aligned to a 20-word foreign span. Specifically, we pruned any span alignment in which one side is more than 5 times larger than the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Program Pruning",
"sec_num": "4.2"
},
{
"text": "Finally, we employ pruning based on highprecision alignments from simpler models (Cherry and Lin, 2007; Haghighi et al., 2009) . We compute word-to-word alignments by finding all word pairs which have a posterior of at least 0.7 according to both forward and reverse IBM Model 1 parameters, and prune any span pairs which invalidate more than 3 of these alignments. In total, this pruning re- duced computation from approximately 1.5 seconds per sentence to about 0.3 seconds per sentence, a speed-up of a factor of 5.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Cherry and Lin, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 104,
"end": 126,
"text": "Haghighi et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Program Pruning",
"sec_num": "4.2"
},
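{
"text": "The two pruning rules described in this section amount to a simple predicate over candidate span pairs; the sketch below is our own rendering (the span encoding and argument names are assumptions), keeping a pair only if it is not too lopsided and does not invalidate more than three high-precision Model 1 links.

# Sketch of the span-pair pruning predicate. 'sure_links' holds (e_i, f_j) word
# pairs with posterior >= 0.7 under both forward and reverse IBM Model 1.
def keep_span_pair(e_span, f_span, sure_links, max_ratio=5, max_violations=3):
    e_len = e_span[1] - e_span[0]
    f_len = f_span[1] - f_span[0]
    # Rule 1: prune extremely lopsided span pairs (both sides non-empty).
    if e_len and f_len and (e_len > max_ratio * f_len or f_len > max_ratio * e_len):
        return False
    # Rule 2: prune pairs that invalidate more than max_violations sure links.
    violations = 0
    for (e_i, f_j) in sure_links:
        in_e = e_span[0] <= e_i < e_span[1]
        in_f = f_span[0] <= f_j < f_span[1]
        if in_e != in_f:  # the link crosses the span pair's boundary
            violations += 1
    return violations <= max_violations
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Program Pruning",
"sec_num": "4.2"
},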
{
"text": "Given a trained model, we extract a tree-to-string alignment as follows: we compute, for each node in the English tree, the posterior probability of a particular foreign span assignment using the same dynamic program needed for EM. We then compute the set of span assignments which maximizes the sum of these posteriors, constrained such that the foreign span assignments nest in the obvious way. This algorithm is a natural synchronous generalization of the monolingual Maximum Constituents Parse algorithm of Goodman (1996) .",
"cite_spans": [
{
"start": 511,
"end": 525,
"text": "Goodman (1996)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.3"
},
{
"text": "We first evaluated our alignments against gold standard annotations. Our training data consisted of the 2261 manually aligned and translated sentences of the Chinese Treebank (Bies et al., 2007) and approximately half a million unlabeled sentences of parallel Chinese-English newswire. The unlabeled data was subsampled (Li et al., 2009 ) from a larger corpus by selecting sentences which have good tune and test set coverage, and limited to sentences of length at most 40. We parsed the English side of the training data with the Berkeley parser. 8 For our baseline alignments, we used GIZA++, trained in the standard way. 9 We used the grow-diag-final alignment heuristic, as we found it outperformed union in early experiments. We trained our unsupervised syntactic aligner on the concatenation of the labelled and unlabelled data. As is standard in unsupervised alignment models, we initialized the translation parameters p t by first training 5 iterations of IBM Model 1 using the joint training algorithm of Liang et al. (2006) , and then trained our model for 5 EM iterations. We extracted syntactic rules using a re-implementation of the Galley et al. (2006) algorithm from both our syntactic alignments and the GIZA++ alignments. We handle null-aligned words by extracting every consistent derivation, and extracted composed rules consisting of at most 3 minimal rules.",
"cite_spans": [
{
"start": 175,
"end": 194,
"text": "(Bies et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 320,
"end": 336,
"text": "(Li et al., 2009",
"ref_id": "BIBREF19"
},
{
"start": 624,
"end": 625,
"text": "9",
"ref_id": null
},
{
"start": 1014,
"end": 1033,
"text": "Liang et al. (2006)",
"ref_id": "BIBREF18"
},
{
"start": 1146,
"end": 1166,
"text": "Galley et al. (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Quality",
"sec_num": "5.1"
},
{
"text": "We evaluate our alignments against the gold standard in two ways. We calculated Span F-score, which compares the set of extractable nodes paired with a foreign span, and Rule F-score (Fossum et al., 2008 ) over minimal rules. The results are shown in Table 3 . By both measures, our syntactic aligner effectively trades recall for precision when compared to our baseline, slightly increasing overall F-score.",
"cite_spans": [
{
"start": 183,
"end": 203,
"text": "(Fossum et al., 2008",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignment Quality",
"sec_num": "5.1"
},
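{
"text": "For reference, Span F-score as used here can be computed from sets of (node, foreign span) pairs; the sketch below is our own and not the evaluation script used for Table 3, with a made-up toy example.

# Sketch: precision, recall, and F1 over extractable (node, span) pairs.
def span_f1(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

if __name__ == '__main__':
    gold = {('NP-3', (0, 2)), ('VP-5', (2, 6)), ('S-1', (0, 6))}
    pred = {('NP-3', (0, 2)), ('S-1', (0, 6)), ('PP-7', (3, 6))}
    print(span_f1(pred, gold))  # (0.667, 0.667, 0.667), up to rounding
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Quality",
"sec_num": "5.1"
},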
{
"text": "For our translation system, we used a reimplementation of the syntactic system of Galley et al. (2006) . For the translation rules extracted from our data, we computed standard features based on relative frequency counts, and tuned their weights using MERT (Och, 2003) . We also included a language model feature, using a 5-gram language model trained on 220 million words of English text using the SRILM Toolkit (Stolcke, 2002) .",
"cite_spans": [
{
"start": 82,
"end": 102,
"text": "Galley et al. (2006)",
"ref_id": "BIBREF13"
},
{
"start": 257,
"end": 268,
"text": "(Och, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 413,
"end": 428,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality",
"sec_num": "5.2"
},
{
"text": "For tuning and test data, we used a subset of the NIST MT04 and MT05 with sentences of length at most 40. We used the first 1000 sentences of this set for tuning and the remaining 642 sentences as test data. We used the decoder described in during both tuning and testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality",
"sec_num": "5.2"
},
{
"text": "We provide final tune and test set results in Table 4. Our alignments produce a 1.0 BLEU improvement over the baseline. Our reported syntactic results were obtained when rules were thresholded by count; we discuss this in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality",
"sec_num": "5.2"
},
{
"text": "As discussed in Section 3.4, our aligner is designed to extract many rules, which risks inadvertently extracting low-quality rules. To quantify this, we first examined the number of rules extracted by our aligner as compared with GIZA++. After relativiz- Table 4 : Final tune and test set results for our grammars extracted using the baseline GIZA++ alignments and our syntactic aligner. When we filter the GIZA++ grammars with the same count thresholds used for our aligner (\"high count\"), BLEU score drops substantially.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "ing to the tune and test set, we extracted approximately 32 million unique rules using our aligner, but only 3 million with GIZA++. To check that we were not just extracting extra low-count, lowquality rules, we plotted the number of rules with a particular count in Figure 4 . We found that while our aligner certainly extracts many more low-count rules, it also extracts many more high-count rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "Of course, high-count rules are not guaranteed to be high quality. To verify that frequent rules were better for translation, we experimented with various methods of thresholding to remove rules with low count extracted from using aligner. We found in early development found that removing low-count rules improved translation performance substantially. In particular, we settled on the following scheme: we kept all rules with a single foreign terminal on the right-hand side. For entirely lexical (gapless) rules, we kept all rules occurring at least 3 times. For unlexicalized rules, we kept all rules occurring at least 20 times per gap. For rules which mixed gaps and lexical items, we kept all rules occurring at least 10 times per gap. This left us with a grammar about 600 000 rules, the same grammar which gave us our final results reported in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 853,
"end": 860,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
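{
"text": "The thresholding scheme above corresponds to a simple filter over extracted rules; the sketch below is our own reading of it (in particular, 'per gap' is interpreted as multiplying the count threshold by the number of gaps, and the rule representation is assumed).

# Sketch of the count-threshold filter used to prune the extracted grammar.
def keep_rule(count, num_gaps, num_foreign_terminals, has_lexical_items):
    if num_foreign_terminals == 1:
        return True                       # keep rules with a single foreign terminal
    if num_gaps == 0:
        return count >= 3                 # entirely lexical (gapless) rules
    if not has_lexical_items:
        return count >= 20 * num_gaps     # unlexicalized rules
    return count >= 10 * num_gaps         # rules mixing gaps and lexical items

if __name__ == '__main__':
    print(keep_rule(count=2, num_gaps=0, num_foreign_terminals=3, has_lexical_items=True))    # False
    print(keep_rule(count=25, num_gaps=2, num_foreign_terminals=0, has_lexical_items=False))  # False (< 40)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},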
{
"text": "In contrast to our syntactic aligner, rules extracted using GIZA++ could not be so aggressively pruned. When pruned using the same count thresholds, accuracy dropped by more than 3.0 BLEU on the tune set, and similarly on the test set (see Table 4 ). To obtain the accuracy shown in our final results (our best results with GIZA++), we had to adjust the count threshold to include all lexicalized rules, all unlexicalized rules, and mixed rules occurring at least twice per gap. With these count thresholds, the GIZA++ grammar contained about 580 000 rules, roughly the same number as our syntactic grammar.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "We also manually searched the grammars for rules that had high count in the syntactically- Syntactic GIZA++ Figure 4 : Number of extracted translation rules with a particular count. Grammars extracted from our syntactic aligner produce not only more low-count rules, but also more high-count rules than GIZA++.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "extracted grammar and low (or 0) count in the GIZA++ grammar. Of course, we can always cherry-pick such examples, but a few rules were illuminating. For example, for the \u5728 . . . \u4e4b\u524d construction discussed earlier, our aligner permits extraction of the general rule (PP \u2192 IN 1 NP 2 ; \u5728 NP 2 IN 1 ) 3087 times, and the lexicalized rule (PP \u2192 before NP ; \u5728 NP \u4e4b\u524d) 118 times. In constrast, the GIZA++ grammar extracts the latter only 23 times and the former not at all. The more complex rule (NP \u2192 NP 2 , who S 1 , ; S 1 \u7684 NP 2 ), which captures a common appositive construction, was absent from the GIZA++ grammar but occurred 63 in ours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "We have described a syntactic alignment model which explicitly aligns nodes of a syntactic parse in one language to spans in another, making it suitable for use in many syntactic translation systems. Our model is unsupervised and can be efficiently trained with a straightforward application of EM. We have demonstrated that our model can accurately capture many syntactic correspondences, and is robust in the face of syntactic divergence between language pairs. Our aligner permits the extraction of more reliable, high-count rules when compared to a standard wordalignment baseline. These high-count rules also produce improvements in BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "One notable exception isMay and Knight (2007), who produces syntactic alignments using syntactic rules derived from word-aligned data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While our model does not explicitly produce many-to-one alignments, many-to-one rules can be discovered via rule composition(Galley et al., 2006).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the text, we only describe the factorization for the BI-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/berkeleyparser/ 9 5 iterations of model 1, 5 iterations of HMM, 3 iterations of Model 3, and 3 iterations of Model 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project is funded in part by the NSF under grant 0643742; by BBN under DARPA contract HR0011-06-C-0022; and an NSERC Postgraduate Fellowship. The authors would like to thank Michael Auli for his input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "English chinese translation treebank v 1.0. web download",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Mott",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Warner",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Martha Palmer, Justin Mott, and Colin Warner. 2007. English chinese translation treebank v 1.0. web download. In LDC2007T02.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Joint parsing and alignment with weakly synchronized grammar",
"authors": [
{
"first": "David",
"middle": [],
"last": "Burkett",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Burkett, John Blitzer, and Dan Klein. 2010. Joint pars- ing and alignment with weakly synchronized grammar. In Proceedings of the North American Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Soft syntactic constraints for word alignment through discriminative training",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2006. Soft syntactic constraints for word alignment through discriminative training. In Pro- ceedings of the Association of Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inversion transduction grammar for joint phrasal translation modeling",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2007,
"venue": "Workshop on Syntax and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Workshop on Syntax and Structure in Statistical Translation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating grammar formalisms for applications to natural language processing and biological sequence analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2004. Evaluating grammar formalisms for ap- plications to natural language processing and biological se- quence analysis. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "The Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In The Annual Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Bayesian model of syntax-directed tree to string grammar induction",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Emprical Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Phil Blunsom. 2009. A Bayesian model of syntax-directed tree to string grammar induction. In Pro- ceedings of the Conference on Emprical Methods for Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tailoring word alignments to syntactic machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "The Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Dan Klein. 2007. Tailoring word alignments to syntactic machine translation. In The Annual Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Why generative phrase models underperform surface heuristics",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Workshop on Statistical Machine Translation at NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuris- tics. In Workshop on Statistical Machine Translation at NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient parsing for transducer grammars",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero, Mohit Bansal, Adam Pauls, and Dan Klein. 2009. Efficient parsing for transducer grammars. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using syntax to improve word alignment precision for syntaxbased machine translation",
"authors": [
{
"first": "Victoria",
"middle": [],
"last": "Fossum",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victoria Fossum, Kevin Knight, and Steven Abney. 2008. Us- ing syntax to improve word alignment precision for syntax- based machine translation. In Proceedings of the Third Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceed- ings of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scal- able inference and training of context-rich syntactic transla- tion models. In Proceedings of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Loosely tree-based alignment for machine translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2003. Loosely tree-based alignment for ma- chine translation. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing algorithms and metrics",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Better word alignments with supervised itg models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, John Blitzer, John Denero, and Dan Klein. 2009. Better word alignments with supervised itg models. In Proceedings of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A syntax-directed translator with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CHSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. A syntax-directed translator with extended domain of locality. In Proceedings of CHSLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Joshua: an open source toolkit for parsing-based machine translation",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Lane",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "N",
"middle": [
"G"
],
"last": "Wren",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Thornton",
"suffix": ""
},
{
"first": "Omar",
"middle": [
"F"
],
"last": "Weese",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Joshua: an open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Ma- chine Translation.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bayesian learning of phrasal tree-to-string templates",
"authors": [
{
"first": "Ding",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding Liu and Daniel Gildea. 2009. Bayesian learning of phrasal tree-to-string templates. In Proceedings of EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tree-to-string alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to-string alignment template for statistical machine translation. In Proceedings of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving tree-totree translation with packed forests",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "L\u00fc",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Yajuan L\u00fc, and Qun Liu. 2009. Improving tree-to- tree translation with packed forests. In Proceedings of ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Build- ing a large annotated corpus of English: The Penn Treebank. In Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Syntactic re-alignment models for machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference on Emprical Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan May and Kevin Knight. 2007. Syntactic re-alignment models for machine translation. In Proceedings of the Con- ference on Emprical Methods for Natural Language Pro- cessing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving ibm word alignment model 1",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2004,
"venue": "The Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore. 2004. Improving ibm word alignment model 1. In The Annual Conference of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic com- parison of various statistical alignment models. Computa- tional Linguistics, 29:19-51.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statis- tical machine translation. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "SRILM: An extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM: An extensible language mod- eling toolkit. In ICSLP 2002.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377-404.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statis- tical translation model. In Proceedings of the Association of Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Syntax-based alignment: supervised or unsupervised?",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2004. Syntax-based alignment: supervised or unsupervised? In Proceedings of the Confer- ence on Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Synchronous binarization for machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of the North American Chapter of the Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A single incorrect alignment removes an extractable node, and hence several desirable rules. We represent correct extractable nodes in bold, spurious extractable nodes with a *, and incorrectly blocked extractable nodes in bold strikethrough.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Top: A synchronous derivation of a small sentence pair fragment under our model. The English projection of the derivation represents a valid constituency parse, while the foreign projection is less constrained.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "The graceful degradation of our model in the face of syntactic divergence. It is not possible to align all foreign words with their obvious English counterparts with an ITG derivation. Instead, our model analyzes as much as possible, but must resort to emitting \u660e\u5929 high in the tree.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "Types of rules present in the SCFG describing our model, along with some sample instantiations of each type. Empty word sequences f have been explicitly marked with an .",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">S[0,4]</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">NP[3,4]</td><td/><td colspan=\"2\">VP[0,3]</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td>VP[2,3]</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">VP[2,3]</td></tr><tr><td colspan=\"4\">DT[3,3] NN[3,4]</td><td>MD[1,2]</td><td>VB[2,2]</td><td>VBN[2,3]</td><td/><td colspan=\"2\">NN[3,3]</td></tr><tr><td colspan=\"2\">the[3,3]</td><td colspan=\"2\">list[3,4]</td><td>will[1,2]</td><td colspan=\"5\">be[2,2] announced[2,3] tomorrow[3,3]</td></tr><tr><td>0</td><td colspan=\"2\">tomorrow</td><td>1</td><td>will</td><td>2</td><td>announce</td><td>3</td><td>list</td><td>4</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>!</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">(a)</td><td/><td/><td/></tr></table>",
"html": null,
"text": "A synchronous binarization of the SCFG describing our model.",
"type_str": "table",
"num": null
}
}
}
}