{ "paper_id": "D15-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:28:09.281867Z" }, "title": "Dependency Graph-to-String Translation", "authors": [ { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": {} }, "email": "liangyouli@computing.dcu.ie" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": {} }, "email": "away@computing.dcu.ie" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": {} }, "email": "qliu@computing.dcu.ie" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Compared to tree grammars, graph grammars have stronger generative capacity over structures. Based on an edge replacement grammar, in this paper we propose to use a synchronous graph-to-string grammar for statistical machine translation. The graph we use is directly converted from a dependency tree by labelling edges. We build our translation model in the log-linear framework with standard features. Large-scale experiments on Chinese-English and German-English tasks show that our model is significantly better than the state-of-the-art hierarchical phrase-based (HPB) model and a recently improved dependency tree-to-string model on BLEU, METEOR and TER scores. Experiments also suggest that our model has better capability to perform long-distance reordering and is more suitable for translating long sentences.", "pdf_parse": { "paper_id": "D15-1004", "_pdf_hash": "", "abstract": [ { "text": "Compared to tree grammars, graph grammars have stronger generative capacity over structures. Based on an edge replacement grammar, in this paper we propose to use a synchronous graph-to-string grammar for statistical machine translation. The graph we use is directly converted from a dependency tree by labelling edges. We build our translation model in the log-linear framework with standard features. Large-scale experiments on Chinese-English and German-English tasks show that our model is significantly better than the state-of-the-art hierarchical phrase-based (HPB) model and a recently improved dependency tree-to-string model on BLEU, METEOR and TER scores. Experiments also suggest that our model has better capability to perform long-distance reordering and is more suitable for translating long sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Compared to trees, which have dominated the field of natural language processing (NLP) for decades, graphs are more general for modelling natural languages. The corresponding grammars for recognizing and producing graphs are more flexible and powerful than tree grammars. However, because of their high complexity, graph grammars have not been widely used in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, along with progress on graph-based meaning representation, hyperedge replacement grammars (HRG) (Drewes et al., 1997) have been revisited, explored and used for semantic-based machine translation (Jones et al., 2012) . 
However, the translation process is rather complex and the resources it relies on, namely abstract meaning corpora, are limited as well.", "cite_spans": [ { "start": 106, "end": 127, "text": "(Drewes et al., 1997)", "ref_id": "BIBREF9" }, { "start": 206, "end": 226, "text": "(Jones et al., 2012)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As most available syntactic resources and tools are tree-based, in this paper we propose to convert dependency trees, which are usually taken as a kind of shallow semantic representation, to dependency graphs by labelling edges. We then use a synchronous version of edge replacement grammar (ERG) (Section 2), a special case of HRG, to translate these graphs. The resulting translation model has the same order of magnitude in terms of time complexity with the hierarchical phrasebased model (HPB) (Chiang, 2005) under a certain restriction (Section 3).", "cite_spans": [ { "start": 498, "end": 512, "text": "(Chiang, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Compared to dependency tree-to-string models, using ERG for graph-to-string translation brings some benefits (Section 3). Thanks to the stronger generative capacity of the grammar, our model can naturally translate siblings in a tree structure, which are usually treated as non-syntactic phrases and handled by other techniques (Huck et al., 2014; . Furthermore, compared to the known treelet approach and Dep2Str (Xie et al., 2011) , our method not only uses treelets but also has a full capacity of reordering.", "cite_spans": [ { "start": 328, "end": 347, "text": "(Huck et al., 2014;", "ref_id": "BIBREF12" }, { "start": 414, "end": 432, "text": "(Xie et al., 2011)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We define our translation model (Section 4) in the log-linear framework (Och and Ney, 2002) . Large-scale experiments (Section 5) on Chinese-English and German-English, two language pairs that have a high degree of syntactic reordering, show that our method significantly improves translation quality over both HPB and Dep2Str, as measured by BLEU (Papineni et al., 2002) , TER (Snover et al., 2006) and METEOR (Denkowski and Lavie, 2011) . We also find that the rules in our model are more suitable for long-distance reordering and translating long sentences.", "cite_spans": [ { "start": 72, "end": 91, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF21" }, { "start": 348, "end": 371, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF25" }, { "start": 378, "end": 399, "text": "(Snover et al., 2006)", "ref_id": "BIBREF28" }, { "start": 411, "end": 438, "text": "(Denkowski and Lavie, 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a special case of HRG, ERG is also a contextfree rewriting grammar to recognize and produce graphs. Following HRG, the graph we use in this paper is connected, nodes ordered, acyclic and has edge labels but no node labels (Chiang et al., 2013) . We provide some formal definitions on ERG.", "cite_spans": [ { "start": 225, "end": 246, "text": "(Chiang et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "Definition 1. 
A connected, edge-labeled, ordered graph is a tuple H = V, E, \u03c6 , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "\u2022 V is a finite set of nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "\u2022 E \u2286 V 2 is a finite set of edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "\u2022 \u03c6 : E \u2192 C assigns a label (drawn from C) to each edge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "In ERG, the elementary unit is a graph fragment, which is also the right-hand side of a production in the grammar. Its definition is as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "Definition 2. A graph fragment is a tuple H = V, E, \u03c6, X , where V, E, \u03c6 is a graph and X \u2208 (V \u222a V 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "is a list of distinct nodes. Following Chiang et al. (2013) , we call these external nodes.", "cite_spans": [ { "start": 39, "end": 59, "text": "Chiang et al. (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "The external nodes indicate how to integrate a graph into another one during a derivation. Different to HRG, ERG limits the number of external nodes to 2 at most to make sure hyperedges do not exist during a derivation. Now we define the ERG. \u2022 N and T are disjoint finite sets of nonterminal symbols and terminal symbols, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "\u2022 P is a finite set of productions of the form A \u2192 R, where A \u2208 N and R is a graph fragment, where edge-labels are from N T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "\u2022 S \u2208 N is the start symbol. Figure 1 shows an example of a derivation in an ERG to produce a graph. Starting from the start symbol S, when a rule (A \u2192 R) is applied to an edge e, the edge is replaced by the graph fragment R. Just like in HRG, the ordering of nodes V e in e and external nodes X R in R implies the mapping from V e to X R (Chiang et al., 2013) .", "cite_spans": [ { "start": 339, "end": 360, "text": "(Chiang et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Edge Replacement Grammar", "sec_num": "2" }, { "text": "In SMT, we need a synchronous grammar to simultaneously parse an input graph and produce translations. The graph we use in this paper is from a dependency structure which is capable of modelling long-distance relations in a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-to-String Grammar", "sec_num": "3" }, { "text": "Before defining the synchronous grammar, we firstly define a dependency graph which is a special case of a graph. Definition 4. 
A dependency graph is a tuple V, E, \u03c6, \u2206 , where V, E, \u03c6 is a graph and \u2206 is a restriction: edges are ordered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "A dependency graph is directly derived from a dependency tree by labeling edges with words, as shown in Figure 2 . Although in general graph edges are unordered, in Definition 4 we keep word order by ordering edges, because the word order is an important piece of information for translation.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "Similar to the graph fragment, a dependencygraph fragment is defined as below. In this paper, we define a synchronous ERG over dependency graphs as a dependency graphto-string grammar, which can be used for MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "Definition 5. A dependency-graph fragment is a tuple V, E, \u03c6, \u2206, X , where V, E, \u03c6, \u2206 is a de- pendency graph, X \u2208 (V \u222aV 2 ) is a list of external nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "Definition 6. A dependency graph-to-string grammar (DGSG) is a tuple N, T, T , P, S , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "\u2022 N is a finite set of non-terminal symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "\u2022 T and T are finite sets of terminal symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "\u2022 S \u2208 N is the start symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "\u2022 P is a finite set of productions of the form A \u2192 R, A \u2192 R , \u223c , where A, A \u2208 N , R is a dependency-graph fragment over N T and R is a string over N T . \u223c is a one-toone mapping between non-terminal symbols in R and R . Figure 3 shows a derivation simultaneously producing a Chinese dependency graph and an English string using a DGSG. Each time a rule is applied, the dependency-graph fragment in the rule replaces an edge in the source graph, and the string in the rule replaces a non-terminal in the target string.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 229, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "Proposition 1. DGSG has stronger generative capacity over graph-string pairs than both SCFG and synchronous tree substitution grammar (STSG).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "Proof. STSG has stronger generative capacity over structures than SCFG (Chiang, 2012) . 1 Any STSG can easily be converted into a DGSG by labelling edges in tree structures. 
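As a rough illustration of this conversion (our own sketch in Python, not part of the paper's system), each dependency edge is simply labelled with its dependent word, nodes stay unlabelled, and edges are kept in sentence order as Definition 4 requires:

from collections import namedtuple

Edge = namedtuple("Edge", ["head", "dependent", "label"])

def tree_to_dependency_graph(words, heads):
    # words[i] is the i-th word of the sentence; heads[i] is the index of its
    # dependency head, or -1 for the root. Every edge is labelled with the
    # dependent word, and sorting by the dependent position keeps word order.
    edges = [Edge(head=heads[i], dependent=i, label=words[i])
             for i in range(len(words))]
    return sorted(edges, key=lambda e: e.dependent)

# toy fragment in which word 1 is the root
print(tree_to_dependency_graph(["Shijiebei", "Juxing"], [1, -1]))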
1 The following STSG generates a trivial example of a tree-string pair that no SCFG can generate, as SCFG must always have an equal number of non-terminal symbols.", "cite_spans": [ { "start": 71, "end": 85, "text": "(Chiang, 2012)", "ref_id": "BIBREF6" }, { "start": 88, "end": 89, "text": "1", "ref_id": null }, { "start": 174, "end": 175, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Grammar", "sec_num": "3.1" }, { "text": "| :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X", "sec_num": null }, { "text": "X | X |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X", "sec_num": null }, { "text": "The following DGSG generates a trivial example of a graph-string pair, which no STSG can generate, as the left-head side has no head nodes while STSG always requires one to form a tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X", "sec_num": null }, { "text": "This proof is also verified in Figure 3 where the third rule is used to translate a non-syntactic phrase, which can be a problem for dependency tree-to-string methods. In addition, the second rule translates a treelet and the first rule encodes reordering information inside. All these three aspects are uniformly modeled in our grammar, which makes it more powerful than other methods, such as the treelet approach and the Dep2Str.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "c : a b", "sec_num": null }, { "text": "Given a dependency graph, training and decoding time using DGSG depends on the number of dependency-graph fragments. For example, for a graph where the degree of a node is k, the number of all possible fragments starting from the node is O(2 k ). Therefore, the time complexity would be exponential if we consider them all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity and a Restriction", "sec_num": "3.2" }, { "text": "It is easy to find that the high complexity of DGSG comes from the free combination of edges. That means that a dependency-graph fragment can cover discontinuous words of an input sentence. However, this is not the convention in the field of SMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity and a Restriction", "sec_num": "3.2" }, { "text": "For efficient training and decoding, we add a restriction to DGSG: each dependency-graph fragment covers a continuous span of the source sentence. This reduces the complexity from exponential time to cubic time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity and a Restriction", "sec_num": "3.2" }, { "text": "In this paper we build a dependency graph-tostring model, so we only use one non-terminal symbol X as in HPB on the target side. However, on the source side we define non-terminal symbols over Part-of-Speech (POS) tags, which can be easily obtained as a by-product of dependency parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-terminal Symbols", "sec_num": "3.3" }, { "text": "We define the head of a dependency-graph fragment H as a list of edges, the dependency head of each of which is not in this fragment. 
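A small sketch of this definition (ours, not the authors' code): an edge belongs to the head of a fragment exactly when its own dependency head lies outside the fragment. The next sentence uses these head edges to name the fragment; the "+" separator below is only illustrative.

def fragment_head(fragment_edges, head_edge_of):
    # fragment_edges: set of edge ids in the fragment H;
    # head_edge_of[e]: the edge id of e's dependency head, or None at the root.
    return [e for e in sorted(fragment_edges)
            if head_edge_of[e] is None or head_edge_of[e] not in fragment_edges]

def fragment_nonterminal(fragment_edges, head_edge_of, pos_tag):
    # join the POS tags of the head edges to form the fragment's non-terminal
    return "+".join(pos_tag[e] for e in fragment_head(fragment_edges, head_edge_of))

# a two-edge fragment whose upper edge (id 3) attaches outside the fragment
print(fragment_nonterminal({3, 5}, {3: 1, 5: 3}, {3: "P", 5: "NN"}))  # -> "P"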
Then the non-terminal symbol for H is defined as the joining of POS tags of its head (Li et al., 2012) . Figure 4 shows an example.", "cite_spans": [ { "text": "(Li et al., 2012)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Non-terminal Symbols", "sec_num": "3.3" }, { "text": "Figure 3 : An example of a derivation in dependency graph-to-string grammar to produce a Chinese dependency graph and an English string. Rules are included in dashed rectangles. Target strings are in solid rectangles. External nodes are dark circles. This example is under the restriction in Section 3.2. In addition to the start symbol S, non-terminal symbols for the source side are M and N , while the target side only has one non-terminal X. The index in each non-terminal of a rule indicates the mapping.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Non-terminal Symbols", "sec_num": "3.3" }, { "text": "As well as the restriction defined in Section 3.2 making the grammar much smaller, it also results in a similar way of extracting rules as in HPB. Inspired by HPB, we define the rule set over initial pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "Given a word-aligned dependency graph-string pair P = G, e, \u223c , let G_i^j stand for the sub-graph (it may not be connected) covering words from position i to position j. Then a rule G_i^j , e_i^j is an initial pair of P , iff:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "1. G_i^j is a dependency-graph fragment. That means it is a connected sub-graph and has at most two external nodes, nodes which connect with nodes outside or are the root.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "2. It is consistent with the word alignment \u223c (Och and Ney, 2004) .", "cite_spans": [ { "start": 46, "end": 65, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "The set of rules from P satisfies the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "1. If G_i^j , e_i^j is an initial pair, then N (G_i^j) \u2192 G_i^j , X \u2192 e_i^j is a rule, where N (G) defines the nonterminal symbol for G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "2. 
If N (R) \u2192 R, X \u2192 R' is a rule of P and G_i^j , e_i^j is an initial pair such that G_i^j is a sub-graph of R and R' = r_1 e_i^j r_2 , then N (R) \u2192 R\\(G_i^j)_k , X \u2192 r_1 X_k r_2 is a rule of P , where \\ means replacing G_i^j in R with an edge labelled with N (G_i^j) and k is a unique index for a pair of non-terminal symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "As in HPB, in addition to rules extracted from the parallel corpus, we also use glue rules to combine fragments and translations when no matched rule can be found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "Furthermore, we can use the same rule extraction algorithm as that in HPB, except that we need to check if a span of a source sentence indicates a dependency-graph fragment, in which case we keep the dependency structure and induce a nonterminal for the fragment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule Extraction", "sec_num": "3.4" }, { "text": "We define our model in the log-linear framework over a derivation d, as in Equation 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "P (d) \u221d \u220f_i \u03c6_i (d)^{\u03bb_i} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "where \u03c6_i are features defined on derivations and \u03bb_i are feature weights. 
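For concreteness, a derivation's score under Equation (1) can be computed in log space as in the sketch below; the feature names and weights are placeholders for illustration, not the tuned values used in the experiments.

import math

def derivation_score(feature_values, feature_weights):
    # Equation (1): the weighted product of feature values phi_i(d),
    # each raised to its weight lambda_i, accumulated in log space.
    log_score = sum(feature_weights[name] * math.log(value)
                    for name, value in feature_values.items())
    return math.exp(log_score)

features = {"p_t_given_s": 0.3, "p_s_given_t": 0.2, "lm": 1e-4, "rule_penalty": math.exp(-1)}
weights = {"p_t_given_s": 0.5, "p_s_given_t": 0.3, "lm": 1.0, "rule_penalty": 0.2}
print(derivation_score(features, weights))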
In our experiments, we use 9 features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 translation probabilities P (s|t) and P (t|s), where s is the source graph fragment and t is the target string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 lexical translation probabilities P lex (s|t) and P lex (t|s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 language model lm(e) over translation e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 rule penalty exp(\u22121).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 word penalty exp(|e|).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 glue penalty exp(\u22121).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "\u2022 unknown words penalty exp(u(g)), where u(g) is the number of unknown words in a source graph g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "Our decoder is based on the conventional chart parsing CYK algorithm (Kasami, 1965; Younger, 1967; Cocke and Schwartz, 1970) . It searches for the best derivation d * among all possible derivations D, as in Equation (2): for each fragment, the decoder finds rules to translate it. The translation of a large span can be obtained by combining translations from its sub-span using rules which have non-terminals. Finally, glue rules are used to make sure that at least one translation is produced.", "cite_spans": [ { "start": 69, "end": 83, "text": "(Kasami, 1965;", "ref_id": "BIBREF14" }, { "start": 84, "end": 98, "text": "Younger, 1967;", "ref_id": "BIBREF33" }, { "start": 99, "end": 124, "text": "Cocke and Schwartz, 1970)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Model and Decoding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d * = argmax d\u2208D P (d)", "eq_num": "(2)" } ], "section": "Model and Decoding", "sec_num": "4" }, { "text": "We conduct experiments on Chinese-English and German-English translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "The Chinese-English training corpus is from LDC, including LDC2002E18, LDC2003E07, LDC2003E14, LDC2004T07, the Hansards portion of LDC2004T08 and LDC2005T06. NIST 2002 is taken as a development set to tune weights, and NIST 2004 (MT04) and NIST 2005 (MT05) are two test sets to evaluate systems. Table 1 provides a summary of this corpus. The Stanford Chinese word segmenter (Chang et al., 2008 ) is used to segment Chinese sentences. The Stanford dependency parser (Chang et al., 2009 ) parses a Chinese sentence into a projective dependency tree which is then converted to a dependency graph in our model. The German-English training corpus is from WMT 2014, including Europarl V7 and News Commentary. News-test 2011 is taken as a development set, while News-test 2012 (WMT12) and News-test 2013 (WMT13) are our test sets. Table 1 provides a summary of this corpus. 
We use mate-tools 2 to perform morphological analysis and parse German sentences (Bohnet, 2010) . Then MaltParser 3 converts a parse result into a projective dependency tree (Nivre and Nilsson, 2005) .", "cite_spans": [ { "start": 375, "end": 394, "text": "(Chang et al., 2008", "ref_id": "BIBREF1" }, { "start": 466, "end": 485, "text": "(Chang et al., 2009", "ref_id": "BIBREF2" }, { "start": 949, "end": 963, "text": "(Bohnet, 2010)", "ref_id": "BIBREF0" }, { "start": 1042, "end": 1067, "text": "(Nivre and Nilsson, 2005)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 296, "end": 303, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 825, "end": 832, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "In this paper, we mainly compare our system (DGST) with HPB in Moses (Koehn et al., 2007) . We implement our model in Moses and take the same settings as Moses HPB in all experiments. In addition, translation results from a recently open-source dependency tree-to-string system, Dep2Str 4 (Li et al., 2014) , which is implemented in Moses and improves the dependencybased model in Xie et al. (2011) , are also reported. All systems use the same sets of features defined in Section 4.", "cite_spans": [ { "start": 69, "end": 89, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF15" }, { "start": 289, "end": 306, "text": "(Li et al., 2014)", "ref_id": "BIBREF17" }, { "start": 381, "end": 398, "text": "Xie et al. (2011)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.2" }, { "text": "In all experiments, word alignment is performed by GIZA++ (Och and Ney, 2003) with the heuristic function grow-diag-final-and. We use SRILM (Stolcke, 2002) to train a 5-gram language model on the Xinhua portion of the English Gigaword corpus 5th edition with modified Kneser-Ney discounting (Chen and Goodman, 1996) . Minimum Error Rate Training (MERT) (Och, 2003) is used to tune weights.", "cite_spans": [ { "start": 58, "end": 77, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF22" }, { "start": 140, "end": 155, "text": "(Stolcke, 2002)", "ref_id": "BIBREF29" }, { "start": 291, "end": 315, "text": "(Chen and Goodman, 1996)", "ref_id": "BIBREF3" }, { "start": 353, "end": 364, "text": "(Och, 2003)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.2" }, { "text": "To obtain more reliable results, in each experiment, we run MERT three times and report average scores. These scores are calculated by three widely used automatic metrics in case-insensitive mode: BLEU, METEOR and TER. Table 2 shows the scores of all three metrics on all systems. Similar to Li et al. (2014) , in our experiments Dep2Str has on average a comparable result with Moses HPB in terms of BLEU and METEOR scores. However, it obtains a significantly higher (i.e. worse) TER score on the Chinese-English task. This may suggest that translations produced by Dep2Str need more post-editing effort (He et al., 2010) .", "cite_spans": [ { "start": 292, "end": 308, "text": "Li et al. (2014)", "ref_id": "BIBREF17" }, { "start": 604, "end": 621, "text": "(He et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 219, "end": 226, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Settings", "sec_num": "5.2" }, { "text": "By contrast, on all test sets, measured by all metrics, our system is significantly better than Moses HPB. 
On the Chinese-English task, our system achieves an average gain of 1.25 (absolute, 3.6% relative) BLEU score and 0.55 (absolute, 1.7% relative) METEOR score while also ob- taining a reduction of 1.1 (absolute, 1.91% relative) TER score on average. On the German-English task, our system achieves an average gain of 0.55 (absolute, 2.56% relative) BLEU score and 0.1 (absolute, 0.35% relative) METEOR score and also obtains a reduction of 0.55 (absolute, 0.89% relative) TER score on average.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "As shown in Table 2 , compared to Moses HPB and Dep2Str, our system achieves higher translation quality as measured by three automatic metrics. In this section, we investigate whether dependency structures bring benefits as expected on long-distance reordering. Table 3 provides the statistics on sentence length of our four test sets.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 2", "ref_id": null }, { "start": 262, "end": 269, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.4" }, { "text": "In both HPB and our model, the length range of a reordering performed on an input sentence is related to the use of glue grammars which bring two benefits during decoding. When no matched rule is found in the models, glue grammars are applied to make sure a translation is produced. In addition, because of the generalization capability of rules, which typically are learned under a length limitation, using them on long sentences could cause translation quality to deteriorate. Therefore, when the length of a phrase is greater than a certain value, glue grammars are also applied. Therefore, our experiment of analysis is based on the length limitation that a rule can cover (max. phrase length) during decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5.4" }, { "text": "We set this max. phrase length to different values, including 10, 20 (default), 30, 40 and 50. Figure 5 gives the BLEU scores on all test sets. We find that on all different values, our system achieves higher BLEU scores than Moses HPB. In addition, when the max. phrase length becomes larger, Moses HPB shows a declining trend in most cases, especially on the German-English task (WMT12 and WMT13). However, our system is less sensitive to this value. We hypothesize that this is because rules from dependency graphs have better generalization for translating longer phrases and are more suitable for translating long sentences.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 103, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.4" }, { "text": "On a manual check, we find that translations produced by our system are more fluent than those of both Moses HPB and Dep2Str. Figure 6 gives an example comparing translations produced by three systems on the Chinese-English task.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.5" }, { "text": "We first find a case of long-distance relation, i.e. the subject-verb-object (SVO) structure in the source sentence. In this example, this relation implies a long-distance reordering, which moves the translation of the object to the front of its modifiers, as shown in the given reference. 
Com- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "5.5" }, { "text": "The two sides welcomed the decision by the Iraqi Interim Governing Council to establish a special court to try the murderers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "HPB: the two sides welcomed the interim iraqi authority on establishing a special court, trial of the murderer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "Dep2Str: the two sides welcomed the decision on the Establishment of a special court, justice murderers of the provisional governing council of iraq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "DGST: the two sides welcomed the decision of the iraqi interim governing council on the establishment of a special court, justice murderers. Figure 6 : An example of comparing translations produced by three systems on the Chinese-English task. The source sentence is parsed into a dependency structure. Each source word is annotated by a corresponding English word (or phrase). Figure 7 : An example of inducing a dependency structure in Figure 6 to \"X \u7684(of) X\" structure in our system by using treelets and non-syntactic phrases. \u00b5 denotes one or more steps. All non-terminals are simply represented by X. pared to Moses HPB, both Dep2Str and our system, which rely on dependency structures, are capable of dealing with this. This also suggests that dependency structures are useful for long-distance reordering.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Figure 6", "ref_id": null }, { "start": 378, "end": 386, "text": "Figure 7", "ref_id": null }, { "start": 438, "end": 446, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "\u4f0a\u62c9\u514b Iraqi \u4e34\u65f6 interim \u7ba1\u7406 governing \u59d4\u5458\u4f1a council \u5173\u4e8e about \u5efa\u7acb establish \u7279\u522b special \u6cd5\u5ead court \u3001 , \u5ba1\u5224 try \u6740\u4eba\u72af murderer \u7684 of \u51b3\u5b9a decision X \u7684 of \u51b3\u5b9a decision \u51b3\u5b9a decision X \u5efa\u7acb establish X \u7684 of \u4f0a\u62c9\u514b Iraqi X \u5173\u4e8e about \u5efa\u7acb establish X \u7684 of \u51b3\u5b9a decision", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "Furthermore, compared to Dep2Str, our system produces a better translation for the \"X \u7684(of) X\" expression, which is not explicitly represented in the dependency structure and thus results in a wrong translation in Dep2Str. After looking into the details of the translation process, we find that our system induces the dependency structure to the \"X \u7684(of) X\" structure by handling both treelets and non-syntactic phrases. Figure 7 shows the process of this induction.", "cite_spans": [], "ref_spans": [ { "start": 421, "end": 429, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "Dependency structures have been used in SMT for a few years. 
Because of its better inter-lingual phrasal cohesion properties (Fox, 2002) , it is believed to be beneficial to translation.", "cite_spans": [ { "start": 125, "end": 136, "text": "(Fox, 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Researchers have tried to use dependency structures on both target and source sides. Shen et al. (2010) propose a string-to-dependency model by using dependency fragments of neighbouring words on the target side, which makes the model easier to include a dependency-based language model. and propose the treelet approach which uses dependency structures on the source side. Xiong et al. (2007) extend this approach by allowing gaps in rules. However, their methods need a separate reordering model to decide the position of translated words (insertion problem). To avoid this problem, Xie et al. (2011) propose to use full head-dependent structures of a dependency tree and build a new dependency-to-string model. However, this model has difficulties in handling non-syntactic phrasal rules and ignores treelets. Meng et al. (2013) and further augment this model by incorporating constituent phrases and integrating fix/float structures (Shen et al., 2010) , respectively, to allow phrasal rules. Li et al. (2014) extend this model by decomposing head-dependent structures into treelets.", "cite_spans": [ { "start": 85, "end": 103, "text": "Shen et al. (2010)", "ref_id": "BIBREF27" }, { "start": 374, "end": 393, "text": "Xiong et al. (2007)", "ref_id": "BIBREF32" }, { "start": 585, "end": 602, "text": "Xie et al. (2011)", "ref_id": "BIBREF30" }, { "start": 813, "end": 831, "text": "Meng et al. (2013)", "ref_id": "BIBREF19" }, { "start": 937, "end": 956, "text": "(Shen et al., 2010)", "ref_id": "BIBREF27" }, { "start": 997, "end": 1013, "text": "Li et al. (2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Different from these methods, by labelling edges and using the ERG, our model considers the three aspects in a unified way: treelet, reordering and non-syntactic phrase. In addition, the ERG also naturally provides a decision on what kind of treelets and phrases should be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we present a dependency graph-tostring grammar based on a graph grammar, which we call edge replacement grammar. This grammar can simultaneously produce a pair of dependency graph and string. With a restriction of using contiguous edges, our translation model built using this grammar can decode an input dependency graph, which is directly converted from a dependency tree, in cubic time using the CYK algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Experiments on Chinese-English and German-English tasks show that our model is significantly better than the hierarchical phrase-based model and a recent dependency tree-to-string model (Dep2Str) in Moses. We also find that the rules used in our model are more suitable for longdistance reordering and translating long sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although experiments show significant improvements over baselines, our model has limitations that can be avenues for future work. 
The restriction used in this paper reduces the time complexity but at the same time reduces the generative capacity of graph grammars. Without allowing hyperedges or only using at most two external nodes reduces the phrase coverage in our model as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://code.google.com/p/mate-tools/ 3 http://www.maltparser.org/ 4 http://computing.dcu.ie/\u02dcliangyouli/ dep2str.zip", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has received funding from the People Programme (Marie Curie Actions) of the European Union's Framework Programme (FP7/2007-2013) under REA grant agreement n o 317471. The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. We thank anonymous reviewers for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Very High Accuracy and Fast Dependency Parsing is Not a Contradiction", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2010. Very High Accuracy and Fast Dependency Parsing is Not a Contradiction. In Pro- ceedings of the 23rd International Conference on Computational Linguistics, pages 89-97, Beijing, China.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Optimizing Chinese Word Segmentation for Machine Translation Performance", "authors": [ { "first": "Pi-Chuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "224--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese Word Seg- mentation for Machine Translation Performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224-232, Columbus, Ohio.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative Reordering with Chinese Grammatical Relations Features", "authors": [ { "first": "Pi-Chuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "51--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative Re- ordering with Chinese Grammatical Relations Fea- tures. 
In Proceedings of the Third Workshop on Syn- tax and Structure in Statistical Translation, pages 51-59, Boulder, Colorado.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An Empirical Study of Smoothing Techniques for Language Modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL '96", "volume": "", "issue": "", "pages": "310--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Lan- guage Modeling. In Proceedings of the 34th Annual Meeting on Association for Computational Linguis- tics, ACL '96, pages 310-318, Santa Cruz, Califor- nia.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Parsing graphs with hyperedge replacement grammars", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "924--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 924-932, Sofia, Bulgaria, August.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Hierarchical Phrase-based Model for Statistical Machine Translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263-270, Ann Arbor, Michigan.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Grammars for Language and Genes: Theoretical and Empirical Investigations", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2012. Grammars for Language and Genes: Theoretical and Empirical Investigations. Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Programming Languages and Their Compilers: Preliminary Notes", "authors": [ { "first": "John", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "Jacob", "middle": [ "T" ], "last": "Schwartz", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Cocke and Jacob T. Schwartz. 1970. 
Program- ming Languages and Their Compilers: Preliminary Notes. Technical report, Courant Institute of Math- ematical Sciences, New York University, New York, NY.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11", "volume": "", "issue": "", "pages": "85--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11, pages 85-91, Ed- inburgh, Scotland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Handbook of graph grammars and computing by graph transformation. chapter Hyperedge Replacement Graph Grammars", "authors": [ { "first": "Frank", "middle": [], "last": "Drewes", "suffix": "" }, { "first": "Hans", "middle": [ "J\u00f6rg" ], "last": "Kreowski", "suffix": "" }, { "first": "Annegret", "middle": [], "last": "Habel", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "95--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Drewes, Hans J\u00f6rg Kreowski, and Annegret Ha- bel. 1997. Handbook of graph grammars and com- puting by graph transformation. chapter Hyper- edge Replacement Graph Grammars, pages 95-162. World Scientific Publishing Co., Inc., River Edge, NJ, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Phrasal Cohesion and Statistical Machine Translation", "authors": [ { "first": "Heidi", "middle": [ "J" ], "last": "Fox", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing", "volume": "10", "issue": "", "pages": "304--3111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidi J. Fox. 2002. Phrasal Cohesion and Statis- tical Machine Translation. In Proceedings of the ACL-02 Conference on Empirical Methods in Nat- ural Language Processing -Volume 10, pages 304- 3111, Philadelphia.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bridging SMT and TM with Translation Recommendation", "authors": [ { "first": "Yifan", "middle": [], "last": "He", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "622--630", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yifan He, Yanjun Ma, Josef van Genabith, and Andy Way. 2010. Bridging SMT and TM with Translation Recommendation. 
In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 622-630, Uppsala, Sweden, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Augmenting String-to-Tree and Tree-to-String Translation with Non-Syntactic Phrases", "authors": [ { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "486--498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Huck, Hieu Hoang, and Philipp Koehn. 2014. Augmenting String-to-Tree and Tree-to- String Translation with Non-Syntactic Phrases. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 486-498, Baltimore, Maryland, USA, June.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantics-Based Machine Translation with Hyperedge Replacement Grammars", "authors": [ { "first": "Bevan", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2012, "venue": "COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "1359--1376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyper- edge Replacement Grammars. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Tech- nical Papers, pages 1359-1376, Mumbai, India, December.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Languages", "authors": [ { "first": "Tadao", "middle": [], "last": "Kasami", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tadao Kasami. 1965. An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Lan- guages. 
Technical report, Air Force Cambridge Re- search Lab, Bedford, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "'", "middle": [], "last": "Ond", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond'ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Ses- sions, ACL '07, pages 177-180, Prague, Czech Re- public.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Head-Driven Hierarchical Phrasebased Translation", "authors": [ { "first": "Junhui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "33--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhui Li, Zhaopeng Tu, Guodong Zhou, and Josef van Genabith. 2012. Head-Driven Hierarchical Phrase- based Translation. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 33-37, Jeju Island, Korea, July.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Transformation and Decomposition for Efficiently Implementing and Improving Dependency-to-String Model In Moses", "authors": [ { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liangyou Li, Jun Xie, Andy Way, and Qun Liu. 2014. 
Transformation and Decomposition for Efficiently Implementing and Improving Dependency-to-String Model In Moses. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, October.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dependency Treelet Translation: The Convergence of Statistical and Example-Based Machine-translation?", "authors": [ { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Workshop on Example-based Machine Translation at MT Summit X", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arul Menezes and Chris Quirk. 2005. Dependency Treelet Translation: The Convergence of Statistical and Example-Based Machine-translation? In Pro- ceedings of the Workshop on Example-based Ma- chine Translation at MT Summit X, September.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Translation with Source Constituency and Dependency Trees", "authors": [ { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "L'", "middle": [], "last": "Yajuan", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1066--1076", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fandong Meng, Jun Xie, Linfeng Song, Yajuan L', and Qun Liu. 2013. Translation with Source Con- stituency and Dependency Trees. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1066-1076, Seattle, Washington, USA, October.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Pseudo-Projective Dependency Parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "99--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- Projective Dependency Parsing. In Proceedings of the 43rd Annual Meeting on Association for Com- putational Linguistics, pages 99-106, Ann Arbor, Michigan.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive Training and Maximum Entropy Models for Sta- tistical Machine Translation. 
In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 295-302, Philadelphia, PA, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A System- atic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51, March.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Alignment Template Approach to Statistical Machine Translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The Align- ment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417- 449, December.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Minimum Error Rate Training in Statistical Machine Translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting on Association for Com- putational Linguistics -Volume 1, ACL '03, pages 160-167, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "BLEU: A Method for Automatic Evaluation of Machine Translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. 
In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, ACL '02, pages 311-318, Philadelphia, Pennsylvania.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Dependency Treelet Translation: Syntactically Informed Phrasal SMT", "authors": [ { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "271--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically In- formed Phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computa- tional Linguistics (ACL'05), pages 271-279, Ann Arbor, Michigan, June.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "String-to-Dependency Statistical Machine Translation", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "4", "pages": "649--671", "other_ids": {}, "num": null, "urls": [], "raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2010. String-to-Dependency Statistical Machine Transla- tion. Computational Linguistics, 36(4):649-671, December.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "authors": [ { "first": "M", "middle": [], "last": "Snover", "suffix": "" }, { "first": "B", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Amer- icas, pages 223-231, Cambridge, Massachusetts, USA, August.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "SRILM '' An Extensible Language Modeling Toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the International Conference Spoken Language Processing", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM '' An Extensible Lan- guage Modeling Toolkit. 
In Proceedings of the In- ternational Conference Spoken Language Process- ing, pages 901-904, Denver, CO.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Novel Dependency-to-string Model for Statistical Machine Translation", "authors": [ { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Haitao", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "216--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Xie, Haitao Mi, and Qun Liu. 2011. A Novel Dependency-to-string Model for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 216-226, Edinburgh, United Kingdom.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Augment Dependency-to-String Translation with Fixed and Floating Structures", "authors": [ { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Jinan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 25th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2217--2226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Xie, Jinan Xu, and Qun Liu. 2014. Augment Dependency-to-String Translation with Fixed and Floating Structures. In Proceedings of the 25th In- ternational Conference on Computational Linguis- tics, pages 2217-2226, Dublin, Ireland.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A Dependency Treelet String Correspondence Model for Statistical Machine Translation", "authors": [ { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyi Xiong, Qun Liu, and Shouxun Lin. 2007. A De- pendency Treelet String Correspondence Model for Statistical Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Trans- lation, pages 40-47, Prague, June.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Recognition and Parsing of Context-Free Languages in Time n 3 . Information and Control", "authors": [ { "first": "H", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "", "middle": [], "last": "Younger", "suffix": "" } ], "year": 1967, "venue": "", "volume": "10", "issue": "", "pages": "189--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel H. Younger. 1967. Recognition and Parsing of Context-Free Languages in Time n 3 . Information and Control, 10(2):189-208.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "An example of a derivation in an ERG. 
Dark circles are external nodes.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "An edge replacement grammar is a tuple \u27e8N, T, P, S\u27e9, where", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "An example of deriving a dependency graph from a dependency tree by labelling edges with words.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "An example of inducing a non-terminal symbol (left side) for a dependency-graph fragment (right side). Each edge is labelled with a word and its associated POS tag. The head of this fragment comprises the three edges inside the rectangle.", "type_str": "figure", "num": null }, "FIGREF5": { "uris": null, "text": "BLEU scores of Moses HPB and DGST (our system) when the maximum phrase length that a rule can cover during decoding is set to different values.", "type_str": "figure", "num": null }, "TABREF1": { "content": "
Table 1: Chinese-English (ZH-EN) and German-English (DE-EN) corpora. For the English side of the dev and test sets, word counts are averaged across all references.
", "html": null, "text": "", "num": null, "type_str": "table" }, "TABREF2": { "content": "
Metric      System      ZH-EN MT04   ZH-EN MT05   DE-EN WMT12   DE-EN WMT13
BLEU \u2191      Moses HPB   35.6         33.8         20.2          22.7
BLEU \u2191      Dep2Str     35.4         33.9         20.3          22.8
BLEU \u2191      DGST        36.6         35.3         20.7          23.3
METEOR \u2191    Moses HPB   31.6         31.9         28.6          29.7
METEOR \u2191    Dep2Str     31.8         31.9         28.5          29.5 *
METEOR \u2191    DGST        32.1         32.5         28.7          29.8
TER \u2193       Moses HPB   57.0         58.3         63.2          59.5
TER \u2193       Dep2Str     58.2 *       59.6 *       63.1          59.6
TER \u2193       DGST        56.1         57.0         62.6          59.0
Table 2: Metric scores for all systems on the Chinese-English (ZH-EN) and German-English (DE-EN) corpora.
Length      MT04     MT05     WMT12    WMT13   (percentage of sentences in each length range)
(0, 10]     7.6%     8.6%     15.0%    19.2%
(10, 20]    28.2%    26.0%    31.4%    37.2%
(20, 30]    28.2%    26.5%    26.3%    24.5%
(30, 40]    20.2%    23.8%    14.4%    12.0%
(40, \u221e)    15.7%    15.2%    12.9%    7.2%
", "html": null, "text": "Each score is the average score over three MERT runs. Bold figures mean a system is significantly better than Moses HPB at p \u2264 0.01. Moses HPB is significantly better than systems with * at p \u2264 0.01.", "num": null, "type_str": "table" }, "TABREF3": { "content": "", "html": null, "text": "Statistics of sentence length on four test sets.", "num": null, "type_str": "table" } } } }