{ "paper_id": "D13-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:41:58.264463Z" }, "title": "Anchor Graph: Global Reordering Contexts for Statistical Machine Translation", "authors": [ { "first": "Hendra", "middle": [], "last": "Setiawan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Reordering poses one of the greatest challenges in Statistical Machine Translation research as the key contextual information may well be beyond the confine of translation units. We present the \"Anchor Graph\" (AG) model where we use a graph structure to model global contextual information that is crucial for reordering. The key ingredient of our AG model is the edges that capture the relationship between the reordering around a set of selected translation units, which we refer to as anchors. As the edges link anchors that may span multiple translation units at decoding time, our AG model effectively encodes global contextual information that is previously absent. We integrate our proposed model into a state-of-the-art translation system and demonstrate the efficacy of our proposal in a largescale Chinese-to-English translation task. * This work was done when the authors were with IBM. 1 We define translation units as phrases in phrase-based SMT or as translation rules in syntax-based SMT.", "pdf_parse": { "paper_id": "D13-1048", "_pdf_hash": "", "abstract": [ { "text": "Reordering poses one of the greatest challenges in Statistical Machine Translation research as the key contextual information may well be beyond the confine of translation units. We present the \"Anchor Graph\" (AG) model where we use a graph structure to model global contextual information that is crucial for reordering. 
The key ingredient of our AG model is the edges that capture the relationship between the reordering around a set of selected translation units, which we refer to as anchors. As the edges link anchors that may span multiple translation units at decoding time, our AG model effectively encodes global contextual information that was previously absent. We integrate our proposed model into a state-of-the-art translation system and demonstrate the efficacy of our proposal in a large-scale Chinese-to-English translation task. * This work was done when the authors were with IBM. 1 We define translation units as phrases in phrase-based SMT or as translation rules in syntax-based SMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Reordering remains one of the greatest challenges in Statistical Machine Translation (SMT) research as the key contextual information may span across multiple translation units. 1 Unfortunately, previous approaches fall short in capturing such cross-unit contextual information that could be critical in reordering. For example, state-of-the-art translation models, such as Hiero (Chiang, 2005) or Moses (Koehn et al., 2007), are good at capturing local reordering within the confines of a translation unit, but their formulation is approximately a simple unigram model over derivations (a derivation being a sequence of translation unit applications) with some aid from target language models. 
Moving to a higher order formulation (say to a bigram model) is highly impractical for several reasons: 1) it has to deal with a severe sparsity issue as the size of the unigram model is already huge; and 2) it has to deal with a spurious ambiguity issue, which allows multiple derivations of a sentence pair to have radically different model scores.", "cite_spans": [ { "start": 178, "end": 179, "text": "1", "ref_id": null }, { "start": 380, "end": 394, "text": "(Chiang, 2005)", "ref_id": "BIBREF6" }, { "start": 404, "end": 424, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we develop the \"Anchor Graph\" (AG) model, where we use a graph structure to capture global contexts that are crucial for translation. To circumvent the sparsity issue, we design our model to rely only on contexts from a set of selected translation units, particularly those that appear frequently with important reordering patterns. We refer to the units in this special set as anchors; they act as vertices in the graph. To address the spurious ambiguity issue, we insist on computing the model score for every anchor in the derivation, including those that appear inside larger translation units, so that our AG model gives the same score to derivations that share the same reordering pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the AG model, the actual reordering is modeled by the edges, or more specifically, by the edges' labels, where different reorderings around the anchors correspond to different labels. As detailed later, we consider two distinct sets of labels, namely dominance and precedence, reflecting the two dominant views about reordering in the literature, i.e. 
the first views reordering as a linear operation over a sequence, and the second views reordering as a recursive operation over nodes in a tree structure. The former is prevalent in the phrase-based context, while the latter is prevalent in the hierarchical phrase-based and syntax-based contexts. More concretely, dominance looks at the anchors' relative positions in the translated sentence, while precedence looks at the anchors' relative positions in a latent structure, induced via a novel synchronous grammar: Anchor-centric, Lexicalized Synchronous Grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From these two sets of labels, we develop two probabilistic models, namely the dominance and the orientation models. As the edges of the AG link pairs of anchors that may appear in multiple translation units, our AG models are able to capture high-order contextual information that was previously absent. Furthermore, the parameters of these models are estimated in an unsupervised manner, without linguistic supervision. More importantly, our experimental results demonstrate the efficacy of our proposed AG-based models, which we integrate into a state-of-the-art syntax-based translation system, in a large-scale Chinese-to-English translation task. We would like to emphasize that although we use a syntax-based translation system in our experiments, in principle, our approach is applicable to other translation models as it is agnostic to the translation units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Formally, an AG consists of {A, L}, where A is a set of vertices that correspond to anchors, while L is a set of labeled edges, each linking a pair of anchors. In principle, our AG model is part of a translation model that focuses on the reordering within the source sentence F and its translation E. 
Thus, we start by first introducing A into a translation model (either a word-based, phrase-based or syntax-based model), followed by L. Given an F, A is essentially a subset of non-overlapping (word or phrase) units that make up F. As the information related to A is not observed, we introduce A as a latent variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "Let P(E, \u223c | F) be a translation model where \u223c corresponds to the alignments between units in F and E. 2 We introduce A into a translation model, 2 Alignment (\u223c) represents an existing latent variable. Depending on the translation units, it can be defined at different levels, i.e. word, phrase or hierarchical phrase. As, during translation, we are interested in the anchors that appear inside larger translation units, we set \u223c at the word level; this information can be induced for (hierarchical) phrase units by either keeping the word alignment from the training data inside the units or inferring it via lexical translation probabilities. We use the former.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(E, \\sim \\mid F) = \\sum_{A} P(E, \\sim, A \\mid F) \\quad (1) \\qquad P(E, \\sim, A \\mid F) = P(E, \\sim \\mid A, F)\\,P(A)", "eq_num": "(2)" } ], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "As there can be many possible subsets of F and summing over all possible A is intractable, we make the following approximation for P(A) such that we only need to consider one particular A*: P(A) = \u03b4(A = A*), which returns 1 only for A = A*, and 0 otherwise. 
The exact definition of the heuristic will be described in Section 7, but in short, we equate A* with units that appear frequently with important reordering patterns in the training data. Given an A*, we then introduce the edges of the AG (L) into the equation as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(E, \\sim \\mid A^*, F) = P(E, \\sim, L \\mid A^*, F)", "eq_num": "(3)" } ], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "Note that L is also a latent variable, but its values are derived deterministically from (F, E, \u223c) and A*; thus no extra summation is present in Eq. 3. Then, we further simplify Eq. 3 by factorizing it with respect to each individual edge, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anchor Graph Model", "sec_num": "2" }, { "text": "P (E, \u223c, L|A * , F ) \u2248 \u2200am,an\u2208A * m", "text": "The application of the shift-reduce parsing algorithm, which corresponds to the following derivation", "num": null } } } }
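The factorized model above (Eqs. 1-3: L derived deterministically from (F, E, ∼) and A*, then scored as a product over anchor-pair edges) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the two-way monotone/swapped labeling is a simplification of the paper's richer dominance and precedence label sets.

```python
from itertools import combinations

def target_positions(anchor, alignment):
    """Target-side indices aligned to any source index in the
    half-open source span anchor = (lo, hi)."""
    lo, hi = anchor
    return [t for (s, t) in alignment if lo <= s < hi]

def edge_label(a_m, a_n, alignment):
    """Derive the edge label deterministically from the alignment:
    'monotone' if a_m's translation wholly precedes a_n's on the
    target side, else 'swapped'. (Simplified stand-in for the
    paper's dominance/precedence labels.)"""
    tm = target_positions(a_m, alignment)
    tn = target_positions(a_n, alignment)
    if not tm or not tn:
        return 'unaligned'
    return 'monotone' if max(tm) < min(tn) else 'swapped'

def ag_score(anchors, alignment, label_prob, floor=1e-6):
    """Factorized Eq. 3: score the derivation as a product over
    anchor pairs (m < n) of the probability of each edge's label.
    label_prob is a hypothetical lookup of label probabilities."""
    score = 1.0
    for a_m, a_n in combinations(sorted(anchors), 2):
        score *= label_prob.get(edge_label(a_m, a_n, alignment), floor)
    return score
```

For instance, with two anchors covering source words 0 and 1 whose translations swap order (alignment [(0, 1), (1, 0)]), the single edge receives the 'swapped' label and the score is just that label's probability, regardless of which larger translation units the anchors fall inside, which is how the model stays invariant across derivations sharing a reordering pattern.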