{
"paper_id": "N12-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:34.072681Z"
},
"title": "Encouraging Consistent Translation Choices",
"authors": [
{
"first": "Ferhan",
"middle": [],
"last": "Ture",
"suffix": "",
"affiliation": {},
"email": "fture@cs.umd.edu"
},
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": "",
"affiliation": {},
"email": "oard@umd.edu"
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": "",
"affiliation": {},
"email": "resnik@umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It has long been observed that monolingual text exhibits a tendency toward \"one sense per discourse,\" and it has been argued that a related \"one translation per discourse\" constraint is operative in bilingual contexts as well. In this paper, we introduce a novel method using forced decoding to confirm the validity of this constraint, and we demonstrate that it can be exploited in order to improve machine translation quality. Three ways of incorporating such a preference into a hierarchical phrase-based MT model are proposed, and the approach where all three are combined yields the greatest improvements for both Arabic-English and Chinese-English translation experiments.",
"pdf_parse": {
"paper_id": "N12-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "It has long been observed that monolingual text exhibits a tendency toward \"one sense per discourse,\" and it has been argued that a related \"one translation per discourse\" constraint is operative in bilingual contexts as well. In this paper, we introduce a novel method using forced decoding to confirm the validity of this constraint, and we demonstrate that it can be exploited in order to improve machine translation quality. Three ways of incorporating such a preference into a hierarchical phrase-based MT model are proposed, and the approach where all three are combined yields the greatest improvements for both Arabic-English and Chinese-English translation experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical Machine Translation (MT), the state-ofthe-art approach is to translate phrases in the context of a sentence and to re-order those phrases appropriately. Intuitively, it seems as if it should also be possible to draw on information outside of a single sentence to further improve translation quality. In this paper, we challenge the conventional approach of translating each sentence independently, and argue that it can indeed also be beneficial to consider document-scale context when translating text. Motivated by the success of a \"one sense per discourse\" heuristic in Word Sense Disambiguation (WSD), we explore the potential benefit of leveraging a \"one translation per discourse\" heuristic in MT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. We begin with related work in Section 2. Next, we provide new confirmation that the hypothesized one-translationper-discourse condition does indeed often hold, based on a novel analysis using forced decoding (Section 3). We incorporate this idea into a hierarchical MT framework by adding three new documentscale features to the translation model (Section 4). We then present experimental results demonstrating solid improvements in translation quality obtained by leveraging these features, both for Arabic-English (Ar-En) and Chinese-English (Zh-En) translation (Section 5). Conclusions and future work are presented in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Exploiting discourse-level context has to date received only limited attention in MT research (e.g., (Gim\u00e9nez and M\u00e0rquez, 2007; Liu et al., 2010; Carpuat, 2009; Brown, 2008; Xiao et al., 2011) ). Exploratory analysis of reference translations by Carpuat (2009) motivates a hypothesis that MT systems might benefit from the \"one sense per discourse\" heuristic, first introduced by Gale et al. (1992) , which has proven to be effective in the context of WSD (Yarowsky, 1995) . Carpuat's approach was to do post-processing on the translation output to impose a \"one translation per discourse\" constraint where the system would otherwise have made a different choice. A manual evaluation on a sample of sentences suggested promise from the technique, which Carpuat suggested in favor of exploring more integrated approaches. Xiao et al. (2011) took this one step further and implement an approach where they identified ambiguous translations within each document, and at-tempt to fix them by replacing each ambiguity with the most frequent translation choice. Based on their error analysis, the authors indicate two shortcomings when trying to find the correct translation of a given phrase. First, frequency may not provide sufficient information to distinguish between translation candidates, which is why we take rareness into account when scoring translation candidates. Another problem is, like any other heuristic, that there may be cases where the heuristic fails and there are multiple senses per discourse. Guaranteeing consistency hurts performance in such situations, which is why we implement the heuristic as a model feature, and let the model score decide for each case.",
"cite_spans": [
{
"start": 101,
"end": 128,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 129,
"end": 146,
"text": "Liu et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 147,
"end": 161,
"text": "Carpuat, 2009;",
"ref_id": "BIBREF3"
},
{
"start": 162,
"end": 174,
"text": "Brown, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 175,
"end": 193,
"text": "Xiao et al., 2011)",
"ref_id": "BIBREF28"
},
{
"start": 247,
"end": 261,
"text": "Carpuat (2009)",
"ref_id": "BIBREF3"
},
{
"start": 381,
"end": 399,
"text": "Gale et al. (1992)",
"ref_id": "BIBREF10"
},
{
"start": 457,
"end": 473,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF29"
},
{
"start": 822,
"end": 840,
"text": "Xiao et al. (2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We are aware of a few other analyses that have shown promising results based on a similar motivation. For instance, Wasser and Dorr (2008)'s approach biases the MT system based on term statistics from relevant documents in comparable corpora. Ma et al. (2011) show that a translation memory can be used to find similar source sentences, and consecutively adapt translation choices towards consistency. Domain adaptation for MT has has also been shown to be useful in some cases (Bertoldi and Federico, 2009; Hildebrand et al., 2005; Sanchis-Trilles and Casacuberta, 2010; Tiedemann, 2010; Zhao et al., 2004) , so to the extent we consider documents to be micro-domains we might expect similar approaches to be useful at document scale. Indeed, hints that such ideas may work have been available for some time. For example, there is clear evidence that the behavior of human translators can provide evidence that is often useful for automating WSD (Diab and Resnik, 2002; Ng et al., 2003) . When coupled with the one-sense-per-discourse heuristic, this suggests that the reverse may also be true.",
"cite_spans": [
{
"start": 243,
"end": 259,
"text": "Ma et al. (2011)",
"ref_id": "BIBREF14"
},
{
"start": 478,
"end": 507,
"text": "(Bertoldi and Federico, 2009;",
"ref_id": "BIBREF0"
},
{
"start": 508,
"end": 532,
"text": "Hildebrand et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 533,
"end": 571,
"text": "Sanchis-Trilles and Casacuberta, 2010;",
"ref_id": "BIBREF24"
},
{
"start": 572,
"end": 588,
"text": "Tiedemann, 2010;",
"ref_id": "BIBREF25"
},
{
"start": 589,
"end": 607,
"text": "Zhao et al., 2004)",
"ref_id": "BIBREF30"
},
{
"start": 947,
"end": 970,
"text": "(Diab and Resnik, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 971,
"end": 987,
"text": "Ng et al., 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "It is well known that writing styles vary by genre, and in particular that the amount of vocabulary variation within a document depends to some extent on the genre (e.g., higher in poetry than in engineering writing). The degree to which authors tend to make consistent word choices in any particular genre is, therefore, an empirical question. In order to gain insight into the extent to which human translators make consistent vocabulary choices in the types of materi-als that we wish to translate (in this work, news stories), we first explore the degree of support for our one-translation-per-discourse hypothesis in the reference translations of a standard MT test collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "We used the Ar-En MT08 data set, which contains 74 newswire documents with a total of 813 sentences, each of which has four reference translations. Throughout this paper we consistently use the document (i.e., one news story) as a convenient discourse unit, although of course finer-scale or broader-scale discourse units might also be explored in future work. Moreover, throughout this paper we use the hierarchical phrase-based translation system (Hiero), which is based on a synchronous contextfree grammar (SCFG) model (Chiang, 2005) . In a SCFG, the rule [X] ||| \u03b1 ||| \u03b2 indicates that context free expansion X \u2192 \u03b1 in the source language can occur synchronously with X \u2192 \u03b2 in the target language. In this case, we call \u03b1 the left hand side (LHS) of the rule, and \u03b2 the right hand side (RHS) of the rule.",
"cite_spans": [
{
"start": 523,
"end": 537,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "To determine the extent and nature of translation consistency choices made by human translators, we randomly selected one of the four sets of reference translations (first set, with id 0) and we used forced decoding to find all possible sequences of rules that could transform the source sentence into the target sentence. In forced decoding, given a pair of source and target sentences, and a grammar consisting of learned translation rules with associated probabilities, the decoder searches all possible derivations for the one sequence of rules that is most likely (under the learned translation model) to synchronously produce the source sentence on the LHS and the target sentence on the RHS. For instance, consider the following Arabic sentence as input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "\u202b\u0631\u0627\u0628\u0637\u202c \u202b\u0628\u064a\u0646\u202c \u202b\u0627\u0644\u0627\u0639\u062a\u062f\u0627\u0621\u0627\u062a\u202c \u202b\u0627\u0644\u062b\u0644\u0627\u062b\u0629\u202c . and its uncased reference translation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "there is a link between the three attacks .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "The following four rules, which are part of the SCFG learned from the the same translation pairs, allows the decoder to find a sequence of derivations that \"translates\" the source-side Arabic sentence into the target-side reference translation. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "R 1 . [X 12 ] ||| \u202b\u0631\u0627\u0628\u0637\u202c ||| there is a link R 2 . [X 16 ] ||| [2] \u202b\u0628\u064a\u0646\u202c [1] ||| [X 12 , 1] between [X 7 , 2] R 3 . [X 7 ] ||| [1] \u202b\u0627\u0644\u0627\u0639\u062a\u062f\u0627\u0621\u0627\u062a\u202c . ||| [X 3 , 1] attacks . R 4 . [X 3 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "||| \u202b\u0627\u0644\u062b\u0644\u0627\u062b\u0629\u202c ||| the three Figure 1 illustrates how the decoder uses these rules to produce the source and target sides synchronously.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "As we repeated this procedure for all sentence pairs, we kept track of all rules that were actually used by the decoder to generate a reference English translation from the corresponding Arabic sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "Our next step was to identify cases in which the SCFG could reasonably have produced a substantially different translation. Whenever an Arabic phrase f occurs multiple times in a document, and f appears on the LHS of two or more different grammar rules in the SCFG, we count this as a single \"case\". 2 These cases correspond to unique (source phrase f , document d) pairs in which a translation process using that SCFG could have chosen to produce two or more different translations of f in d. Since the multiple appearances of f are distributed among sentences of d, each counted case may correspond to a number of sentences ranging from 1 to the number of sentences in that document. Table 1 shows a small sample of the cases (i.e., (source phrase f , document d) pairs) identified as a result of forced decoding. There were 321 such cases in our dataset and there were 672 sentences in which at least one case occurred. This is not an uncommon phenomenon; these 672 sentences comprise 83% of the test set. However, many of these cases represent either unlikely choices or inconsequential differences, so some post-processing is called for.",
"cite_spans": [],
"ref_spans": [
{
"start": 686,
"end": 693,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "Since grammar rules are typically more finegrained than is necessary for our purposes (e.g., to capture various punctuation and determiner differences that do not affect the \"sense\" of the translation), we applied a few simple heuristics to edit the source and target sides and group all such minor variations into a single \"mega-rule\" (e.g., \"how\"\u223c\", how\", \"third\"\u223c\"a third\", \"want\"\u223c\"we want\"). For this, we removed nonterminal symbols and punctuation, and considered two target phrases e and e \u2032 to be different only if edit distance(e, e \u2032 ) > max(length(e), length(e \u2032 ))/2, where the edit distance is based on character removal and insertion. For instance, the third example in Table 1 would have been considered to be translated consistently as a result of this heuristic, as opposed to the first example. We also eliminated cases in which no reasonable alternatives were available in the translation grammar (i.e., cases where the second most probable rule with the same LHS was assigned a probability below 0.1 in the grammar). Cases 4 and 5 would have been removed by this heuristic.",
"cite_spans": [],
"ref_spans": [
{
"start": 683,
"end": 690,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
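{
"text": "The grouping test above can be sketched as follows (an illustrative sketch only, not our actual implementation; it assumes phrases have already been stripped of nonterminal symbols and punctuation, and it restricts the edit distance to character insertions and deletions as described).\n\ndef indel_distance(a, b):\n    # Edit distance with insertions and deletions only (no substitutions);\n    # this equals len(a) + len(b) - 2 * LCS(a, b).\n    m, n = len(a), len(b)\n    lcs = [[0] * (n + 1) for _ in range(m + 1)]\n    for i in range(1, m + 1):\n        for j in range(1, n + 1):\n            if a[i - 1] == b[j - 1]:\n                lcs[i][j] = lcs[i - 1][j - 1] + 1\n            else:\n                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])\n    return m + n - 2 * lcs[m][n]\n\ndef substantially_different(e1, e2):\n    # Two target phrases count as different translations only if the edit\n    # distance exceeds half the length of the longer phrase.\n    return indel_distance(e1, e2) > max(len(e1), len(e2)) / 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},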
{
"text": "After this filtering and aggregation we were left with 176 (f , d) pairs in which the translation model could reasonably have selected between rules that would have produced substantially different English translations of f in d (such as cases 1-3 and 6-9). It was these 176 cases, affecting a total of 512 sentences (63% of test set) for which we then examined what forced decoding could tell us about translation consistency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "So now that we know what the human who produced the reference translations actually did (according to forced decoding), and in which cases they might reasonably have chosen to do something substantially different (according to the SCFG), we can ask in which cases the human (effectively) made a consistent choice of translation rules when encountering the same Arabic phrase in the same document. In 128 of the 176 cases, that is what they did (i.e., when the same phrase occurred multiple times in a single document and more than one translation was reasonably possible, forced decoding indicated that the human translator translated that phrase in essentially the same way). These cases affected the trans- lation of 455 sentences (56% of the test set), suggesting that if we can replicate this human behavior in a system, it might affect a nontrivial number of translation choices. These statistics also suggest, however, that there may be some risk incurred in such a process, since in 48 of the 176 cases, the human translator opted for a substantially different translation. When we closely examined these 48 instances, we found that 19 (40%) involved changing a content-bearing word (sometimes to a word with similar meaning). The remaining 29 (60%) involved function words or similar constructions. See Figures 2 and 3 for examples. We can make several observations based on this analysis. First, there does indeed seem to be evidence to support the one-translation-per-discourse heuristic, and to suggest that respecting that heuris- tic could improve translation outcomes for a substantial number of sentences. Second, even when a reference translation contains different translations of the same phrase, this may sometimes be the result of stylistic choices rather than an intent by the translator to affect the expressed meaning. If a system were try to \"fix\" such cases by enforcing consistent translation, the resulting translation might be somewhat more stilted, but perhaps not less accurate or less intelligible. Finally, sentence structure conventions or other language-specific phenomena may sometimes require the same phrase to be translated differently, so some way of encouraging consistency while still allowing the model to consider other contextual factors might be better than always imposing a hard consistency constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "1a. [X] ||| \u202b\u0633\u0645\u062d\u062a\u202c ||| had allowed 1b. [X] ||| \u202b\u0633\u0645\u062d\u062a\u202c ||| has permitted 2a. [X] ||| [X,1] \u202b\u062a\u062f\u0631\u0633\u202c ||| examining [X,1] 2b. [X] ||| [X,1] \u202b\u062a\u062f\u0631\u0633\u202c ||| is considering [X,1] 3a. [X] ||| [X,1] \u202b\u0627\u0644\u062c\u0648\u0627\u0631\u202c ||| neighbors 3b. [X] ||| [X,1] \u202b\u0627\u0644\u062c\u0648\u0627\u0631\u202c ||| neighboring countries",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory analysis",
"sec_num": "3"
},
{
"text": "To incorporate document-level features into an MT system that would otherwise operate with only sentence-level evidence, we added three supersentential \"consistency features\" to the translation model. The decoder computes scores for these features in two passes over each document; in each pass, each sentence in the document is decoded. In the first pass, the decoder keeps track of the number of occurrences of some aspects of each grammar rule and stores that information. The consistency features are disabled during this pass, and do not affect decoder scoring. In the second pass, each grammar rule is assigned as many as three consistency feature scores, each of which is based on some frozen counts from the first pass. These features are designed to introduce a bias towards translation consistency, but to leave the final decision to the decoder, which of course also has access to other features from the translation and language model. At this point we are more interested in effectiveness than efficiency, so we simply note that this approach doubles the running time of the decoder and that future work on a more elegant implementation might be productive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
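{
"text": "The two-pass scheme can be sketched as follows (a schematic illustration only, not our cdec integration; decode and rule_aspects are hypothetical stand-ins for the decoder call and for the choice of what to count, e.g. whole rules or target-side tokens).\n\nfrom collections import Counter\n\ndef translate_document(sentences, decode, rule_aspects):\n    # Pass 1: decode every sentence in the document with the consistency\n    # features disabled (counts=None) and tally aspects of the rules used\n    # in the one-best derivations.\n    counts = Counter()\n    for sent in sentences:\n        for rule in decode(sent, counts=None):\n            counts.update(rule_aspects(rule))\n    # Pass 2: re-decode with the counts frozen, so each rule can receive\n    # consistency feature scores derived from the first-pass counts.\n    return [decode(sent, counts=counts) for sent in sentences]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},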
{
"text": "We explore three ways to compute features in this section. The essential idea behind all of them is to define some feature function that increases monotonically with an increase in some count that we believe to be informative, and in which the rate of increase is damped more strongly as that count increases. Several feature functions could satisfy those broad requirements; in this section, we describe three variants, C 1 , C 2 and C 3 , and discuss the potential benefits and drawbacks of each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "C 1 : Counting rules In this variant, we count instances of the same entire grammar rule, where a rule r contains both the source phrase f and the target phrase e. During the first pass, whenever a grammar rule is chosen by the decoder for the one-best output, the count for that rule is incremented. Given a grammar rule r and the number of times r was counted in the first pass (given by N {r}), the consistency feature score is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "C 1 (r) = 2.2N {r} 1.2 + N {r} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Equation 1 is the term frequency component of the well known Okapi BM25 term weighting function, when parameters are set to the conventional values k = 1.2, b = 0. This is an increasing and concave function in which the count has a diminishing marginal effect on the feature score. It has proven to be useful in information retrieval applications, in which the goal is to model \"aboutness\" based on term counts (Robertson et al., 1994) . Because our goal is to demonstrate the potential of consistency features, it seemed reasonable to work with some simple function that has a shape like the one we desired. We leave exploration of optimal damping functions for future work.",
"cite_spans": [
{
"start": 411,
"end": 435,
"text": "(Robertson et al., 1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
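{
"text": "As a concrete illustration, Equation 1 can be computed with a one-line function (a sketch only, with k = 1.2 fixed as above; the name c1 is ours, not part of our implementation).\n\ndef c1(rule_count, k=1.2):\n    # BM25 term-frequency damping: increasing and concave in the count, so\n    # each additional occurrence of the rule contributes less to the score.\n    return (k + 1.0) * rule_count / (k + rule_count)\n\n# For example: c1(0) = 0.0, c1(1) = 1.0, and c1(5) is roughly 1.77,\n# illustrating the diminishing marginal effect of repeated rule use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},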
{
"text": "A drawback of this C 1 approach is that as we saw in Section 3, grammar rules in phrase-based MT systems tend to be somewhat more fine-grained than seems optimal for constructing a consistency feature. For instance, consider the following rules that all translate the same Arabic term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "R 1 . [X] ||| [X,1] \u202b\u0627\u062c\u0647\u0632\u0629\u202c ||| [X,1] the bodies R 2 . [X] ||| [X,1] \u202b\u0627\u062c\u0647\u0632\u0629\u202c ||| [X,1] the organs R 3 . [X] ||| [X,1] \u202b\u0627\u062c\u0647\u0632\u0629\u202c ||| [X,1] organs R 4 . [X] ||| \u202b\u0627\u062c\u0647\u0632\u0629\u202c [X,1] ||| the organs of [X,1] R 5 . [X] ||| \u202b\u0627\u062c\u0647\u0632\u0629\u202c [X,1] ||| [X,1] bodies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Based on these grammar rules, we as human readers infer that this Arabic phrase can be translated in two different ways: as organs or as bodies. An optimal application of the one-translation-per-discourse heuristic would thus group the rules based on the presence of one of those words. However, in the C 1 variant, each of these rules would be counted separately because of differences that in some cases do not directly affect the choice of content words. For instance, on the source side, the Arabic token appears to the right of the nonterminal symbol in R 1 , R 2 and R 3 , while it is to the left of the nonterminal in R 4 and R 5 . On the target side, differences are due to both nonterminal symbol position and the existence of determiners. Motivated by many examples like this, we came up with an alternative way of counting rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "C 2 : Counting target tokens To partially address this sparseness issue, variant C 2 focuses only on the target side. We extract all target tokens whenever a grammar rule is used by the decoder in a one-best derivation and increment a counter for each. Since we are mainly interested in content words (e.g. bodies, organs), we use simple pattern matching to discard nonterminal symbols and punctuation, and we ignore terms that appear in more than 50% of all documents (a convenient way of discarding common tokens such as the, or, and). This approach separates the rules in the example above into two groups: rules with bodies on the target side and rules with organs on the target side. Upon completion of the first pass, the consistency feature score for rule r is then determined by first computing a score for each unique target-side token w using:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "bm25(w) = 2.2N {w} 1.2 + N {w} log D + 1 DF (w) + 0.5",
"eq_num": "(2)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "where in this case N {w} maps tokens to their respective counts in the document, D is the total number of documents in the collection, and DF (document frequency) is the number of documents in which the token occurs. This is a fuller version of the BM25 function in which (in the information retrieval application) both high term frequencies and rare terms are rewarded. We then set the feature score for each rule r to the maximum score of any of its target-side terminal tokens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C 2 (r) = max e\u2208RHS(r) bm25(e)",
"eq_num": "(3)"
}
],
"section": "Approach",
"sec_num": "4"
},
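{
"text": "A sketch of Equations 2 and 3 (illustrative only; it assumes the first-pass token counts, document frequencies, and the number of documents D are given, and that nonterminals, punctuation, and overly common tokens have already been filtered out; function and argument names are ours).\n\nimport math\n\ndef bm25(count, df, num_docs, k=1.2):\n    # Damped within-document count times an IDF-like factor that rewards\n    # rare target-side tokens (Equation 2).\n    tf = (k + 1.0) * count / (k + count)\n    idf = math.log((num_docs + 1.0) / (df + 0.5))\n    return tf * idf\n\ndef c2(rule_target_tokens, counts, df, num_docs):\n    # Equation 3: the rule score is the bm25 weight of its most important\n    # surviving target-side token.\n    scores = [bm25(counts[w], df[w], num_docs)\n              for w in rule_target_tokens if w in counts]\n    return max(scores, default=0.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},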
{
"text": "Our motivation for choosing the maximum is that when there is more than one content word that survives the pruning of common terms, we want the score to be influenced most strongly by the most important of those terms. Since BM25 term weights can be thought of as a measure of term importance, taking the maximum is a simple expedient. Although counting only target-side tokens yields coarser granularity than counting rules, ignoring the source side of the rule risks combining target side statistics from translations of unrelated source language terms. Consider the following grammar rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "R 6 . [X] ||| <s> [X,1] \u202b\u0627\u062c\u0647\u0632\u0629\u202c ||| <s> [X,1] life support",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Since the counter for life and support will both be incremented whenever rule R 6 fires in the one-best decoding during the first pass, problems could arise if a rule with a different LHS that also contains support on the RHS were to fire in the same document, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "R 7 . [X] ||| \u202b\u0627\u0644\u0627\u0631\u0647\u0627\u202c ||| support",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "If we don't take the source side into account, both occurrences of support will be grouped together when counting and R 7 will receive extra score from the consistency feature whenever R 6 is used by the decoder. Of course, this problem will only arise when the LHS of R 6 and R 7 are present in the same document, and how often that happens (and thus how large the risk from this factor is) is an empirical question. We therefore developed a third alternative as a middle ground between the fine-grained C 1 and the coarse-grained C 2 . C 3 : Counting token translation pairs In this variant, we count each terminal (source token, target token) pair that survives pruning. Specifically, if grammar rule [X]|||f 1 f 2 ...f m |||e 1 e 2 ...e n fires, we increment the count of every pair \u27e8f i , e j \u27e9, where f i is aligned to e j . After the first pass, we compute the feature value of each observed pair, based on this count and the DF of the target-side of the pair. We chose to use only the target token in the DF computation (i.e., aggregating over all source tokens) to reduce sparsity effects. Similar to C 2 , the feature of a rule r is defined by the maximum of scores of all pairs extracted from r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C 3 (r) = max f \u2208LHS(r) e\u2208RHS(r) \u27e8f,e\u27e9 aligned bm25(\u27e8f, e\u27e9)",
"eq_num": "(4)"
}
],
"section": "Approach",
"sec_num": "4"
},
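{
"text": "Continuing the sketch above, Equation 4 differs from C 2 only in what is counted: aligned (source token, target token) pairs, with the DF term still computed from the target token alone (again illustrative only; it reuses the bm25 helper from the C 2 sketch, and the argument names are ours).\n\ndef c3(aligned_pairs, pair_counts, target_df, num_docs):\n    # aligned_pairs: the terminal (f, e) pairs extracted from rule r, with\n    # f aligned to e; pair_counts: first-pass counts of those pairs;\n    # target_df: document frequency of the target token e.\n    scores = [bm25(pair_counts[(f, e)], target_df[e], num_docs)\n              for (f, e) in aligned_pairs if (f, e) in pair_counts]\n    return max(scores, default=0.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},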
{
"text": "Since each variant has its benefits and drawbacks, we can include all three in the system and let the tuning process decide on how each should be weighted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We have evaluated the one-translation-per-discourse feature using the cdec MT system (Dyer et al., 2010) . We started by building a baseline system using standard features in cdec: lexical and phrase translation probabilities in both directions, word and arity penalty features, and a 5-gram language model. We then added each of the three consistency feature variants, along with all two-way and the one three-way combinations of them, thus yielding a total of eight systems for comparison, including the baseline. For training the Ar-En system, we used the dataset from the DARPA GALE evaluation (Olive et al., 2011) , which consists of NIST and LDC releases. The corpus was filtered to remove sentence pairs with anomalous length ratios and subsampled to yield a training set containing 3.4 million parallel sentence pairs. The Arabic text was preprocessed to produce two different segmentations (simple punctuation tokenization with orthographic normalization, and LDC's ATBv3 representation (Maamouri et al., 2008) ), represented together using cdec's lattice input format (Dyer et al., 2008) .",
"cite_spans": [
{
"start": 85,
"end": 104,
"text": "(Dyer et al., 2010)",
"ref_id": "BIBREF9"
},
{
"start": 598,
"end": 618,
"text": "(Olive et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 996,
"end": 1019,
"text": "(Maamouri et al., 2008)",
"ref_id": "BIBREF15"
},
{
"start": 1078,
"end": 1097,
"text": "(Dyer et al., 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "The Zh-En system was trained on parallel training text consisting of the non-UN portions and non-HK Hansards portions of the NIST training corpora. Chinese was automatically segmented by the Stanford segmenter (Tseng et al., 2005) , and traditional characters were simplified. After subsampling and filtering, we obtain a training corpus of 1.6 million parallel sentences.",
"cite_spans": [
{
"start": 210,
"end": 230,
"text": "(Tseng et al., 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "Both training sets were word-aligned with GIZA++ (Och and Ney, 2003) , using 5 Model 1 and 5 HMM iterations. A SCFG was then extracted from these alignments using a suffix array extractor (Chiang, 2007) . Evaluation was done with multi-reference BLEU (Papineni et al., 2002) on test sets with four references for each language pair, and MIRA was used for tuning (Crammer et al., 2006) . In our experiments, we run the first decoding phase using feature weights that are guessed heuristically based on weights from previously tuned systems. All feature weights, including the discourse feature, were then tuned together, based on the output of the second decoding phase. For Ar-En parameter tuning, we used the MT06 newswire dataset, which contains 104 documents and a total of 1,797 sentences. For testing, we used the MT08 dataset described above (74 documents, 813 sentences). For Zh-En experiments, the MT02 newswire dataset (100 documents, 878 sentences) was used for tuning, and evaluation was done on the MT06 test set (79 documents, 1,664 sentences). For both language pairs, DF values were computed from the tuning set for both tuning and evaluation experiments.",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 188,
"end": 202,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 251,
"end": 274,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF22"
},
{
"start": 362,
"end": 384,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "When we used NIST's official metric (BLEU-4) to compare our results to the official NIST evaluation (NIST, 2006; NIST, 2008) , our baseline system achieved 54.70 for Ar-En and 31.69 for Zh-En. Based on reported NIST results, our baseline would have ranked 4 th in the Zh-En MT06 evaluation, and would have outperformed all Ar-En MT08 systems. We used a slightly different IBM-BLEU metric for the rest of our evaluation. In this case, the baseline system achieved 53.07 BLEU points for Ar-En and 30.43 points for Zh-En. Among more recent papers, the best reported results were 56.87 for Ar-En MT08 (Zhao et al., 2011a) and 35.87 for Zh-En MT06 (Zhao et al., 2011b) , although many papers report BLEU scores below 53 points for Arabic (Carpuat et al., 2011) and 32 points for Chinese (Monz, 2011) . The systems that outperformed our baseline applied novel techniques, and used larger language models, as well as many nonstandard features. We argue that these novelties are complementary to our approach, and therefore do not damage the credibility of our baseline.",
"cite_spans": [
{
"start": 100,
"end": 112,
"text": "(NIST, 2006;",
"ref_id": null
},
{
"start": 113,
"end": 124,
"text": "NIST, 2008)",
"ref_id": null
},
{
"start": 597,
"end": 617,
"text": "(Zhao et al., 2011a)",
"ref_id": "BIBREF31"
},
{
"start": 643,
"end": 663,
"text": "(Zhao et al., 2011b)",
"ref_id": "BIBREF32"
},
{
"start": 733,
"end": 755,
"text": "(Carpuat et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 782,
"end": 794,
"text": "(Monz, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "Among the single-feature runs, C 3 had the best performance in Ar-En experiments, with 53.84 BLEU points, whereas C 2 yielded the best results for Zh-En with a BLEU score of 30.96. In any case, all three variants outperformed the baseline (see Table 2) . When multiple features were combined, we generally observed an increase in BLEU, suggesting that our features have usefully different error char- acteristics. The combination of all three variants, C 123 , yielded the best results, nearly 1.0 BLEU point higher than the baseline for both language pairs. Evaluation results are summarized in Table 2 . Given our focus on documents, it is natural to ask what fraction of the documents were helped or harmed by consistency features. Documentlevel BLEU scores for Arabic-to-English translations show that C 3 outperformed the baseline on a larger number of documents than any other single feature (42/74=57%), compared with 37/74 (50%) for both C 1 and C 2 . C 123 did better by this measure as well, with BLEU increasing for 43 of the documents. There were no documents where the BLEU score was exactly the same, therefore the BLEU score declined for the remaining documents. As Table 3 indicates, document-level BLEU for the Zh-En experiments shows similar results.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Table 2)",
"ref_id": "TABREF4"
},
{
"start": 596,
"end": 603,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "We can also look at our results in a more finegrained way, focusing on differences in how each system translated the same source-language phrase. For this analysis, we defined English phrases e and e \u2032 to be different if edit distance(e, e \u2032 ) > max(length(e), length(e \u2032 ))/2. By this way of counting, there are 197 unique (Arabic phrase, document) pairs for which at least one single-feature system produced translations differently from the baseline system. Together, these cases affect 553 sentences (68%) in 67 of the 74 documents, with as many as 12 differences observed in a single document. The number of such differences is even higher for Chinese-to-English translation, probably due to lower confidence from the translation model and longer documents. Table 4 shows the number of changes by each system, and the percentage of the test set affected by these changes.",
"cite_spans": [],
"ref_spans": [
{
"start": 763,
"end": 770,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "In order to gain greater insight into the effect of the consistency features, we randomly sampled 60 of the 197 cases and analyzed the influence of the change to the document BLEU score. In 25 of the sampled cases, at least one of the three systems made a change that improved the BLEU score, whereas the score was adversely affected for at least one system in 13 cases. BLEU remained unchanged in the remaining 22 cases, mostly due to the use of multiple reference translations. When we analyze the effect of each system separately, we see that C 2 was the most aggressive, making 25 changes that influenced BLEU (16 positive, 9 negative). C 1 was the most conservative, with only 13 such changes (8 positive, 5 negative). Consistent with the overall BLEU scores, C 3 evidenced the best ratio between benefit and harm, making 20 changes that affected the score (16 positive, 4 negative).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "Looking at specific cases can yield some insight into how the consistency features achieve improvements. For example, results improved when translating the phrase \u202b,\u062a\u0646\u0638\u064a\u0645\u064a\u0629\u202c (Eng. organizational, regulatory), which appears in the context of organi-zational groups that support terrorist ideology. The baseline system translated this as organizational in one case, and regulatory in another. Variants C 1 and C 2 changed this behavior, so that the translation was organizational in both cases. One of the reference translations used organizational in one case and dropped the phrase in the other, and the other three translators provided consistent translations (using organized and organizational). As a result, applying the one-translation-per-discourse heuristic improved the multi-reference BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "On the other hand, here is one of the cases where our feature hurt performance. The phrase \u8fb9\u9632 \u90e8\u961f (Eng. border/frontier troops/guards) appears in two sentences of a Chinese news story about violence along the India -Nepal border. All reference translations consistently used the word border in the translation, as it is a better choice in this context. The baseline system translated the phrase as frontier guards and border troops in the two sentences. All system variants replaced border with frontier to maintain consistency, and therefore produced worse translations, causing a decrease in BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "Examples can, however, also point up limitations in our ability to measure improvements. In one of the test documents, the Arabic phrase \u202b\u0627\u0644\u062a\u0633\u0644\u0644\u202c \u202b\u0627\u0644\u064a\u202c (Eng. sneak, infiltrate, enter without approval) appears in the context of Turkey trying to enter the European Union. This was translated by the baseline system as sneak into in one occurrence and infiltrate into in another. C 1 didn't change the output, but C 2 and C 3 translated the phrase as infiltrate into in both cases. Although all of the four reference translators were consistent within their choices, each of them chose different translations, namely worm its way, enter, sneak and sneak into. This resulted in a decrease in BLEU score for the two systems that chose infiltrate into. This case illustrates a limitation to fine-grained use of BLEU alone as a basis for analysis, since we might argue that infiltrate into is no less appropriate than sneak into in this context. In other words, some of the reductions we see in BLEU may not be actual errors but rather simply changes that take us outside of the coverage of the test set. We did not find any cases in our sample in which improvements in BLEU seemed to reward changes that adversely affected meaning. From this, we conclude that BLEU is a somewhat conservative measure when used in this way, and that the actual overall improvement in translation quality over our baseline may be somewhat more than our roughly 1.0 measured BLEU improvement would suggest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we started with a new way of looking at, and largely supporting, the \"one translation per discourse\" hypothesis using forced decoding of human reference translations. We then leveraged insights from that analysis to design the translation model consistency features, obtaining solid improvements for both Ar-En and Zh-En translation. In future work, we plan to explore additional variants. For example, we can further address sparsity by incorporating monolingual paraphrase detection on the source side, the target side or both. We can and should explore other monotonically increasing concave feature functions in addition to the Okapi BM25 function that we have found to be useful in this work, we should explore alternatives to our use of the maximum function in C 2 and C 3 , and we should consider optimizing to measures other than BLEU (e.g., METEOR) that extend the range of rewarded lexical choices by leveraging monolingual paraphrase evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In designing our features we were guided by our intuition about which kinds of consistency should be rewarded. Data can be superior to intuition, however, and our forced decoding technique might also be helpful in generating new insights that could help to guide the design of even more useful features. For example, our forced decoding clearly points to cases in which translators have chosen different structural variants when translating the same phrase, and closer examination of these cases might help us to automatically detect which kinds of structural variation can most profitably be moderated using a consistency feature. We should also note that we have only done forced decoding to date in one language pair (Ar-En), and there might be more to be learned about language-specific issues from doing the same analysis for additional language pairs. Finally, the time seems propitious to reconsider our choice of document-scale as our discourse context. Documents have much to recommend them, but much of the content that we might wish to translate (conversational speech, text chat, email threads, . . . ) doesn't present the kinds of obvious and unambiguous document boundaries that we find in MT test collections that are built from news stories. Moreover, some documents (e.g., textbooks) may be too diverse for an entire document to be the right scale for consistency. We might also be able to productively group similar documents into clusters in which the vocabulary choices are (or should be) mutually reinforcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "We therefore end where we began, with many questions to be answered. Now, however, we have somewhat different questions -not whether to encourage consistency at a super-sentential scale, but rather when and how best to do that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Since our goal was an exploratory analysis, the MT08 test set was combined with the training set in order to ensure reachability of the reference translations using the learned grammar. Proper train/dev/test splits were, of course, used for the evaluation results reported in Section 5.2 We define a phrase as any text that constitutes the entire LHS of a grammar rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by the BOLT program of the Defense Advanced Research Projects Agency, Contract No. HR0011-12-C-0015. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the view of DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain adaptation for statistical machine translation with monolingual resources",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation (StatMT '09)",
"volume": "",
"issue": "",
"pages": "182--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Bertoldi and Marcello Federico. 2009. Do- main adaptation for statistical machine translation with monolingual resources. In Proceedings of the Fourth Workshop on Statistical Machine Translation (StatMT '09), pages 182-189.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploiting document-level context for data-driven machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ralf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the the Eighth Conference of the Association for Machine Translation in the Americas (AMTA '08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf D. Brown. 2008. Exploiting document-level context for data-driven machine translation. In Proceedings of the the Eighth Conference of the Association for Ma- chine Translation in the Americas (AMTA '08).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved Arabic-to-English statistical machine translation by reordering post-verbal subjects for word alignment",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat, Yuval Marton, and Nizar Habash. 2011. Improved Arabic-to-English statistical machine trans- lation by reordering post-verbal subjects for word alignment. Machine Translation, pages 1-16.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "One translation per discourse",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, DEW '09",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat. 2009. One translation per discourse. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, DEW '09, pages 19-27.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL '05.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33:201-228.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7:551-585.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An unsupervised method for word sense tagging using parallel corpora",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL '02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of ACL '02.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generalizing Word Lattice Translation",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-HLT'08",
"volume": "",
"issue": "",
"pages": "1012--1020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing Word Lattice Translation. In Pro- ceedings of ACL-HLT'08, pages 1012-1020, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "cdec: a decoder, alignment, and learning framework for finitestate and context-free translation models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Hendra",
"middle": [],
"last": "Setiawan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Ferhan",
"middle": [],
"last": "Ture",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "ACLDemos '10",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitke- vitch, Phil Blunsom, and Philip Resnik. 2010. cdec: a decoder, alignment, and learning framework for finite- state and context-free translation models. In ACLDe- mos '10, pages 7-12.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "One sense per discourse",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the workshop on Speech and Natural Language, HLT '91",
"volume": "",
"issue": "",
"pages": "233--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Pro- ceedings of the workshop on Speech and Natural Lan- guage, HLT '91, pages 233-237.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Context-aware discriminative phrase selection for statistical machine translation",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of StatMT '07",
"volume": "",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e0rquez. 2007. Context-aware discriminative phrase selection for statistical machine translation. In Proceedings of StatMT '07, pages 159-166.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adaptation of the translation model for statistical machine translation based on information retrieval",
"authors": [
{
"first": "",
"middle": [],
"last": "As Hildebrand",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of The European Association for Machine Translation (EAMT '05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AS Hildebrand, M Eck, S Vogel, and Alex Waibel. 2005. Adaptation of the translation model for statistical ma- chine translation based on information retrieval. In Proceedings of The European Association for Machine Translation (EAMT '05).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving statistical machine translation with monolingual collocation",
"authors": [
{
"first": "Zhanyi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhanyi Liu, Haifeng Wang, Hua Wu, and Sheng Li. 2010. Improving statistical machine translation with mono- lingual collocation. In ACL '10.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Consistent translation using discriminative learning: a translation memory-inspired approach",
"authors": [
{
"first": "Yanjun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT'11",
"volume": "",
"issue": "",
"pages": "1239--1248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanjun Ma, Yifan He, Andy Way, and Josef van Gen- abith. 2011. Consistent translation using discrimina- tive learning: a translation memory-inspired approach. In Proceedings of ACL-HLT'11, pages 1239-1248.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Enhancing the Arabic Treebank: A Collaborative Effort toward New Annotation Guidelines",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, and Seth Kulick. 2008. Enhancing the Arabic Treebank: A Collaborative Ef- fort toward New Annotation Guidelines. In LREC '08.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical Machine Translation with Local Language Models",
"authors": [
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christof Monz. 2011. Statistical Machine Translation with Local Language Models. In EMNLP '11.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting parallel texts for word sense disambiguation: an empirical study",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL '03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Ex- ploiting parallel texts for word sense disambiguation: an empirical study. In ACL '03.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och and Hermann Ney. 2003. A systematic com- parison of various statistical alignment models. Com- putational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Olive",
"suffix": ""
},
{
"first": "Caitlin",
"middle": [],
"last": "Christianson",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mccary",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Olive, Caitlin Christianson, and John McCary. 2011. Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer Publishing Com- pany, Inc., 1st edition.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL '02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In ACL '02.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Okapi at TREC-3",
"authors": [
{
"first": "Stephen",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Micheline",
"middle": [],
"last": "Hancock-Beaulieu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Gatford",
"suffix": ""
}
],
"year": 1994,
"venue": "TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E. Robertson, Steve Walker, Susan Jones, Miche- line Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. In TREC.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bayesian adaptation for statistical machine translation",
"authors": [
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Sanchis-Trilles",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the workshop on Structural and Syntactic Pattern Recognition (SSPR '10)",
"volume": "",
"issue": "",
"pages": "620--629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Germ\u00e1n Sanchis-Trilles and Francisco Casacuberta. 2010. Bayesian adaptation for statistical machine translation. In Proceedings of the workshop on Struc- tural and Syntactic Pattern Recognition (SSPR '10), pages 620-629.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Context adaptation in statistical machine translation using models with exponentially decaying cache",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the workshop on Domain Adaptation for Natural Language Processing (DANLP '10)",
"volume": "",
"issue": "",
"pages": "8--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2010. Context adaptation in statistical machine translation using models with exponentially decaying cache. In Proceedings of the workshop on Domain Adaptation for Natural Language Processing (DANLP '10), pages 8-15.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A conditional random field word segmenter",
"authors": [
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Pi-Chuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huihsin Tseng, Pi-Chuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A con- ditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Machine translation with cross-lingual information retrieval based document relevance scores",
"authors": [
{
"first": "Michael",
"middle": [
"M"
],
"last": "Wasser",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael M. Wasser and Bonnie Dorr. 2008. Ma- chine translation with cross-lingual information re- trieval based document relevance scores. Unpublished.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Document-level consistency verification in machine translation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Translation Summit XIII (MTS'11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Xiao, Jingbo Zhu, Shujie Yao, and Hao Zhang. 2011. Document-level consistency verification in ma- chine translation. In Machine Translation Summit XIII (MTS'11).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "ACL '95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In ACL '95.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Language model adaptation for statistical machine translation with structured query models",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Lan- guage model adaptation for statistical machine transla- tion with structured query models. In COLING '04.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning to transform and select elementary trees for improved syntax-based machine translations",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Young-Suk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL-HLT '11",
"volume": "",
"issue": "",
"pages": "846--855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao, Young-Suk Lee, Xiaoqiang Luo, and Liu Li. 2011a. Learning to transform and select elementary trees for improved syntax-based machine translations. In ACL-HLT '11, pages 846-855.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Language model weight adaptation based on cross-entropy for statistical machine translation",
"authors": [
{
"first": "Yinggong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yangsheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "Pacific Asia Conference on Language, Information and Computation (PACLIC '11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinggong Zhao, Yangsheng Ji, Ning Xi, Shujian Huang, and Jiajun Chen. 2011b. Language model weight adaptation based on cross-entropy for statistical ma- chine translation. In Pacific Asia Conference on Lan- guage, Information and Computation (PACLIC '11).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Illustration of forced decoding.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Examples of differences in lexical choice for content-bearing words within the same document.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Examples of differences in lexical choice for other types of lexical units within the same document.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"text": "4a. [X] ||| \u202b\u0641\u064a\u202c ||| on 4b. [X] ||| \u202b\u0641\u064a\u202c ||| in 4c. [X] ||| \u202b\u0641\u064a\u202c ||| 's 5a. [X] ||| \u202b\u0642\u062f\u202c ||| had 5b. [X] ||| \u202b\u0642\u062f\u202c ||| was",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Method # documents Ar-En Zh-En Docs 74 79 C 1 37 30 C 2 37 35 C 3 42 36 C 123 43 41</td></tr></table>",
"num": null,
"text": "Evaluation results: BLEU scores with four references for Ar-En and Zh-En experiments.",
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Doc-level analysis: Number of documents where each variant outperforms baseline.",
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Effect of applying variants of the consistency feature (Any=C 1 or C 2 or C 3 ).",
"type_str": "table"
}
}
}
}