{ "paper_id": "P16-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:00:13.539529Z" }, "title": "Models and Inference for Prefix-Constrained Machine Translation", "authors": [ { "first": "Joern", "middle": [], "last": "Wuebker", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Spence", "middle": [], "last": "Green", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sa\u0161a", "middle": [], "last": "Hasan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We apply phrase-based and neural models to a core task in interactive machine translation: suggesting how to complete a partial translation. For the phrase-based system, we demonstrate improvements in suggestion quality using novel objective functions, learning techniques, and inference algorithms tailored to this task. Our contributions include new tunable metrics, an improved beam search strategy, an n-best extraction method that increases suggestion diversity, and a tuning procedure for a hierarchical joint model of alignment and translation. The combination of these techniques improves next-word suggestion accuracy dramatically from 28.5% to 41.2% in a large-scale English-German experiment. Our recurrent neural translation system increases accuracy yet further to 53.0%, but inference is two orders of magnitude slower. Manual error analysis shows the strengths and weaknesses of both approaches.", "pdf_parse": { "paper_id": "P16-1007", "_pdf_hash": "", "abstract": [ { "text": "We apply phrase-based and neural models to a core task in interactive machine translation: suggesting how to complete a partial translation. 
For the phrase-based system, we demonstrate improvements in suggestion quality using novel objective functions, learning techniques, and inference algorithms tailored to this task. Our contributions include new tunable metrics, an improved beam search strategy, an n-best extraction method that increases suggestion diversity, and a tuning procedure for a hierarchical joint model of alignment and translation. The combination of these techniques improves next-word suggestion accuracy dramatically from 28.5% to 41.2% in a large-scale English-German experiment. Our recurrent neural translation system increases accuracy yet further to 53.0%, but inference is two orders of magnitude slower. Manual error analysis shows the strengths and weaknesses of both approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A core prediction task in interactive machine translation (MT) is to complete a partial translation (Ortiz-Mart\u00ednez et al., 2009; Koehn et al., 2014) . Sentence completion enables interfaces that are richer than basic post-editing of MT output. For example, the translator can receive updated suggestions after each word typed (Langlais et al., 2000) . However, we show that completing partial translations by na\u00efve constrained decoding-the standard in prior work-yields poor suggestion quality. We describe new phrase-based objective functions, learning techniques, and inference algorithms for the sentence completion task. 
1 We then compare this improved phrase-based system to a state-of-the-art recurrent neural translation system in large-scale English-German experiments.", "cite_spans": [ { "start": 100, "end": 129, "text": "(Ortiz-Mart\u00ednez et al., 2009;", "ref_id": "BIBREF27" }, { "start": 130, "end": 149, "text": "Koehn et al., 2014)", "ref_id": "BIBREF18" }, { "start": 327, "end": 350, "text": "(Langlais et al., 2000)", "ref_id": "BIBREF19" }, { "start": 626, "end": 627, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A system for completing partial translations takes as input a source sentence and a prefix of the target sentence. It predicts a suffix: a sequence of tokens that extends the prefix to form a full sentence. In an interactive setting, the first words of the suffix are critical; these words are the focus of the user's attention and can typically be appended to the translation with a single keystroke. We introduce a tuning metric that scores correctness of the whole suffix, but is particularly sensitive to these first words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Phrase-based inference for this task involves aligning the prefix to the source, then generating the suffix by translating the unaligned words. We describe a beam search strategy and a hierarchical joint model of alignment and translation that together improve suggestions dramatically. For English-German news, next-word accuracy increases from 28.5% to 41.2%. An interactive MT system could also display multiple suggestions to the user. We describe an algorithm for efficiently finding the n-best next words directly following a prefix and their corresponding best suffixes.
Our experiments show that this approach to n-best list extraction, combined with our other improvements, increased next-word suggestion accuracy of 10-best lists from 33.4% to 55.5%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We also train a recurrent neural translation system to maximize the conditional likelihood of the next word following a translation prefix, which is both a standard training objective in neural translation and an ideal fit for our task. This neural system provides even more accurate predictions than our improved phrase-based system. However, inference is two orders of magnitude slower, which is problematic for an interactive setting. We conclude with a manual error analysis that reveals the strengths and weaknesses of both the phrase-based and neural approaches to suffix prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let F and E denote the set of all source and target language strings, respectively. Given a source sentence f \u2208 F and target prefix e_p \u2208 E, a predicted suffix e_s \u2208 E can be evaluated by comparing the full sentence e = e_p e_s to a reference e*. Let e*_s denote the suffix of the reference that follows e_p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Suffix Prediction", "sec_num": "2" }, { "text": "We define three metrics below that score translations by the characteristics that are most relevant in an interactive setting: the accuracy of the first words of the suffix and the overall quality of the suffix.
Each metric takes example triples (f, e p , e * ) produced during an interactive MT session in which e p was generated in the process of constructing e * .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Suffix Prediction", "sec_num": "2" }, { "text": "A simulated corpus of examples can be produced from a parallel corpus of (f, e * ) pairs by selecting prefixes of each e * . An exhaustive simulation selects all possible prefixes, while a sampled simulation selects only k prefixes uniformly at random for each e * . Computing metrics for exhaustive simulations is expensive because it requires performing suffix prediction inference for every prefix: |e * | times for each reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Suffix Prediction", "sec_num": "2" }, { "text": "Word Prediction Accuracy (WPA) or nextword accuracy (Koehn et al., 2014) is 1 if the first word of the predicted suffix e s is also the first word of reference suffix e * s , and 0 otherwise. Averaging over examples gives the frequency that the word following the prefix was predicted correctly. In a sampled simulation, all reference words that follow the first word of a sampled suffix are ignored by the metric, so most reference information is unused.", "cite_spans": [ { "start": 52, "end": 72, "text": "(Koehn et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Suffix Prediction", "sec_num": "2" }, { "text": "is the maximum number of contiguous words at the start of the predicted suffix that match the reference. Like WPA, this metric is 0 if the first word of e s is not also the first word of e * s . In a sampled simulation, all reference words that follow the first mis-predicted word in the sampled suffix are ignored. 
While it is possible that the metric will require the full reference suffix, most reference information is unused in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of Predicted Words (#prd)", "sec_num": null }, { "text": "(pxB): B (Papineni et al., 2002) is computed from the geometric mean of clipped n-gram precisions prec_n(\u2022, \u2022) and a brevity penalty BP(\u2022, \u2022). Given a sequence of references E* = e*_1, . . . , e*_t and corresponding predictions E = e_1, . . . , e_t,", "cite_spans": [ { "start": 10, "end": 33, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Prefix-B", "sec_num": null }, { "text": "B(E, E*) = BP(E, E*) \u00b7 ( \u220f_{n=1}^{4} prec_n(E, E*) )^{1/4}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B", "sec_num": null }, { "text": "Ortiz-Mart\u00ednez et al. (2010) use BLEU directly for training an interactive system, but we propose a variant that only scores the predicted suffix and not the input prefix. The pxB metric computes B(\u00ca, \u00ca*) for the following constructed sequences \u00ca and \u00ca*:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B", "sec_num": null }, { "text": "\u2022 For each (f, e_p, e*) and suffix prediction e_s, \u00ca includes the full sentence e = e_p e_s. \u2022 For each (f, e_p, e*), \u00ca* is a masked copy of e* in which all prefix words that do not match any word in e are replaced by null tokens. This construction maintains the original computation of the brevity penalty, but does not include the prefix in the precision calculations.
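The two first-word metrics above, WPA and #prd, are straightforward to compute per example. The sketch below is our own minimal illustration (function names are ours; suffixes are lists of tokens):

```python
def wpa(pred_suffix, ref_suffix):
    """Word Prediction Accuracy: 1 if the first predicted word
    matches the first word of the reference suffix, else 0."""
    if not pred_suffix or not ref_suffix:
        return 0
    return int(pred_suffix[0] == ref_suffix[0])


def num_predicted(pred_suffix, ref_suffix):
    """#prd: number of contiguous matching words at the start
    of the predicted suffix."""
    n = 0
    for p, r in zip(pred_suffix, ref_suffix):
        if p != r:
            break
        n += 1
    return n
```

Averaging `wpa` over all (f, e_p, e*) triples of a simulation gives the corpus-level WPA.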
Unlike the two previous metrics, the pxB metric uses all available reference information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B", "sec_num": null }, { "text": "In order to account for boundary conditions, the reference e* is masked by the prefix e_p as follows: we replace each of the first |e_p| \u2212 3 words with a null token e_null, unless the word also appears in the suffix e*_s. Masking retains the last three words of the prefix so that the first words after the prefix can contribute to the precision of all n-grams that overlap with the prefix, up to n = 4. Words that also appear in the suffix are retained so that their correct prediction in the suffix can contribute to those precisions, which would otherwise be clipped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B", "sec_num": null }, { "text": "All of these metrics can be used as the tuning objective of a phrase-based machine translation system. Tuning toward a sampled simulation that includes one or two prefixes per reference is much faster than using an exhaustive set of prefixes. A linear combination of these metrics can be used to trade off the relative importance of the full suffix and the words immediately following the prefix. With a combined metric, learning can focus on these words while using all available information in the references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loss Functions for Learning", "sec_num": "2.1" }, { "text": "In addition to these metrics, suffix prediction can be evaluated by the widely used keystroke ratio (KSR) metric. This ratio assumes that any number of characters from the beginning of the suggested suffix can be appended to the user prefix using a single keystroke. It computes the ratio of keystrokes required to enter the reference interactively to the character count of the reference.
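The keystroke counting just described can be simulated at the character level roughly as follows. This is a sketch under our reading of the accept semantics (one keystroke accepts any matching leading characters of the suggestion, otherwise the user types one character); `suggest` is a hypothetical callback returning the system's proposed completion for the current prefix:

```python
def keystroke_ratio(reference, suggest):
    """Simulate interactive typing of `reference` (a string) and
    return keystrokes used divided by reference length."""
    prefix, keystrokes = "", 0
    while prefix != reference:
        completion = suggest(prefix)
        rest = reference[len(prefix):]
        # longest shared leading substring of suggestion and remaining reference
        match = 0
        while match < min(len(completion), len(rest)) and completion[match] == rest[match]:
            match += 1
        if match > 0:
            prefix += rest[:match]   # accept with a single keystroke
        else:
            prefix += rest[0]        # type one character manually
        keystrokes += 1
    return keystrokes / len(reference)
```

A perfect suggester yields a ratio of 1/|reference| (one accept keystroke), while a suggester that never matches yields 1.0.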
Our MT architecture does not permit tuning to KSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keystroke Ratio (KSR)", "sec_num": "2.2" }, { "text": "Other methods of quantifying effort in an interactive MT system are more appropriate for user studies than for direct evaluation of MT predictions. For example, measuring pupil dilation, pause duration and frequency (Schilperoord, 1996) , mouse-action ratio (Sanchis-Trilles et al., 2008) , or source difficulty (Bernth and McCord, 2000) would certainly be relevant for evaluating a full interactive system, but are beyond the scope of this work.", "cite_spans": [ { "start": 216, "end": 236, "text": "(Schilperoord, 1996)", "ref_id": "BIBREF31" }, { "start": 258, "end": 288, "text": "(Sanchis-Trilles et al., 2008)", "ref_id": "BIBREF30" }, { "start": 312, "end": 337, "text": "(Bernth and McCord, 2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Keystroke Ratio (KSR)", "sec_num": "2.2" }, { "text": "In the log-linear approach to phrase-based translation (Och and Ney, 2004) , the distribution of translations e \u2208 E given a source sentence f \u2208 F is:", "cite_spans": [ { "start": 55, "end": 74, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "p(e|f ; w) = \u03a3_{r: src(r)=f, tgt(r)=e} (1/Z(f)) exp(w \u00b7 \u03c6(r)) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "Here, r is a phrasal derivation with source and target projections src(r) and tgt(r), w \u2208 R^d is the vector of model parameters, \u03c6(\u2022) \u2208 R^d is a feature map, and Z(f) is an appropriate normalizing constant.
For the same model, the distribution over suffixes e_s \u2208 E must also condition on a prefix e_p \u2208 E:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "p(e_s | e_p, f ; w) = \u03a3_{r: src(r)=f, tgt(r)=e_p e_s} (1/Z(f)) exp(w \u00b7 \u03c6(r)) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "In phrase-based decoding, the best scoring derivation r given a source sentence f and weights w is found efficiently by beam search, with one beam for every count of source words covered by a partial derivation (known as the source coverage cardinality). To predict a suffix conditioned on a prefix by constrained decoding, Barrachina et al. (2008) and Ortiz-Mart\u00ednez et al. (2009) modify the beam search by discarding hypotheses (partial derivations) that do not match the prefix e_p.", "cite_spans": [ { "start": 324, "end": 348, "text": "Barrachina et al. (2008)", "ref_id": "BIBREF0" }, { "start": 353, "end": 381, "text": "Ortiz-Mart\u00ednez et al. (2009)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "We propose target beam search, a two-step inference procedure. The first step is to produce a phrase-based alignment between the target prefix and a subset of the source words. The target is aligned left-to-right by appending aligned phrase pairs. However, each beam is associated with a target word count, rather than a source word count. Therefore, each beam contains hypotheses for a fixed prefix of target words. Phrasal translation candidates are bundled and sorted with respect to each target phrase rather than each source phrase.
Crucially, the source distortion limit is not enforced during alignment, so that long-range reorderings can be analyzed correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "The second step generates the suffix using standard beam search. 2 Once the target prefix is completely aligned, each hypothesis from the final target beam is copied to an appropriate source beam. Search starts with the lowest-count source beam that contains at least one hypothesis. Here, we re-instate the distortion limit with the following modification to avoid search failures: The decoder can always translate any source position before the last source position that was covered in the alignment phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Inference", "sec_num": "3" }, { "text": "The phrase pairs available during decoding may not be sufficient to align the target prefix to the source. Pre-compiled phrase tables (Koehn et al., 2003) are typically pruned, and dynamic phrase tables (Levenberg et al., 2010) require sampling for efficient lookup.", "cite_spans": [ { "start": 134, "end": 154, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF17" }, { "start": 203, "end": 227, "text": "(Levenberg et al., 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "To improve alignment coverage, we include additional synthetic phrases extracted from word-level alignments between the source sentence and target prefix inferred using unpruned lexical statistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "We first find the intersection of two directional word alignments. The directional alignments are obtained similar to IBM Model 2 (Brown et al., 1993) by aligning the most likely source word to each target word. 
Given a source sequence f = f_1 . . . f_|f| and a target sequence e = e_1 . . . e_|e|, we define the alignment a = a_1 . . . a_|e|, where a_i = j means that e_i is aligned to f_j. The likelihood is modeled by a single-word lexicon probability that is provided by our translation model and an alignment probability modeled as a Poisson distribution Poisson(k, \u03bb) in the distance to the diagonal.", "cite_spans": [ { "start": 130, "end": 150, "text": "(Brown et al., 1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_i = argmax_{j \u2208 {1,...,|f|}} p(a_i = j | f, e) (3); p(a_i = j | f, e) = p(e_i | f_j) \u00b7 p(a_i | j) (4); p(e_i | f_j) = cnt(e_i, f_j) / cnt(f_j) (5); p(a_i | j) = Poisson(|a_i \u2212 j|, 1.0)", "eq_num": "(6)" } ], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "Here, cnt(e_i, f_j) is the count of all word alignments between e_i and f_j in the training bitext, and cnt(f_j) the monolingual occurrence count of f_j. We perform standard phrase extraction (Och et al., 1999; Koehn et al., 2003) to obtain our synthetic phrases, whose translation probabilities are again estimated based on the single-word probabilities p(e_i | f_j) from our translation model.", "cite_spans": [ { "start": 196, "end": 214, "text": "(Och et al., 1999;", "ref_id": "BIBREF25" }, { "start": 215, "end": 234, "text": "Koehn et al., 2003)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "
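One direction of this greedy Model-2-style alignment can be sketched as follows. This is our own illustration, not the authors' code: the Poisson penalty is applied to the distance to the diagonal as the prose describes (the printed form of Eq. 6 is ambiguous on this point), and `lex_prob` stands in for the count-based lexicon probability of Eq. 5:

```python
import math


def poisson_pmf(k, lam):
    """Poisson probability mass function P(K = k) with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)


def greedy_align(src, tgt, lex_prob):
    """Align each target word to its most likely source word.

    lex_prob(e, f) is the single-word lexicon probability p(e|f).
    Returns a list a with a[i] = j meaning tgt[i] aligns to src[j].
    """
    alignment = []
    for i, e in enumerate(tgt):
        diag = i * len(src) / len(tgt)  # diagonal position for target index i
        best_j = max(
            range(len(src)),
            key=lambda j: lex_prob(e, src[j]) * poisson_pmf(round(abs(diag - j)), 1.0),
        )
        alignment.append(best_j)
    return alignment
```

Running this in both directions (swapping source and target) and intersecting the resulting alignment points yields the word alignments from which the synthetic phrases are extracted.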
Given a synthetic phrase pair (e, f), the phrase translation probability is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(e|f) = \u220f_{1\u2264i\u2264|e|} max_{1\u2264j\u2264|f|} p(e_i | f_j)", "eq_num": "(7)" } ], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "Additionally, we introduce three indicator features that count the number of synthetic phrase pairs, source words and target words, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic Phrase Pairs", "sec_num": "3.1" }, { "text": "In order to tune the model for suffix prediction, we optimize the weights w in Equation 2 to maximize the metrics introduced in Section 2. Model tuning is performed with AdaGrad (Duchi et al., 2011) , an online subgradient method. It features an adaptive learning rate and comes with good theoretical guarantees. See Green et al. (2013) for the details of applying AdaGrad to phrase-based translation. The same model scores both alignment of the prefix and translation of the suffix. However, different feature weights may be appropriate for scoring each step of the inference process. In order to learn different weights for alignment and translation within a unified joint model, we apply the hierarchical adaptation method of Wuebker et al. (2015) , which is based on frustratingly easy domain adaptation (FEDA) (Daum\u00e9 III, 2007) . We define three sub-segment domains:", "cite_spans": [ { "start": 178, "end": 198, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF8" }, { "start": 317, "end": 336, "text": "Green et al. 
(2013)", "ref_id": "BIBREF13" }, { "start": 729, "end": 750, "text": "Wuebker et al. (2015)", "ref_id": "BIBREF34" }, { "start": 815, "end": 832, "text": "(Daum\u00e9 III, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": ", and . The domain contains all phrases that are used for aligning the prefix with the source sentence. Phrases that span both prefix and suffix additionally belong to the domain. Finally, once the prefix has been completely covered, the domain applies to all phrases that are used to translate the remainder of the sentence. The domain spans the entire phrasal derivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": "Formally, given a set of domains D = { , , , }, each feature is replicated for each domain d \u2208 D. These replicas can be interpreted as domain-specific \"offsets\" to the baseline weights. For an original feature vector \u03c6 with a set of domains D \u2286 D, the replicated feature vector contains |D| copies f d of each feature f \u2208 \u03c6, one for each d \u2208 D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f d = f, d \u2208 D 0, otherwise.", "eq_num": "(8)" } ], "section": "Tuning", "sec_num": "4" }, { "text": "The weights of the replicated feature space are initialized with 0 except for the domain, where we copy the baseline weights w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w d = w, d is 0, otherwise.", "eq_num": "(9)" } ], "section": "Tuning", "sec_num": "4" }, { "text": "All our phrase-based systems are first tuned 
without prefixes or domains to maximize B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": ". When tuning for suffix prediction, we keep these baseline weights w fixed to maintain baseline translation quality and only update the weights corresponding to the , and domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "4" }, { "text": "Consider the interactive MT application setting in which the user is presented with an autocomplete list of alternative translations (Langlais et al., 2000) . The user query may be satisfied if the machine predicts the correct completion in its top-n output. However, it is well-known that n-best lists are poor approximations of MT structured output spaces (Macherey et al., 2008; Gimpel et al., 2013) . Even very large values of n can fail to produce alternatives that differ in the first words of the suffix, which limits n-best KSR and WPA improvements at test time. For tuning, WPA is often zero for every item on the n-best list, which prevents learning. Fortunately, the prefix can help efficiently enumerate diverse next-word alternatives. If we can find all edges in the decoding lattice that span the prefix e p and suffix e s , then we can generate diverse alternatives in precisely the right location in the target. Let G = (V, E) be the search lattice created by decoding, where V are nodes and E are the edges produced by rule applications. For any w \u2208", "cite_spans": [ { "start": 133, "end": 156, "text": "(Langlais et al., 2000)", "ref_id": "BIBREF19" }, { "start": 358, "end": 381, "text": "(Macherey et al., 2008;", "ref_id": "BIBREF22" }, { "start": 382, "end": 402, "text": "Gimpel et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "V , let parent(w) return v s.t. 
v, w \u2208 E, target(w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "return the target sequence e defined by following the next pointers from w, and length(w) be the length of the target sequence up to w. During decoding, we set parent pointers and also assign monotonically increasing integer ids to each w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "To extract a full sentence completion given an edge v, w \u2208 E that spans the prefix/suffix boundary, we must find the best path to a goal node efficiently. To do this, we sort V in reverse topological order and set forward pointers from each node v to the Algorithm 1 Diverse n-best list extraction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "Require: Lattice G = (V, E), prefix length P 1: M = [] Marked nodes 2: for w \u2208 V in reverse topological order do 3: v = parent(w) v, w \u2208 E 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "if length(v) \u2264 P and length(w) > P then 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "Add w to M Mark node 6: end if 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "v.child = v.child \u2295 w Child pointer update 8: end for 9: N = [] n-best target strings 10: for m \u2208 M do 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "Add target(m) to N 12: end for 13: return N child node on the best goal path. During this traversal, we also mark all child nodes of edges that span the prefix/suffix boundary. 
Finally, we use the parent and child pointers to extract an n-best list of translations. Algorithm 1 shows the full procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diverse n-best Extraction", "sec_num": "5" }, { "text": "Neural machine translation (NMT) models the conditional probability p(e|f) of translating a source sentence f to a target sentence e. In the encoder-decoder NMT framework (Sutskever et al., 2014; Cho et al., 2014) , an encoder computes a representation s for each source sentence. From that source representation, the decoder generates a translation one word at a time by maximizing:", "cite_spans": [ { "start": 171, "end": 195, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF33" }, { "start": 196, "end": 213, "text": "Cho et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Neural machine translation", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log p(e|f) = \u03a3_{i=1}^{|e|} log p(e_i | e_{<i}, s)", "eq_num": null } ], "section": "Neural machine translation", "sec_num": "6" }, "TABREF0": { "text": "", "html": null, "num": null, "type_str": "table", "content": "
tuning criterion                 autodesk                     newstest2014
                                 pxB   WPA   #prd  KSR        pxB   WPA   #prd  KSR
baseline             B           57.9  41.1  1.49  57.8       40.9  38.0  0.96  61.7
target beam search   B           61.0  47.2  1.74  50.3       44.1  49.4  1.35  51.1
+ prefix tuning      pxB         64.0  50.3  1.95  48.2       44.7  50.9  1.40  50.5
                     (pxB+WPA)/2 64.0  50.1  1.95  48.2       44.9  50.3  1.38  50.8
                     WPA         62.4  50.2  1.88  48.1       43.3  50.5  1.34  51.7
                     #prd        63.8  49.7  1.95  48.4       44.1  50.3  1.37  50.7
" }, "TABREF1": { "text": "Phrase-based results on the English-French task. We compare the baseline with the target beam search proposed in this work. Prefix tuning is evaluated with four different tuning criteria.", "html": null, "num": null, "type_str": "table", "content": "
                     autodesk                     newstest2015
                     pxB   WPA   #prd  KSR       pxB   WPA   #prd  KSR
baseline             58.5  37.8  1.54  64.7      32.1  28.5  0.61  72.7
target beam search   61.2  44.6  1.78  58.0      36.0  39.7  0.84  64.5
+ prefix tuning      62.2  46.0  1.85  57.2      36.0  41.2  0.88  63.7
" }, "TABREF3": { "text": "Translation examples from the English-German newstest2015 test set. We compare the prefix decoding output of the baseline against target beam search both with and without prefix tuning. The prefix is printed in italics.", "html": null, "num": null, "type_str": "table", "content": "
                                     English-French                       English-German
                                     autodesk        newstest2014         autodesk        newstest2015
                                     WPA   KSR       WPA   KSR            WPA   KSR       WPA   KSR
baseline             1-best          41.1  57.8      38.0  61.7           37.8  64.7      28.5  72.7
                     10-best         48.6  53.3      42.7  58.5           43.9  60.2      33.4  69.5
target beam search   1-best          50.3  48.2      50.9  50.5           46.0  57.2      41.2  63.7
                     10-best         56.8  43.7      54.9  47.3           51.1  53.2      46.6  60.3
                     10-best diverse 64.5  39.1      66.2  41.4           57.3  48.4      55.5  54.5
" }, "TABREF5": { "text": "He is due to appear in Karratha Magistrates Court on September 23. reference Er soll am 23. September vor dem Amtsgericht in Karratha erscheinen. phrase-based Er ist aufgrund der in Karratha Magistrates Court am 23. September. NMT Er wird am 23. September in Karratah Magistrates Court erscheinen. 2. source The research, funded by the [...], will be published today in the Medical Journal of Australia. reference Die von [...] finanzierte Studie wird heute im Medical Journal of Australia ver\u00f6ffentlicht. phrase-based Die von [...] finanzierte Studie wird heute im Medical Journal of Australia. NMT Die von [...] finanzierte Studie wird heute im Medical Journal of Australia ver\u00f6ffentlicht. 3. source But it is certainly not a radical initiative -at least by American standards. reference Aber es ist mit Sicherheit keine radikale Initiative -jedenfalls nicht nach amerikanischen Standards. phrase-based Aber es ist sicherlich kein radikale Initiative -zumindest von den amerikanischen Standards. NMT Aber es ist gewiss keine radikale Initiative -zumindest nicht nach amerikanischem Ma\u00dfstab. 4. source Now everyone knows that the labor movement did not diminish the strength of the nation but enlarged it. reference Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht einschr\u00e4nkte, sondern sie vergr\u00f6\u00dferte. phrase-based Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht schm\u00e4lern, aber vergr\u00f6\u00dfert . NMT Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht verringert, sondern erweitert hat. 5. source \"As go unions, so go middle-class jobs,\" says Ellison, the Minnesota Democrat who serves as a Congressional Progressive Caucus co-chair. reference \"So wie Gewerkschaften sterben, sterben auch die Mittelklassejobs,\" sagte Ellison, ein Demokrat aus Minnesota und stellvertretender Vorsitzender des Progressive Caucus im Kongress. 
phrase-based \"So wie Gewerkschaften sterben, so Mittelklasse-Jobs\", sagt Ellison, der Minnesota Demokrat, dient als Congressional Progressive Caucus Mitveranstalter. NMT \"So wie Gewerkschaften sterben, so gehen die gehen,\" sagt Ellison, der Liberalen, der als Kongresses des eine dient. 6. source The opposition politician, Imran Khan, accuses Prime Minister Sharif of rigging the parliamentary elections, which took place in May last year. reference Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. phrase-based Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. , die NMT Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben.", "html": null, "num": null, "type_str": "table", "content": "
1. source
" }, "TABREF6": { "text": "", "html": null, "num": null, "type_str": "table", "content": "" } } } }