|
{ |
|
"paper_id": "N19-1042", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:57:51.085812Z" |
|
}, |
|
"title": "Lost in Machine Translation: A Method to Reduce Meaning Loss", |
|
"authors": [ |
|
{ |
|
"first": "Reuben", |
|
"middle": [], |
|
"last": "Cohn-Gordon", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "reubencg@stanford.edu" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Goodman", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "ngoodman@stanford.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language. However, stateof-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the \"meaning space\" in different ways. For instance, \"I cut my finger.\" and \"I cut my finger off.\" describe different states of the world but are translated to French (by both Fairseq and Google Translate) as \"Je me suis coup\u00e9 le doigt.\", which is ambiguous as to whether the finger is detached. More generally, translation systems are typically manyto-one (non-injective) functions from source to target language, which in many cases results in important distinctions in meaning being lost in translation. Building on Bayesian models of informative utterance production, we present a method to define a less ambiguous translation system in terms of an underlying pretrained neural sequence-to-sequence model. This method increases injectivity, resulting in greater preservation of meaning as measured by improvement in cycle-consistency, without impeding translation quality (measured by BLEU score).", |
|
"pdf_parse": { |
|
"paper_id": "N19-1042", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language. However, stateof-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the \"meaning space\" in different ways. For instance, \"I cut my finger.\" and \"I cut my finger off.\" describe different states of the world but are translated to French (by both Fairseq and Google Translate) as \"Je me suis coup\u00e9 le doigt.\", which is ambiguous as to whether the finger is detached. More generally, translation systems are typically manyto-one (non-injective) functions from source to target language, which in many cases results in important distinctions in meaning being lost in translation. Building on Bayesian models of informative utterance production, we present a method to define a less ambiguous translation system in terms of an underlying pretrained neural sequence-to-sequence model. This method increases injectivity, resulting in greater preservation of meaning as measured by improvement in cycle-consistency, without impeding translation quality (measured by BLEU score).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Languages differ in what meaning distinctions they must mark explicitly. As such, translations risk mapping from a form in one language to a more ambiguous form in another. For example, the definite (1) and indefinite (2) both translate (under Fairseq and Google Translate) to (3) in French, which is ambiguous in definiteness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Many-to-One Translations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The animals run fast.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Many-to-One Translations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Animals run fast.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Many-to-One Translations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) Les animaux courent vite", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Many-to-One Translations", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Survey To evaluate the nature of this problem, we explored a corpus 1 of 500 pairs of distinct English sentences which map to a single German one (the evaluation language in section 2.3). We identify a number of common causes for the many-to-one maps. Two frequent types of verbal distinction lost when translating to German are tense (54 pairs, e.g. \"...others {were, have been} introduced.\") and modality (16 pairs, e.g. \"...prospects for this year {could, might} be better.\"), where German \"k\u00f6nnen\" can express both epistemic and ability modality, distinguished in English with \"might\" and \"could\" respectively. Owing to English's large vocabulary, lexical differences in verb (31 pairs, e.g. \"arise\" vs. \"emerge\"), noun (56 pairs, e.g. \"mystery\" vs. \"secret\"), adjective (47 pairs, e.g. \"unaffected\" vs. \"untouched\") or deictic/pronoun (32 pairs, usually \"this\" vs. \"that\") are also common. A large number of the pairs differ instead either orthographically, or in other ways that do not correspond to a clear semantic distinction (e.g. \"She had {taken, made} a decision.\").",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Many-to-One Translations",

"sec_num": "1"

},

{

"text": "Figure content recovered from the parse: A: He is wearing glasses. B: He wears glasses. S SNT 0 (A): Er tr\u00e4gt eine Brille. S SNT 0 (B): Er tr\u00e4gt eine Brille. S SNT-IP 1 (A): Er tr\u00e4gt jetzt eine Brille. S SNT-IP 1 (B): Er hat eine Brille.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Many-to-One Translations",

"sec_num": "1"

},
|
{ |
|
"text": "Our approach While languages differ in what distinctions they are required to express, all are usually capable of expressing any given distinction when desired. As such, meaning loss of the kind discussed above is, in theory, avoidable. To this end, we propose a method to reduce meaning loss by applying the Rational Speech Acts (RSA) model of an informative speaker to translation. RSA has been used to model natural language pragmatics (Goodman and Frank, 2016), and recent work has shown its applicability to image captioning (Andreas and Klein, 2016; Vedantam et al., 2017; Mao et al., 2016) , another sequencegeneration NLP task. Here we use RSA to define a translator which reduces many-to-one mappings and consequently meaning loss, in terms of a pretrained neural translation model. We introduce a strategy for performing inference efficiently with this model in the setting of translation, and show gains in cycle-consistency 2 as a result. Moreover, we obtain improvements in translation quality (BLEU score), demonstrating that the goal of meaning preservation directly yields improved translations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 530, |
|
"end": 555, |
|
"text": "(Andreas and Klein, 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 578, |
|
"text": "Vedantam et al., 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 596, |
|
"text": "Mao et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Many-to-One Translations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the RSA framework, speakers and listeners, modeled as Bayesian agents, reason about each other in a nested fashion. We refer to listeners and speakers which do not reason about another agent as L 0 and S 0 respectively, and an agent which reasons about another agent as L 1 or S 1 . For instance, an informative speaker model S 1 is given a state 2 Formally, say that a pair of functions f : A \u2192 B, g : B \u2192 A is cycle-consistent if g \u2022 f = id, the identity function. If f is not one-to-one, then (f, g) is not cycleconsistent. (Note however that when A and B are infinite, the converse does not hold: even if f and g are both one-toone, (f, g) need not be cycle-consistent, since many-to-one maps between infinite sets are not necessarily bijective.) w \u2208 W , and chooses an utterance u \u2208 U to convey w to S 1 's model of a listener. By contrast, S 0 chooses utterances without a listener model in mind -its behavior might be determined by a semantics, or in our case, by a pretrained neural model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meaning Preservation as Informativity", |
|
"sec_num": "2" |
|
}, |
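
{

"text": "As an illustrative sketch of this nested reasoning (a toy example with invented states, utterances and S 0 probabilities, not taken from the paper), L 1 inverts S 0 by Bayes' rule and S 1 rescores S 0 by the listener's probability of recovering the state:\n\n# Toy RSA sketch; all states, utterances and probabilities below are invented.\nW = ['finger_cut', 'finger_off']\nU = ['le doigt', 'le doigt (detache)']\nS0 = {  # S0[w][u] = P(u | w) under a hypothetical base translator\n    'finger_cut': {'le doigt': 0.9, 'le doigt (detache)': 0.1},\n    'finger_off': {'le doigt': 0.6, 'le doigt (detache)': 0.4},\n}\n\ndef L1(u):\n    # Listener: P(w | u) proportional to S0(u | w), with a uniform prior over W.\n    scores = {w: S0[w][u] for w in W}\n    z = sum(scores.values())\n    return {w: s / z for w, s in scores.items()}\n\ndef S1(w, alpha=1.0):\n    # Informative speaker: P(u | w) proportional to S0(u | w) * L1(w | u)**alpha.\n    scores = {u: S0[w][u] * (L1(u)[w] ** alpha) for u in U}\n    z = sum(scores.values())\n    return {u: s / z for u, s in scores.items()}\n\nprint(S1('finger_off'))  # puts more mass on the disambiguating utterance than S0 does",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Meaning Preservation as Informativity",

"sec_num": "2"

},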
|
{ |
|
"text": "For translation, the state space W is a set of source language sentences (sequences of words in the language), while U is a set of target language sentences. S 1 's goal is to choose a translation u which allows a listener to pick out the source sentence w from among the set of distractors. This informative behavior discourages many-to-one maps that a non-informative translation model S 0 might allow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meaning Preservation as Informativity", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S 0 Model BiLSTMs with attention (Bahdanau et al., 2014) , and more recently CNNs (Gehring et al., 2016) and entirely attention based models (Vaswani et al., 2017) constitute the state-of-theart architectures in neural machine translation . All of these systems, once trained end-to-end on aligned data, can be viewed as a conditional distribution 3 S WD 0 (wd |w, c), for a word wd in the target language, a source language sentence w, and a partial sentence c in the target language. S WD 0 yields a distribution S SNT 0 over full sentences 4 :", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 56, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 104, |
|
"text": "(Gehring et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 163, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meaning Preservation as Informativity", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S SNT 0 (u|w, c) = t S WD 0 (u[t]|w, c + u[: t]) (4) S SNT 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meaning Preservation as Informativity", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "returns a distribution over continuations of c into full target language sentences 5 . To obtain a sentence from S SNT 0 given a source language sentence s, one can greedily choose the highest probability word from S WD 0 at each timestep, or explore a beam of possible candidates. We implement S WD 0 (in terms of which all our other models are defined) using Fairseq's publicly available 6 pretrained Transformer models for English-German, and for German-English train a CNN using Fairseq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meaning Preservation as Informativity", |
|
"sec_num": "2" |
|
}, |
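
{

"text": "As a rough sketch of equation (4) and of greedy decoding, assuming a hypothetical function s0_word(w, c) that returns the next-word distribution of the pretrained model as a dictionary (this wrapper, and the end-of-sentence symbol below, are assumptions rather than part of the paper or of Fairseq's API):\n\nimport math\n\nEOS = '</s>'  # assumed end-of-sentence symbol\n\ndef s0_sentence_logprob(s0_word, u, w):\n    # Equation (4): log S0_SNT(u | w, c) is the sum of word-level log-probabilities.\n    c, total = [], 0.0\n    for wd in u:\n        total += math.log(s0_word(w, c)[wd])\n        c = c + [wd]\n    return total\n\ndef s0_greedy_decode(s0_word, w, max_len=50):\n    # Greedily choose the highest-probability word from S0_WD at each timestep.\n    c = []\n    for _ in range(max_len):\n        dist = s0_word(w, c)\n        wd = max(dist, key=dist.get)\n        if wd == EOS:\n            break\n        c.append(wd)\n    return c",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Meaning Preservation as Informativity",

"sec_num": "2"

},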
|
{ |
|
"text": "We first describe a sentence level, globally pragmatic model S SNT-GP 1 for the simple case where 3 We use S WD 0/1 and S SNT 0/1 respectively to distinguish word and sentence level speaker models 4 Python list indexing conventions are used, \"+\" means concatenation of list to element or list 5 In what follows, we omit c when it is empty, so that S SNT 0 (u|w) is the probability of sentence u given w 6 https://github.com/pytorch/fairseq a source language sentence needs to be distinguished from a presupplied distractor 7 (as in the pairs shown in figures (2) and (1)). We use this model as a stepping stone to one which requires an input sentence in the source language only, and no distractors. We begin by defining a listener L SNT 1 , which receives a target language sentence u and infers which sentence w \u2208 W (a presupplied set such as the pair (1) and (2)) would have resulted in the pretrained neural model S SNT 0 producing u:", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 99, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 294, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L SNT 1 (w|u) \u221d S SNT 0 (u|w) w \u2208W S SNT 0 (u|w )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "This allows S SNT-GP 1 to be defined in terms of L SNT 1 , where U is the set of all possible target language sentences 8 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "S SNT-GP 1 (u|w) = S SNT 0 (u|w)L SNT 1 (w|u) \u03b1 u \u2208U S SNT 0 (u |w)L SNT 1 (w|u ) \u03b1 (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The key property of this model is that, for W = {A, B}, when translating A, S SNT-GP 1 prefers translations of A that are unlikely to be good translations of B. So for pairs like (1) and(2), S SNT-GP 1 is compelled to produce a translation for the former that reflects its difference from the latter, and vice versa. as an approximation of S SNT-GP 1 on which inference can be tractably performed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
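
{

"text": "A sketch of the approximate computation corresponding to equations (5) and (6), where exact inference over all of U is replaced by rescoring a finite candidate set (e.g. a beam from S SNT 0 ); s0_logprob(u, w) is an assumed wrapper returning log S SNT 0 (u|w) as in equation (4), not code from the paper:\n\nimport math\n\ndef l1_listener(s0_logprob, u, sources):\n    # Equation (5): L1(w | u) proportional to S0(u | w), normalized over the explicit source set.\n    scores = {w: math.exp(s0_logprob(u, w)) for w in sources}\n    z = sum(scores.values())\n    return {w: s / z for w, s in scores.items()}\n\ndef s1_gp_rerank(s0_logprob, candidates, w, distractors, alpha=1.0):\n    # Equation (6), restricted to the candidate set: score each candidate u by\n    # log S0(u | w) + alpha * log L1(w | u) and return the best one.\n    sources = [w] + list(distractors)\n    def score(u):\n        return s0_logprob(u, w) + alpha * math.log(l1_listener(s0_logprob, u, sources)[w])\n    return max(candidates, key=score)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Explicit Distractors",

"sec_num": "2.1"

},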
|
{ |
|
"text": "S SNT-IP 1 considers informativity not over the whole set of utterances, but instead at each decision of the next word in the target language sentence. For this reason, the incremental method avoids the problem of lack of beam diversity encountered when subsampling from S SNT 0 , which 7 Implementations for all models are available to https://github.com/reubenharry/ pragmatic-translation 8 \u03b1 is a hyperparameter of S ; as it increases, the model cares more about being informative and less about producing a reasonable translation. becomes especially bad when producing long sequences, as is often the case in translation. S is defined as the product of informative decisions, specified by S WD 1 (itself defined in terms of L WD 1 ), which are defined analogously to (6) and (5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L WD 1 (w|wd, c) \u221d S WD 0 (wd|w, c)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "S WD 1 (wd |w, c) \u221d (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S WD 0 (wd|w, c) * L WD 1 (w|wd, c) \u03b1 S SNT-IP 1 (u|w, c) = t S WD 1 (u[t]|w, c + u[: t])", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
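
{

"text": "A sketch of a single incremental decision, following equations (7) and (8); s0_word(w, c) is the same assumed next-word wrapper as in the earlier sketch, and the full S SNT-IP 1 decoder of equation (9) simply chains these decisions greedily:\n\ndef s1_wd_step(s0_word, w, distractors, c, alpha=1.0):\n    # Equation (7): L1_WD(w | wd, c) proportional to S0_WD(wd | w, c), over the explicit source set.\n    # Equation (8): S1_WD(wd | w, c) proportional to S0_WD(wd | w, c) * L1_WD(w | wd, c)**alpha.\n    sources = [w] + list(distractors)\n    dists = {src: s0_word(src, c) for src in sources}\n    scores = {}\n    for wd, p in dists[w].items():\n        l1 = p / sum(dists[src].get(wd, 1e-12) for src in sources)\n        scores[wd] = p * (l1 ** alpha)\n    z = sum(scores.values())\n    return {wd: s / z for wd, s in scores.items()}\n\n# Greedy S SNT-IP 1 decoding (equation (9)) picks the argmax of s1_wd_step at each timestep.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Explicit Distractors",

"sec_num": "2.1"

},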
|
{ |
|
"text": "Examples S SNT-IP 1 is able to avoid many-to-one mappings by choosing more informative translations. For instance, its translation of (1) is \"Ces animaux courent vite\" (These animals run fast.). See figures (1) and (2) for other examples of manyto-one mappings under S SNT 0 avoided by S SNT-IP 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Distractors", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "While S can disambiguate between pairs of sentences, it has two shortcomings. First, it requires one (or more) distractors to be provided, so translation is no longer fully automatic. Second, because the distractor set W consists of only a pair (or finite set) of sentences, S SNT-IP 1 only cares about being informative up to the goal of distinguishing between these sentences. Intuitively, total meaning preservation is achieved by a translation which distinguishes the source sentence w from every other sentence in the source language which differs in meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Both of these problems can be addressed by introducing a new \"cyclic\" globally pragmatic model S SNT-CGP 1 which reasons not about L SNT 1 but about a pretrained translation model from target language to source language, which we term L SNT 0 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "S SNT-CGP 1 (u|w) \u221d S SNT 0 (u|w)L SNT 0 (w|u) \u03b1 (10) S SNT-CGP 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "is like S SNT-GP 1 , but its goal is to produce a translation which allows a listener model (now L SNT 0 ) to infer the original sentence, not among a small set of presupplied possibilities, but among all source language sentences. As such, an optimal translation u of w under S SNT-CGP 1 has high probability of being generated by S SNT 0 and high probability of being back-translated to w by L SNT 0 . S SNT-CGP 1 is very closely related to reconstruction methods, e.g. (Tu et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 489, |
|
"text": "(Tu et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Incremental Model Exact inference is again intractable, though as with S SNT-GP 1 , it is possible to approximate by subsampling from S SNT 0 . This is very close to the approach taken by (Li et al., 2016) , who find that reranking a set of outputs by probability of recovering input \"dramatically decreases the rate of dull and generic responses.\" in a question-answering task. However, because the subsample is small relative to U , they use this method in conjunction with a diversity increasing decoding algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 205, |
|
"text": "(Li et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
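
{

"text": "A sketch of this reranking approximation to equation (10): a subsample of candidate translations (e.g. a beam from S SNT 0 ) is rescored by adding the backward model's log-probability of the source; s0_logprob and l0_logprob are assumed wrappers around the forward and backward pretrained models, not code from the paper:\n\ndef s1_cgp_rerank(s0_logprob, l0_logprob, candidates, w, alpha=1.0):\n    # Equation (10) over a finite candidate set:\n    # score(u) = log S0(u | w) + alpha * log L0(w | u); return the best candidate.\n    return max(candidates, key=lambda u: s0_logprob(u, w) + alpha * l0_logprob(w, u))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Avoiding Explicit Distractors",

"sec_num": "2.2"

},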
|
{ |
|
"text": "As in the case with explicit distractors, we instead opt for an incremental model, now S SNT-CIP 1 which approximates S SNT-CGP", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Avoiding Explicit Distractors", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": ". The definition of S SNT-CIP 1 (12) is more complex than the incremental model with explicit distractors (S SNT-IP 1 ) since L WD 0 must receive complete sentences, rather than partial ones like L WD 1 . As such, we need to marginalize over continuations k of partial sentences in the target language:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S WD-C 1 (wd |w, c) \u221d S WD 0 (wd |w, c) * k (L SNT 0 (w|c + wd + k)S SNT 0 (k|w, c + wd ))", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S SNT-CIP 1 (u|w, c) = t S WD-C 1 (u[t]|w, c + u[: t])", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the sum over continuations of c in (11) is intractable to compute exactly, we approximate it by a single continuation, obtained by greedily unrolling S SNT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". The whole process of generating a new word wd of the translation from a sequence c and a source language sentence w is as follows: first use S WD 0 to generate a set of candidates for the next word (in practice, we only consider two, for efficiency). For each of these, use S SNT 0 to greedily unroll a full target language sentence from c + wd , namely c + wd + k, and rank each wd by the probability L SNT 0 (w|c + wd + k).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "0", |
|
"sec_num": null |
|
}, |
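
{

"text": "A sketch of the decoding step just described (the single-continuation approximation to equations (11) and (12)); s0_word, s0_greedy_continue and l0_logprob stand in for the forward next-word distribution, greedy unrolling of S SNT 0 from a prefix, and the backward model's log-probability of the source, respectively, and are assumptions rather than code from the paper:\n\nimport math\n\ndef s1_cip_step(s0_word, s0_greedy_continue, l0_logprob, w, c, k=2, alpha=1.0):\n    # Propose k candidate next words from S0, greedily unroll each into a full\n    # sentence, and score by the back-translation probability of the source.\n    dist = s0_word(w, c)\n    candidates = sorted(dist, key=dist.get, reverse=True)[:k]\n    def score(wd):\n        full = s0_greedy_continue(w, c + [wd])  # the paper's c + wd' + k\n        return math.log(dist[wd]) + alpha * l0_logprob(w, full)\n    return max(candidates, key=score)\n\ndef s1_cip_decode(s0_word, s0_greedy_continue, l0_logprob, w, k=2, alpha=1.0, eos='</s>', max_len=50):\n    # Equation (12): chain the incremental decisions into a full translation.\n    c = []\n    for _ in range(max_len):\n        wd = s1_cip_step(s0_word, s0_greedy_continue, l0_logprob, w, c, k, alpha)\n        if wd == eos:\n            break\n        c.append(wd)\n    return c",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Avoiding Explicit Distractors",

"sec_num": "2.2"

},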
|
{ |
|
"text": "Our objective is to improve meaning preservation without detracting from translation quality in other regards (e.g. grammaticality). We conduct our evaluations on English to German translation, making use of publicly available pre-trained English-German and German-English Fairseq models. The pragmatic model we evaluate is S SNT-CIP 1 since, unlike S SNT-IP 1 , it is not necessary to hand-supply a distractor set of source language sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "An example of the behavior of S SNT-CIP 1 and S SNT 0 on of our test sentences is shown below; S SNT 0 is able to preserve the phrase \"world's eyes\", which S SNT 0 translates merely as \"world\":", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 Source sentence: Isolation keeps the world's eyes off Papua.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 Reference translation: Isolation h\u00e4lt die Augen der Welt fern von Papua.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 S SNT 0 : Die Isolation h\u00e4lt die Welt von Papua fern.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 S : Die Isolation h\u00e4lt die Augen der Welt von Papua fern.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We use cycle-consistency as a measure of meaning preservation, since the ability to recover the original sentence requires meaning distinctions not to be collapsed. In evaluating cycleconsistency, it is important to use a separate targetsource translation mechanism than the one used to define S SNT-CIP", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Informative Translator", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": ". Otherwise, the system has access to the model which evaluates it and may improve cycle-consistency without producing meaningful target language sentences. For this reason, we translate German sentences (produced by S SNT 0 or S SNT-CIP 1 ) back to English with Google Translate. To measure cycle-consistency, we use the BLEU metric (implemented with sacreBLEU (Post, 2018) ), with the original sentence as the reference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 374, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
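
{

"text": "A minimal sketch of the cycle-consistency measurement with sacreBLEU, given a list of original English sentences and their round-trip back-translations (producing the back-translations, e.g. via Google Translate, is outside this snippet):\n\nimport sacrebleu\n\ndef cycle_consistency_bleu(originals, back_translations):\n    # BLEU of the round-tripped English against the original English sentences.\n    return sacrebleu.corpus_bleu(back_translations, [originals]).score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluating the Informative Translator",

"sec_num": "2.3"

},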
|
{ |
|
"text": "However, this improvement of cycle consistency, especially with a high value of \u03b1, may come at the cost of translation quality. Moreover, it is unclear whether BLEU serves as a good metric for evaluating sentences of a single language. To further ensure that translation quality is not compromised by S SNT-CIP 1 , we evaluate BLEU scores of the German sentences it produces. This requires evaluation on a corpus of aligned sentences, unlike the sentences collected from the Brown corpus in section 1 9 . 9 While we find that S SNT-CIP 1 improves cycle-consistency for the Brown corpus over S SNT 0 , we have no way to establish whether this comes at the cost of translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 505, |
|
"end": 506, |
|
"text": "9", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{

"text": "Table 1: BLEU score on cycle-consistency and translation for WMT, across baseline and informative models. Greedy unrolling and \u03b1 = 0.1. Columns: Model, Cycle, Translate. Rows: S SNT 0 , 43.35, 37.42; S SNT-CIP 1 , 47.34, 38.29.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Evaluating the Informative Translator",

"sec_num": "2.3"

},
|
{ |
|
"text": "We perform both evaluations (cycle-consistency and translation) on 750 sentences 10 of the 2018 English-German WMT News test-set. 11 We use greedy unrolling in all models (using beam search is a goal for future work). For \u03b1 (which represents the trade-off between informativity and translation quality) we use 0.1, obtained by tuning on validation data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 132, |
|
"text": "11", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Results As shown in table (1), S SNT-CIP 1 improves over S SNT 0 not only in cycle-consistency, but in translation quality as well. This suggests that the goal of preserving information, in the sense defined by S SNT-CGP 1 and approximated by S , is important for translation quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We identify a shortcoming of state-of-the-art translation systems and show that a version of the RSA framework's informative speaker S 1 , adapted to the domain of translation, alleviates this problem in a way which improves not only cycleconsistency but translation quality as well. The success of S SNT-CIP 1 on two fairly similar languages raises the question of whether improvements will increase for more distant language pairs, in which larger scale differences exist in what information is obligatorily represented -this is a direction for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Generated by selecting short sentences from the Brown corpus(Ku\u010dera and Francis, 1967), translating them to German, and taking the best two candidate translations back into English, if these two themselves translate to a single German sentence. Translation in both directions was done with Fairseq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our implementation of S SNT-CIP 1 was not efficient, and we could not evaluate on more sentences for reasons of time.11 http://www.statmt.org/wmt18/ translation-task.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to the reviewers for their substantive comments, and to Daniel Fried and Jacob Andreas for many helpful discussions during the development of this project.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Reasoning about pragmatics with neural listeners and speakers", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1173--1182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.0473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Pragmatically informative image captioning with character-level inference", |
|
"authors": [ |
|
{ |
|
"first": "Reuben", |
|
"middle": [], |
|
"last": "Cohn-Gordon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "439--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reuben Cohn-Gordon, Noah Goodman, and Christo- pher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 439-443. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A convolutional encoder model for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Gehring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann N", |
|
"middle": [], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.02344" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, and Yann N Dauphin. 2016. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Pragmatic language interpretation as probabilistic inference", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Noah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael C", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Trends in Cognitive Sciences", |
|
"volume": "20", |
|
"issue": "11", |
|
"pages": "818--829", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noah D Goodman and Michael C Frank. 2016. Prag- matic language interpretation as probabilistic infer- ence. Trends in Cognitive Sciences, 20(11):818- 829.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Computational analysis of present-day American English", |
|
"authors": [ |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Ku\u010dera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Winthrop", |
|
"middle": [ |
|
"Nelson" |
|
], |
|
"last": "Francis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henry Ku\u010dera and Winthrop Nelson Francis. 1967. Computational analysis of present-day American English. Dartmouth Publishing Group.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A simple, fast diverse decoding algorithm for neural generation", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.08562" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A sim- ple, fast diverse decoding algorithm for neural gen- eration. arXiv preprint arXiv:1611.08562.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Generation and comprehension of unambiguous object descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Junhua", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Toshev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Camburu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Yuille", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous ob- ject descriptions. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 11-20.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A call for clarity in reporting bleu scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.08771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Neural machine translation with reconstruction", |
|
"authors": [ |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaohua", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Thirty-First AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Context-aware captions from context-agnostic supervision", |
|
"authors": [ |
|
{ |
|
"first": "Ramakrishna", |
|
"middle": [], |
|
"last": "Vedantam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gal", |
|
"middle": [], |
|
"last": "Chechik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Computer Vision and Pattern Recognition (CVPR), volume 3.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "State-of-the-art neural image captioner S SNT 0 loses a meaning distinction which informative model S", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Similar to Figure 1, S SNT 0 collapses two English sentences into a single German one, whereas S SNT-IP 1 distinguishes the two in German.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}
|
} |
|
} |
|
} |