{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:28.504249Z" }, "title": "Log-Linear Reformulation of the Noisy Channel Model for Document-Level Neural Machine Translation", "authors": [ { "first": "S\u00e9bastien", "middle": [], "last": "Jean", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "kyunghyun.cho@nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We seek to maximally use various data sources, such as parallel and monolingual data, to build an effective and efficient documentlevel translation system. In particular, we start by considering a noisy channel approach (Yu et al., 2020) that combines a target-to-source translation model and a language model. By applying Bayes' rule strategically, we reformulate this approach as a log-linear combination of translation, sentence-level and documentlevel language model probabilities. In addition to using static coefficients for each term, this formulation alternatively allows for the learning of dynamic per-token weights to more finely control the impact of the language models. Using both static or dynamic coefficients leads to improvements over a context-agnostic baseline and a context-aware concatenation model.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We seek to maximally use various data sources, such as parallel and monolingual data, to build an effective and efficient documentlevel translation system. In particular, we start by considering a noisy channel approach (Yu et al., 2020) that combines a target-to-source translation model and a language model. By applying Bayes' rule strategically, we reformulate this approach as a log-linear combination of translation, sentence-level and documentlevel language model probabilities. In addition to using static coefficients for each term, this formulation alternatively allows for the learning of dynamic per-token weights to more finely control the impact of the language models. Using both static or dynamic coefficients leads to improvements over a context-agnostic baseline and a context-aware concatenation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural machine translation (NMT) Bahdanau et al., 2015) has been reported to reach near human-level performance on sentence-by-sentence translation (L\u00e4ubli et al., 2018) . Going beyond sentence-level, documentlevel NMT aims to translate sentences by taking into account neighboring source or target sentences in order to produce a more cohesive output (Jean et al., 2017; Wang et al., 2017; Maruf et al., 2019) . 
These approaches often train new models from scratch using parallel data.", "cite_spans": [ { "start": 33, "end": 55, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 148, "end": 169, "text": "(L\u00e4ubli et al., 2018)", "ref_id": "BIBREF6" }, { "start": 352, "end": 371, "text": "(Jean et al., 2017;", "ref_id": "BIBREF4" }, { "start": 372, "end": 390, "text": "Wang et al., 2017;", "ref_id": "BIBREF25" }, { "start": 391, "end": 410, "text": "Maruf et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, in a similar spirit to Voita et al. (2019a), we seek a document-level approach that maximally uses various available corpora, such as parallel and monolingual data, leveraging models trained at the sentence and document levels, while also striving for computational efficiency. We start from the noisy channel model, which combines a target-to-source translation model and a document-level language model. By applying Bayes' rule, we reformulate this approach into a log-linear model. It consists of a translation model, as well as sentence-level and document-level language models. This reformulation admits an auto-regressive expression of token-by-token target document probabilities, facilitating the use of existing inference algorithms such as beam search. In this log-linear model, coefficients modulate the impact of the language models. We first consider static coefficients and, for more fine-grained control, we also train a merging module that dynamically adjusts the LM weights.", "cite_spans": [ { "start": 38, "end": 58, "text": "Voita et al. (2019a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With either static or dynamic coefficients, we observe improvements over a context-agnostic baseline, as well as over a context-aware concatenation model (Tiedemann and Scherrer, 2017). As with the noisy channel model, our approach reuses off-the-shelf models and benefits from future translation or language modelling improvements.", "cite_spans": [ { "start": 149, "end": 179, "text": "(Tiedemann and Scherrer, 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the availability of various heterogeneous data sources that could be used for document-level translation, we seek a strategy to maximally use them. These sources include parallel data, at either the sentence or document level, as well as more broadly available monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }, { "text": "As the starting point, we consider the noisy channel approach proposed by Yu et al. (2020). Given a source document $(X^{(1)}, \\ldots, X^{(N)})$ and its translation $(Y^{(1)}, \\ldots, Y^{(N)})$, they assume a generation process in which target sentences are produced from left to right, and in which each source sentence is translated only from the corresponding target sentence. Under these assumptions, the probability of a source-target document pair is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }, { "text": "$$P(X^{(1)}, \\ldots, X^{(N)}, Y^{(1)}, \\ldots, Y^{(N)}) = \\prod_{n=1}^{N} P(X^{(n)} \\mid Y^{(n)})\\, P(Y^{(n)} \\mid Y^{(<n)})$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }
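, { "text": "To see the reformulation concretely, apply Bayes' rule to the reverse translation term, $P(X^{(n)} \\mid Y^{(n)}) = P(Y^{(n)} \\mid X^{(n)})\\, P(X^{(n)}) / P(Y^{(n)})$. Since $P(X^{(n)})$ is constant with respect to the target document, the per-sentence posterior satisfies $$P(Y^{(n)} \\mid X^{(n)}, Y^{(<n)}) \\propto \\frac{P(Y^{(n)} \\mid X^{(n)})\\, P(Y^{(n)} \\mid Y^{(<n)})}{P(Y^{(n)})},$$ that is, in log space, a translation model term plus a document-level language model term minus a sentence-level language model term; the coefficients discussed above scale the two language model terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }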
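, { "text": "As a minimal sketch of how the log-linear combination and the dynamic merging module described above could be realized, the following code combines per-token next-token log-probabilities from the three models; all names (combine_log_probs, DynamicMerger, lambda_doc, lambda_sent) are hypothetical illustrations, not the authors' released code, and the exact parameterization of the dynamic weights is an assumption:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }, { "text": "import torch\nimport torch.nn as nn\n\ndef combine_log_probs(tm_log_probs, doc_lm_log_probs, sent_lm_log_probs, lambda_doc=0.5, lambda_sent=0.5):\n    # Log-linear score over the vocabulary for the next target token,\n    # following the Bayes-rule rearrangement above: translation model term,\n    # plus a weighted document-level LM term, minus a weighted\n    # sentence-level LM term. Usable directly inside beam search.\n    return tm_log_probs + lambda_doc * doc_lm_log_probs - lambda_sent * sent_lm_log_probs\n\nclass DynamicMerger(nn.Module):\n    # Instead of static coefficients, predict per-token weights for the two\n    # LM terms from the decoder hidden state (a hypothetical parameterization).\n    def __init__(self, d_model):\n        super().__init__()\n        self.proj = nn.Linear(d_model, 2)\n\n    def forward(self, decoder_state):\n        lambdas = torch.sigmoid(self.proj(decoder_state))  # weights in (0, 1)\n        return lambdas[..., 0], lambdas[..., 1]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear reformulation of the noisy channel model", "sec_num": "2" }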