{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:28.909240Z"
},
"title": "Capturing document context inside sentence-level neural machine translation models with self-training",
"authors": [
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "mansimov@cs.nyu.edu"
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "melisgl@google.com"
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "leiyu@google.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that doesn't require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT19 Chinese-English and Open-Subtitles English-Russian. We demonstrate that our approach has higher BLEU score and higher human preference than the baseline. Qualitative analysis of our approach shows that choices made by model are consistent across the document.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that doesn't require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT19 Chinese-English and Open-Subtitles English-Russian. We demonstrate that our approach has higher BLEU score and higher human preference than the baseline. Qualitative analysis of our approach shows that choices made by model are consistent across the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2014) has achieved great success, arguably reaching the levels of human parity (Hassan et al., 2018) on Chinese to English news translation that led to its popularity and adoption in academia and industry. These models are predominantly trained and evaluated on sentence-level parallel corpora. Document-level machine translation that requires capturing the context to accurately translate sentences has been recently gaining more popularity and was selected as one of the main tasks in the premier machine translation conference WMT19 (Barrault et al., 2019) and WMT20 (Barrault et al., 2020) .",
"cite_spans": [
{
"start": 33,
"end": 64,
"text": "Kalchbrenner and Blunsom, 2013;",
"ref_id": "BIBREF25"
},
{
"start": 65,
"end": 87,
"text": "Bahdanau et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 161,
"end": 182,
"text": "(Hassan et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 618,
"end": 641,
"text": "(Barrault et al., 2019)",
"ref_id": null
},
{
"start": 652,
"end": 675,
"text": "(Barrault et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A straightforward solution to translate documents by translating sentences in isolation leads to inconsistent but syntactically valid text. The inconsistency is the result of the model not being able to resolve ambiguity with consistent choices across the document. For example, the recent NMT system that achieved human parity (Hassan et al., 2018) inconsistently used three different names \"Twitter Move Car\", \"WeChat mobile\", \"WeChat move\" when referring to the same entity (Sennrich, 2018) .",
"cite_spans": [
{
"start": 328,
"end": 349,
"text": "(Hassan et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 477,
"end": 493,
"text": "(Sennrich, 2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle this issue, the majority of the previous approaches (Jean et al., 2017; Wang et al., 2017; Kuang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Agrawal et al., 2018; Xiong et al., 2018; Miculicich et al., 2018; Voita et al., 2019a,b; Jean et al., 2019; Junczys-Dowmunt, 2019) proposed contextconditional NMT models trained on documentlevel data. However, none of the previous approaches are able to exploit trained NMT models on sentence-level parallel corpora and require training specialized context-conditional NMT models for document-level machine translation.",
"cite_spans": [
{
"start": 62,
"end": 81,
"text": "(Jean et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 82,
"end": 100,
"text": "Wang et al., 2017;",
"ref_id": "BIBREF53"
},
{
"start": 101,
"end": 120,
"text": "Kuang et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 121,
"end": 150,
"text": "Tiedemann and Scherrer, 2017;",
"ref_id": "BIBREF47"
},
{
"start": 151,
"end": 175,
"text": "Maruf and Haffari, 2018;",
"ref_id": "BIBREF32"
},
{
"start": 176,
"end": 197,
"text": "Agrawal et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 198,
"end": 217,
"text": "Xiong et al., 2018;",
"ref_id": "BIBREF57"
},
{
"start": 218,
"end": 242,
"text": "Miculicich et al., 2018;",
"ref_id": "BIBREF34"
},
{
"start": 243,
"end": 265,
"text": "Voita et al., 2019a,b;",
"ref_id": null
},
{
"start": 266,
"end": 284,
"text": "Jean et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 285,
"end": 307,
"text": "Junczys-Dowmunt, 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a way of incorporating context into a trained sentence-level neural machine translation model at decoding time. We process each document monotonically from left to right one sentence at a time and self-train the sentence-level NMT model on its own generated translation. This procedure reinforces choices made by the model and hence increases the chance of making the same choices in the remaining sentences in the document. Our approach does not require training a separate context-conditional model on parallel document-Algorithm 1: Document-level NMT with self-training at decoding time Input: Document D = (X 1 , ..., X n ), pretrained sentence-level NMT model f (\u03b8), learning rate \u03b1, decay prior \u03bb and number of passes over document P Output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Translated sentences (Y 1 , ..., Y n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Backup original values of parameters\u03b8 \u2190 \u03b8 for p = 1 to P {multi-pass document} do for i = 1 to n do Translate sentence X i using sentence-level model f (\u03b8) into target sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Y i . Calculate cross-entropy loss L(X i , Y i ) using Y i as target. for j = 1 to m do \u03b8 \u2190 \u03b8 \u2212 \u03b1\u2207 \u03b8 L(X i , Y i ) + \u03bb(\u03b8 \u2212 \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "end for end for end for level data and allows us to capture context in documents using a trained sentence-level model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
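{
"text": "The following is a minimal Python sketch of Algorithm 1 built on PyTorch; it is an illustration only, not the authors' implementation. It assumes a hypothetical sentence-level model object exposing translate(src) for decoding a source sentence and loss(src, hyp) for the cross-entropy of a given target; the names translate, loss, num_passes, num_steps, lr and decay_prior are assumptions introduced for this example.\n\nimport torch\n\ndef self_train_decode(model, document, num_passes=2, num_steps=1, lr=1e-5, decay_prior=0.1):\n    # Keep a frozen copy of the pretrained parameters (the backed-up \u03b8\u0303 in Algorithm 1).\n    theta_tilde = [p.detach().clone() for p in model.parameters()]\n    optimizer = torch.optim.SGD(model.parameters(), lr=lr)\n    translations = []\n    for _ in range(num_passes):  # multiple left-to-right passes over the document\n        translations = []\n        for src in document:  # one source sentence at a time\n            with torch.no_grad():\n                hyp = model.translate(src)  # decode with the current parameters\n            translations.append(hyp)\n            for _ in range(num_steps):  # self-train on the model's own output\n                optimizer.zero_grad()\n                loss = model.loss(src, hyp)  # cross-entropy with hyp as the target\n                loss.backward()\n                optimizer.step()\n                with torch.no_grad():  # pull parameters back towards the pretrained values, \u03bb(\u03b8\u0303 \u2212 \u03b8)\n                    for p, p0 in zip(model.parameters(), theta_tilde):\n                        p.add_(decay_prior * (p0 - p))\n    return translations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},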
{
"text": "We make the key contribution in the paper by introducing the document-level neural machine translation approach that does not require training a context-conditional model on document data and does not require separate document-level language model to rank the outputs of the NMT model according to consistency of translated document. We show how to adapt a trained sentence-level neural machine translation model to capture context in the document during decoding. We evaluate and demonstrate improvements of our proposed approach measured by BLEU score and preferences of human annotators on several document-level machine translation tasks including NIST Chinese-English, WMT19 Chinese-English and OpenSubtitles English-Russian datasets. We qualitatively analyze the decoded sentences produced using our approach and show that they indeed capture the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We translate a document D consisting of n source sentences X 1 , X 2 , ..., X n into the target language, given a well-trained sentence-level neural machine translation model f \u03b8 . The sentencelevel model parametrizes a conditional distribution p(Y |X) = T i=1 p(y t |Y
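{
"text": "As a point of reference, the token-level factorization above can be written as a short PyTorch snippet. This is an illustrative sketch, not code from the paper: it assumes the decoder has already produced a matrix of per-position logits for a candidate translation, and the function name sequence_log_prob is introduced here only for the example.\n\nimport torch\nimport torch.nn.functional as F\n\ndef sequence_log_prob(logits, target_ids):\n    # logits: (T, V) unnormalized scores for each target position over a vocabulary of size V,\n    # already conditioned on the source X and on the preceding target tokens Y_<t.\n    # target_ids: (T,) integer indices of the target tokens y_1, ..., y_T.\n    log_probs = F.log_softmax(logits, dim=-1)  # log p(y_t | Y_<t, X) for every vocabulary entry\n    token_log_probs = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)\n    return token_log_probs.sum()  # log p(Y|X) = sum over t of log p(y_t | Y_<t, X)\n\nThe cross-entropy loss L(X_i, Y_i) used for self-training in Algorithm 1 is the negative of this quantity computed for the generated translation Y_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": null,
"sec_num": null
}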