{ "paper_id": "P16-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:56:55.743240Z" }, "title": "Modeling Coverage for Neural Machine Translation", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab, Huawei Technologies", "institution": "", "location": { "settlement": "Hong Kong" } }, "email": "tu.zhaopeng@huawei.com" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab, Huawei Technologies", "institution": "", "location": { "settlement": "Hong Kong" } }, "email": "lu.zhengdong@huawei.com" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing" } }, "email": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab, Huawei Technologies", "institution": "", "location": { "settlement": "Hong Kong" } }, "email": "liuxiaohua3@huawei.com" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab, Huawei Technologies", "institution": "", "location": { "settlement": "Hong Kong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Attention mechanism has enhanced stateof-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT. 1", "pdf_parse": { "paper_id": "P16-1008", "_pdf_hash": "", "abstract": [ { "text": "Attention mechanism has enhanced stateof-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The past several years have witnessed the rapid progress of end-to-end Neural Machine Translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2015) . Unlike conventional Statistical Machine Translation (SMT) (Koehn et al., 2003; Chiang, 2007) , NMT uses a single and large neural network to model the entire translation process. It enjoys the following advantages. First, the use of distributed representations of words can alleviate the curse of dimensionality (Bengio et al., 2003) . 
Second, there is no need to explicitly design features to capture translation regularities, which is quite difficult in SMT. Instead, NMT is capable of learning representations directly from the training data. Third, Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) enables NMT to cap-ture long-distance reordering, which is a significant challenge in SMT.", "cite_spans": [ { "start": 104, "end": 128, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF22" }, { "start": 129, "end": 151, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 212, "end": 232, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF12" }, { "start": 233, "end": 246, "text": "Chiang, 2007)", "ref_id": "BIBREF4" }, { "start": 466, "end": 487, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NMT has a serious problem, however, namely lack of coverage. In phrase-based SMT (Koehn et al., 2003) , a decoder maintains a coverage vector to indicate whether a source word is translated or not. This is important for ensuring that each source word is translated in decoding. The decoding process is completed when all source words are \"covered\" or translated. In NMT, there is no such coverage vector and the decoding process ends only when the end-of-sentence mark is produced. We believe that lacking coverage might result in the following problems in conventional NMT:", "cite_spans": [ { "start": 81, "end": 101, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Over-translation: some words are unnecessarily translated for multiple times;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Under-translation: some words are mistakenly untranslated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, in the state-of-the-art attention-based NMT model (Bahdanau et al., 2015) , generating a target word heavily depends on the relevant parts of the source sentence, and a source word is involved in generation of all target words. As a result, over-translation and under-translation inevitably happen because of ignoring the \"coverage\" of source words (i.e., number of times a source word is translated to a target word). Figure 1(a) shows an example: the Chinese word \"gu\u0101nb\u00ec\" is over translated to \"close(d)\" twice, while \"b\u00e8ip\u00f2\" (means \"be forced to\") is mistakenly untranslated.", "cite_spans": [ { "start": 64, "end": 87, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 433, "end": 444, "text": "Figure 1(a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a coverage mechanism to NMT (NMT-COVERAGE) to alleviate the overtranslation and under-translation problems. Basically, we append a coverage vector to the intermediate representations of an NMT model, which are sequentially updated after each attentive read (a) Over-translation and under-translation generated by NMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(b) Coverage model alleviates the problems of over-translation and under-translation. Figure 1 : Example translations of (a) NMT without coverage, and (b) NMT with coverage. 
In conventional NMT without coverage, the Chinese word \"gu\u0101nb\u00ec\" is over translated to \"close(d)\" twice, while \"b\u00e8ip\u00f2\" (means \"be forced to\") is mistakenly untranslated. Coverage model alleviates these problems by tracking the \"coverage\" of source words.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "during the decoding process, to keep track of the attention history. The coverage vector, when entering into attention model, can help adjust the future attention and significantly improve the overall alignment between the source and target sentences. This design contains many particular cases for coverage modeling with contrasting characteristics, which all share a clear linguistic intuition and yet can be trained in a data driven fashion. Notably, we achieve significant improvement even by simply using the sum of previous alignment probabilities as coverage for each word, as a successful example of incorporating linguistic knowledge into neural network based NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments show that NMT-COVERAGE significantly outperforms conventional attentionbased NMT on both translation and alignment tasks. Figure 1(b) shows an example, in which NMT-COVERAGE alleviates the over-translation and under-translation problems that NMT without coverage suffers from.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 145, "text": "Figure 1(b)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is built on attention-based NMT (Bahdanau et al., 2015) , which simultaneously conducts dynamic alignment and generation of the target sentence, as illustrated in Figure 2 . It Figure 2 : Architecture of attention-based NMT. Whenever possible, we omit the source index j to make the illustration less cluttered. produces the translation by generating one target word y i at each time step. Given an input sentence x = {x 1 , . . . , x J } and previously generated words {y 1 , . . . , y i\u22121 }, the probability of generating next word y i is", "cite_spans": [ { "start": 41, "end": 64, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 172, "end": 180, "text": "Figure 2", "ref_id": null }, { "start": 186, "end": 194, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y i |y 1) and g update (\u2022) is a neural network, we actually have an RNN model for coverage, as illustrated in Figure 4 . In this work, we take the following form:", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 198, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural Network Based Coverage Model", "sec_num": "3.1.2" }, { "text": "C i,j = f (C i\u22121,j , \u03b1 i,j , h j , t i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network Based Coverage Model", "sec_num": "3.1.2" }, { "text": "where f (\u2022) is a nonlinear activation function and t i\u22121 is the auxiliary input that encodes past translation information. 
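To make this update concrete, the following is a minimal NumPy sketch of a GRU-style coverage update of the form C_{i,j} = f(C_{i-1,j}, alpha_{i,j}, h_j, t_{i-1}). The weight names, shapes, and the exact gating layout are illustrative assumptions for exposition; they are not taken from the released implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_coverage_update(C_prev, alpha, H, t_prev, p):
    # C_prev: (J, d) coverage vectors C_{i-1,j} for all source positions j
    # alpha:  (J,)   attention weights alpha_{i,j} at the current decoding step
    # H:      (J, n) source annotations h_j
    # t_prev: (m,)   previous decoder state, encoding past translation information
    # p:      dict of weight matrices with illustrative names and shapes
    J = C_prev.shape[0]
    # Per-position input: attention weight, annotation, and the (broadcast) decoder state.
    x = np.concatenate([alpha[:, None], H, np.tile(t_prev, (J, 1))], axis=1)
    z = sigmoid(x @ p['W_z'] + C_prev @ p['U_z'])               # update gate
    r = sigmoid(x @ p['W_r'] + C_prev @ p['U_r'])               # reset gate
    c_tilde = np.tanh(x @ p['W_c'] + (r * C_prev) @ p['U_c'])   # candidate coverage
    return (1.0 - z) * C_prev + z * c_tilde                     # new coverage C_{i,j}

For comparison, the linguistic coverage of Section 3.1.1 would instead accumulate alpha directly (optionally normalized by a predicted fertility) rather than apply a gated update.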
Note that we leave out the word-specific feature function \u03a6(\u2022) and only take the input annotation h j as the input to the coverage RNN. It is important to emphasize that the NN-based coverage model is able to be fed with arbitrary inputs, such as the previous attentional context s i\u22121 . Here we only employ C i\u22121,j for past alignment information, t i\u22121 for past translation information, and h j for word-specific bias. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network Based Coverage Model", "sec_num": "3.1.2" }, { "text": "Gating The neural function f (\u2022) can be either a simple activation function tanh or a gating function that proves useful to capture long-distance dependencies. In this work, we adopt GRU for the gating activation since it is simple yet powerful (Chung et al., 2014) . Please refer to (Cho et al., 2014b) for more details about GRU.", "cite_spans": [ { "start": 245, "end": 265, "text": "(Chung et al., 2014)", "ref_id": "BIBREF7" }, { "start": 284, "end": 303, "text": "(Cho et al., 2014b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Network Based Coverage Model", "sec_num": "3.1.2" }, { "text": "Discussion Intuitively, the two types of models summarize coverage information in \"different languages\". Linguistic models summarize coverage information in human language, which has a clear interpretation to humans. Neural models encode coverage information in \"neural language\", which can be \"understood\" by neural networks and let them to decide how to make use of the encoded coverage information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network Based Coverage Model", "sec_num": "3.1.2" }, { "text": "Although attention based model has the capability of jointly making alignment and translation, it does not take into consideration translation history. Specifically, a source word that has significantly contributed to the generation of target words in the past, should be assigned lower alignment probabilities, which may not be the case in attention based NMT. To address this problem, we propose to calculate the alignment probabilities by incorporating past alignment information embedded in the coverage model. Intuitively, at each time step i in the decoding phase, coverage from time step (i \u2212 1) serves as an additional input to the attention model, which provides complementary information of that how likely the source words are translated in the past. We expect the coverage information would guide the attention model to focus more on untranslated source words (i.e., assign higher alignment probabilities). In practice, we find that the coverage model does fulfill the expectation (see Section 5). The translated ratios of source words from linguistic coverages negatively correlate to the corresponding alignment probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Coverage into NMT", "sec_num": "3.2" }, { "text": "More formally, we rewrite the attention model in Equation 5 as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Coverage into NMT", "sec_num": "3.2" }, { "text": "e i,j = a(t i\u22121 , h j , C i\u22121,j ) = v a tanh(W a t i\u22121 + U a h j + V a C i\u22121,j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Coverage into NMT", "sec_num": "3.2" }, { "text": "where C i\u22121,j is the coverage of source word x j before time i. 
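As a companion sketch (again with illustrative parameter names and shapes), the coverage-aware attention score defined above can be computed as follows; the only change relative to the standard attention model is the additional V_a C_{i-1,j} term inside the tanh.

import numpy as np

def coverage_attention(t_prev, H, C_prev, W_a, U_a, V_a, v_a):
    # t_prev: (m,)   previous decoder state t_{i-1}
    # H:      (J, n) source annotations h_j
    # C_prev: (J, d) coverage of each source word before time i
    # W_a: (m, k), U_a: (n, k), V_a: (d, k), v_a: (k,)  -- illustrative shapes
    e = np.tanh(t_prev @ W_a + H @ U_a + C_prev @ V_a) @ v_a   # scores e_{i,j}, shape (J,)
    e = e - e.max()                                            # stable softmax over source positions
    alpha = np.exp(e)
    return alpha / alpha.sum()                                 # alignment probabilities alpha_{i,j}

Because this sketch works with row vectors, V_a appears here with shape (d, k), i.e., the transpose of the paper's n x d convention discussed next.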
V a \u2208 R n\u00d7d is the weight matrix for coverage with n and d being the numbers of hidden units and coverage units, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Coverage into NMT", "sec_num": "3.2" }, { "text": "We take end-to-end learning for the NMT-COVERAGE model, which learns not only the parameters for the \"original\" NMT (i.e., \u03b8 for encoding RNN, decoding RNN, and attention model) but also the parameters for coverage modeling (i.e., \u03b7 for annotation and guidance of attention) . More specifically, we choose to maximize the likelihood of reference sentences as most other NMT models (see, however ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "(\u03b8 * , \u03b7 * ) = arg max \u03b8,\u03b7 N n=1 log P (y n |x n ; \u03b8, \u03b7) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "No auxiliary objective For the coverage model with a clearer linguistic interpretation (Section 3.1.1), it is possible to inject an auxiliary objective function on some intermediate representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "More specifically, we may have the following objective:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "(\u03b8 * , \u03b7 * ) = arg max \u03b8,\u03b7 N n=1 log P (y n |x n ; \u03b8, \u03b7) \u2212 \u03bb J j=1 (\u03a6 j \u2212 I i=1 \u03b1 i,j ) 2 ; \u03b7 where the term J j=1 (\u03a6 j \u2212 I i=1 \u03b1 i,j ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "; \u03b7 penalizes the discrepancy between the sum of alignment probabilities and the expected fertility for linguistic coverage. This is similar to the more explicit training for fertility as in Xu et al. (2015) , which encourages the model to pay equal attention to every part of the image (i.e., \u03a6 j = 1). However, our empirical study shows that the combined objective consistently worsens the translation quality while slightly improves the alignment quality.", "cite_spans": [ { "start": 191, "end": 207, "text": "Xu et al. (2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "Our training strategy poses less constraints on the dependency between \u03a6 j and the attention than a more explicit strategy taken in (Xu et al., 2015) . We let the objective associated with the translation quality (i.e., the likelihood) to drive the training, as in Equation 9. This strategy is arguably advantageous, since the attention weight on a hidden state h j cannot be interpreted as the proportion of the corresponding word being translated in the target sentence. For one thing, the hidden state h j , after the transformation from encoding RNN, bears the contextual information from other parts of the source sentence, and thus loses the rigid correspondence with the corresponding word. Therefore, penalizing the discrepancy between the sum of alignment probabilities and the expected fertility does not hold in this scenario.", "cite_spans": [ { "start": 132, "end": 149, "text": "(Xu et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "We carry out experiments on a Chinese-English translation task. 
Our training data for the translation task consists of 1.25M sentence pairs extracted from LDC corpora 4 , with 27.9M Chinese words and 34.5M English words respectively. We choose NIST 2002 dataset as our development set, and the NIST 2005 NIST , 2006 NIST and 2008 datasets as our test sets. We carry out experiments of the alignment task on the evaluation dataset from (Liu and Sun, 2015), which contains 900 manually aligned Chinese-English sentence pairs. We use the caseinsensitive 4-gram NIST BLEU score (Papineni et al., 2002) for the translation task, and the alignment error rate (AER) (Och and Ney, 2003) for the alignment task. To better estimate the quality of the soft alignment probabilities generated by NMT, we propose a variant of AER, naming SAER:", "cite_spans": [ { "start": 294, "end": 303, "text": "NIST 2005", "ref_id": null }, { "start": 304, "end": 315, "text": "NIST , 2006", "ref_id": null }, { "start": 316, "end": 329, "text": "NIST and 2008", "ref_id": null }, { "start": 574, "end": 597, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 659, "end": 678, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "SAER = 1 \u2212 |M A \u00d7 M S | + |M A \u00d7 M P | |M A | + |M S |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "where A is a candidate alignment, and S and P are the sets of sure and possible links in the reference alignment respectively (S \u2286 P ). M denotes alignment matrix, and for both M S and M P we assign the elements that correspond to the existing links in S and P with probabilities 1 while assign the other elements with probabilities 0. In this way, we are able to better evaluate the quality of the soft alignments produced by attention-based NMT. We use sign-test (Collins et al., 2005) for statistical significance test. For efficient training of the neural networks, we limit the source and target vocabularies to the most frequent 30K words in Chinese and English, covering approximately 97.7% and 99.3% of the two corpora respectively. All the out-of-vocabulary words are mapped to a special token UNK. We set N = 2 for the fertility model in the linguistic coverages. We train each model with the sentences of length up to 80 words in the training data. The word embedding dimension is 620 and the size of a hidden layer is 1000. All the other settings are the same as in (Bahdanau et al., 2015 Table 1 : Evaluation of translation quality. d denotes the dimension of NN-based coverages, and \u2020 and \u2021 indicate statistically significant difference (p < 0.01) from GroundHog and Moses, respectively. 
\"+\" is on top of the baseline system GroundHog.", "cite_spans": [ { "start": 465, "end": 487, "text": "(Collins et al., 2005)", "ref_id": "BIBREF9" }, { "start": 1078, "end": 1100, "text": "(Bahdanau et al., 2015", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 1101, "end": 1108, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "We compare our method with two state-of-theart models of SMT and NMT 5 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "\u2022 Moses (Koehn et al., 2007) : an open source phrase-based translation system with default configuration and a 4-gram language model trained on the target portion of training data.", "cite_spans": [ { "start": 8, "end": 28, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "\u2022 GroundHog (Bahdanau et al., 2015): an attention-based NMT system. Table 1 shows the translation performances measured in BLEU score. Clearly the proposed NMT-COVERAGE significantly improves the translation quality in all cases, although there are still considerable differences among different variants.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "Parameters Coverage model introduces few parameters. The baseline model (i.e., GroundHog) has 84.3M parameters. The linguistic coverage using fertility introduces 3K parameters (2K for fertility model), and the NN-based coverage with gating introduces 10K\u00d7d parameters (6K\u00d7d for gating), where d is the dimension of the coverage vector. In this work, the most complex coverage model only introduces 0.1M additional parameters, which is quite small compared to the number of parameters in the existing model (i.e., 84.3M).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "Speed Introducing the coverage model slows down the training speed, but not significantly. When running on a single GPU device Tesla K80, the speed of the baseline model is 960 target words per second. System 4 (\"+Linguistic coverage with fertility\") has a speed of 870 words per second, while System 7 (\"+NN-based coverage (d=10)\") achieves a speed of 800 words per second.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "Linguistic Coverages (Rows 3 and 4): Two observations can be made. First, the simplest linguistic coverage (Row 3) already significantly improves translation performance by 1.1 BLEU points, indicating that coverage information is very important to the attention model. Second, incorporating fertility model boosts the performance by better estimating the covered ratios of source words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "NN-based Coverages (Rows 5-7): (1) Gating (Rows 5 and 6): Both variants of NN-based coverages outperform GroundHog with averaged gains of 0.8 and 1.3 BLEU points, respectively. Introducing gating activation function improves the performance of coverage models, which is consistent with the results in other tasks (Chung et al., 2014) . 
(2) Coverage dimensions (Rows 6 and 7): Increasing the dimension of coverage models further improves the translation performance by 0.6 point in BLEU score, at the cost of introducing more parameters (e.g., from 10K to 100K). 6 Table 2 lists the alignment performances. We find that coverage information improves attention model as expected by maintaining an annotation summarizing attention history on each source word. More specifically, linguistic coverage with fertility significantly reduces alignment errors under both metrics, in which fertility plays an important role. NN-based coverages, however, does not significantly reduce alignment errors until increasing the coverage dimension from 1 to 10. It indicates that NN-based models need slightly more Table 2 : Evaluation of alignment quality. The lower the score, the better the alignment quality.", "cite_spans": [ { "start": 313, "end": 333, "text": "(Chung et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 564, "end": 571, "text": "Table 2", "ref_id": null }, { "start": 1097, "end": 1104, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "dimensions to encode the coverage information. Figure 5 shows an example. The coverage mechanism does meet the expectation: the alignments are more concentrated and most importantly, translated source words are less likely to get involved in generation of the target words next. For example, the first four Chinese words are assigned lower alignment probabilities (i.e., darker color) after the corresponding translation \"romania reinforces old buildings\" is produced.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Alignment Quality", "sec_num": "5.3" }, { "text": "Following Bahdanau et al. (2015), we group sentences of similar lengths together and compute BLEU score and averaged length of translation for each group, as shown in Figure 6 . Cho et al. (2014a) show that the performance of Groundhog drops rapidly when the length of input sentence increases. Our results confirm these findings. One main reason is that Groundhog produces much shorter translations on longer sentences (e.g., > 40, see right panel in Figure 6 ), and thus faces a serious under-translation problem. NMT-COVERAGE alleviates this problem by incorporating coverage information into the attention model, which in general pushes the attention to untranslated parts of the source sentence and implicitly discourages early stop of decoding. It is worthy to emphasize that both NN-based coverages (with gating, d = 10) and linguistic coverages (with fertility) achieve similar performances on long sentences, reconfirming our claim that the two variants improve the attention model in their own ways.", "cite_spans": [ { "start": 178, "end": 196, "text": "Cho et al. 
(2014a)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 167, "end": 175, "text": "Figure 6", "ref_id": null }, { "start": 452, "end": 460, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Effects on Long Sentences", "sec_num": "5.4" }, { "text": "As an example, consider this source sentence in the test set: qi\u00e1od\u0101n b\u011bn s\u00e0ij\u00ec p\u00edngj\u016bn d\u00e9f\u0113n 24.3f\u0113n , t\u0101 z\u00e0i s\u0101n zh\u014du qi\u00e1n ji\u0113sh\u00f2u sh\u01d2ush\u00f9 , qi\u00fadu\u00ec z\u00e0i c\u01d0 q\u012bji\u0101n 4 sh\u00e8ng 8 f\u00f9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects on Long Sentences", "sec_num": "5.4" }, { "text": "jordan achieved an average score of eight weeks ahead with a surgical operation three weeks ago .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Groundhog translates this sentence into:", "sec_num": null }, { "text": "in which the sub-sentence \", qi\u00fadu\u00ec z\u00e0i c\u01d0 q\u012bji\u0101n 4 sh\u00e8ng 8 f\u00f9\" is under-translated. With the (NNbased) coverage mechanism, NMT-COVERAGE translates it into: jordan 's average score points to UNK this year . he received surgery before three weeks , with a team in the period of 4 to 8 . Figure 6 : Performance of the generated translations with respect to the lengths of the input sentences. Coverage models alleviate under-translation by producing longer translations on long sentences. in which the under-translation is rectified.", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Groundhog translates this sentence into:", "sec_num": null }, { "text": "The quantitative and qualitative results show that the coverage models indeed help to alleviate under-translation, especially for long sentences consisting of several sub-sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Groundhog translates this sentence into:", "sec_num": null }, { "text": "Our work is inspired by recent works on improving attention-based NMT with techniques that have been successfully applied to SMT. Following the success of Minimum Risk Training (MRT) in SMT , proposed MRT for end-to-end NMT to optimize model parameters directly with respect to evaluation metrics. Based on the observation that attentionbased NMT only captures partial aspects of attentional regularities, proposed agreement-based learning (Liang et al., 2006) to encourage bidirectional attention models to agree on parameterized alignment matrices. Along the same direction, inspired by the coverage mechanism in SMT, we propose a coverage-based approach to NMT to alleviate the over-translation and under-translation problems.", "cite_spans": [ { "start": 440, "end": 460, "text": "(Liang et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Independent from our work, Cohn et al. (2016) and Feng et al. (2016) made use of the concept of \"fertility\" for the attention model, which is similar in spirit to our method for building the linguistically inspired coverage with fertility. Cohn et al. (2016) introduced a feature-based fertility that includes the total alignment scores for the sur-rounding source words. In contrast, we make prediction of fertility before decoding, which works as a normalizer to better estimate the coverage ratio of each source word. Feng et al. 
(2016) used the previous attentional context to represent implicit fertility and passed it to the attention model, which is in essence similar to the input-feed method proposed in (Luong et al., 2015) . Comparatively, we predict explicit fertility for each source word based on its encoding annotation, and incorporate it into the linguistic-inspired coverage for attention model.", "cite_spans": [ { "start": 27, "end": 45, "text": "Cohn et al. (2016)", "ref_id": "BIBREF8" }, { "start": 50, "end": 68, "text": "Feng et al. (2016)", "ref_id": "BIBREF10" }, { "start": 240, "end": 258, "text": "Cohn et al. (2016)", "ref_id": "BIBREF8" }, { "start": 521, "end": 539, "text": "Feng et al. (2016)", "ref_id": "BIBREF10" }, { "start": 713, "end": 733, "text": "(Luong et al., 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have presented an approach for enhancing NMT, which maintains and utilizes a coverage vector to indicate whether each source word is translated or not. By encouraging NMT to pay less attention to translated words and more attention to untranslated words, our approach alleviates the serious over-translation and under-translation problems that traditional attention-based NMT suffers from. We propose two variants of coverage models: linguistic coverage that leverages more linguistic information and NN-based coverage that resorts to the flexibility of neural network approximation . Experimental results show that both variants achieve significant improvements in terms of translation quality and alignment quality over NMT without coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our code is publicly available at https://github. com/tuzhaopeng/NMT-Coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Fertility in SMT is a random variable with a set of fertility probabilities, n(\u03a6j|xj) = p(\u03a6# System#Params MT05MT06MT08Avg.1 Moses-31.3730.8523.0128.412 GroundHog84.3M30.6131.1223.2328.323 + Linguistic coverage w/o fertility+1K31.26 \u202032.16 \u2020 \u2021 24.84 \u2020 \u2021 29.424 + Linguistic coverage w/ fertility+3K32.36 29.126 + NN-based coverage w/ gating (d = 1)+10K31.94 \u2020 \u2021 32.16 \u2020 \u2021 24.67 \u2020 \u2021 29.597 + NN-based coverage w/ gating (d = 10) +100K32.73 \u2020 \u2021 32.47 \u2020 \u2021 25.23 \u2020 \u2021 30.14).", "type_str": "table", "num": null }, "TABREF1": { "text": ").", "html": null, "content": "
System SAER AER
GroundHog 67.00 54.67
+ Ling. cov. w/o fertility 66.75 53.55
+ Ling. cov. w/ fertility 64.85 52.13
+ NN cov. w/o gating (d = 1) 67.10 54.46
+ NN cov. w/ gating (d = 1) 66.30 53.51
+ NN cov. w/ gating (d = 10) 64.25 50.50
", "type_str": "table", "num": null } } } }