{ "paper_id": "Y11-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:39:40.356336Z" }, "title": "Language Model Weight Adaptation Based on Cross-entropy for Statistical Machine Translation", "authors": [ { "first": "Yinggong", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory for Novel Software Technology at", "institution": "Nanjing University", "location": { "postCode": "210093", "settlement": "Nanjing", "country": "P.R.China" } }, "email": "zhaoyg@nlp.nju.edu.cn" }, { "first": "Yangsheng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory for Novel Software Technology at", "institution": "Nanjing University", "location": { "postCode": "210093", "settlement": "Nanjing", "country": "P.R.China" } }, "email": "" }, { "first": "Ning", "middle": [], "last": "Xi", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory for Novel Software Technology at", "institution": "Nanjing University", "location": { "postCode": "210093", "settlement": "Nanjing", "country": "P.R.China" } }, "email": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory for Novel Software Technology at", "institution": "Nanjing University", "location": { "postCode": "210093", "settlement": "Nanjing", "country": "P.R.China" } }, "email": "huangsj@nlp.nju.edu.cn" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory for Novel Software Technology at", "institution": "Nanjing University", "location": { "postCode": "210093", "settlement": "Nanjing", "country": "P.R.China" } }, "email": "chenjj@nlp.nju.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we investigate the language model (LM) adaptation issue for Statistical Machine Translation (SMT). 
In order to overcome the weight bias on the LM obtained from the development data, a simple but effective method is proposed to adapt the LM for diverse test datasets by employing the cross entropy of translation hypotheses as a metric to measure the similarity between different datasets. Experimental results show that the cross entropy of a test dataset is closely correlated with the bias in estimating the language models and our adaptation strategy significantly outperforms a strong baseline.", "pdf_parse": { "paper_id": "Y11-1003", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we investigate the language model (LM) adaptation issue for Statistical Machine Translation (SMT). In order to overcome the weight bias on the LM obtained from the development data, a simple but effective method is proposed to adapt the LM for diverse test datasets by employing the cross entropy of translation hypotheses as a metric to measure the similarity between different datasets. Experimental results show that the cross entropy of a test dataset is closely correlated with the bias in estimating the language models and our adaptation strategy significantly outperforms a strong baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language modeling is applied in many natural language processing (NLP) applications, including automatic speech recognition (ASR) and SMT. In reality, we often encounter the scenario in which the performance of a language model learned from a given dataset varies drastically across different datasets. Many adaptation techniques have been proposed to tackle this problem in the field of ASR. A similar situation arises in SMT: we build the language model from large amounts of monolingual data but then apply it to translate a dataset that the model does not cover well. 
This inconsistency inevitably affects the SMT training procedure, making adaptation techniques a necessity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from other tasks, the language model in SMT is incorporated under a log-linear framework. Specifically, for each source sentence f, we search among all possible candidates for the final translation e* according to: e* = arg max_e Pr(e|f)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Under the log-linear model, the posterior probability Pr(e|f) can be decomposed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pr(e|f) = p_\u03bb(e|f) \u221d exp( \u2211_{m=1}^{M} \u03bb_m \u2022 h_m(e, f) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "for others. Although the value of each feature's weight represents its importance in the decoding procedure, this importance may vary across datasets for a given language model. In this article we concentrate on the biased estimation of the language model weight, i.e., the difference between the oracle and the actual LM weight, as shown in section 3. We measure the similarity between datasets based on the cross-entropy of the translation output under a given language model, adapt the LM weight based on the ratio of the cross-entropies, and obtain the final results through a second-pass translation. Our LM weight adaptation method is also related to density ratio estimation, as mentioned in (Tsuboi et al., 2008) , in which a reweighting approach is proposed to overcome the bias caused by the different distributions of the test and training data. The remainder of this paper is organized as follows: related work on LM adaptation is presented in Section 2. 
In Section 3 we discuss the problem of biased estimation of the LM weight in machine translation. In Section 4, we propose cross-entropy as a metric for measuring the similarity between different datasets and present our adaptation method. Experimental results are shown in Section 5. We conclude and present several directions for future work in the last section.", "cite_spans": [ { "start": 708, "end": 729, "text": "(Tsuboi et al., 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "LM adaptation for SMT has received much attention. Existing approaches fall into two main categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The first category is data selection: given a test dataset and a large general corpus, the goal is to extract from the corpus the sentences that are relevant to the test dataset under some metric. There are two main approaches to the measurement: one applies the tf-idf metric (Hildebrand et al., 2005; L\u00fc et al., 2007; Zhao et al., 2004) , which originates from information retrieval; the other adopts cross-entropy (perplexity) for selection, as reported in (Axelrod et al., 2011; Moore and Lewis, 2010) .", "cite_spans": [ { "start": 286, "end": 311, "text": "(Hildebrand et al., 2005;", "ref_id": "BIBREF2" }, { "start": 312, "end": 328, "text": "L\u00fc et al., 2007;", "ref_id": "BIBREF7" }, { "start": 329, "end": 347, "text": "Zhao et al., 2004)", "ref_id": "BIBREF16" }, { "start": 488, "end": 510, "text": "(Axelrod et al., 2011;", "ref_id": "BIBREF0" }, { "start": 511, "end": 533, "text": "Moore and Lewis, 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The second category is model weighting. 
The main idea is to assign an appropriate weight to each model according to the similarity between the model's corpus and the test dataset. The models can be built from domain-specific corpora (Koehn and Schroeder, 2007) when the domain of the test dataset is known, or from datasets belonging to different sources (Foster and Kuhn, 2007; L\u00fc et al., 2007) when it is not available in advance. Such weighting can even be applied to each sentence of the training corpus (Matsoukas et al., 2009) or to each phrase pair in the phrase table (Foster et al., 2010) . The work of (Mohit et al., 2009; Mohit et al., 2010) also falls into this category: a classifier is built to predict whether a phrase is difficult, and the LM weight is then updated for each phrase segment according to its difficulty.", "cite_spans": [ { "start": 234, "end": 261, "text": "(Koehn and Schroeder, 2007)", "ref_id": "BIBREF5" }, { "start": 354, "end": 377, "text": "(Foster and Kuhn, 2007;", "ref_id": "BIBREF3" }, { "start": 378, "end": 394, "text": "L\u00fc et al., 2007)", "ref_id": "BIBREF7" }, { "start": 521, "end": 545, "text": "(Matsoukas et al., 2009)", "ref_id": "BIBREF8" }, { "start": 583, "end": 604, "text": "(Foster et al., 2010)", "ref_id": "BIBREF4" }, { "start": 628, "end": 648, "text": "(Mohit et al., 2009;", "ref_id": "BIBREF9" }, { "start": 649, "end": 668, "text": "Mohit et al., 2010)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The methods mentioned above try to overcome the difference between the training and the test data. However, the bias between the development and the test data is also an open issue, and not much attention has been paid to such weight adaptation. In Li et al. 
(2010) , the model weight is tuned on a subset of the development set that is extracted based on its relevance to the test set.", "cite_spans": [ { "start": 249, "end": 265, "text": "Li et al. (2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, different from (Li et al., 2010) , we focus on the adaptation of the LM weight only, as the LM is one of the key components of SMT and has its own characteristics. In our work, we adopt cross-entropy as a metric, as in (Axelrod et al., 2011; Moore and Lewis, 2010) , to measure the similarity between different datasets. However, only the LM weight is adjusted during adaptation, and no extra model needs to be built. Although our method is quite simple and straightforward, the improvements obtained from the adaptation show that the biased estimation of the LM weight caused by the difference between the development and test datasets is an important issue in SMT.", "cite_spans": [ { "start": 30, "end": 47, "text": "(Li et al., 2010)", "ref_id": "BIBREF6" }, { "start": 227, "end": 249, "text": "(Axelrod et al., 2011;", "ref_id": "BIBREF0" }, { "start": 250, "end": 272, "text": "Moore and Lewis, 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As the model weights are tuned only on the development dataset but applied to various test datasets, a mismatch between development and test data is inevitable. To verify the bias in the LM weight for different dataset pairs, we conduct the following experiment in this section: for a dataset pair D (development) and T (test), we first learn the weights via MERT on D. 
Then, with all other feature weights fixed, we translate T repeatedly while modifying the LM weight step by step, starting from its initial value and changing it by a constant amount each time, and we record the change in BLEU score relative to the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Weight Mismatch in Statistical Machine Translation", "sec_num": "3" }, { "text": "Based on this approach, we compare four dataset pairs under a large-scale experimental setting (Section 5.1). Figure 1 shows the relation between the BLEU score on the test datasets and the corresponding LM weight. Specifically, each point (x, y) in the figure means that under the new LM weight x * baseline-LM-weight, the BLEU score on the test dataset changes by y points relative to the baseline. We observe that for some dataset pairs, such as MT03 (development) and MT08 (test), the weight is seriously biased. A detailed comparison is shown in table 1, in which the oracle performance is the maximal BLEU score obtained when we manually modify the LM weight. The significant difference between the baseline and the oracle result (about 3 BLEU points) shows that there is much room for improvement. Meanwhile, the weight fits well for the dataset pair MT03 (development) and MT04 (test), since the baseline performance is close to the oracle.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Language Model Weight Mismatch in Statistical Machine Translation", "sec_num": "3" }, { "text": "Based on the above observations, we find that LM weight mismatch is a common phenomenon in SMT, and that the degree of bias differs across dataset pairs. It is therefore necessary to propose a metric that measures the similarity between datasets, together with an adaptation strategy for the LM weight, as we discuss in the next section. 
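The sweep described above can be sketched as follows; this is a minimal hypothetical illustration, where translate_bleu stands in for running the decoder (all other feature weights fixed) and scoring BLEU, and is not part of any real toolkit:

```python
# Hypothetical sketch of the LM-weight sweep diagnostic of section 3.
# translate_bleu(test_set, lm_weight) is an assumed stand-in for running
# the SMT decoder with all other feature weights fixed and scoring BLEU.

def sweep_lm_weight(translate_bleu, test_set, base_lm_weight, step=0.01, n_steps=10):
    '''Record the BLEU change relative to the baseline while the LM weight
    is shifted step by step around its MERT-tuned value.'''
    baseline = translate_bleu(test_set, base_lm_weight)
    curve = []
    for k in range(-n_steps, n_steps + 1):
        w = base_lm_weight + k * step
        curve.append((w / base_lm_weight, translate_bleu(test_set, w) - baseline))
    return curve  # list of (relative LM weight, BLEU delta) points
```

The returned curve corresponds to the (x, y) points of Figure 1: the LM weight relative to the baseline weight, and the BLEU change against the baseline.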
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Weight Mismatch in Statistical Machine Translation", "sec_num": "3" }, { "text": "Entropy is used as a metric to show how much information one dataset contains. Given sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "X i = (x 1 , x 2 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": ". . , x n ), the corresponding cross-entropy under specific language model lm could be calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(X i ) = \u2212 1 n log P lm (x , x 2 , . . . , x n )", "eq_num": "(3)" } ], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "Given two datasets and one language model, we could use cross-entropy to identify which dataset matches the language model better. As the language model is built on target language in SMT task, we can use the entropy of the translation outputs for each dataset as a measurement of the dataset. Given a language model and two datasets (development and test), the model weights are tuned through MERT on development dataset. Then we compute their cross-entropy after translating both datasets under current weight. 
Specifically, for a dataset X that contains multiple sentences, we obtain its cross-entropy according to the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(X) = \u2212 ( \u2211_i \u2211_j log P_lm(X_i^j) ) / ( \u2211_i \u2211_j length(X_i^j) )", "eq_num": "(4)" } ], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "where X_i^j denotes the j-th best translation or reference for the i-th sentence in the dataset. As the decoder generates translation outputs together with their feature vectors, log P_lm(X_i^j) can be read directly from the language model feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "The cross-entropy tells us how well a dataset fits the language model: empirically, a small value indicates a good match between the language model and the dataset, in which case the language model can play a more important role in the translation procedure, whereas a large value indicates a poor match. Hence, if the test data matches the language model better than the development data does, the language model weight is likely under-estimated; otherwise it is likely over-estimated. 
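As a minimal sketch (an illustration under the assumption that each hypothesis is represented by a pair of its LM log-probability and its length in words, as produced by the decoder), equation (4) and the ratio-based weight update described later in this section can be written as:

```python
# Minimal sketch of equation (4) and the ratio-based LM-weight update.
# A dataset is assumed to be a list of hypotheses, where each hypothesis
# is a pair (lm_logprob, n_words) taken from the decoder feature vector.

def cross_entropy(hypotheses):
    '''H(X) = -(sum of LM log-probs) / (total number of words).'''
    total_logprob = sum(lp for lp, n in hypotheses)
    total_words = sum(n for lp, n in hypotheses)
    return -total_logprob / total_words

def adapt_lm_weight(w_lm, dev_hyps, test_hyps):
    '''Scale the LM weight by H(D) / H(T) before the second-pass translation.'''
    return w_lm * cross_entropy(dev_hyps) / cross_entropy(test_hyps)
```

Here cross_entropy implements equation (4) for the 1-best case, and adapt_lm_weight applies the H(D)/H(T) ratio used by the adaptation method before the second-pass translation.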
We can thus conclude that the cross-entropy difference serves as a metric for how well the LM weight is estimated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "However, two problems remain: first, how can we estimate the degree to which the LM weight is biased; and second, how can we adjust the weight appropriately. We take the straightforward view that the difference in cross-entropy between the test and development data can serve as a metric for the LM weight bias. For the adaptation of the language model weight, we propose an effective method that uses only cross-entropy. Let D be the development dataset and T the test dataset. The adaptation approach is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "1. Train a log-linear model on D and obtain the feature weights W.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "2. Translate D using W and calculate its cross-entropy H(D); similarly, translate T and obtain H(T).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "3. 
Modify the LM weight W_lm by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "W_lm = W_lm \u2022 H(D) / H(T)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "and obtain the new weight vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "4. Translate T again under the new weights and obtain the final result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "In the third step, we use the ratio of the entropies of the development and test datasets for the weight adjustment, as it reflects the variation between the two datasets. Although each sentence in the development dataset has references, we use the entropy of the translation outputs rather than that of the references, because in real applications we usually translate datasets for which no references are available (references are included only in standard SMT evaluation datasets). In fact, the adaptation result based on the cross-entropy of the translation outputs is consistent with that based on the cross-entropy of the references, as shown in section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Language Model Weight Adaptation Under Cross Entropy", "sec_num": "4" }, { "text": "We implement a hierarchical phrase-based decoder according to Chiang (2005) . The development data includes NIST 2003 (MT03), NIST 2004 (MT04), NIST 2005 (MT05), NIST 2006 (MT06) and NIST 2008 (MT08). 
Besides the above five datasets, the test datasets also contain all portions of MT06, including newswire (MT06nw), newsgroup (MT06wg) and weblog (MT06wl), and two portions of MT08, including newswire (MT08nw) and webgroup (MT08wg). The statistics are shown in Table 2 . All results are measured in case-insensitive BLEU4 (Papineni et al., 2002) . In the experiments, the training corpus includes LDC2002E18, LDC2003E07, LDC2003E14, LDC2004E12, LDC2004T08, LDC2005E83, LDC2005T06, LDC2005T10, LDC2006E26, LDC2006E34, LDC2006E85, LDC2006E92, and LDC2007T09, which consists of about 8.5M sentence pairs. The word alignments are trained by GIZA++ in both directions and refined under the intersect-diag-grow heuristic. The plain phrases are extracted from all the bilingual training data, while hierarchical rules are extracted only from selected datasets, including LDC2003E14, LDC2003E07, LDC2005T10, LDC2006E34, LDC2006E85, and LDC2006E92, which cover nearly 467K sentence pairs. We further train a 5-gram language model over the English part of the training data plus the Xinhua portion of the English Gigaword corpus.", "cite_spans": [ { "start": 62, "end": 75, "text": "Chiang (2005)", "ref_id": "BIBREF1" }, { "start": 517, "end": 540, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 456, "end": 463, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.1" }, { "text": "In this part, we evaluate the performance of the method introduced in section 4. For each development dataset, we calculate the cross-entropy of all test datasets, which is displayed in table 3. The results of both the baseline and our adapted method are also presented in table 5, from which we may find that the cross-entropy is quite close for some dataset pairs, such as MT03 and MT05, indicating that the adapted score changes little compared with the baseline. 
For a pair like MT03 and MT08, by contrast, the remarkable difference means that significant improvement can be achieved (1.60 BLEU points with MT03 as development and MT08 as test, and 0.99 BLEU points for the reverse). We obtain similar results on the other dataset pairs, including all separate portions of MT06 and MT08, whose genre information is available. Table 6 displays the oracle test performance for each dataset pair. We observe that the oracle performance for MT05 (test) under MT03 (development) is 37.54, while the baseline is 37.33, which is consistent with the ratio of cross-entropy between the two datasets. We also calculate the entropy of the translations after adaptation for all dataset pairs, also listed in table 3. From these results, we find that the cross entropy usually changes according to the ratio of the cross-entropies of the development and test datasets. Specifically, the cross entropy of the test dataset increases as the LM weight decreases, as shown in figure 2, in which we use the same dataset pairs as in section 3. The reason is that when the LM weight increases, the language model tends to play a more important role in the whole SMT system; as a result, the decoder prefers translations with higher LM scores, which also tend to be shorter and to have smaller cross-entropy.", "cite_spans": [], "ref_spans": [ { "start": 828, "end": 835, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Adaptation on 1-best Translation Result", "sec_num": "5.2" }, { "text": "Furthermore, we want to know where the improvements of our adaptation method come from. Taking the pair MT03 and MT08 as an example, the details of the results are shown in table 4. We observe that with MT03 as development and MT08 as test, the length penalty is quite large; our adaptation method notably reduces this penalty and achieves a significant improvement on the BLEU metric. 
Although the n-gram precision decreases to some extent, the gain from the reduced length penalty outweighs the loss in precision. (Figure 2 : the cross entropy of the test set vs. the LM weight variation in percentage, for different dataset pairs.) For the case with MT08 as development and MT03 as test, the length penalties of the baseline and the adapted results are equal, but the n-gram precision of the adapted method is higher than that of the baseline, which leads to an improvement in final performance. We also apply another SMT metric, TER (Snover et al., 2006) , to evaluate the results of the dataset pair MT03 and MT08, as shown in table 4. When we use MT03 as development and MT08 as test, the TER result shows no improvement. This is consistent with the discussion above, as the BLEU improvement mainly comes from the length penalty, not from n-gram precision. Meanwhile, when we use MT08 as development and MT03 as test, we achieve a significant improvement in the TER score. This inconsistency reveals some potential difference between the TER and BLEU metrics.", "cite_spans": [ { "start": 964, "end": 985, "text": "(Snover et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 469, "end": 477, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Adaptation on 1-best Translation Result", "sec_num": "5.2" }, { "text": "However, for some dataset pairs, the adapted result is not as good as the baseline. The reason might be that the difference between the test and development data as measured by cross-entropy is larger than the real difference. Taking MT03 (development) and MT04 (test) for example, figure 1 shows that the baseline is almost the same as the oracle (a 0.01 BLEU point difference), while the ratio of the cross-entropies from table 3 is larger than this suggests, making the LM weight over-adapted and decreasing the performance. 
Nevertheless, the results in table 5 show that our method works well for most dataset pairs (33 of 50 groups improve, while only 6 of 50 groups degrade). Although our adaptation method is in a sense empirical, we believe it reflects the inherent relations in LM adaptation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptation on 1-best Translation Result", "sec_num": "5.2" }, { "text": "Furthermore, we want to know how the cross-entropy variation influences the BLEU score improvements. In figure 3, the X-axis represents the absolute value of the relative change between the development and test datasets (i.e., |H(D)/H(T) \u2212 1|), and the Y-axis displays the BLEU improvements under adaptation. We observe that all five groups of points are nearly linear, showing a strong correlation between the adaptation improvements and the cross-entropy difference. Based on the above results, we conclude that even if cross-entropy is not the only factor that determines the bias of the LM weight, it is still one of the most important.", "cite_spans": [ { "start": 221, "end": 225, "text": "H(D)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Adaptation on 1-best Translation Result", "sec_num": "5.2" }, { "text": "Above, we used only the 1-best translation result for the entropy calculation; we now examine what happens when more outputs are used. With MT03 as development and MT05 and MT08 as test, respectively, we run the adaptation using from the 1-best to the 20-best translations. Results in figure 4 show that the number of translation outputs has little impact on the adaptation results, since the deviation between the maximal and minimal scores is quite small (less than 0.2 BLEU points). In the following parts, we adopt the 1-best translation as the default setting. ", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 292, "text": "figure 4", "ref_id": null } ], "eq_spans": [], "section": "1-best VS. 
N-best Translation Result Adaptation", "sec_num": "5.3" }, { "text": "In practice, the entropy of the translation outputs, rather than of the references, is used for adaptation. Nevertheless, we want to know whether there is any difference between these two approaches. Table 7 shows the results of adaptation on the entropy of the references, while the related entropy is shown in Table 8 . We find that both the adapted results and the cross-entropy are consistent with those obtained using the 1-best translations, since in SMT the model weights are tuned to make the translation outputs as close as possible to the references. (Table 8 : cross entropy of each dataset calculated on the references.)", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 7", "ref_id": "TABREF11" }, { "start": 297, "end": 304, "text": "Table 8", "ref_id": null }, { "start": 550, "end": 557, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Adaptation on Translation References", "sec_num": "5.4" }, { "text": "In our experiments so far, we have always used the standard NIST datasets to evaluate the adaptation method. In this section we validate our method on additional datasets. Using MT03 as development, we build six extra test datasets by randomly selecting 50, 100, 300, 600, 1200 and 2000 sentences, respectively, from the collections of MT04, MT05, MT06 and MT08. The results are shown in Table 9 : improvements can still be achieved, but they are not as significant as those in table 5. From the experimental results, we know that some datasets, such as MT04 and MT05, are close to the development set MT03, while others are different. One basic assumption of our adaptation method is that a dataset is composed of several documents, each belonging to a specific domain. 
Hence, for the random datasets, the distribution is a mixture of multiple sources, making the adaptation gains less significant than those on standard MT evaluation datasets.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 393, "text": "Table 9", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Adaptation on Random Test Data", "sec_num": "5.5" }, { "text": "In this article, we address the problem of LM weight mismatch between tuning and testing. In particular, the cross-entropy of n-best translation hypotheses is adopted as a metric to indicate the bias in the language model weight. Furthermore, an adaptation approach is proposed that adjusts the LM weight using the ratio of the cross-entropies of different datasets. Experimental results show that our cross-entropy based adaptation strategy significantly alleviates the bias problem, and significant improvements can be achieved when the test data is quite different from the development data. In this paper, we tackle adaptation only at the corpus level. In the future, we plan to explore LM adaptation at the document and sentence levels. Besides, we also intend to apply the adaptation to multiple LMs. Although our method works well on most dataset pairs, there still exist some pairs on which it fails. It will therefore be interesting to further investigate the factors that determine the adaptation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null } ], "back_matter": [ { "text": "We thank Shujie Liu for his valuable suggestions. We would also like to thank the anonymous reviewers for their helpful comments. 
This work is supported by the National Natural Science Foundation of China (No.61003112) and the National Fundamental Research Program of China (2010CB327903).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain Adaptation via Pseudo In-Domain Data Selection", "authors": [ { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "355--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amittai Axelrod, Xiaodong He and Jianfeng Gao. 2011. Domain Adaptation via Pseudo In- Domain Data Selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, 355-362, Edinburgh, July, 2011.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Hierarchical Phrase-based Model for Statistical Machine Translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. 
In Proceedings of the 43rd Annual Meeting of the ACL, 263-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adaptation of the Translation Model for Statistical Machine translation based on Information Retrieval", "authors": [ { "first": "Almut", "middle": [], "last": "Hildebrand", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Eck", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EAMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Almut Hildebrand, Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Adaptation of the Translation Model for Statistical Machine translation based on Information Retrieval. In Pro- ceedings of EAMT, Budapest, Hungary.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mixture-Model Adaptation for SMT", "authors": [ { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second ACL Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Foster and Roland Kuhn. 2007. Mixture-Model Adaptation for SMT. 
In Proceedings of the Second ACL Workshop on Statistical Machine Translation, Prague, Czech Republic.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation", "authors": [ { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "451--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Foster, Cyril Goutte and Roland Kuhn. 2010. Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, 451-459, MIT, Massachusetts, USA, October 2010.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Experiments in Domain Adaptation for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "224--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Josh Schroeder. 2007. Experiments in Domain Adaptation for Statistical Ma- chine Translation. 
In Proceedings of the Second Workshop on Statistical Machine Translation, 224-227, Prague, June 2007.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive Development Data Selection for Log-linear Model in Statistical Machine Translation", "authors": [ { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yinggong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dongdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "662--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mu Li, Yinggong Zhao, Dongdong Zhang and Ming Zhou. 2010. Adaptive Development Data Selection for Log-linear Model in Statistical Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), 662-670.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving Statistical Machine Translation Performance by Training Data Selection and Optimization", "authors": [ { "first": "Yajuan", "middle": [], "last": "L\u00fc", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "343--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yajuan L\u00fc, Jin Huang and Qun Liu. 2007. Improving Statistical Machine Translation Performance by Training Data Selection and Optimization. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 343-350.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discriminative Corpus Weight Estimation for Machine Translation", "authors": [ { "first": "Spyros", "middle": [], "last": "Matsoukas", "suffix": "" }, { "first": "Antti-Veikko", "middle": [], "last": "Rosti", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spyros Matsoukas, Antti-Veikko Rosti and Bing Zhang. 2009. Discriminative Corpus Weight Estimation for Machine Translation. In Proc. of the Conference on Empirical Methods in Natural Language Processing, 160-167", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language Model Adaptation for Difficult To Translate Phrases", "authors": [ { "first": "Behrang", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Liberato", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 13th Annual Conference of the EAMT", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behrang Mohit, Frank Liberato and Rebecca Hwa. 2009. Language Model Adaptation for D- ifficult To Translate Phrases. 
In Proceedings of the 13th Annual Conference of the EAMT, 160-167.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using Variable Decoding Weight for Language Model in Statistical Machine Translation", "authors": [ { "first": "Behrang", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2010, "venue": "The Proceedings of the 9th Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behrang Mohit, Rebecca Hwa and Alon Lavie. 2010. Using Variable Decoding Weight for Lan- guage Model in Statistical Machine Translation In The Proceedings of the 9th Conference of the Association for Machine Translation in the Americas, Colorado", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Intelligent Selection of Language Model Training Data", "authors": [ { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "William", "middle": [], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "220--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent Selection of Language Model Training Data. 
In Proceedings of the ACL 2010 Conference Short Papers, 220-224, Uppsala, Sweden.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Minimum Error Rate Training in Statistical Machine Translation", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41th Annual Meeting of the Association for Computational Linguistic (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Pro- ceedings of the 41th Annual Meeting of the Association for Computational Linguistic (ACL), Sapporo, Japan.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistic (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistic (ACL), 311-318.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Associa- tion for Machine Translation in the Americas", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation", "authors": [ { "first": "Yuta", "middle": [], "last": "Tsuboi", "suffix": "" }, { "first": "Hisashi", "middle": [], "last": "Kashima", "suffix": "" }, { "first": "Shohei", "middle": [], "last": "Hido", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Bickel", "suffix": "" }, { "first": "Masashi", "middle": [], "last": "Sugiyama", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Eighth SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "443--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuta Tsuboi, Hisashi Kashima, Shohei Hido, Steffen Bickel and Masashi Sugiyama. 2008. Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation. In Proceedings of the Eighth SIAM International Conference on Data Mining, pp. 
443-454, 2008.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Language Model Adaptation for Statistical Machine Translation with Structured Query Models", "authors": [ { "first": "Bing", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Eck", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of International Conference on Computational Linguistics(COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Zhao, Matthias Eck and Stephan Vogel. 2004. Language Model Adaptation for Statistical Machine Translation with Structured Query Models. In Proceedings of International Confer- ence on Computational Linguistics(COLING), Geneva, August.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
[Figure 1: BLEU score variation (y-axis, in value) vs. language model weight variation percentage (x-axis); curves for Dev:MT03/Test:MT08, Dev:MT08/Test:MT03, Dev:MT03/Test:MT04, Dev:MT04/Test:MT03]
DEV  | TEST | Baseline | Oracle
MT03 | MT04 | 37.54    | 37.55 (+0.01)
MT04 | MT03 | 38.76    | 38.96 (+0.20)
MT03 | MT08 | 24.86    | 28.66 (+3.80)
MT08 | MT03 | 35.86    | 38.77 (+2.91)
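The gap between the tuned (Baseline) and Oracle LM weights in the rows above motivates the paper's adaptation rule: rescale the development-tuned LM weight by the ratio of cross-entropies between development and test data. A minimal sketch, assuming a simple multiplicative ratio; the function name, scaling direction, and numbers are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of cross-entropy based LM weight adaptation:
# the LM weight tuned on the development set is rescaled by the ratio
# of development-set to test-set cross-entropy, so a harder (higher-
# entropy) test set receives a smaller LM weight under this assumption.

def adapt_lm_weight(dev_weight: float, h_dev: float, h_test: float) -> float:
    """Rescale the development-tuned LM weight by the cross-entropy ratio."""
    return dev_weight * (h_dev / h_test)

# Toy numbers loosely inspired by the MT03-dev / MT08-test setting;
# the starting weight 0.14 is an arbitrary illustrative value.
lam = adapt_lm_weight(dev_weight=0.14, h_dev=1.8842, h_test=2.1462)
print(round(lam, 4))  # → 0.1229
```

Under this sketch, identical dev and test cross-entropies leave the tuned weight unchanged, which matches the observation that adaptation helps most when the two datasets differ.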
", "text": "The variation of BLEU score (in value) vs. variation of language model weight (in percentage)", "num": null, "html": null, "type_str": "table" }, "TABREF1": { "content": "", "text": "Comparison between baseline and oracle performance under language model weight for different dataset pairs.", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "content": "
", "text": "Statistics on development and test datasets.", "num": null, "html": null, "type_str": "table" }, "TABREF4": { "content": "
TEST \ DEV | MT03 (Baseline / Adapted) | MT04 (Baseline / Adapted) | MT05 (Baseline / Adapted) | MT06 (Baseline / Adapted) | MT08 (Baseline / Adapted)
MT03   | 1.8842 / 1.8842 | 1.8507 / 1.8701 | 1.9953 / 1.9958 | 1.8058 / 1.7992 | 1.8800 / 1.8564
MT04   | 1.7556 / 1.7353 | 1.7264 / 1.7264 | 1.8720 / 1.8482 | 1.6900 / 1.6679 | 1.7600 / 1.7191
MT05   | 1.8880 / 1.8884 | 1.8621 / 1.8776 | 2.0022 / 2.0022 | 1.8109 / 1.8091 | 1.8759 / 1.8513
MT06   | 1.9287 / 1.9361 | 1.8997 / 1.9185 | 2.0459 / 2.0571 | 1.8408 / 1.8408 | 1.9224 / 1.9012
MT08   | 2.1462 / 2.1787 | 2.1176 / 2.1550 | 2.2890 / 2.3488 | 2.0376 / 2.0587 | 2.1224 / 2.1224
MT06bc | 1.8324 / 1.8260 | 1.8262 / 1.8150 | 1.9425 / 1.9292 | 1.7657 / 1.7536 | 1.8352 / 1.7995
MT06nw | 1.8480 / 1.8412 | 1.8175 / 1.8294 | 1.9535 / 1.9469 | 1.7607 / 1.7502 | 1.8378 / 1.8108
MT06ng | 2.2962 / 2.3440 | 2.2660 / 2.3069 | 2.4436 / 2.5170 | 2.1692 / 2.2047 | 2.2637 / 2.2737
MT08nw | 2.0602 / 2.0786 | 2.0208 / 2.0556 | 2.1913 / 2.2304 | 1.9548 / 1.9665 | 2.0271 / 2.0166
MT08wg | 2.2649 / 2.3191 | 2.2544 / 2.2995 | 2.4204 / 2.5120 | 2.1511 / 2.1815 | 2.2527 / 2.2664
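Per-word cross-entropies such as those tabulated above are computed over translation hypotheses under a language model. A hedged sketch follows, with a toy uniform unigram scorer standing in for the paper's n-gram LM; the vocabulary size and scoring function are illustrative assumptions:

```python
import math

# Sketch of per-word cross-entropy of a set of translation hypotheses
# under a language model. The paper scores n-best outputs with an n-gram
# LM; here a toy unigram log-probability function is assumed instead.

def cross_entropy(hypotheses, logprob):
    """Average negative log2-probability per word over all hypotheses."""
    total_logprob, total_words = 0.0, 0
    for hyp in hypotheses:
        words = hyp.split()
        total_logprob += sum(logprob(w) for w in words)
        total_words += len(words)
    return -total_logprob / total_words

# Toy "LM": uniform over a 16-word vocabulary, so log2 p(w) = -4 for
# every word and any hypothesis set scores exactly 4 bits per word.
uniform = lambda w: math.log2(1.0 / 16)
h = cross_entropy(["the cat sat", "a dog ran fast"], uniform)
print(h)  # → 4.0
```

With a real n-gram LM, `logprob` would return context-dependent scores, and the same averaging over the n-best list yields the dataset-level values compared in the table.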
", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF5": { "content": "", "text": ".Cross entropy of test datasets on different development datasets, under large scale setting.", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "content": "
TEST \ DEV | MT03 (Baseline / Adapted) | MT04 (Baseline / Adapted) | MT05 (Baseline / Adapted) | MT06 (Baseline / Adapted) | MT08 (Baseline / Adapted)
MT03   | 39.14 / 39.14 (|) | 38.77 / 38.45 (↓) | 38.61 / 38.69 (|) | 37.31 / 37.44 (|) | 35.86 / 36.85 (↑)
MT04   | 37.52 / 36.74 (↓) | 37.93 / 37.93 (|) | 36.72 / 36.12 (↓) | 35.81 / 36.84 (↑) | 34.66 / 36.23 (↑)
MT05   | 37.33 / 37.37 (|) | 36.94 / 37.24 (↑) | 36.87 / 36.87 (↑) | 35.93 / 36.07 (|) | 34.15 / 35.29 (↑)
MT06   | 33.58 / 34.04 (↑) | 33.63 / 35.13 (↑) | 33.44 / 33.49 (|) | 36.36 / 36.36 (↑) | 35.04 / 35.87 (↑)
MT08   | 24.86 / 26.46 (↑) | 24.18 / 27.03 (↑) | 25.43 / 26.65 (↑) | 27.74 / 28.86 (↑) | 29.29 / 29.29 (|)
MT06bc | 24.22 / 27.20 (↑) | 23.77 / 27.70 (↑) | 24.64 / 26.26 (↑) | 27.37 / 28.14 (↑) | 28.87 / 28.43 (↓)
MT06nw | 40.36 / 39.91 (↓) | 39.97 / 40.74 (↑) | 39.85 / 39.72 (|) | 39.57 / 40.26 (↑) | 37.71 / 39.72 (↑)
MT06ng | 33.81 / 33.65 (|) | 34.23 / 34.71 (↑) | 33.79 / 33.45 (↓) | 36.72 / 36.53 (|) | 35.58 / 36.61 (↑)
MT08nw | 29.40 / 30.66 (↑) | 28.95 / 31.32 (↑) | 29.67 / 30.31 (↑) | 32.47 / 33.31 (↑) | 33.03 / 33.23 (↑)
MT08wg | 18.78 / 21.09 (↑) | 17.81 / 21.04 (↑) | 19.74 / 20.91 (↑) | 21.35 / 23.06 (↑) | 22.72 / 23.13 (↑)
", "text": "Detailed analysis of BLEU scores, including n-gram precision and length penalty and TER scores, based on dataset pair of MT03 and MT08.", "num": null, "html": null, "type_str": "table" }, "TABREF7": { "content": "", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF9": { "content": "
[Figure: Cross-entropy of test data (y-axis) vs. LM weight variation percentage (x-axis); curves for DEV:MT03/TEST:MT08, DEV:MT08/TEST:MT03, DEV:MT03/TEST:MT04, DEV:MT04/TEST:MT03]
", "text": "Oracle performance of different dataset pairs under large scale setting.", "num": null, "html": null, "type_str": "table" }, "TABREF11": { "content": "", "text": "Comparison between baseline and LM weight adaption method using references on different dataset pairs, under large scale setting.", "num": null, "html": null, "type_str": "table" }, "TABREF13": { "content": "
[Figure 4: "Adaptation under different N-best Calculation"; BLEU (y-axis) vs. n-best size (x-axis), baseline and adapted curves; left panel MT03:DEV, MT05:Test; right panel MT03:DEV, MT08:Test]
Random | Baseline | Adapted       | Oracle
50     | 34.52    | 35.03 (+0.51) | 35.05
100    | 34.79    | 34.33 (-0.46) | 34.94
300    | 33.06    | 33.38 (+0.32) | 33.82
600    | 33.48    | 33.72 (+0.24) | 34.91
1200   | 33.64    | 33.92 (+0.28) | 35.03
2000   | 33.72    | 34.04 (+0.32) | 35.22
", "text": "The adaptation results under entropy calculation on different number of translation outputs, with MT03 as Development and MT05 as Test(Left), MT08 as Test(Right)", "num": null, "html": null, "type_str": "table" }, "TABREF14": { "content": "", "text": "Results with MT03 as development and random selected datasets as test, under large scale setting", "num": null, "html": null, "type_str": "table" } } } }