{ "paper_id": "D14-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:55:23.478881Z" }, "title": "Neural Network Based Bilingual Language Model Growing for Statistical Machine Translation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "", "affiliation": { "laboratory": "Multilingual Translation Laboratory", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai" } }, "email": "" }, { "first": "Eiichro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "Multilingual Translation Laboratory", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai" } }, "email": "eiichiro.sumita@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Since larger n-gram Language Model (LM) usually performs better in Statistical Machine Translation (SMT), how to construct efficient large LM is an important topic in SMT. However, most of the existing LM growing methods need an extra monolingual corpus, where additional LM adaption technology is necessary. In this paper, we propose a novel neural network based bilingual LM growing method, only using the bilingual parallel corpus in SMT. The results show that our method can improve both the perplexity score for LM evaluation and BLEU score for SMT, and significantly outperforms the existing LM growing methods without extra corpus.", "pdf_parse": { "paper_id": "D14-1023", "_pdf_hash": "", "abstract": [ { "text": "Since larger n-gram Language Model (LM) usually performs better in Statistical Machine Translation (SMT), how to construct efficient large LM is an important topic in SMT. However, most of the existing LM growing methods need an extra monolingual corpus, where additional LM adaption technology is necessary. In this paper, we propose a novel neural network based bilingual LM growing method, only using the bilingual parallel corpus in SMT. The results show that our method can improve both the perplexity score for LM evaluation and BLEU score for SMT, and significantly outperforms the existing LM growing methods without extra corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "'Language Model (LM) Growing' refers to adding n-grams outside the corpus together with their probabilities into the original LM. 
This operation is useful as it can make LM perform better through letting it become larger and larger, by only using a small training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are various methods for adding n-grams selected by different criteria from a monolingual corpus (Ristad and Thomas, 1995; Niesler and Woodland, 1996; Siu and Ostendorf, 2000; Siivola et al., 2007) . However, all of these approaches need additional corpora. Meanwhile the extra corpora from different domains will not result in better LMs (Clarkson and Robinson, 1997; Iyer et al., 1997; Bellegarda, 2004; Koehn and Schroeder, Part of this work was done as Rui Wang visited in NICT. 2007 ). In addition, it is very difficult or even impossible to collect an extra large corpus for some special domains such as the TED corpus (Cettolo et al., 2012) or for some rare languages. Therefore, to improve the performance of LMs, without assistance of extra corpus, is one of important research topics in SMT.", "cite_spans": [ { "start": 102, "end": 127, "text": "(Ristad and Thomas, 1995;", "ref_id": "BIBREF22" }, { "start": 128, "end": 155, "text": "Niesler and Woodland, 1996;", "ref_id": "BIBREF19" }, { "start": 156, "end": 180, "text": "Siu and Ostendorf, 2000;", "ref_id": "BIBREF28" }, { "start": 181, "end": 202, "text": "Siivola et al., 2007)", "ref_id": "BIBREF27" }, { "start": 344, "end": 373, "text": "(Clarkson and Robinson, 1997;", "ref_id": "BIBREF6" }, { "start": 374, "end": 392, "text": "Iyer et al., 1997;", "ref_id": "BIBREF9" }, { "start": 393, "end": 410, "text": "Bellegarda, 2004;", "ref_id": "BIBREF3" }, { "start": 411, "end": 431, "text": "Koehn and Schroeder,", "ref_id": "BIBREF12" }, { "start": 462, "end": 492, "text": "Rui Wang visited in NICT. 2007", "ref_id": null }, { "start": 630, "end": 652, "text": "(Cettolo et al., 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, Continues Space Language Model (CSLM), especially Neural Network based Language Model (NNLM) (Bengio et al., 2003; Schwenk, 2007; Mikolov et al., 2010; Le et al., 2011) , is being actively used in SMT (Schwenk et al., 2006; Son et al., 2010; Schwenk, 2010; Schwenk et al., 2012; Son et al., 2012; Niehues and Waibel, 2012) . One of the main advantages of CSLM is that it can more accurately predict the probabilities of the n-grams, which are not in the training corpus. However, in practice, CSLMs have not been widely used in the current SMT systems, due to their too high computational cost. Vaswani and colleagues (2013) propose a method for reducing the training cost of CSLM and apply it to SMT decoder. However, they do not show their improvement for decoding speed, and their method is still slower than the n-gram LM. There are several other methods for attempting to implement neural network based LM or translation model for SMT (Devlin et al., 2014; Liu et al., 2014; Auli et al., 2013) . However, the decoding speed using n-gram LM is still state-ofthe-art one. Some approaches calculate the probabilities of the n-grams n-grams before decoding, and store them in the n-gram format (Wang et al., 2013a; Arsoy et al., 2013; Arsoy et al., 2014) . The 'converted CSLM' can be directly used in SMT. 
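As a toy sketch of what such a 'converted CSLM' can look like (an illustrative example only, not the toolkit used in the cited work), pre-computed probabilities can be written into the standard ARPA back-off format that n-gram decoders already read; the n-grams and probability values below are assumed:

```python
# Minimal sketch: storing pre-computed (e.g., CSLM-scored) n-gram
# probabilities in ARPA format so a standard n-gram decoder can query them.
# Back-off weights are omitted for brevity; real converted LMs include them.
import math

def write_arpa(ngram_probs, path):
    """ngram_probs: {order: {ngram_tuple: probability}}."""
    orders = sorted(ngram_probs)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\\data\\\n")
        for n in orders:
            f.write("ngram %d=%d\n" % (n, len(ngram_probs[n])))
        for n in orders:
            f.write("\n\\%d-grams:\n" % n)
            for ngram, p in sorted(ngram_probs[n].items()):
                f.write("%.6f\t%s\n" % (math.log10(p), " ".join(ngram)))  # ARPA uses log10
        f.write("\n\\end\\\n")

write_arpa({1: {("the",): 0.6, ("cat",): 0.4},
            2: {("the", "cat"): 0.9}}, "converted_toy.arpa")
```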
Though more n-grams which are not in the train-ing corpus can be generated by using some of these 'converting' methods, these methods only consider the monolingual information, and do not take the bilingual information into account.", "cite_spans": [ { "start": 103, "end": 124, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF4" }, { "start": 125, "end": 139, "text": "Schwenk, 2007;", "ref_id": "BIBREF25" }, { "start": 140, "end": 161, "text": "Mikolov et al., 2010;", "ref_id": "BIBREF17" }, { "start": 162, "end": 178, "text": "Le et al., 2011)", "ref_id": "BIBREF15" }, { "start": 211, "end": 233, "text": "(Schwenk et al., 2006;", "ref_id": "BIBREF23" }, { "start": 234, "end": 251, "text": "Son et al., 2010;", "ref_id": "BIBREF29" }, { "start": 252, "end": 266, "text": "Schwenk, 2010;", "ref_id": "BIBREF26" }, { "start": 267, "end": 288, "text": "Schwenk et al., 2012;", "ref_id": "BIBREF24" }, { "start": 289, "end": 306, "text": "Son et al., 2012;", "ref_id": "BIBREF30" }, { "start": 307, "end": 332, "text": "Niehues and Waibel, 2012)", "ref_id": "BIBREF18" }, { "start": 605, "end": 634, "text": "Vaswani and colleagues (2013)", "ref_id": null }, { "start": 950, "end": 971, "text": "(Devlin et al., 2014;", "ref_id": "BIBREF7" }, { "start": 972, "end": 989, "text": "Liu et al., 2014;", "ref_id": "BIBREF16" }, { "start": 990, "end": 1008, "text": "Auli et al., 2013)", "ref_id": "BIBREF2" }, { "start": 1205, "end": 1225, "text": "(Wang et al., 2013a;", "ref_id": "BIBREF34" }, { "start": 1226, "end": 1245, "text": "Arsoy et al., 2013;", "ref_id": "BIBREF0" }, { "start": 1246, "end": 1265, "text": "Arsoy et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We observe that the translation output of a phrase-based SMT system is concatenation of phrases from the phrase table, whose probabilities can be calculated by CSLM. Based on this observation, a novel neural network based bilingual LM growing method is proposed using the 'connecting phrases'. The remainder of this paper is organized as follows: In Section 2, we will review the existing CSLM converting methods. The new neural network based bilingual LM growing method will be proposed in Section 3. In Section 4, the experiments will be conducted and the results will be analyzed. We will conclude our work in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional Backoff N -gram LMs (BNLMs) have been widely used in many NLP tasks (Zhang and Zhao, 2013; Jia and Zhao, 2014; Zhang et al., 2012; Xu and Zhao, 2012; Wang et al., 2013b; Jia and Zhao, 2013; Wang et al., 2014) . 
Recently, CSLMs become popular because they can obtain more accurate probability estimation.", "cite_spans": [ { "start": 80, "end": 102, "text": "(Zhang and Zhao, 2013;", "ref_id": "BIBREF38" }, { "start": 103, "end": 122, "text": "Jia and Zhao, 2014;", "ref_id": "BIBREF11" }, { "start": 123, "end": 142, "text": "Zhang et al., 2012;", "ref_id": "BIBREF39" }, { "start": 143, "end": 161, "text": "Xu and Zhao, 2012;", "ref_id": "BIBREF37" }, { "start": 162, "end": 181, "text": "Wang et al., 2013b;", "ref_id": "BIBREF35" }, { "start": 182, "end": 201, "text": "Jia and Zhao, 2013;", "ref_id": "BIBREF10" }, { "start": 202, "end": 220, "text": "Wang et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Existing CSLM Converting Methods", "sec_num": "2" }, { "text": "A CSLM implemented in a multi-layer neural network contains four layers: the input layer projects (first layer) all words in the context h i onto the projection layer (second layer); the hidden layer (third layer) and the output layer (fourth layer) achieve the non-liner probability estimation and calculate the LM probability P (w i |h i ) for the given context (Schwenk, 2007) .", "cite_spans": [ { "start": 364, "end": 379, "text": "(Schwenk, 2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "CSLM is able to calculate the probabilities of all words in the vocabulary of the corpus given the context. However, due to too high computational complexity, CSLM is mainly used to calculate the probabilities of a subset of the whole vocabulary (Schwenk, 2007) . This subset is called a shortlist, which consists of the most frequent words in the vocabulary. CSLM also calculates the sum of the probabilities of all words not included in the short-list by assigning a neuron with the help of BNLM. The probabilities of other words not in the short-list are obtained from an BNLM (Schwenk, 2007; Schwenk, 2010; Wang et al., 2013a) .", "cite_spans": [ { "start": 246, "end": 261, "text": "(Schwenk, 2007)", "ref_id": "BIBREF25" }, { "start": 580, "end": 595, "text": "(Schwenk, 2007;", "ref_id": "BIBREF25" }, { "start": 596, "end": 610, "text": "Schwenk, 2010;", "ref_id": "BIBREF26" }, { "start": 611, "end": 630, "text": "Wang et al., 2013a)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "Let w i and h i be the current word and history, respectively. 
CSLM with a BNLM calculates the probability P (w i |h i ) of w i given h i , as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "P (wi|hi) = \uf8f1 \uf8f2 \uf8f3 Pc(w i |h i ) \u2211 w\u2208V 0 Pc(w|h i ) Ps(hi) if wi \u2208 V0 P b (wi|hi) otherwise (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "where V 0 is the short-list, P c (\u2022) is the probability calculated by CSLM,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "\u2211 w\u2208V 0 P c (w|h i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "is the summary of probabilities of the neuron for all the words in the short-list, P b (\u2022) is the probability calculated by the BNLM, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Ps(hi) = \u2211 v\u2208V 0 P b (v|hi).", "eq_num": "(2)" } ], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "We may regard that CSLM redistributes the probability mass of all words in the short-list, which is calculated by using the n-gram LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "2.1" }, { "text": "As baseline systems, our approach proposed in (Wang et al., 2013a) only re-writes the probabilities from CSLM into the BNLM, so it can only conduct a convert LM with the same size as the original one. The main difference between our proposed method in this paper and our previous approach is that n-grams outside the corpus are generated firstly and the probabilities using CSLM are calculated by using the same method as our previous approach. That is, the proposed new method is the same as our previous one when no grown n-grams are generated.", "cite_spans": [ { "start": 46, "end": 66, "text": "(Wang et al., 2013a)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Converting Methods", "sec_num": "2.2" }, { "text": "The method developed by Arsoy and colleagues (Arsoy et al., 2013; Arsoy et al., 2014) adds all the words in the short-list after the tail word of the i-grams to construct the (i+1)-grams. For example, if the i-gram is \"I want\", then the (i+1)grams will be \"I want *\", where \"*\" stands for any word in the short list. Then the probabilities of the (i+1)-grams are calculated using (i+1)-CSLM. So a very large intermediate (i+1)-grams will have to be grown 1 , and then be pruned into smaller suitable size using an entropy-based LM pruning method modified from (Stolcke, 1998) . The (i+2)grams are grown using (i+1)-grams, recursively.", "cite_spans": [ { "start": 45, "end": 65, "text": "(Arsoy et al., 2013;", "ref_id": "BIBREF0" }, { "start": 66, "end": 85, "text": "Arsoy et al., 2014)", "ref_id": "BIBREF1" }, { "start": 560, "end": 575, "text": "(Stolcke, 1998)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Converting Methods", "sec_num": "2.2" }, { "text": "The translation output of a phrase-based SMT system can be regarded as a concatenation of phrases in the phrase table (except unknown words). 
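For example (a toy illustration with assumed phrases, not data from the paper), if the decoder concatenates the phrase-table phrases "i want" and "to go home", the LM must also score n-grams such as "want to" and "i want to" that straddle the phrase boundary, even when they never occur in the training corpus; the sketch below enumerates such boundary-crossing n-grams:

```python
# Toy illustration (assumed phrases, not the paper's data): when the decoder
# concatenates two phrase-table phrases, the LM must score n-grams that
# straddle the phrase boundary, even if they never occur in the training corpus.
def boundary_ngrams(left_phrase, right_phrase, max_order=5):
    left, right = left_phrase.split(), right_phrase.split()
    ngrams = set()
    for n in range(2, max_order + 1):
        for k in range(1, n):                    # k words taken from the left phrase
            if k <= len(left) and n - k <= len(right):
                ngrams.add(" ".join(left[-k:] + right[:n - k]))
    return sorted(ngrams)

print(boundary_ngrams("i want", "to go home"))
# ['i want to', 'i want to go', 'i want to go home',
#  'want to', 'want to go', 'want to go home']
```

These boundary-crossing n-grams are what the proposed method grows and scores.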
This leads to the following procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "Step 1. All the n-grams included in the phrase table should be maintained at first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "Step 2. The connecting phrases are defined in the following way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "The w b a is a target language phrase starting from the a-th word ending with the b-th word, and \u03b2w b a \u03b3 is a phrase including w b a as a part of it, where \u03b2 and \u03b3 represent any word sequence or none.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "An i-gram phrase w k 1 w i k+1 (1 \u2264 k \u2264 i \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "is a connecting phrase 2 , if :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "(1) w k 1 is the right (rear) part of one phrase \u03b2w k 1 in the phrase table, or (2) w i k+1 is the left (front) part of one phrase w i k+1 \u03b3 in the phrase table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "After the probabilities are calculated using C-SLM (Eqs.1 and 2), we combine the n-grams in the phrase table from Step 1 and the connecting phrases from Step 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual LM Growing", "sec_num": "3" }, { "text": "Since the size of connecting phrases is too huge (usually more than one Terabyte), it is necessary to decide the usefulness of connecting phrases for SMT. The more useful connecting phrases can be selected, by ranking the appearing probabilities of the connecting phrases in SMT decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "Each line of a phrase table can be simplified (without considering other unrelated scores in the phrase table) as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f ||| e ||| P (e|f ),", "eq_num": "(3)" } ], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "where the P (e|f ) means the translation probability from f (source phrase) to e(target phrase), which can be calculated using bilingual parallel training data. In decoding, the probability of a target phrase e appearing in SMT should be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pt(e) = \u2211 f Ps(f ) \u00d7 P (e|f ),", "eq_num": "(4)" } ], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "where the P s (f ) means the appearing probability of a source phrase, which can be calculated using source language part in the bilingual training data. 
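As a small worked sketch of Eq. (4), with assumed, illustrative phrase-table entries and source-phrase probabilities rather than values from the paper:

```python
# Toy sketch of Eq. (4): the probability that a target phrase e appears in
# decoding, marginalized over the source phrases that can produce it.
from collections import defaultdict

# P(e|f): phrase-table translation probabilities (source -> {target: prob})
p_e_given_f = {
    "wo xiang": {"i want": 0.7, "i would like": 0.3},
    "ta xiang": {"he wants": 0.6, "i want": 0.4},
}
# Ps(f): appearance probabilities of source phrases, estimated on the
# source side of the bilingual training data.
p_s = {"wo xiang": 0.8, "ta xiang": 0.2}

p_t = defaultdict(float)
for f, targets in p_e_given_f.items():
    for e, p in targets.items():
        p_t[e] += p_s[f] * p  # Eq. (4)

print(dict(p_t))
# -> approximately {'i want': 0.64, 'i would like': 0.24, 'he wants': 0.12}
```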
Using P t (e) 3 , we can select the connecting phrases e with high appearing probabilities as the n-grams to be added to the original ngrams. These n-grams are called 'grown ngrams'. Namely, we build all the connecting phrases at first, and then we use the appearing probabilities of the connecting phrases to decide which connecting phrases should be selected. For an i-gram connecting phrase w k 1 w i k+1 , where w k 1 is part of \u03b2w k 1 and w i k+1 is part of w i k+1 \u03b3 (the \u03b2w k 1 and w i k+1 \u03b3 are from the phrase table), the probability of the connecting phrases can be roughly estimated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pcon(w k 1 w i k+1 ) = i\u22121 \u2211 k=1 ( \u2211 \u03b2 Pt(\u03b2w k 1 )\u00d7 \u2211 \u03b3 Pt(w i k+1 \u03b3)).", "eq_num": "(5)" } ], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "A threshold for P con (w k 1 w i k+1 ) is set, and only the connecting phrases whose appearing probabilities are higher than the threshold will be selected as the grown n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking the Connecting Phrases", "sec_num": "3.1" }, { "text": "To our bilingual LM growing method, a 5-gram LM and n-gram (n=2,3,4,5) CSLMs are built by using the target language of the parallel corpus, and the phrase table is learned from the parallel corpus. The probabilities of unigram in the original ngram LM will be maintained as they are. The n-grams from the bilingual phrase table will be grown by using the 'connecting phrases' method. As the whole connecting phrases are too huge, we use the ranking method to select the more useful connecting phrases. The distribution of different n-grams (n=2,3,4,5) of the grown LMs are set as the same as the original LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating the Probabilities of Grown N -grams Using CSLM", "sec_num": "3.2" }, { "text": "The probabilities of the grown n-grams (n=2,3,4,5) are calculated using the 2,3,4,5-CSLM, respectively. If the tail (target) words of the grown n-grams are not in the short-list of C-SLM, the P b (\u2022) in Eq. 1 will be applied to calculate their probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating the Probabilities of Grown N -grams Using CSLM", "sec_num": "3.2" }, { "text": "We combine the n-grams (n=1,2,3,4,5) together and re-normalize the probabilities and backoff weights of the grown LM. Finally the original BNLM and the grown LM are interpolated. The entire process is illustrated in Figure 1 . ", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 224, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Calculating the Probabilities of Grown N -grams Using CSLM", "sec_num": "3.2" }, { "text": "The same setting up of the NTCIR-9 Chinese to English translation baseline system (Goto et al., 2011) was followed, only with various LMs to compare them. The Moses phrase-based SMT system was applied , together with GIZA++ (Och and Ney, 2003) for alignment and MERT (Och, 2003) for tuning on the development data. Fourteen standard SMT features were used: five translation model scores, one word penalty score, seven distortion scores, and one LM score. 
The translation performance was measured by the case-insensitive BLEU on the tokenized test data.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Goto et al., 2011)", "ref_id": "BIBREF8" }, { "start": 224, "end": 243, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF20" }, { "start": 267, "end": 278, "text": "(Och, 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting up", "sec_num": "4.1" }, { "text": "We used the patent data for the Chinese to English patent translation subtask from the NTCIR-9 patent translation task (Goto et al., 2011) . The parallel training, development, and test data sets consist of 1 million (M), 2,000, and 2,000 sentences, respectively.", "cite_spans": [ { "start": 119, "end": 138, "text": "(Goto et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting up", "sec_num": "4.1" }, { "text": "Using SRILM (Stolcke, 2002; Stolcke et al., 2011) , we trained a 5-gram LM with the interpolated Kneser-Ney smoothing method using the 1M English training sentences containing 42M words without cutoff. The 2,3,4,5-CSLMs were trained on the same 1M training sentences using CSLM toolkit (Schwenk, 2007; Schwenk, 2010) . The settings for CSLMs were: input layer of the same dimension as vocabulary size (456K), projection layer of dimension 256 for each word, hidden layer of dimension 384 and output layer (short-list) of dimension 8192, which were recommended in the CSLM toolkit and (Wang et al., 2013a) 4 . 4 Arsoy used around 55 M words as the corpus, including", "cite_spans": [ { "start": 12, "end": 27, "text": "(Stolcke, 2002;", "ref_id": "BIBREF32" }, { "start": 28, "end": 49, "text": "Stolcke et al., 2011)", "ref_id": "BIBREF31" }, { "start": 286, "end": 301, "text": "(Schwenk, 2007;", "ref_id": "BIBREF25" }, { "start": 302, "end": 316, "text": "Schwenk, 2010)", "ref_id": "BIBREF26" }, { "start": 609, "end": 610, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting up", "sec_num": "4.1" }, { "text": "The experiment results were divided into four groups: the original BNLMs (BN), the CSLM Re-ranking (RE), our previous converting (WA), the Arsoy's growing, and our growing methods. For our bilingual LM growing method, 5 bilingual grown LMs (BI-1 to 5) were conducted in increasing sizes. For the method of Arsoy, 5 grown LMs (AR-1 to 5) with similar size of BI-1 to 5 were also conducted, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "For the CSLM re-ranking, we used CSLM to re-rank the 100-best lists of SMT. Our previous converted LM, Arsoy's grown LMs and bilingual grown LMs were interpolated with the original BNLMs, using default setting of SRILM 5 . To reduce the randomness of MERT, we used two methods for tuning the weights of different SMT features, and two BLEU scores are corresponding to these two methods. The BLEU-s indicated that the same weights of the BNLM (BN) features were used for all the SMT systems. The BLEU-i indicated that the MERT was run independently by three times and the average BLEU scores were taken.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We also performed the paired bootstrap resampling test (Koehn, 2004) 6 . Two thousands samples were sampled for each significance test. 
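A simplified sketch of this test is given below; for brevity it compares per-sentence scores of two systems, whereas the actual test recomputes corpus-level BLEU on each resampled test set, and the function and variable names are illustrative:

```python
# Simplified paired bootstrap resampling (after Koehn, 2004): resample the
# test set with replacement 2,000 times and count how often system A beats
# system B.  Per-sentence scores stand in for recomputed corpus BLEU here.
import random

def paired_bootstrap(scores_a, scores_b, n_samples=2000, seed=0):
    rng = random.Random(seed)
    n = len(scores_a)
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]   # resampled sentence ids
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins_a += 1
    return wins_a / n_samples

# System A is significantly better at alpha = 0.05 if the returned
# fraction exceeds 0.95 (0.99 for alpha = 0.01).
```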
The marks at the right of the BLEU score indicated whether the LMs were significantly better/worse than the Arsoy's grown LMs with the same IDs for SMT (\"++/\u2212\u2212\": significantly better/worse at \u03b1 = 0.01, \"+/\u2212\": \u03b1 = 0.05, no mark: not significantly better/worse at \u03b1 = 0.05).", "cite_spans": [ { "start": 55, "end": 70, "text": "(Koehn, 2004) 6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "From the results shown in Table 1 , we can get the following observations:", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "(1) Nearly all the bilingual grown LMs outperformed both BNLM and our previous converted LM on PPL and BLEU. As the size of grown LMs is increased, the PPL always decreased and the BLEU scores trended to increase. These indicated that our proposed method can give better probability estimation for LM and better performance for SMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "(2) In comparison with the grown LMs in Ar-84K words as vocabulary, and 20K words as short-list. In this paper, we used the same setting as our previous work, which covers 92.89% of the frequency of words in the training corpus, for all the baselines and our method for fair comparison. soy's method, our grown LMs obtained better P-PL and significantly better BLEU with the similar size. Furthermore, the improvement of PPL and BLEU of the existing methods became saturated much more quickly than ours did, as the LMs grew.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "(3) The last column was the Average Length of the n-grams Hit (ALH) in SMT decoding for different LMs using the following function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ALH = 5 \u2211 i=1 Pi\u2212gram \u00d7 i,", "eq_num": "(6)" } ], "section": "Results", "sec_num": "4.2" }, { "text": "where the P i\u2212gram means the ratio of the i-grams hit in SMT decoding. There were also positive correlations between ALH, PPL and BLEUs. The ALH of bilingual grown LM was longer than that of the Arsoy's grown LM of the similar size. In another word, less back-off was used for our proposed grown LMs in SMT decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "The TED corpus is in special domain as discussed in the introduction, where large extra monolingual corpora are hard to find. In this subsection, we conducted the SMT experiments on TED corpora using our proposed LM growing method, to evaluate whether our method was adaptable to some special domains. We mainly followed the baselines of the IWSLT 2014 evaluation campaign 7 , only with a few modifications such as the LM toolkits and n-gram order for constructing LMs. The Chinese (CN) to English (EN) language pair was chosen, using de-v2010 as development data and test2010 as evaluation data. The same LM growing method was ap-7 https://wit3.fbk.eu/ plied on TED corpora as on NTCIR corpora. The results were shown in Table 2 . 
Table 2 indicated that our proposed LM growing method improved both PPL and BLEU in comparison with both BNLM and our previous CSLM converting method, so it was suitable for domain adaptation, which is one of focuses of the current SMT research.", "cite_spans": [], "ref_spans": [ { "start": 722, "end": 729, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 732, "end": 739, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments on TED Corpus", "sec_num": "4.3" }, { "text": "In this paper, we have proposed a neural network based bilingual LM growing method by using the bilingual parallel corpus only for SMT. The results show that our proposed method can improve both LM and SMT performance, and outperforms the existing LM growing methods significantly without extra corpus. The connecting phrase-based method can also be applied to LM adaptation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In practice, the probabilities of all the target/tail words in the short list for the history i-grams can be calculated by the neurons in the output layer at the same time, which will save some time. According to our experiments, the time cost for Arsoy's growing method is around 4 times more than our proposed method, if the LMs which are 10 times larger than the original one are grown with other settings all the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are aware that connecting phrases can be applied to not only two phrases, but also three or more. However the appearing probabilities (which will be discussed in Eq. 5 of next subsection) of connecting phrases are approximately estimated. To estimate and compare probabilities of longer phrases in different lengths will lead to serious bias, and the experiments also showed using more than two connecting phrases did not perform well (not shown for limited space), so only two connecting phrases are applied in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This Pt(e) hence provides more bilingual information, in comparison with using monolingual target LMs only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our previous work, we used the development data to tune the weights of interpolation. In this paper, we used the default 0.5 as the interpolation weights for fair comparison.6 We used the code available at http://www.ark.cs. cmu.edu/MT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We appreciate the helpful discussion with Dr. Isao Goto and Zhongye Jia, and three anonymous reviewers for valuable comments and suggestions on our paper. Rui Wang, Hai Zhao and Bao-Liang Lu were partially supported by the National Natural Science Foundation of China (No. 60903119, No. 61170114, and No. 61272248), the National Basic Research Program of China (No. 2013CB329401), the Science and Technology Commission of Shanghai Municipality (No. 13511500200), the European Union Seventh Framework Program (No. 247619), the Cai Yuanpei Program (CSC fund 201304490199 and 201304490171), and the art and science interdiscipline funds of Shanghai Jiao Tong University (A study on mobilization mechanism and alerting threshold setting for online community, and media image and psychology evaluation: a computational intelligence approach). 
The corresponding author of this paper, according to the meaning given to this role by Shanghai Jiao Tong University, is Hai Zhao.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition", "authors": [ { "first": "Ebru", "middle": [], "last": "Arsoy", "suffix": "" }, { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Sethy", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICASSP-2013", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ebru Arsoy, Stanley F. Chen, Bhuvana Ramabhadran, and Abhinav Sethy. 2013. Converting neural net- work language models into back-off language mod- els for efficient decoding in automatic speech recog- nition. In Proceedings of ICASSP-2013, Vancouver, Canada, May. IEEE.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition", "authors": [ { "first": "Ebru", "middle": [], "last": "Arsoy", "suffix": "" }, { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Sethy", "suffix": "" } ], "year": 2014, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language", "volume": "22", "issue": "1", "pages": "184--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ebru Arsoy, Stanley F. Chen, Bhuvana Ramabhadran, and Abhinav Sethy. 2014. Converting neural net- work language models into back-off language mod- els for efficient decoding in automatic speech recog- nition. IEEE/ACM Transactions on Audio, Speech, and Language, 22(1):184-192.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint language and translation modeling with recurrent neural networks", "authors": [ { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Processings of EMNLP-2013", "volume": "", "issue": "", "pages": "1044--1054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Auli, Michel Galley, Chris Quirk, and Geof- frey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Pro- cessings of EMNLP-2013, pages 1044-1054, Seat- tle, Washington, USA, October. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical language model adaptation: review and perspectives", "authors": [ { "first": "R", "middle": [], "last": "Jerome", "suffix": "" }, { "first": "", "middle": [], "last": "Bellegarda", "suffix": "" } ], "year": 2004, "venue": "Adaptation Methods for Speech Recognition", "volume": "42", "issue": "", "pages": "93--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome R Bellegarda. 2004. Statistical language mod- el adaptation: review and perspectives. Speech Communication, 42(1):93-108. 
Adaptation Meth- ods for Speech Recognition.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search (JMLR), 3:1137-1155, March.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wit 3 : Web inventory of transcribed and translated talks", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Girardi", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2012, "venue": "Proceedings of EAMT-2012", "volume": "", "issue": "", "pages": "261--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In Proceedings of EAMT-2012, pages 261-268, Trento, Italy, May.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Language model adaptation using mixtures and an exponentially decaying cache", "authors": [ { "first": "Philip", "middle": [], "last": "Clarkson", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Robinson", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ICASSP-1997", "volume": "2", "issue": "", "pages": "799--802", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Clarkson and A.J. Robinson. 1997. Lan- guage model adaptation using mixtures and an ex- ponentially decaying cache. In Proceedings of ICASSP-1997, volume 2, pages 799-802 vol.2, Mu- nich,Germany.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fast and robust neural network joint models for statistical machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Rabih", "middle": [], "last": "Zbib", "suffix": "" }, { "first": "Zhongqiang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lamar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL-2014", "volume": "", "issue": "", "pages": "1370--1380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for sta- tistical machine translation. In Proceedings of ACL- 2014, pages 1370-1380, Baltimore, Maryland, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Overview of the patent machine translation task at the NTCIR-9 workshop", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Ka", "middle": [ "Po" ], "last": "Chow", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "" } ], "year": 2011, "venue": "Proceedings of NTCIR-9 Workshop Meeting", "volume": "", "issue": "", "pages": "559--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K. Tsou. 2011. Overview of the paten- t machine translation task at the NTCIR-9 work- shop. In Proceedings of NTCIR-9 Workshop Meet- ing, pages 559-578, Tokyo, Japan, December.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using out-of-domain data to improve indomain language models", "authors": [ { "first": "Rukmini", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Herbert", "middle": [], "last": "Gish", "suffix": "" } ], "year": 1997, "venue": "Signal Processing Letters", "volume": "4", "issue": "8", "pages": "221--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rukmini Iyer, Mari Ostendorf, and Herbert Gish. 1997. Using out-of-domain data to improve in- domain language models. Signal Processing Letter- s, IEEE, 4(8):221-223.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Kyss 1.0: a framework for automatic evaluation of chinese input method engines", "authors": [ { "first": "Zhongye", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2013, "venue": "Asian Federation of Natural Language Processing", "volume": "", "issue": "", "pages": "1195--1201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongye Jia and Hai Zhao. 2013. Kyss 1.0: a framework for automatic evaluation of chinese input method engines. In Proceedings of IJCNLP-2013, pages 1195-1201, Nagoya, Japan, October. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A joint graph model for pinyin-to-chinese conversion with typo correction", "authors": [ { "first": "Zhongye", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL-2014", "volume": "", "issue": "", "pages": "1512--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongye Jia and Hai Zhao. 2014. A joint graph mod- el for pinyin-to-chinese conversion with typo cor- rection. In Proceedings of ACL-2014, pages 1512- 1523, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Experiments in domain adaptation for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL-2007 Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "224--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Josh Schroeder. 2007. 
Experi- ments in domain adaptation for statistical machine translation. In Proceedings of ACL-2007 Workshop on Statistical Machine Translation, pages 224-227, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL-2007", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertol- di, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. In Proceedings of ACL-2007, pages 177-180, Prague, Czech Republic, June. Association for Com- putational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP-2004", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP-2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Structured output layer neural network language model", "authors": [ { "first": "Hai-Son", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Oparin", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ICASSP-2011", "volume": "", "issue": "", "pages": "5524--5527", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai-Son Le, Ilya Oparin, Alexandre Allauzen, J Gau- vain, and Fran\u00e7ois Yvon. 2011. Structured output layer neural network language model. In Proceed- ings of ICASSP-2011, pages 5524-5527, Prague, Czech Republic, May. 
IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A recursive recurrent neural network for statistical machine translation", "authors": [ { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL-2014", "volume": "", "issue": "", "pages": "1491--1500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. 2014. A recursive recurrent neural network for statistical machine translation. In Proceedings of ACL-2014, pages 1491-1500, Baltimore, Maryland, June. As- sociation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" } ], "year": 2010, "venue": "Proceedings of INTERSPEECH-2010", "volume": "", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Re- current neural network based language model. In Proceedings of INTERSPEECH-2010, pages 1045- 1048.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Continuous space language models using restricted boltzmann machines", "authors": [ { "first": "Jan", "middle": [], "last": "Niehues", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2012, "venue": "Proceedings of IWSLT-2012", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Niehues and Alex Waibel. 2012. Continuous space language models using restricted boltzman- n machines. In Proceedings of IWSLT-2012, pages 311-318, Hong Kong.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A variablelength category-based n-gram language model", "authors": [ { "first": "Thomas", "middle": [], "last": "Niesler", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Woodland", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICASSP-1996", "volume": "1", "issue": "", "pages": "164--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Niesler and Phil Woodland. 1996. A variable- length category-based n-gram language model. In Proceedings of ICASSP-1996, volume 1, pages 164- 167 vol. 1.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignmen- t models. 
Computational Linguistics, 29(1):19-51, March.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL-2003", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL-2003, pages 160-167, Sapporo, Japan, July. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "New techniques for context modeling", "authors": [ { "first": "Eric", "middle": [ "Sven" ], "last": "Ristad", "suffix": "" }, { "first": "Robert", "middle": [ "G" ], "last": "Thomas", "suffix": "" } ], "year": 1995, "venue": "Massachusetts. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "220--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Sven Ristad and Robert G. Thomas. 1995. New techniques for context modeling. In Proceedings of ACL-1995, pages 220-227, Cambridge, Mas- sachusetts. Association for Computational Linguis- tics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Continuous space language models for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dchelotte", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Gau", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING ACL-2006", "volume": "", "issue": "", "pages": "723--730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Daniel Dchelotte, and Jean-Luc Gau- vain. 2006. Continuous space language models for statistical machine translation. In Proceedings of COLING ACL-2006, pages 723-730, Sydney, Aus- tralia, July. Association for Computational Linguis- tics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Large, pruned or continuous space language models on a gpu for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Rousseau", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Attik", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM '12", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Anthony Rousseau, and Mohammed Attik. 2012. Large, pruned or continuous space language models on a gpu for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM '12, pages 11-19, Montreal, Canada, June. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Continuous space language models", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2007, "venue": "Computer Speech and Language", "volume": "21", "issue": "3", "pages": "492--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2007. Continuous space lan- guage models. 
Computer Speech and Language, 21(3):492-518.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Continuous-space language models for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2010, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "", "issue": "", "pages": "137--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2010. Continuous-space language models for statistical machine translation. The Prague Bulletin of Mathematical Linguistics, pages 137-146.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "On growing and pruning kneser-ney smoothed n-gram models", "authors": [ { "first": "Vesa", "middle": [], "last": "Siivola", "suffix": "" }, { "first": "Teemu", "middle": [], "last": "Hirsimki", "suffix": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Audio, Speech, and Language", "volume": "15", "issue": "5", "pages": "1617--1624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vesa Siivola, Teemu Hirsimki, and Sami Virpioja. 2007. On growing and pruning kneser-ney s- moothed n-gram models. IEEE Transactions on Au- dio, Speech, and Language, 15(5):1617-1624.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Variable ngrams and extensions for conversational speech language modeling", "authors": [ { "first": "Manhung", "middle": [], "last": "Siu", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions on Speech and Audio", "volume": "8", "issue": "1", "pages": "63--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manhung Siu and Mari Ostendorf. 2000. Variable n- grams and extensions for conversational speech lan- guage modeling. IEEE Transactions on Speech and Audio, 8(1):63-75.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Training continuous space language models: some practical issues", "authors": [ { "first": "Le", "middle": [], "last": "Hai Son", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "October. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "778--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Hai Son, Alexandre Allauzen, Guillaume Wis- niewski, and Fran\u00e7ois Yvon. 2010. Training con- tinuous space language models: some practical is- sues. In Proceedings of EMNLP-2010, pages 778- 788, Cambridge, Massachusetts, October. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Continuous space translation models with neural networks", "authors": [ { "first": "Le", "middle": [], "last": "Hai Son", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NAACL HLT-2012", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Hai Son, Alexandre Allauzen, and Fran\u00e7ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of NAACL HLT- 2012, pages 39-48, Montreal, Canada, June. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SRILM at sixteen: Update and outlook", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Abrash", "suffix": "" } ], "year": 2011, "venue": "Proceedings of INTERSPEECH 2011", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke, Jing Zheng, Wen Wang, and Vic- tor Abrash. 2011. SRILM at sixteen: Update and outlook. In Proceedings of INTERSPEECH 2011, Waikoloa, HI, USA, December.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Entropy-based pruning of backoff language models", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 1998, "venue": "Proceedings of DARPA Broadcast News Transcription and Understanding Workshop", "volume": "", "issue": "", "pages": "257--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proceedings of DARPA Broadcast News Transcription and Understanding Workshop, pages 270-274, Lansdowne, VA, USA. Andreas Stolcke. 2002. Srilm-an extensible language modeling toolkit. In Proceedings of INTERSPEECH-2002, pages 257-286, Seattle, US- A, November.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Decoding with largescale neural language models improves translation", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Yinggong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Fossum", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP-2013", "volume": "", "issue": "", "pages": "1387--1392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large- scale neural language models improves translation. In Proceedings of EMNLP-2013, pages 1387-1392, Seattle, Washington, USA, October. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Converting continuous-space language models into n-gram language models for statistical machine translation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Eiichro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP-2013", "volume": "", "issue": "", "pages": "845--850", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Wang, Masao Utiyama, Isao Goto, Eiichro Sumi- ta, Hai Zhao, and Bao-Liang Lu. 2013a. Convert- ing continuous-space language models into n-gram language models for statistical machine translation. In Proceedings of EMNLP-2013, pages 845-850, Seattle, Washington, USA, October. 
Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Labeled alignment for recognizing textual entailment", "authors": [ { "first": "Xiaolin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of IJCNLP-2013", "volume": "", "issue": "", "pages": "605--613", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaolin Wang, Hai Zhao, and Bao-Liang Lu. 2013b. Labeled alignment for recognizing textual entailment. In Proceedings of IJCNLP-2013, pages 605-613, Nagoya, Japan, October. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Parallelized extreme learning machine ensemble based on min-max modular network", "authors": [ { "first": "Xiao-Lin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang-Yang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2014, "venue": "Neurocomputing", "volume": "128", "issue": "0", "pages": "31--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao-Lin Wang, Yang-Yang Chen, Hai Zhao, and Bao-Liang Lu. 2014. Parallelized extreme learning machine ensemble based on min-max modular network. Neurocomputing, 128(0):31-41.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using deep linguistic features for finding deceptive opinion spam", "authors": [ { "first": "Qiongkai", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING-2012", "volume": "", "issue": "", "pages": "1341--1350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiongkai Xu and Hai Zhao. 2012. Using deep linguistic features for finding deceptive opinion spam. In Proceedings of COLING-2012, pages 1341-1350, Mumbai, India, December. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Improving function word alignment with frequency and syntactic information", "authors": [ { "first": "Jingyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2013, "venue": "Proceedings of IJCAI-2013", "volume": "", "issue": "", "pages": "2211--2217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingyi Zhang and Hai Zhao. 2013. Improving function word alignment with frequency and syntactic information. In Proceedings of IJCAI-2013, pages 2211-2217. AAAI Press.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A machine learning approach to convert CCGbank to Penn treebank", "authors": [ { "first": "Xiaotian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Hui", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING-2012", "volume": "", "issue": "", "pages": "535--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaotian Zhang, Hai Zhao, and Cong Hui. 2012. A machine learning approach to convert CCGbank to Penn treebank. In Proceedings of COLING-2012, pages 535-542, Mumbai, India, December.
The COLING 2012 Organizing Committee.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "An empirical study on word segmentation for Chinese machine translation", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "7817", "issue": "", "pages": "248--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Masao Utiyama, Eiichiro Sumita, and Bao-Liang Lu. 2013. An empirical study on word segmentation for Chinese machine translation. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 7817 of Lecture Notes in Computer Science, pages 248-263. Springer Berlin Heidelberg.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "NN based bilingual LM growing.", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "text": "Performance of the Grown LMs", "html": null, "content": "
LMs n-grams PPL BLEU-s BLEU-i ALH
BN 73.9M 108.8 32.19 32.19 3.03
RE N/A 97.5 32.34 32.42 N/A
WA 73.9M 104.4 32.60 32.62 3.03
AR-1 217.6M 103.3 32.55 32.75 3.14
AR-2 323.8M 103.1 32.61 32.64 3.18
AR-3 458.5M 103.0 32.39 32.71 3.20
AR-4 565.6M 102.8 32.67 32.51 3.21
AR-5 712.2M 102.5 32.49 32.60 3.22
BI-1 223.5M 101.9 32.81+ 33.02+ 3.20
BI-2 343.6M 101.0 32.92+ 33.11++ 3.24
BI-3 464.5M 100.6 33.08++ 33.25++ 3.26
BI-4 571.0M 100.3 33.15++ 33.12++ 3.28
BI-5 705.5M 100.1 33.11++ 33.24++ 3.31
", "num": null }, "TABREF1": { "type_str": "table", "text": "CN-EN TED Experiments", "html": null, "content": "
LMs n-grams PPL BLEU-s
BN 7.8M 87.1 12.41
WA 7.8M 85.3 12.73
BI-1 23.1M 79.2 12.92
BI-2 49.7M 78.3 13.16
BI-3 73.4M 77.6 13.24
", "num": null } } } }