{ "paper_id": "P11-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:47:24.365523Z" }, "title": "A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation", "authors": [ { "first": "Ming", "middle": [], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wright State University", "location": { "postCode": "45435", "settlement": "Dayton", "region": "OH", "country": "USA" } }, "email": "" }, { "first": "Wenli", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wright State University", "location": { "postCode": "45435", "settlement": "Dayton", "region": "OH", "country": "USA" } }, "email": "" }, { "first": "Lei", "middle": [], "last": "Zheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wright State University", "location": { "postCode": "45435", "settlement": "Dayton", "region": "OH", "country": "USA" } }, "email": "lei.zheng@wright.edu" }, { "first": "Shaojun", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wright State University", "location": { "postCode": "45435", "settlement": "Dayton", "region": "OH", "country": "USA" } }, "email": "shaojun.wang@wright.edu" }, { "first": "Kno", "middle": [ "E" ], "last": "Sis Center", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wright State University", "location": { "postCode": "45435", "settlement": "Dayton", "region": "OH", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an attempt at building a large scale distributed composite language model that simultaneously accounts for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content under a directed Markov random field paradigm. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm that has linear time complexity and a followup EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over ngrams and achieves significantly better translation quality measured by the BLEU score and \"readability\" when applied to the task of re-ranking the N-best list from a state-of-theart parsing-based machine translation system.", "pdf_parse": { "paper_id": "P11-1021", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an attempt at building a large scale distributed composite language model that simultaneously accounts for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content under a directed Markov random field paradigm. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm that has linear time complexity and a followup EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. 
The large scale distributed composite language model gives drastic perplexity reduction over ngrams and achieves significantly better translation quality measured by the BLEU score and \"readability\" when applied to the task of re-ranking the N-best list from a state-of-theart parsing-based machine translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Markov chain (n-gram) source models, which predict each word on the basis of previous n-1 words, have been the workhorses of state-of-the-art speech recognizers and machine translators that help to resolve acoustic or foreign language ambiguities by placing higher probability on more likely original underlying word strings. Research groups (Brants et al., 2007; Zhang, 2008) have shown that using an immense distributed computing paradigm, up to 6grams can be trained on up to billions and trillions of words, yielding consistent system improvements, but Zhang (2008) did not observe much improvement beyond 6-grams. Although the Markov chains are efficient at encoding local word interactions, the n-gram model clearly ignores the rich syntactic and semantic structures that constrain natural languages. As the machine translation (MT) working groups stated on page 3 of their final report (Lavie et al., 2006) , \"These approaches have resulted in small improvements in MT quality, but have not fundamentally solved the problem. There is a dire need for developing novel approaches to language modeling.\" Wang et al. (2006) integrated n-gram, structured language model (SLM) (Chelba and Jelinek, 2000) and probabilistic latent semantic analysis (PLSA) (Hofmann, 2001) under the directed MRF framework (Wang et al., 2005) and studied the stochastic properties for the composite language model. They derived a generalized inside-outside algorithm to train the composite language model from a general EM (Dempster et al., 1977) by following Jelinek's ingenious definition of the inside and outside probabilities for SLM (Jelinek, 2004) with 6th order of sentence length time complexity. Unfortunately, there are no experimental results reported.", "cite_spans": [ { "start": 346, "end": 367, "text": "(Brants et al., 2007;", "ref_id": "BIBREF1" }, { "start": 368, "end": 380, "text": "Zhang, 2008)", "ref_id": "BIBREF27" }, { "start": 561, "end": 573, "text": "Zhang (2008)", "ref_id": "BIBREF27" }, { "start": 897, "end": 917, "text": "(Lavie et al., 2006)", "ref_id": "BIBREF19" }, { "start": 1112, "end": 1130, "text": "Wang et al. (2006)", "ref_id": "BIBREF23" }, { "start": 1182, "end": 1208, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" }, { "start": 1259, "end": 1274, "text": "(Hofmann, 2001)", "ref_id": "BIBREF11" }, { "start": 1308, "end": 1327, "text": "(Wang et al., 2005)", "ref_id": "BIBREF22" }, { "start": 1508, "end": 1531, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF9" }, { "start": 1624, "end": 1639, "text": "(Jelinek, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we study the same composite language model. Instead of using the 6th order generalized inside-outside algorithm proposed in (Wang et al., 2006) , we train this composite model by a convergent N-best list approximate EM algorithm that has linear time complexity and a follow-up EM algorithm to improve word prediction power. 
We conduct comprehensive experiments on corpora with 44 million tokens, 230 million tokens, and 1.3 billion tokens and compare perplexity results with n-grams (n=3,4,5 respectively) on these three corpora, we obtain drastic perplexity reductions. Finally, we ap-ply our language models to the task of re-ranking the N-best list from Hiero (Chiang, 2005; Chiang, 2007) , a state-of-the-art parsing-based MT system, we achieve significantly better translation quality measured by the BLEU score and \"readability\".", "cite_spans": [ { "start": 139, "end": 158, "text": "(Wang et al., 2006)", "ref_id": "BIBREF23" }, { "start": 678, "end": 692, "text": "(Chiang, 2005;", "ref_id": "BIBREF6" }, { "start": 693, "end": 706, "text": "Chiang, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The n-gram language model is essentially a word predictor that given its entire document history it predicts next word w k+1 based on the last n-1 words with probability p(w k+1 |w k k\u2212n+2 ) where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "w k k\u2212n+2 = w k\u2212n+2 , \u2022 \u2022 \u2022 , w k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "The SLM (Chelba and Jelinek, 1998; Chelba and Jelinek, 2000) uses syntactic information beyond the regular n-gram models to capture sentence level long range dependencies. The SLM is based on statistical parsing techniques that allow syntactic analysis of sentences; it assigns a probability p(W, T ) to every sentence W and every possible binary parse T . The terminals of T are the words of W with POS tags, and the nodes of T are annotated with phrase headwords and non-terminal labels. Let W be a sentence of length n words to which we have prepended the sentence beginning marker and appended the sentence end marker so that w 0 = and w n+1 =. Let W k = w 0 , \u2022 \u2022 \u2022 , w k be the word k-prefix of the sentence -the words from the beginning of the sentence up to the current position k and W k T k the word-parse k-prefix. A word-parse k-prefix has a set of exposed heads h \u2212m , \u2022 \u2022 \u2022 , h \u22121 , with each head being a pair (headword, non-terminal label), or in the case of a root-only tree (word, POS tag). 
An m-th order SLM (m-SLM) has three operators to generate a sentence: WORD-PREDICTOR predicts the next word w k+1 based on the m left-most exposed headwords", "cite_spans": [ { "start": 8, "end": 34, "text": "(Chelba and Jelinek, 1998;", "ref_id": "BIBREF4" }, { "start": 35, "end": 60, "text": "Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "h \u22121 \u2212m = h \u2212m , \u2022 \u2022 \u2022 , h \u22121 in the word-parse k-prefix with prob- ability p(w k+1 |h \u22121 \u2212m )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": ", and then passes control to the TAGGER; the TAGGER predicts the POS tag t k+1 to the next word w k+1 based on the next word w k+1 and the POS tags of the m left-most exposed headwords h \u22121 \u2212m in the word-parse k-prefix with probability p(t k+1 |w k+1 , h \u2212m .tag, \u2022 \u2022 \u2022 , h \u22121 .tag); the CONSTRUCTOR builds the partial parse T k from T k\u22121 , w k , and t k in a series of moves ending with NULL, where a parse move a is made with probability p(a|h \u22121 \u2212m ); a \u2208 A={(unary, NTlabel), (adjoinleft, NTlabel), (adjoin-right, NTlabel), null}. Once the CONSTRUCTOR hits NULL, it passes control to the WORD-PREDICTOR. See detailed description in (Chelba and Jelinek, 2000) .", "cite_spans": [ { "start": 638, "end": 664, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "A PLSA model (Hofmann, 2001 ) is a generative probabilistic model of word-document cooccurrences using the bag-of-words assumption described as follows: (i) choose a document d with probability p(d); (ii) SEMANTIZER: select a semantic class g with probability p(g|d); and (iii) WORD-PREDICTOR: pick a word w with probability p(w|g). Since only one pair of (d, w) is being observed, as a result, the joint probability model is a mixture of log-linear model with the expression", "cite_spans": [ { "start": 13, "end": 27, "text": "(Hofmann, 2001", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "p(d, w) = p(d) g p(w|g)p(g|d).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "Typically, the number of documents and vocabulary size are much larger than the size of latent semantic class variables. Thus, latent semantic class variables function as bottleneck variables to constrain word occurrences in documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "When combining n-gram, m order SLM and PLSA models together to build a composite generative language model under the directed MRF paradigm (Wang et al., 2005; Wang et al., 2006) , the TAGGER and CONSTRUCTOR in SLM and SEMANTIZER in PLSA remain unchanged; however the WORD-PREDICTORs in n-gram, m-SLM and PLSA are combined to form a stronger WORD-PREDICTOR that generates the next word, w k+1 , not only depending on the m left-most exposed headwords h \u22121 \u2212m in the word-parse k-prefix but also its n-gram history w k k\u2212n+2 and its semantic content g k+1 . The parameter for WORD-PREDICTOR in the composite n-gram/m-SLM/PLSA language model becomes p(w k+1 |w k k\u2212n+2 h \u22121 \u2212m g k+1 ). 
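To make this composite prediction concrete, the following minimal sketch (not the authors' implementation) marginalizes a word probability over a document's pruned topic mixture while conditioning on both the n-gram history and the exposed headwords; the probability table, topic posterior, and example contexts are hypothetical placeholders.
```python
# A toy composite WORD-PREDICTOR: p(w | n-gram history, exposed headwords, topic),
# marginalized over the document's topic mixture p(g|d). All tables below are
# hypothetical stand-ins, not parameters of the actual trained model.

def composite_word_prob(word, ngram_hist, headwords, topic_post, p_word):
    """p(word | ngram_hist, headwords, d) = sum_g p(g|d) * p(word | ngram_hist, headwords, g)."""
    total = 0.0
    for topic, p_g in topic_post.items():          # p(g|d), pruned to the top topics
        table = p_word.get((ngram_hist, headwords, topic), {})
        total += p_g * table.get(word, 1e-9)       # tiny floor stands in for smoothing
    return total

# Hypothetical example: trigram history (n=3), two exposed headwords (m=2), two topics.
p_word = {
    (("the", "stock"), ("market", "fell"), "finance"): {"market": 0.4, "price": 0.3},
    (("the", "stock"), ("market", "fell"), "sports"):  {"car": 0.2},
}
topic_post = {"finance": 0.8, "sports": 0.2}        # p(g|d) for this document

print(composite_word_prob("market", ("the", "stock"), ("market", "fell"), topic_post, p_word))
```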
The resulting composite language model has an even more complex dependency structure but with more expressive power than the original SLM. Figure 1 illustrates the structure of a composite n-gram/m-SLM/PLSA language model. The composite n-gram/m-SLM/PLSA language model can be formulated as a directed MRF model (Wang et al., 2006) with local normalization constraints for the parameters of each model component, WORD-PREDICTOR, TAGGER, CONSTRUCTOR, SEMANTIZER, i.e., Figure 1 : A composite n-gram/m-SLM/PLSA language model where the hidden information is the parse tree T and semantic content g. The WORD-PREDICTOR generates the next word w k+1 with probability", "cite_spans": [ { "start": 139, "end": 158, "text": "(Wang et al., 2005;", "ref_id": "BIBREF22" }, { "start": 159, "end": 177, "text": "Wang et al., 2006)", "ref_id": "BIBREF23" }, { "start": 995, "end": 1014, "text": "(Wang et al., 2006)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 822, "end": 830, "text": "Figure 1", "ref_id": null }, { "start": 1151, "end": 1159, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "w\u2208V p(w|w \u22121 \u2212n+1 h \u22121 \u2212m g) = 1, t\u2208O p(t|wh \u22121 \u2212m .tag) = 1, a\u2208A p(a|h \u22121 \u2212m ) = 1, g\u2208G p(g|d) = 1. ...... ...... ...... g w g g g ...... ...... ...... ...... d k k\u2212n+2 j+1 ...... w 1 i i ...... ...... g 1 w k w k+1 g k+1 h \u22121 h \u22122 h \u2212m j+1 w w j g j ...... k\u2212n+2 w ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "p(w k+1 |w k k\u2212n+2 h \u22121 \u2212m g k+1 ) instead of p(w k+1 |w k k\u2212n+2 ), p(w k+1 |h \u22121 \u2212m ) and p(w k+1 |g k+1 ) respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite language model", "sec_num": "2" }, { "text": "Under the composite n-gram/m-SLM/PLSA language model, the likelihood of a training corpus D, a collection of documents, can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(D, p) = Y d\u2208D Y l X G l X T l Pp(W l , T l , G l |d) !!!", "eq_num": "(1)" } ], "section": "Training algorithm", "sec_num": "3" }, { "text": "where (W l , T l , G l , d) denote the joint sequence of the lth sentence W l with its parse tree structure T l and semantic annotation string G l in document d. This sequence is produced by a unique sequence of model actions: WORD-PREDICTOR, TAGGER, CONSTRUCTOR, SEMANTIZER moves, its probability is obtained by chaining the probabilities of these moves", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "Pp(W l , T l , G l |d) = Y g\u2208G 0 @ p(g|d) #(g,W l ,G l ,d) Y h \u22121 ,\u2022\u2022\u2022 ,h \u2212m \u2208H Y w,w \u22121 ,\u2022\u2022\u2022 ,w \u2212n+1 \u2208V p(w|w \u22121 \u2212n+1 h \u22121 \u2212m g) #(w \u2212 1 \u2212n+1 wh \u22121 \u2212m g,W l ,T l ,G l ,d) Y t\u2208O p(t|wh \u22121 \u2212m .tag) #(t,wh \u22121 \u2212m .tag,W l ,T l ,d) Y a\u2208A p(a|h \u22121 \u2212m ) #(a,h \u22121 \u2212m ,W l ,T l ,d) ! 
where #(g, W l , G l , d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": ") is the count of semantic content g in semantic annotation string G l of the lth sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "W l in document d, #(w \u22121 \u2212n+1 wh \u22121 \u2212m g, W l , T l , G l , d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": ") is the count of n-grams, its m most recent exposed headwords and semantic content g in parse T l and semantic annotation string G l of the lth sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "W l in document d, #(twh \u22121 \u2212m .tag, W l , T l , d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": ") is the count of tag t predicted by word w and the tags of m most recent exposed headwords in parse tree T l of the lth sentence W l in document d, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "finally #(ah \u22121 \u2212m , W l , T l , d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": ") is the count of constructor move a conditioning on m exposed headwords h \u22121 \u2212m in parse tree T l of the lth sentence W l in document d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "The objective of maximum likelihood estimation is to maximize the likelihood L(D, p) respect to model parameters. For a given sentence, its parse tree and semantic content are hidden and the number of parse trees grows faster than exponential with sentence length, Wang et al. (2006) have derived a generalized inside-outside algorithm by applying the standard EM algorithm. However, the complexity of this algorithm is 6th order of sentence length, thus it is computationally too expensive to be practical for a large corpus even with the use of pruning on charts (Jelinek and Chelba, 1999; Jelinek, 2004) .", "cite_spans": [ { "start": 265, "end": 283, "text": "Wang et al. (2006)", "ref_id": "BIBREF23" }, { "start": 565, "end": 591, "text": "(Jelinek and Chelba, 1999;", "ref_id": "BIBREF13" }, { "start": 592, "end": 606, "text": "Jelinek, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Training algorithm", "sec_num": "3" }, { "text": "Similar to SLM (Chelba and Jelinek, 2000) , we adopt an N -best list approximate EM re-estimation with modular modifications to seamlessly incorporate the effect of n-gram and PLSA components. 
Instead of maximizing the likelihood L(D, p), we maximize the N -best list likelihood,", "cite_spans": [ { "start": 15, "end": 41, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "max T \u2032 N L(D, p, T \u2032 N ) = Y d\u2208D Y l max T \u2032 l N \u2208T \u2032 N X G l 0 @ X T l \u2208T \u2032l N ,||T \u2032 l N ||=N Pp(W l , T l , G l |d) 1 A 1 A 1 A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "where T \u2032 l N is a set of N parse trees for sentence W l in document d and || \u2022 || denotes the cardinality and T \u2032 N is a collection of T \u2032 l N for sentences over entire corpus D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "The N-best list approximate EM involves two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "1. N-best list search: For each sentence W in document d, find N -best parse trees,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "T l N = arg max T \u2032l N n X G l X T l \u2208T \u2032l N Pp(W l , T l , G l |d), ||T \u2032l N || = N o", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "and denote T N as the collection of N -best list parse trees for sentences over entire corpus D under model parameter p. 2. EM update: Perform one iteration (or several iterations) of EM algorithm to estimate model parameters that maximizes N -best-list likelihood of the training corpus D,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "L(D, p, TN ) = Y d\u2208D ( Y l ( X G l ( X T l \u2208T l N \u2208T N Pp(W l , T l , G l |d))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "That is, (a) E-step: Compute the auxiliary function of the N -best-list likelihood", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "Q(p \u2032 , p, TN ) = X d\u2208D X l X G l X T l \u2208T l N \u2208T N Pp(T l , G l |W l , d) log P p \u2032 (W l , T l , G l |d) (b) M-step: MaximizeQ(p \u2032 , p, T N )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "with respect to p \u2032 to get new update for p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "Iterate steps (1) and (2) until the convergence of the N -best-list likelihood. Due to space constraints, we omit the proof of the convergence of the N-best list approximate EM algorithm which uses Zangwill's global convergence theorem (Zangwill, 1969) . N -best list search strategy: To extract the Nbest parse trees, we adopt a synchronous, multistack search strategy that is similar to the one in (Chelba and Jelinek, 2000) , which involves a set of stacks storing partial parses of the most likely ones for a given prefix W k and the less probable parses are purged. 
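As a concrete aside, the sketch below illustrates the two pruning rules such a multi-stack search relies on, namely a finite stack size and a log-probability beam relative to the top-most hypothesis; the hypothesis scores, parse labels, and threshold values are made-up placeholders rather than the settings used in this work.
```python
import math

# Toy illustration of the two pruning rules in a synchronous multi-stack search:
# keep at most `max_stack` hypotheses per stack, and drop any hypothesis whose
# log-score falls more than `log_beam` below the best one in that stack.
# The (log_score, partial_parse) pairs below are made-up placeholders.

def prune_stack(stack, max_stack=4, log_beam=math.log(100.0)):
    stack = sorted(stack, key=lambda h: h[0], reverse=True)   # best score on top
    best = stack[0][0]
    kept = [h for h in stack if best - h[0] <= log_beam]      # beam threshold
    return kept[:max_stack]                                   # finite stack size

hypotheses = [(-12.3, "parse A"), (-12.9, "parse B"), (-15.0, "parse C"),
              (-20.7, "parse D"), (-13.4, "parse E"), (-14.2, "parse F")]
print(prune_stack(hypotheses))
```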
Each stack contains hypotheses (partial parses) that have been constructed by the same number of WORD-PREDICTOR and the same number of CONSTRUCTOR operations. The hypotheses in each stack are ranked according to the log( G k P p (W k , T k , G k |d)) score with the highest on top, where P p (W k , T k , G k |d) is the joint probability of prefix W k = w 0 , \u2022 \u2022 \u2022 , w k with its parse structure T k and semantic annotation string", "cite_spans": [ { "start": 236, "end": 252, "text": "(Zangwill, 1969)", "ref_id": "BIBREF25" }, { "start": 400, "end": 426, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "G k = g 1 , \u2022 \u2022 \u2022 , g k in a document d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "A stack vector consists of the ordered set of stacks containing partial parses with the same number of WORD-PREDICTOR operations but different number of CONSTRUCTOR operations. In WORD-PREDICTOR and TAGGER operations, some hypotheses are discarded due to the maximum number of hypotheses the stack can contain at any given time. In CONSTRUCTOR operation, the resulting hypotheses are discarded due to either finite stack size or the log-probability threshold: the maximum tolerable difference between the log-probability score of the top-most hypothesis and the bottom-most hypothesis at any given state of the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "EM update: Once we have the N -best parse trees for each sentence in document d and N -best topics for document d, we derive the EM algorithm to estimate model parameters. In E-step, we compute the expected count of each model parameter over sentence W l in document d in the training corpus D. For the WORD-PREDICTOR and the SEMANTIZER, the number of possible semantic annotation sequences is exponential, we use forward-backward recursive formulas that are similar to those in hidden Markov models to compute the expected counts. We define the forward vector \u03b1 l (g|d) to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "\u03b1 l k+1 (g|d) = X G l k Pp(W l k , T l k , w k k\u2212n+2 w k+1 h \u22121 \u2212m g, G l k |d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "that can be recursively computed in a forward manner, where W l k is the word k-prefix for sentence W l , T l k is the parse for k-prefix. We define backward vector \u03b2 l (g|d) to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "\u03b2 l k+1 (g|d) = X G l k+1,\u2022 Pp(W l k+1,\u2022 , T l k+1,\u2022 , G l k+1,\u2022 |w k k\u2212n+2 w k+1 h \u22121 \u2212m g, d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "that can be computed in a backward manner, here W l k+1,\u2022 is the subsequence after k+1th word in sentence W l , T l k+1,\u2022 is the incremental parse structure after the parse structure T l k+1 of word k+1prefix W l k+1 that generates parse tree T l , G l k+1,\u2022 is the semantic subsequence in G l relevant to W l k+1,\u2022 . 
Then, the expected count of w \u22121 \u2212n+1 wh \u22121 \u2212m g for the WORD-PREDICTOR on sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "W l in document d is X G l Pp(T l , G l |W l , d)#(w \u22121 \u2212n+1 wh \u22121 \u2212m g, W l , T l , G l , d) = X l X k \u03b1 l k+1 (g|d)\u03b2 l k+1 (g|d)p(g|d) \u03b4(w k k\u2212n+2 w k+1 h \u22121 \u2212m g k+1 = w \u22121 \u2212n+1 wh \u22121 \u2212m g)/Pp(W l |d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "where \u03b4(\u2022) is an indicator function and the expected count of g for the SEMANTIZER on sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "W l in document d is X G l Pp(T l , G l |W l , d)#(g, W l , G l , d) = j\u22121 X k=0 \u03b1 l k+1 (g|d)\u03b2 l k+1 (g|d)p(g|d)/Pp(W l |d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "For the TAGGER and the CONSTRUCTOR, the expected count of each event of twh \u22121 \u2212m .tag and ah \u22121 \u2212m over parse T l of sentence W l in document d is the real count appeared in parse tree T l of sentence W l in document d times the conditional distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "P p (T l |W l , d) = P p (T l , W l |d)/ T l \u2208T l P p (T l , W l |d) respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "In M-step, the recursive linear interpolation scheme (Jelinek and Mercer, 1981) is used to obtain a smooth probability estimate for each model component, WORD-PREDICTOR, TAGGER, and CONSTRUCTOR. The TAGGER and CONSTRUCTOR are conditional probabilistic models of the type (Chelba and Jelinek, 2000) . The WORD-PREDICTOR is, however, a conditional probabilistic model p(w|w \u22121 \u2212n+1 h \u22121 \u2212m g) where there are three kinds of context w \u22121 \u2212n+1 , h \u22121 \u2212m and g, each forms a linear Markov chain. The model has a combinatorial number of relative frequency estimates of different orders among three linear Markov chains. We generalize Jelinek and Mercer's original recursive mixing scheme (Jelinek and Mercer, 1981) and form a lattice to handle the situation where the context is a mixture of Markov chains.", "cite_spans": [ { "start": 53, "end": 79, "text": "(Jelinek and Mercer, 1981)", "ref_id": "BIBREF12" }, { "start": 271, "end": 297, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" }, { "start": 682, "end": 708, "text": "(Jelinek and Mercer, 1981)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "p(u|z 1 , \u2022 \u2022 \u2022 , z n ) where u, z 1 , \u2022 \u2022 \u2022 , z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best list approximate EM", "sec_num": "3.1" }, { "text": "As explained in (Chelba and Jelinek, 2000) , for the SLM component, a large fraction of the partial parse trees that can be used for assigning probability to the next word do not survive in the synchronous, multistack search strategy, thus they are not used in the N-best approximate EM algorithm for the estimation of WORD-PREDICTOR to improve its predictive power. 
To remedy this weakness, we estimate WORD-PREDICTOR using the algorithm below.", "cite_spans": [ { "start": 16, "end": 42, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "The language model probability assignment for the word at position k+1 in the input sentence of document d can be computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "Pp(w k+1 |W k , d) = X h \u22121 \u2212m \u2208T k ;T k \u2208Z k ,g k+1 \u2208G d p(w k+1 |w k k\u2212n+2 h \u22121 \u2212m g k+1 ) Pp(T k |W k , d)p(g k+1 |d) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "P p (T k |W k , d) = P G k Pp(W k ,T k ,G k |d) P T k \u2208Z k P G k Pp(W k ,T k ,G k |d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "and Z k is the set of all parses present in the stacks at the current stage k during the synchronous multi-stack pruning strategy and it is a function of the word k-prefix W k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "The likelihood of a training corpus D under this language model probability assignment that uses partial parse trees generated during the process of the synchronous, multi-stack search strategy can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(D, p) = Y d\u2208D Y l \" X k Pp(w (l) k+1 |W l k , d) \"", "eq_num": "(3)" } ], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "We employ a second stage of parameter reestimation for p(w k+1 |w k k\u2212n+2 h \u22121 \u2212m g k+1 ) and p(g k+1 |d) by using EM again to maximize Equation (3) to improve the predictive power of WORD-PREDICTOR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-up EM", "sec_num": "3.2" }, { "text": "When using very large corpora to train our composite language model, both the data and the parameters can't be stored in a single machine, so we have to resort to distributed computing. The topic of large scale distributed language models is relatively new, and existing works are restricted to n-grams only (Brants et al., 2007; Emami et al., 2007; Zhang et al., 2006) . Even though all use distributed architectures that follow the client-server paradigm, the real implementations are in fact different. Zhang et al. (2006) and Emami et al. (2007) store training corpora in suffix arrays such that one sub-corpus per server serves raw counts and test sentences are loaded in a client. This implies that when computing the language model probability of a sentence in a client, all servers need to be contacted for each ngram request. The approach by Brants et al. 
(2007) follows a standard MapReduce paradigm (Dean and Ghemawat, 2004) : the corpus is first divided and loaded into a number of clients, and n-gram counts are collected at each client, then the n-gram counts mapped and stored in a number of servers, resulting in exactly one server being contacted per n-gram when computing the language model probability of a sentence. We adopt a similar approach to Brants et al. and make it suitable to perform iterations of N -best list approximate EM algorithm, see Figure 2. The corpus is divided and loaded into a number of clients. We use a public available parser to parse the sentences in each client to get the initial counts for w \u22121 \u2212n+1 wh \u22121 \u2212m g etc., finish the Map part, and then the counts for a particular w \u22121 \u2212n+1 wh \u22121 \u2212m g at different clients are summed up and stored in one of the servers by hashing through the word w \u22121 (or h \u22121 ) and its topic g, finish the Reduce part. This is the initialization of the N -best list approximate EM step. Each client then calls the servers for parameters to perform synchronous multi-stack search for each sentence to get the N -best list parse trees. Again, the expected count for a particular parameter of w \u22121 \u2212n+1 wh \u22121 \u2212m g at the clients are computed, thus we finish a Map part, then summed up and stored in one of the servers by hashing through the word w \u22121 (or h \u22121 ) and its topic g, thus we finish the Reduce part. We repeat this procedure until convergence.", "cite_spans": [ { "start": 308, "end": 329, "text": "(Brants et al., 2007;", "ref_id": "BIBREF1" }, { "start": 330, "end": 349, "text": "Emami et al., 2007;", "ref_id": "BIBREF10" }, { "start": 350, "end": 369, "text": "Zhang et al., 2006)", "ref_id": "BIBREF26" }, { "start": 506, "end": 525, "text": "Zhang et al. (2006)", "ref_id": "BIBREF26" }, { "start": 530, "end": 549, "text": "Emami et al. (2007)", "ref_id": "BIBREF10" }, { "start": 851, "end": 871, "text": "Brants et al. (2007)", "ref_id": "BIBREF1" }, { "start": 910, "end": 935, "text": "(Dean and Ghemawat, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1370, "end": 1376, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Distributed architecture", "sec_num": "3.3" }, { "text": "Similarly, we use a distributed architecture as in Figure 2 to perform the follow-up EM algorithm to re-estimate WORD-PREDICTOR.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Distributed architecture", "sec_num": "3.3" }, { "text": "We have trained our language models using three different training sets: one has 44 million tokens, another has 230 million tokens, and the other has 1.3 billion tokens. An independent test set which has 354 k tokens is chosen. The independent check data set used to determine the linear interpolation coefficients has 1.7 million tokens for the 44 million tokens training corpus, 13.7 million tokens for both 230 million and 1.3 billion tokens training corpora. All these data sets are taken from the LDC English Gigaword corpus with non-verbalized punctuation and we remove all punctuation. 
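A minimal sketch of the hash-partitioned client/server layout described above is given below, assuming a simplified parameter key consisting of a context word, a predicted word, and a topic; the keys, counts, and number of servers are illustrative placeholders, not the configuration used in our experiments.
```python
from collections import Counter, defaultdict

# Toy sketch of the hash-partitioned client/server layout: each client
# accumulates local expected counts (the "map" part), and every parameter is
# owned by exactly one server chosen by hashing its context word and topic
# (the "reduce" part). The keys and counts are made-up examples.

NUM_SERVERS = 4

def server_of(context_word, topic):
    # Deterministic (within one run) hash of (w_-1 or h_-1, topic) -> owning server id.
    return hash((context_word, topic)) % NUM_SERVERS

def reduce_to_servers(client_counts):
    servers = defaultdict(Counter)
    for counts in client_counts:                       # one Counter per client
        for (context_word, word, topic), c in counts.items():
            servers[server_of(context_word, topic)][(context_word, word, topic)] += c
    return servers

client_counts = [
    Counter({("stock", "market", "finance"): 2.0, ("ball", "game", "sports"): 1.0}),
    Counter({("stock", "market", "finance"): 3.0}),
]
for sid, table in reduce_to_servers(client_counts).items():
    print(sid, dict(table))
```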
Table 1 gives the detailed information on how these data sets are chosen from the LDC English Gigaword corpus.", "cite_spans": [], "ref_spans": [ { "start": 593, "end": 600, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "The vocabulary sizes in all three cases are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "\u2022 word (also WORD-PREDICTOR operation) (Chelba and Jelinek, 2000) , after the parses undergo headword percolation and binarization, each model component of WORD-PREDICTOR, TAGGER, and CONSTRUCTOR is initialized from a set of parsed sentences. We use the \"openNLP\" software (Northedge, 2005) to parse a large amount of sentences in the LDC English Gigaword corpus to generate an automatic treebank, which has a slightly different word-tokenization than that of the manual treebank such as the Upenn Treebank used in (Chelba and Jelinek, 2000) . For the 44 and 230 million tokens corpora, all sentences are automatically parsed and used to initialize model parameters, while for 1.3 billion tokens corpus, we parse the sentences from a portion of the corpus that contain 230 million tokens, then use them to initialize model parameters. The parser at \"openNLP\" is trained by Upenn treebank with 1 million tokens and there is a mismatch between Upenn treebank and LDC English Gigaword corpus. Nevertheless, experimental results show that this approach is effective to provide initial values of model parameters.", "cite_spans": [ { "start": 39, "end": 65, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" }, { "start": 273, "end": 290, "text": "(Northedge, 2005)", "ref_id": null }, { "start": 515, "end": 541, "text": "(Chelba and Jelinek, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "As we have explained, the proposed EM algorithms can be naturally cast into a MapReduce framework, see more discussion in (Lin and Dyer, 2010). If we have access to a large cluster of machines with Hadoop installed that are powerful enough to process a billion tokens level corpus, we just need to specify a map function and a reduce function etc., Hadoop will automatically parallelize and execute programs written in this functional style. Unfortunately, we don't have this kind of resources available. Instead, we have access to a supercomputer at a supercomputer center with MPI installed that has more than 1000 core processors usable. Thus we implement our algorithms using C++ under MPI on the supercomputer, where we have to write C++ codes for Map part and Reduce part, and the MPI is used to take care of massage passing, scheduling, synchronization, etc. between clients and servers. This involves a fair amount of programming work, even though our implementation under MPI is not as reliable as under Hadoop but it is more efficient. We use up to 1000 core processors to train the composite language models for 1.3 billion tokens corpus where 900 core processors are used to store the parameters alone. We decide to use linearly smoothed trigram as the baseline model for 44 million token corpus, linearly smoothed 4-gram as the baseline model for 230 million token corpus, and linearly smoothed 5-gram as the baseline model for 1.3 billion token corpus. Model size is a big issue, we have to keep only a small set of topics due to the consideration in both computational time and resource demand. 
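For reference, the sketch below shows what a linearly smoothed (interpolated) n-gram and a perplexity computation look like on a toy corpus; the tiny corpus and the fixed interpolation weights are illustrative only, whereas in our experiments the interpolation coefficients are tuned on the held-out check data.
```python
import math
from collections import Counter

# Minimal sketch of a linearly smoothed (interpolated) trigram and its
# perplexity on held-out text. The corpus and weights are illustrative only.

train = "the market fell and the market rose and the price fell".split()
uni, bi, tri = Counter(train), Counter(zip(train, train[1:])), Counter(zip(train, train[1:], train[2:]))
V = len(uni)

def p_interp(w, u, v, lambdas=(0.5, 0.3, 0.15, 0.05)):
    l3, l2, l1, l0 = lambdas
    p3 = tri[(u, v, w)] / bi[(u, v)] if bi[(u, v)] else 0.0     # trigram relative freq
    p2 = bi[(v, w)] / uni[v] if uni[v] else 0.0                 # bigram relative freq
    p1 = uni[w] / len(train)                                    # unigram relative freq
    return l3 * p3 + l2 * p2 + l1 * p1 + l0 / V                 # uniform floor

def perplexity(text):
    logp = sum(math.log(p_interp(w, u, v)) for u, v, w in zip(text, text[1:], text[2:]))
    return math.exp(-logp / (len(text) - 2))

print(perplexity("the market fell".split()))
```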
Table 2 shows the perplexity results and computation time of composite n-gram/PLSA language models that are trained on three corpora when the pre-defined number of total topics is 200 but different numbers of most likely topics are kept for each document in PLSA, the rest are pruned. For composite 5-gram/PLSA model trained on 1.3 billion tokens corpus, 400 cores have to be used to keep top 5 most likely topics. For composite tri-gram/PLSA model trained on 44M tokens corpus, the computation time increases drastically with less than 5% percent perplexity improvement. So in the following experiments, we keep top 5 topics for each document from total 200 topics and all other 195 topics are pruned.", "cite_spans": [], "ref_spans": [ { "start": 1610, "end": 1617, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "All composite language models are first trained by performing N-best list approximate EM algorithm until convergence, then EM algorithm for a second stage of parameter re-estimation for WORD-PREDICTOR and SEMANTIZER until convergence. We fix the size of topics in PLSA to be 200 and then prune to 5 in the experiments, where the unpruned 5 topics in general account for 70% probability in p(g|d). Table 3 shows comprehensive perplexity results for a variety of different models such as composite n-gram/m-SLM, n-gram/PLSA, m-SLM/PLSA, their linear combinations, etc., where we use online EM with fixed learning rate to reestimate the parameters of the SEMANTIZER of test document. The m-SLM performs competitively with its counterpart n-gram (n=m+1) on large scale corpus. In Table 3 , for composite n-gram/m-SLM model (n = 3, m = 2 and n = 4, m = 3) trained on 44 million tokens and 230 million tokens, we cut off its fractional expected counts that are less than a threshold 0.005, this significantly reduces the number of predictor's types by 85%. When we train the composite language on 1.3 billion tokens corpus, we have to both aggressively prune the parameters of WORD-PREDICTOR and shrink the order of n-gram and m-SLM in order to store them in a supercomputer having 1000 cores. In particular, for composite 5-gram/4-SLM model, its size is too big to store, thus we use its approximation, a linear combination of 5-gram/2-SLM and 2-gram/4-SLM, and for 5-gram/2-SLM or 2-gram/4-SLM, again we cut off its fractional expected counts that are less than a threshold 0.005, this significantly reduces the number of predictor's types by 85%. For composite 4-SLM/PLSA model, we cut off its fractional expected counts that are less than a threshold 0.002, again this significantly reduces the number of predictor's types by 85%. For composite 4-SLM/PLSA model or its linear combination with models, we ignore all the tags and use only the words in the 4 head words. In this too big to store in the supercomputer. The composite n-gram/m-SLM/PLSA model gives significant perplexity reductions over baseline n-grams, n = 3, 4, 5 and m-SLMs, m = 2, 3, 4. 
The majority of gains comes from PLSA component, but when adding SLM component into n-gram/PLSA, there is a further 10% relative perplexity reduction.", "cite_spans": [], "ref_spans": [ { "start": 397, "end": 404, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 776, "end": 783, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "We have applied our composite 5-gram/2-SLM+2-gram/4-SLM+5-gram/PLSA language model that is trained by 1.3 billion word corpus for the task of re-ranking the N -best list in statistical machine translation. We used the same 1000-best list that is used by Zhang et al. (2006) . This list was generated on 919 sentences from the MT03 Chinese-English evaluation set by Hiero (Chiang, 2005; Chiang, 2007) , a state-of-the-art parsing-based translation model. Its decoder uses a trigram language model trained with modified Kneser-Ney smoothing (Kneser and Ney, 1995) on a 200 million tokens corpus. Each translation has 11 features and language model is one of them. We substitute our language model and use MERT (Och, 2003) to optimize the BLEU score (Papineni et al., 2002) . We partition the data into ten pieces, 9 pieces are used as training data to optimize the BLEU score (Papineni et al., 2002) by MERT (Och, 2003) , a remaining single piece is used to re-rank the 1000-best list and obtain the BLEU score. The cross-validation process is then repeated 10 times (the folds), with each of the 10 pieces used exactly once as the validation data. The 10 results from the folds then can be averaged (or otherwise combined) to produce a single estimation for BLEU score. Table 4 shows the BLEU scores through 10-fold cross-validation. The composite 5-gram/2-SLM+2gram/4-SLM+5-gram/PLSA language model gives 1.57% BLEU score improvement over the baseline and 0.79% BLEU score improvement over the 5-gram. This is because there is not much diversity on the 1000-best list, and essentially only 20 \u223c 30 distinct sentences are there in the 1000-best list. Chiang (2007) studied the performance of machine translation on Hiero, the BLEU score is 33.31% when n-gram is used to re-rank the N -best list, however, the BLEU score becomes significantly higher 37.09% when the n-gram is embedded directly into Hiero's one pass decoder, this is because there is not much diversity in the N -best list. It is expected that putting the our composite language into a one pass decoder of both phrase-based (Koehn et al., 2003) and parsing-based (Chiang, 2005; Chiang, 2007) MT systems should result in much improved BLEU scores.", "cite_spans": [ { "start": 254, "end": 273, "text": "Zhang et al. 
(2006)", "ref_id": "BIBREF26" }, { "start": 371, "end": 385, "text": "(Chiang, 2005;", "ref_id": "BIBREF6" }, { "start": 386, "end": 399, "text": "Chiang, 2007)", "ref_id": "BIBREF7" }, { "start": 539, "end": 561, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF16" }, { "start": 703, "end": 719, "text": "MERT (Och, 2003)", "ref_id": null }, { "start": 747, "end": 770, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" }, { "start": 874, "end": 897, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" }, { "start": 901, "end": 917, "text": "MERT (Och, 2003)", "ref_id": null }, { "start": 1650, "end": 1663, "text": "Chiang (2007)", "ref_id": "BIBREF7" }, { "start": 2088, "end": 2108, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF17" }, { "start": 2127, "end": 2141, "text": "(Chiang, 2005;", "ref_id": "BIBREF6" }, { "start": 2142, "end": 2155, "text": "Chiang, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 1269, "end": 1276, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "SYSTEM MODEL MEAN (%) BASELINE 31.75 5-GRAM 32.53 5-GRAM/2-SLM+2-GRAM/4-SLM 32.87 5-GRAM/PLSA 33.01 5-GRAM/2-SLM+2-GRAM/4-SLM 33.32 +5-GRAM/PLSA Table 4 : 10-fold cross-validation BLEU score results for the task of re-ranking the N -best list.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 152, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "Besides reporting the BLEU scores, we look at the \"readability\" of translations similar to the study conducted by Charniak et al. (2003) . The translations are sorted into four groups: good/bad syntax crossed with good/bad meaning by human judges, see Table 5. We find that many more sentences are perfect, many more are grammatically correct, and many more are semantically correct. The syntactic language model (Charniak, 2001; Charniak, 2003) only improves translations to have good grammar, but does not improve translations to preserve meaning.", "cite_spans": [ { "start": 114, "end": 136, "text": "Charniak et al. (2003)", "ref_id": "BIBREF3" }, { "start": 413, "end": 429, "text": "(Charniak, 2001;", "ref_id": "BIBREF2" }, { "start": 430, "end": 445, "text": "Charniak, 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "The composite 5-gram/2-SLM+2-gram/4-SLM+5gram/PLSA language model improves both significantly. Bear in mind that Charniak et al. (2003) integrated Charniak's language model with the syntaxbased translation model Yamada and Knight proposed (2001) to rescore a tree-to-string translation forest, whereas we use only our language model for N -best list re-ranking. Also, in the same study in (Charniak, 2003) , they found that the outputs produced using the n-grams received higher scores from BLEU; ours did not. The difference between human judgments and BLEU scores indicate that closer agreement may be possible by incorporating syntactic structure and semantic information into the BLEU score evaluation. For example, semantically similar words like \"insure\" and \"ensure\" in the example of BLEU paper (Papineni et al., 2002) should be substituted in the formula, and there is a weight to measure the goodness of syntactic structure. This modification will lead to a better metric and such information can be provided by our composite language models. 
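For concreteness, the re-ranking step used throughout these MT experiments reduces to selecting, from each N-best list, the hypothesis with the highest weighted combination of feature scores once our language model score is substituted in; the features, weights, and scores in the sketch below are made-up placeholders, and the weights stand in for values that MERT would tune against BLEU.
```python
# Toy sketch of N-best list re-ranking: each hypothesis carries the decoder's
# feature scores plus a language-model score, and the hypothesis with the best
# weighted combination is selected. Features, weights, and scores are made up.

def rerank(nbest, weights):
    def model_score(hyp):
        return sum(weights[name] * value for name, value in hyp["features"].items())
    return max(nbest, key=model_score)

nbest = [
    {"translation": "the market fell sharply", "features": {"tm": -4.1, "lm": -9.2, "wp": -4.0}},
    {"translation": "the market sharply fell", "features": {"tm": -4.3, "lm": -8.1, "wp": -4.0}},
    {"translation": "market the fell sharply", "features": {"tm": -3.9, "lm": -12.5, "wp": -4.0}},
]
weights = {"tm": 1.0, "lm": 0.8, "wp": 0.5}   # hypothetical MERT-style weights
print(rerank(nbest, weights)["translation"])
```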
P S G W BASELINE 95 398 20 406 5-GRAM 122 406 24 367 5-GRAM/2-SLM 151 425 33 310 +2-GRAM/4-SLM +5-GRAM/PLSA Table 5 : Results of \"readability\" evaluation on 919 translated sentences, P: perfect, S: only semantically correct, G: only grammatically correct, W: wrong.", "cite_spans": [ { "start": 113, "end": 135, "text": "Charniak et al. (2003)", "ref_id": "BIBREF3" }, { "start": 212, "end": 245, "text": "Yamada and Knight proposed (2001)", "ref_id": null }, { "start": 389, "end": 405, "text": "(Charniak, 2003)", "ref_id": "BIBREF3" }, { "start": 803, "end": 826, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 1053, "end": 1097, "text": "P S G W BASELINE 95 398 20 406 5-GRAM", "ref_id": "TABREF3" }, { "start": 1168, "end": 1175, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "As far as we know, this is the first work of building a complex large scale distributed language model with a principled approach that is more powerful than ngrams when both trained on a very large corpus with up to a billion tokens. We believe our results still hold on web scale corpora that have trillion tokens, since the composite language model effectively encodes long range dependencies of natural language that n-gram is not viable to consider. Of course, this implies that we have to take a huge amount of resources to perform the computation, nevertheless this becomes feasible, affordable, and cheap in the era of cloud computing. 209", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Perplexity\u0142a measure of difficulty of speech recognition tasks", "authors": [ { "first": "L", "middle": [], "last": "Bahl", "suffix": "" }, { "first": "J", "middle": [], "last": "Baker", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1977, "venue": "94th Meeting of the Acoustical Society of America", "volume": "62", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Bahl and J. Baker,F. Jelinek and R. Mercer. 1977. Per- plexity\u0142a measure of difficulty of speech recognition tasks. 94th Meeting of the Acoustical Society of Amer- ica, 62:S63, Supplement 1.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Large language models in machine translation", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2007, "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "858--867", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Brants et al.. 2007. Large language models in ma- chine translation. The 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP), 858-867.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Immediate-head parsing for language models", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "The 39th Annual Conference on Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "124--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 2001. Immediate-head parsing for language models. 
The 39th Annual Conference on Association of Computational Linguistics (ACL), 124-131.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Syntaxbased language models for statistical machine translation", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2003, "venue": "MT Summit IX., Intl. Assoc. for Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, K. Knight and K. Yamada. 2003. Syntax- based language models for statistical machine transla- tion. MT Summit IX., Intl. Assoc. for Machine Trans- lation.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exploiting syntactic structure for language modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1998, "venue": "The 36th Annual Conference on Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "225--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba and F. Jelinek. 1998. Exploiting syntactic structure for language modeling. The 36th Annual Conference on Association of Computational Linguis- tics (ACL), 225-231.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Structured language modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "4", "pages": "283--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba and F. Jelinek. 2000. Structured lan- guage modeling. Computer Speech and Language, 14(4):283-332.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "The 43th Annual Conference on Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. The 43th Annual Con- ference on Association of Computational Linguistics (ACL), 263-270.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "MapReduce: Simplified data processing on large clusters", "authors": [ { "first": "J", "middle": [], "last": "Dean", "suffix": "" }, { "first": "S", "middle": [], "last": "Ghemawat", "suffix": "" } ], "year": 2004, "venue": "Operating Systems Design and Implementation (OSDI)", "volume": "", "issue": "", "pages": "137--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Dean and S. Ghemawat. 2004. MapReduce: Simpli- fied data processing on large clusters. 
Operating Sys- tems Design and Implementation (OSDI), 137-150.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Maximum likelihood estimation from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of Royal Statistical Society", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Dempster, N. Laird and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of Royal Statistical Society, 39:1- 38.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Largescale distributed language modeling", "authors": [ { "first": "A", "middle": [], "last": "Emami", "suffix": "" }, { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "J", "middle": [], "last": "Sorensen", "suffix": "" } ], "year": 2007, "venue": "The 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "37--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Emami, K. Papineni and J. Sorensen. 2007. Large- scale distributed language modeling. The 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), IV:37-40.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised learning by probabilistic latent semantic analysis", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "42", "issue": "", "pages": "177--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Hofmann. 2001. Unsupervised learning by proba- bilistic latent semantic analysis. Machine Learning, 42(1):177-196.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Interpolated estimation of Markov source parameters from sparse data. Pattern Recognition in Practice", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "381--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and R. Mercer. 1981. Interpolated estimation of Markov source parameters from sparse data. Pat- tern Recognition in Practice, 381-397.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Putting language into language modeling", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "C", "middle": [], "last": "Chelba", "suffix": "" } ], "year": 1999, "venue": "Sixth European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and C. Chelba. 1999. Putting language into language modeling. Sixth European Confer- ence on Speech Communication and Technology (EU- ROSPEECH), Keynote Paper 1.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Stochastic analysis of structured language modeling", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 2004, "venue": "Mathematical Foundations of Speech and Language Processing", "volume": "", "issue": "", "pages": "37--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek. 2004. 
Stochastic analysis of structured lan- guage modeling. Mathematical Foundations of Speech and Language Processing, 37-72, Springer-Verlag.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Speech and Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Jurafsky and J. Martin. 2008. Speech and Language Processing, 2nd Edition, Prentice Hall.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improved backing-off for m-gram language modeling", "authors": [ { "first": "R", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "The 20th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. The 20th IEEE Interna- tional Conference on Acoustics, Speech, and Signal Processing (ICASSP), 181-184.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical phrasebased translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "The Human Language Technology Conference (HLT)", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. Och and D. Marcu. 2003. Statistical phrase- based translation. The Human Language Technology Conference (HLT), 48-54.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling", "authors": [ { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "4", "pages": "355--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Khudanpur and J. Wu. 2000. Maximum entropy tech- niques for exploiting syntactic, semantic and colloca- tional dependencies in language modeling. Computer Speech and Language, 14(4):355-372.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2003, "venue": "The 41th Annual meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lavie et al. 2006. MINDS Workshops Machine Translation Working Group Final Report. http://www- nlpir.nist.gov/MINDS/FINAL/MT.web.pdf J. Lin and C. Dyer. 2010. Data-Intensive Text Processing with MapReduce. Morgan and Claypool Publishers. R. Northedge. 2005. OpenNLP software http://www.codeproject.com/KB/recipes/englishpar sing.aspx F. Och. 2003. Minimum error rate training in statisti- cal machine translation. 
The 41th Annual meeting of the Association for Computational Linguistics (ACL), 311-318.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "The 40th Annual meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. The 40th Annual meeting of the Associa- tion for Computational Linguistics (ACL), 311-318.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields", "authors": [ { "first": "S", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2005, "venue": "The 22nd International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "953--960", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Wang et al. 2005. Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields. The 22nd International Con- ference on Machine Learning (ICML), 953-960.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stochastic analysis of lexical and semantic enhanced structural language model", "authors": [ { "first": "S", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2006, "venue": "The 8th International Colloquium on Grammatical Inference (ICGI)", "volume": "", "issue": "", "pages": "97--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Wang et al. 2006. Stochastic analysis of lexical and semantic enhanced structural language model. The 8th International Colloquium on Grammatical Inference (ICGI), 97-111.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A syntax-based statistical translation model", "authors": [ { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "The 39th Annual Conference on Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1067--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yamada and K. Knight. 2001. A syntax-based statis- tical translation model. The 39th Annual Conference on Association of Computational Linguistics (ACL), 1067-1074.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Nonlinear Programming: A Unified Approach", "authors": [ { "first": "W", "middle": [], "last": "", "suffix": "" } ], "year": 1969, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Zangwill. 1969. 
Nonlinear Programming: A Unified Approach. Prentice-Hall.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Distributed language modeling for N-best list re-ranking. The", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "A", "middle": [], "last": "Hildebrand", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2006, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Zhang, A. Hildebrand and S. Vogel. 2006. Dis- tributed language modeling for N-best list re-ranking. The 2006 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), 216-223.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Structured language models for statistical machine translation", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Zhang, 2008. Structured language models for statisti- cal machine translation. Ph.D. dissertation, CMU.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "n belong to a mixed set of words, POS tags, NTtags, CONSTRUCTOR actions (u only), and z 1 , \u2022 \u2022 \u2022 , z n form a linear Markov chain. The recursive mixing scheme is the standard one among relative frequency estimates of different orders k = 0, \u2022 \u2022 \u2022 , n as explained in", "uris": null, "type_str": "figure" }, "TABREF2": { "html": null, "text": "table, we have three items missing (marked by -), since the size of corresponding model is 207", "type_str": "table", "num": null, "content": "
CORPUS | n | # OF TOPICS | PPL | TIME (HOURS) | # OF SERVERS | # OF CLIENTS | # OF TYPES OF $ww_{-n+1}^{-1}g$
44M | 3 | 5 | 196 | 0.5 | 40 | 100 | 120.1M
44M | 3 | 10 | 194 | 1.0 | 40 | 100 | 218.6M
44M | 3 | 20 | 190 | 2.7 | 80 | 100 | 537.8M
44M | 3 | 50 | 189 | 6.3 | 80 | 100 | 1.123B
44M | 3 | 100 | 189 | 11.2 | 80 | 100 | 1.616B
44M | 3 | 200 | 188 | 19.3 | 80 | 100 | 2.280B
230M | 4 | 5 | 146 | 25.6 | 280 | 100 | 0.681B
1.3B | 5 | 2 | 111 | 26.5 | 400 | 100 | 1.790B
1.3B | 5 | 5 | 102 | 75.0 | 400 | 100 | 4.391B
" }, "TABREF3": { "html": null, "text": "Perplexity (ppl) results and time consumed of composite n-gram/PLSA language model trained on three corpora when different numbers of most likely topics are kept for each document in PLSA.", "type_str": "table", "num": null, "content": "
LANGUAGE MODEL | 44M (n=3, m=2) | REDUCTION | 230M (n=4, m=3) | REDUCTION | 1.3B (n=5, m=4) | REDUCTION
BASELINE n-GRAM (LINEAR) | 262 |  | 200 |  | 138 |
n-GRAM (KNESER-NEY) | 244 | 6.9% | 183 | 8.5% | - | -
m-SLM | 279 | -6.5% | 190 | 5.0% | 137 | 0.0%
PLSA | 825 | -214.9% | 812 | -306.0% | 773 | -460.0%
n-GRAM+m-SLM | 247 | 5.7% | 184 | 8.0% | 129 | 6.5%
n-GRAM+PLSA | 235 | 10.3% | 179 | 10.5% | 128 | 7.2%
n-GRAM+m-SLM+PLSA | 222 | 15.3% | 175 | 12.5% | 123 | 10.9%
n-GRAM/m-SLM | 243 | 7.3% | 171 | 14.5% | (125) | 9.4%
n-GRAM/PLSA | 196 | 25.2% | 146 | 27.0% | 102 | 26.1%
m-SLM/PLSA | 198 | 24.4% | 140 | 30.0% | (103) | 25.4%
n-GRAM/PLSA+m-SLM/PLSA | 183 | 30.2% | 140 | 30.0% | (93) | 32.6%
n-GRAM/m-SLM+m-SLM/PLSA | 183 | 30.2% | 139 | 30.5% | (94) | 31.9%
n-GRAM/m-SLM+n-GRAM/PLSA | 184 | 29.8% | 137 | 31.5% | (91) | 34.1%
n-GRAM/m-SLM+n-GRAM/PLSA+m-SLM/PLSA | 180 | 31.3% | 130 | 35.0% | - | -
n-GRAM/m-SLM/PLSA | 176 | 32.8% | - | - | - | -
" }, "TABREF4": { "html": null, "text": "Perplexity results for various language models on test corpus, where + denotes linear combination, / denotes composite model; n denotes the order of n-gram and m denotes the order of SLM; the topic nodes are pruned from 200 to 5.", "type_str": "table", "num": null, "content": "" } } } }