{ "paper_id": "D12-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:23:52.758481Z" }, "title": "Translation Model Based Cross-Lingual Language Model Adaptation: from Word Models to Phrase Models", "authors": [ { "first": "Shixiang", "middle": [], "last": "Lu", "suffix": "", "affiliation": {}, "email": "shixiang.lu@ia.ac.cn" }, { "first": "Wei", "middle": [], "last": "Wei", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiaoyin", "middle": [], "last": "Fu", "suffix": "", "affiliation": {}, "email": "xiaoyin.fu@ia.ac.cn" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "", "affiliation": {}, "email": "xubo@ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a novel translation model (TM) based cross-lingual data selection model for language model (LM) adaptation in statistical machine translation (SMT), from word models to phrase models. Given a source sentence in the translation task, this model directly estimates the probability that a sentence in the target LM training corpus is similar. Compared with the traditional approaches which utilize the first pass translation hypotheses, cross-lingual data selection model avoids the problem of noisy proliferation. Furthermore, phrase TM based cross-lingual data selection model is more effective than the traditional approaches based on bag-ofwords models and word-based TM, because it captures contextual information in modeling the selection of phrase as a whole. Experiments conducted on large-scale data sets demonstrate that our approach significantly outperforms the state-of-the-art approaches on both LM perplexity and SMT performance.", "pdf_parse": { "paper_id": "D12-1047", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a novel translation model (TM) based cross-lingual data selection model for language model (LM) adaptation in statistical machine translation (SMT), from word models to phrase models. Given a source sentence in the translation task, this model directly estimates the probability that a sentence in the target LM training corpus is similar. Compared with the traditional approaches which utilize the first pass translation hypotheses, cross-lingual data selection model avoids the problem of noisy proliferation. Furthermore, phrase TM based cross-lingual data selection model is more effective than the traditional approaches based on bag-ofwords models and word-based TM, because it captures contextual information in modeling the selection of phrase as a whole. Experiments conducted on large-scale data sets demonstrate that our approach significantly outperforms the state-of-the-art approaches on both LM perplexity and SMT performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language model (LM) plays a critical role in statistical machine translation (SMT). It seems to be a universal truth that LM performance can always be improved by using more training data (Brants et al., 2007) , but only if the training data is reasonably well-matched with the desired output (Moore and Lewis, 2010) . It is also obvious that among the large training data the topics or domains of discussion will change , which causes the mismatch problems with the translation task. 
For this reason, in the past few years most researchers have preferred to select similar training data from the large training corpus (Zhao et al., 2004; Kim, 2005; Masskey and Sethy, 2010; Axelrod et al., 2011) . This would empirically provide more accurate lexical probabilities, and thus better match the translation task at hand (Axelrod et al., 2011) .", "cite_spans": [ { "start": 188, "end": 209, "text": "(Brants et al., 2007)", "ref_id": "BIBREF4" }, { "start": 304, "end": 316, "text": "Lewis, 2010)", "ref_id": "BIBREF20" }, { "start": 614, "end": 632, "text": "Zhao et al., 2004;", "ref_id": "BIBREF34" }, { "start": 633, "end": 643, "text": "Kim, 2005;", "ref_id": "BIBREF15" }, { "start": 644, "end": 668, "text": "Masskey and Sethy, 2010;", "ref_id": "BIBREF18" }, { "start": 669, "end": 690, "text": "Axelrod et al., 2011)", "ref_id": "BIBREF2" }, { "start": 812, "end": 834, "text": "(Axelrod et al., 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many previous data selection approaches for LM adaptation in SMT depend on the first pass translation hypotheses (Zhao et al., 2004; Kim, 2005; Masskey and Sethy, 2010) : they select the sentences which are similar to the translation hypotheses. These schemes are overall limited by the quality of the translation hypotheses (Tam et al., 2007 and , and better initial translation hypotheses lead to better selected sentences (Zhao et al., 2004) . However, while SMT has developed considerably in recent years, the translation hypotheses are still far from perfect (Wei and Pal, 2010) and contain much noisy data. The noisy translation hypotheses mislead the data selection process (Xu et al., 2001; Tam et al., 2006 and 2007; Wei and Pal, 2010) and thus bring noisy data into the selected training data, which causes noisy proliferation and degrades the performance of the adapted LM.", "cite_spans": [ { "start": 113, "end": 131, "text": "Zhao et al., 2004;", "ref_id": "BIBREF34" }, { "start": 132, "end": 142, "text": "Kim, 2005;", "ref_id": "BIBREF15" }, { "start": 143, "end": 167, "text": "Masskey and Sethy, 2010)", "ref_id": "BIBREF18" }, { "start": 324, "end": 345, "text": "(Tam et al., 2007 and", "ref_id": "BIBREF29" }, { "start": 424, "end": 443, "text": "(Zhao et al., 2004)", "ref_id": "BIBREF34" }, { "start": 577, "end": 596, "text": "(Wei and Pal, 2010)", "ref_id": "BIBREF31" }, { "start": 691, "end": 708, "text": "(Xu et al., 2001;", "ref_id": "BIBREF32" }, { "start": 709, "end": 729, "text": "Tam et al., 2006 and", "ref_id": "BIBREF28" }, { "start": 730, "end": 735, "text": "2007;", "ref_id": "BIBREF7" }, { "start": 736, "end": 754, "text": "Wei and Pal, 2010)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, traditional approaches for LM adaptation are based on bag-of-words models and considered to be context independent despite their state-of-the-art performance, such as TF-IDF (Zhao et al., 2004; Hildebrand et al., 2005; Kim, 2005; Foster and Kuhn, 2007) , centroid similarity (Masskey and Sethy, 2010) , and cross-lingual similarity (CLS) (Ananthakrishnan et al., 2011a) . 
They all perform at the word level with exact term matching schemes only, and do not take into account any contextual information when modeling the selection by single words in isolation, which degrades the quality of the selected sentences.", "cite_spans": [ { "start": 191, "end": 209, "text": "Zhao et al., 2004;", "ref_id": "BIBREF34" }, { "start": 210, "end": 234, "text": "Hildebrand et al., 2005;", "ref_id": "BIBREF14" }, { "start": 235, "end": 245, "text": "Kim, 2005;", "ref_id": "BIBREF15" }, { "start": 246, "end": 268, "text": "Foster and Kuhn, 2007)", "ref_id": "BIBREF11" }, { "start": 291, "end": 316, "text": "(Masskey and Sethy, 2010)", "ref_id": "BIBREF18" }, { "start": 354, "end": 385, "text": "(Ananthakrishnan et al., 2011a)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we argue that it is beneficial to model the data selection based on the source translation task directly and to capture the contextual information for LM adaptation. To this end, we propose a more principled translation model (TM) based cross-lingual data selection model for LM adaptation, from word models to phrase models. We assume that the data selection should be performed by the cross-lingual model and at the phrase level. Given a source sentence in the translation task, this model directly estimates the probability before translation that a sentence in the target LM training corpus is similar. Therefore, it does not require the translation task to be pre-translated as in monolingual adaptation, and can address the problem of noisy proliferation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, this is the first extensive and empirical study of using phrase TM based cross-lingual data selection for LM adaptation. This model learns the transfer probability of a multi-term phrase in a source sentence given a phrase in the target sentence of the LM training corpus. Compared with bag-of-words models and word-based TM that account for selecting single words in isolation, this model performs at the phrase level and captures some contextual information in modeling the selection of the phrase as a whole, and is thus potentially more effective. More precise data selection can be determined for phrases than for words. In this model, we propose a linear ranking model framework to further improve the performance, following the linear discriminant function (Duda et al., 2001; Collins, 2002; Gao et al., 2005) in pattern classification and information retrieval (IR), where different models are incorporated as features, as we will show in our experiments.", "cite_spans": [ { "start": 789, "end": 808, "text": "(Duda et al., 2001;", "ref_id": "BIBREF9" }, { "start": 809, "end": 823, "text": "Collins, 2002;", "ref_id": "BIBREF8" }, { "start": 824, "end": 841, "text": "Gao et al., 2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unlike the general TM in SMT, we explore the use of the TextRank algorithm (Mihalcea et al., 2004) to identify and eliminate unimportant words (e.g., non-topical words, common words) for corpus preprocessing, and construct the TM from important words. This reduces the average number of words in the cross-lingual data selection model, thus improving the efficiency. 
Moreover, TextRank utilizes the context information of words to assign term weights (Lee et al., 2008) , which allows the phrase TM based cross-lingual data selection model to exploit its advantage in capturing contextual information, thus further improving the performance.", "cite_spans": [ { "start": 71, "end": 94, "text": "(Mihalcea et al., 2004)", "ref_id": "BIBREF19" }, { "start": 437, "end": 455, "text": "(Lee et al., 2008)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. Section 2 introduces the related work of LM adaptation. Section 3 presents the framework of cross-lingual data selection for LM adaptation. Section 4 describes our proposed TM based cross-lingual data selection model: from word models to phrase models. In Section 5 we present large-scale experiments and analyses, followed by conclusions and future work in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "TF-IDF and cosine similarity have been widely used for LM adaptation (Zhao et al., 2004; Hildebrand et al., 2005; Kim, 2005; Foster and Kuhn, 2007) . Masskey and Sethy (2010) selected the auxiliary data by computing the centroid similarity score to the centroid of the in-domain data. The main idea of these methods is to select the sentences which are similar to the first pass translation hypotheses or the in-domain corpus from the large LM training corpus, and to estimate a bias LM for the SMT system to improve the translation quality. Tam et al. (2007 and proposed a bilingual-LSA model for LM adaptation. They integrated the LSA marginal into the target generic LM using marginal adaptation which minimizes the Kullback-Leibler divergence between the adapted LM and the generic LM. Ananthakrishnan et al. (2011a) proposed CLS to bias the count and probability of the corresponding n-grams through weighting the LM training corpus. However, these two cross-lingual approaches focus on modifying the LM itself, which is different from the data selection method for LM adaptation. In our comparative experiments, we apply CLS for the first time to the task of cross-lingual data selection for LM adaptation. Due to the lack of a smoothing measure for the sparse vector representation in CLS, the similarity computation is not accurate, which degrades the performance of the adapted LM. To avoid this, we add a smoothing measure like TF-IDF, called CLS s , as we will discuss in the experiments. Snover et al. (2008) used a word TM based CLIR system (Xu et al., 2001) to select a subset of target documents comparable to the source document for adapting the LM. Because of the data sparseness in the document representation and because it operated at the document level, this model selected large quantities of irrelevant text, which may degrade the adapted LM (Ananthakrishnan et al., 2011b) . In our word TM based cross-lingual data selection model, we operate at the sentence level and add a smoothing mechanism by integrating the background word frequency model, and these can significantly improve the performance. Axelrod et al. (2011) proposed a bilingual cross-entropy difference to select data from a parallel corpus for domain adaptation, which captures the contextual information slightly, and outperformed the monolingual cross-entropy difference (Moore and Lewis, 2010), which first showed the advantage of bilingual data selection. 
However, its performance depends on the parallel in-domain corpus which is usually hard to find, and its application is assumed to be limited.", "cite_spans": [ { "start": 69, "end": 87, "text": "Zhao et al., 2004;", "ref_id": "BIBREF34" }, { "start": 88, "end": 112, "text": "Hildebrand et al., 2005;", "ref_id": "BIBREF14" }, { "start": 113, "end": 123, "text": "Kim, 2005;", "ref_id": "BIBREF15" }, { "start": 124, "end": 146, "text": "Foster and Kuhn, 2007)", "ref_id": "BIBREF11" }, { "start": 528, "end": 548, "text": "Tam et al. (2007 and", "ref_id": "BIBREF29" }, { "start": 1453, "end": 1473, "text": "Snover et al. (2008)", "ref_id": "BIBREF26" }, { "start": 1507, "end": 1524, "text": "(Xu et al., 2001)", "ref_id": "BIBREF32" }, { "start": 1797, "end": 1827, "text": "Ananthakrishnan et al., 2011b)", "ref_id": "BIBREF1" }, { "start": 2062, "end": 2083, "text": "Axelrod et al. (2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our LM adaptation is an unsupervised similar training data selection guided by TM based cross-lingual data selection model. For the source sentences in the translation task, we estimate a new LM, the bias LM, from the corresponding target LM training sentences which are selected as the similar sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Data Selection for Language Model Adaptation", "sec_num": "3" }, { "text": "Since the size of the selected sentences is small, the corresponding bias LM is specific and more effective, giving high probabilities to those phrases that occur in the desired output translations. Following the work of (Zhao et al., 2004; Snover et al., 2008) , the generic LM P g (w i |h) and the bias LM P b (w i |h) are combined using linear interpolation as the adapted LM P a (w i |h), which is shown to improve the performance over individual model,", "cite_spans": [ { "start": 221, "end": 240, "text": "(Zhao et al., 2004;", "ref_id": "BIBREF34" }, { "start": 241, "end": 261, "text": "Snover et al., 2008)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Data Selection for Language Model Adaptation", "sec_num": "3" }, { "text": "P a (w i |h) = \u00b5P g (w i |h) + (1 \u2212 \u00b5)P b (w i |h) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Data Selection for Language Model Adaptation", "sec_num": "3" }, { "text": "where the interpolation factor \u00b5 can be simply estimated using the Powell Search algorithm (Press et al., 1992) via cross-validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Data Selection for Language Model Adaptation", "sec_num": "3" }, { "text": "Our work focuses on TM based cross-lingual data selection model, from word model to phrase models, and the quality of this model is crucial to the performance of adapted LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Data Selection for Language Model Adaptation", "sec_num": "3" }, { "text": "Let Q = q 1 , . . . , q j be a source sentence in the translation task and S = w 1 , . . . , w i be a sentence in the general target LM training corpus, thus crosslingual data selection model can be framed probabilistically as maximizing the P (S|Q) . 
By Bayes' rule,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model for Cross-Lingual Data Selection (CLTM)", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(S|Q) = P(S)P(Q|S) / P(Q)", "eq_num": "(2)" } ], "section": "Translation Model for Cross-Lingual Data Selection (CLTM)", "sec_num": "4" }, { "text": "where the prior probability P(S) can be viewed as uniform, and P(Q) is constant across all sentences. Therefore, selecting a sentence to maximize P(S|Q) is equivalent to selecting a sentence that maximizes P(Q|S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model for Cross-Lingual Data Selection (CLTM)", "sec_num": "4" }, { "text": "Cross-Lingual Data Selection (CLWTM)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Based Translation Model for", "sec_num": "4.1" }, { "text": "Following the work of (Xu et al., 2001; Snover et al., 2008) , CLWTM can be described as", "cite_spans": [ { "start": 22, "end": 39, "text": "(Xu et al., 2001;", "ref_id": "BIBREF32" }, { "start": 40, "end": 60, "text": "Snover et al., 2008)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Q|S) = \u220f_{q \u2208 Q} P(q|S)", "eq_num": "(3)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(q|S) = \u03b1P(q|C_q) + (1 \u2212 \u03b1) \u2211_{w \u2208 S} P(q|w)P(w|S)", "eq_num": "(4)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "where \u03b1 is the interpolation weight empirically set as a constant 1 , P(q|w) is the word-based TM which is estimated by IBM Model 1 (Brown et al., 1993) from the parallel corpus, and P(q|C_q) and P(w|S) are the un-smoothed background and sentence models, respectively, estimated using maximum likelihood estimation (MLE) as", "cite_spans": [ { "start": 133, "end": 153, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "P(q|C_q) = freq(q, C_q) / |C_q| (5) P(w|S) = freq(w, S) / |S| (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "where C_q refers to the translation task, freq(q, C_q) refers to the number of times q occurs in C_q, freq(w, S) refers to the number of times w occurs in S, and |C_q| and |S| are the sizes of the translation task and the current target sentence, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.1.1" }, { "text": "Because of the data sparseness at the sentence level, which degrades the model, Equation (6) does not perform well in our data selection experiments. 
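For concreteness, a minimal sketch of this un-smoothed selection score (Equations (3)-(6)) is given below in Python; the tokenized sentences, the IBM Model 1 lexicon lex_prob, and the interpolation weight value are illustrative placeholders rather than details of our actual implementation.

    import math
    from collections import Counter

    def clwtm_score(source_sent, target_sent, lex_prob, task_counts, task_size, alpha=0.3):
        # Un-smoothed CLWTM score: log P(Q|S) = sum over q of log P(q|S), Equations (3)-(6).
        # lex_prob[(q, w)] is the IBM Model 1 probability P(q|w); task_counts holds
        # freq(q, C_q) over the source-side translation task (placeholder structures).
        sent_counts = Counter(target_sent)           # freq(w, S)
        sent_len = float(len(target_sent))           # |S|
        log_score = 0.0
        for q in source_sent:
            background = task_counts.get(q, 0) / float(task_size)         # P(q|C_q), Eq. (5)
            translation = sum(lex_prob.get((q, w), 0.0) * (c / sent_len)  # P(q|w) P(w|S), Eq. (6)
                              for w, c in sent_counts.items())
            p = alpha * background + (1.0 - alpha) * translation          # Eq. (4)
            log_score += math.log(p) if p > 0.0 else float('-inf')
        return log_score

In practice, every sentence S in the LM training corpus is scored against the translation task in this way, and the top-ranked sentences are used to train the bias LM of Equation (1).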
Inspired by the work of (Berger et al., 1999) in IR, we adopt the following smoothing mechanism:", "cite_spans": [ { "start": 173, "end": 194, "text": "(Berger et al., 1999)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Candidate Sentences", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(q|S) = \u03b1P(q|C_q) + (1 \u2212 \u03b1) \u2211_{w \u2208 S} P(q|w)P_s(w|S) (7) P_s(w|S) = \u03b2P(w|C_s) + (1 \u2212 \u03b2)P(w|S) (8) P(w|C_s) = freq(w, C_s) / |C_s|", "eq_num": "(9)" } ], "section": "Ranking Candidate Sentences", "sec_num": "4.1.2" }, { "text": "where P(w|C_s) is the un-smoothed background model, estimated using MLE as in Equation (5), C_s refers to the LM training corpus and |C_s| refers to its size. Here, \u03b2 is the interpolation weight; notice that letting \u03b2 = 0 in Equation (8) reduces the model to the un-smoothed model in Equation (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Candidate Sentences", "sec_num": "4.1.2" }, { "text": "Cross-Lingual Data Selection (CLPTM)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-Based Translation Model for", "sec_num": "4.2" }, { "text": "The phrase-based TM (Koehn et al., 2003; Och and Ney, 2004) has shown superior performance compared to the word-based TM. In this paper, the goal of the phrase-based TM is to transfer S into Q.", "cite_spans": [ { "start": 20, "end": 40, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF16" }, { "start": 41, "end": 59, "text": "Och and Ney, 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Rather than transferring single words in isolation, the phrase model transfers one sequence of words into another sequence of words, thus incorporating contextual information. Inspired by the work on web search (Gao et al., 2010) and question retrieval in community question answering (Q&A) (Zhou et al., 2011) , we assume the following generative process: first the sentence S is broken into K non-empty word sequences w_1, . . . , w_K, then each is transferred into a new non-empty word sequence q_1, . . . , q_K, and finally these phrases are permuted and concatenated to form the sentence Q, where q and w denote phrases or consecutive sequences of words.", "cite_spans": [ { "start": 211, "end": 229, "text": "(Gao et al., 2010)", "ref_id": "BIBREF13" }, { "start": 288, "end": 307, "text": "(Zhou et al., 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "To formulate this generative process, let U denote the segmentation of S into K phrases w_1, . . . , w_K, and let V denote the K phrases q_1, . . . , q_K; we refer to these (w_i, q_i) pairs as bi-phrases. Finally, let M denote a permutation of K elements representing the final ranking step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Next we place a probability distribution over rewrite pairs. Let B(S, Q) denote the set of (U, V, M) triples that transfer S into Q. 
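As a toy illustration of these objects (the phrases below are hypothetical placeholders, not drawn from our data), one (U, V, M) triple in B(S, Q) for a short sentence pair might look as follows:

    # Hypothetical example of one (U, V, M) triple: S is segmented into K = 3
    # phrases (U), each phrase of S is transferred into a phrase of Q (V),
    # and M permutes the transferred phrases into the order in which they occur in Q.
    S = ['w1', 'w2', 'w3', 'w4', 'w5']
    U = [('w1', 'w2'), ('w3',), ('w4', 'w5')]    # segmentation of S into K phrases
    V = [('q1',), ('q2', 'q3'), ('q4',)]         # transferred phrases, one per segment
    M = [2, 0, 1]                                # permutation giving the final order
    bi_phrases = list(zip(U, V))                 # the (w_i, q_i) pairs
    Q = [token for index in M for token in V[index]]   # here Q == ['q4', 'q1', 'q2', 'q3']
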
Here we assume a uniform probability over segmentations, so the phrase-based selection probability can be formulated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Q|S) \u221d \u2211_{(U,V,M) \u2208 B(S,Q)} P(V|S, U) \u2022 P(M|S, U, V)", "eq_num": "(10)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Then, we use the maximum approximation to the sum:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Q|S) \u2248 max_{(U,V,M) \u2208 B(S,Q)} P(V|S, U) \u2022 P(M|S, U, V)", "eq_num": "(11)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Although we have defined a generative model for transferring S into Q, our goal is to calculate the ranking score function over existing Q and S. However, this model cannot be used directly for sentence ranking because Q and S are often of different lengths; the length of S is almost 1.5 times that of Q in our corpus, leaving many words in S unaligned to any word in Q. This is another key difference between our task and SMT. As pointed out by the previous work (Berger and Lafferty, 1999; Gao et al., 2010; Zhou et al., 2011) , sentence-query selection requires a distillation of the sentence, while translation of natural language tolerates little being thrown away. Thus we restrict our attention to those key sentence words that form the distillation of S, do not consider the unaligned words in S, and assume that Q is transferred only from the key sentence words.", "cite_spans": [ { "start": 469, "end": 496, "text": "(Berger and Lafferty, 1999;", "ref_id": "BIBREF3" }, { "start": 497, "end": 514, "text": "Gao et al., 2010;", "ref_id": "BIBREF13" }, { "start": 515, "end": 533, "text": "Zhou et al., 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "In this paper, the key sentence words are identified via word alignment. Let A = a_1 . . . a_J be the \"hidden\" word alignment, which describes a mapping from a term position j in Q to a word position a_j in S. 
We assume that the positions of the key sentence words are determined by the Viterbi alignment \u00c2, which can be obtained using IBM Model 1 (Brown et al., 1993) as follows:", "cite_spans": [ { "start": 348, "end": 368, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "\u00c2 = arg max_A P(Q, A|S) = arg max_A P(J|I) \u220f_{j=1}^{J} P(q_j|w_{a_j}) = [arg max_{a_j} P(q_j|w_{a_j})]_{j=1}^{J} (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Given \u00c2, when scoring a given Q/S pair, we restrict our attention to those (U, V, M) triples that are consistent with \u00c2, which we denote as B(S, Q, \u00c2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "Here, consistency requires that if two words are aligned in \u00c2, then they must appear in the same bi-phrase (w_i, q_i). Once the word alignment is fixed, the final permutation is uniquely determined, so we can safely discard that factor. Then Equation (11) can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Q|S) \u2248 max_{(U,V,M) \u2208 B(S,Q,\u00c2)} P(V|S, U)", "eq_num": "(13)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "For the sole remaining factor P(V|S, U), we assume that the segmented sequence V = q_1, . . . , q_K is generated from left to right by transferring each phrase w_1, . . . , w_K independently, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(V|S, U) = \u220f_{k=1}^{K} P(q_k|w_k)", "eq_num": "(14)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "where P(q_k|w_k) is a phrase translation probability computed from the parallel corpus, which can be estimated in two ways (Koehn et al., 2003; Och and Ney, 2004) , relative frequency and lexical weighting, and thus has two formats: the phrase translation probability and the lexical weight probability. In order to find the maximum probability assignment P(Q|S) efficiently, we use a dynamic programming approach, somewhat similar to the monotone decoding algorithm described in (Och, 2002) . We define the quantity \u03b1_j as the maximal probability of the most likely sequence of phrases in S covering the first j words in Q; therefore the probability can be calculated using the following recursion:", "cite_spans": [ { "start": 126, "end": 146, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF16" }, { "start": 147, "end": 165, "text": "Och and Ney, 2004)", "ref_id": "BIBREF23" }, { "start": 479, "end": 490, "text": "(Och, 2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "step (1). 
Initialization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 0 = 1", "eq_num": "(15)" } ], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "step (2). Induction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Sentence Selection Model", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 j = j