{
"paper_id": "H05-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:07.142307Z"
},
"title": "HMM Word and Phrase Alignment for Statistical Machine Translation",
"authors": [
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University Engineering",
"location": {
"postCode": "CB2 1PZ",
"settlement": "Cambridge",
"country": "UK"
}
},
"email": "dengyg@jhu.edu"
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University Engineering",
"location": {
"postCode": "CB2 1PZ",
"settlement": "Cambridge",
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "HMM-based models are developed for the alignment of words and phrases in bitext. The models are formulated so that alignment and parameter estimation can be performed efficiently. We find that Chinese-English word alignment performance is comparable to that of IBM Model-4 even over large training bitexts. Phrase pairs extracted from word alignments generated under the model can also be used for phrase-based translation, and in Chinese to English and Arabic to English translation, performance is comparable to systems based on Model-4 alignments. Direct phrase pair induction under the model is described and shown to improve translation performance.",
"pdf_parse": {
"paper_id": "H05-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "HMM-based models are developed for the alignment of words and phrases in bitext. The models are formulated so that alignment and parameter estimation can be performed efficiently. We find that Chinese-English word alignment performance is comparable to that of IBM Model-4 even over large training bitexts. Phrase pairs extracted from word alignments generated under the model can also be used for phrase-based translation, and in Chinese to English and Arabic to English translation, performance is comparable to systems based on Model-4 alignments. Direct phrase pair induction under the model is described and shown to improve translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Describing word alignment is one of the fundamental goals of Statistical Machine Translation (SMT). Alignment specifies how word order changes when a sentence is translated into another language, and given a sentence and its translation, alignment specifies translation at the word level. It is straightforward to extend word alignment to phrase alignment: two phrases align if their words align.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Deriving phrase pairs from word alignments is now widely used in phrase-based SMT. Parameters of a statistical word alignment model are estimated from bitext, and the model is used to generate word alignments over the same bitext. Phrase pairs are extracted from the aligned bitext and used in the SMT system. With this approach the quality of the underlying word alignments can have a strong influence on phrase-based SMT system performance. The common practice therefore is to extract phrase pairs from the best attainable word alignments. Currently, Model-4 alignments (Brown and others, 1993) as produced by GIZA++ (Och and Ney, 2000) are often the best that can be obtained, especially with large bitexts.",
"cite_spans": [
{
"start": 572,
"end": 596,
"text": "(Brown and others, 1993)",
"ref_id": null
},
{
"start": 619,
"end": 638,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite its modeling power and widespread use, Model-4 has shortcomings. Its formulation is such that maximum likelihood parameter estimation and bitext alignment are implemented by approximate, hill-climbing, methods. Consequently parameter estimation can be slow, memory intensive, and difficult to parallelize. It is also difficult to compute statistics under Model-4. This limits its usefulness for modeling tasks other than the generation of word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe an HMM alignment model developed as an alternative to Model-4. In the word alignment and phrase-based translation experiments to be presented, its performance is comparable or improved relative to Model-4. Practically, we can train the model by the Forward-Backward algorithm, and by parallelizing estimation, we can control memory usage, reduce the time needed for training, and increase the bitext used for training. We can also compute statistics under the model in ways not practical with Model-4, and we show the value of this in the extraction of phrase pairs from bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to develop a generative probabilistic model of Word-to-Phrase (WtoP) alignment. We start with an l-word source sentence e = e l 1 , and an m-word target sentence f = f m 1 , which is realized as a sequence of K phrases: f = v K 1 . Each phrase is generated as a translation of one source word, which is determined by the alignment sequence a K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "1 : e a k \u2192 v k . The length of each phrase is specified by the process \u03c6 K 1 , which is constrained so that K k=1 \u03c6 k = m. We also allow target phrases to be inserted, i.e. to be generated by a NULL source word. For this, we define a binary hallucination sequence h K 1 : if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "h k = 0, then NULL \u2192 v k ; if h k = 1 then e a k \u2192 v k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "With all these quantities gathered into an alignment a = (\u03c6 K 1 , a K 1 , h K 1 , K), the modeling objective is to realize the conditional distribution P (f , a|e). With the assumption that P (f , a|e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": ") = 0 if f = v K 1 , we write P (f , a|e) = P (v K 1 , K, a K 1 , h K 1 , \u03c6 K 1 |e) and P (v K 1 , K, a K 1 , h K 1 , \u03c6 K 1 |e) = \u01eb(m|l) \u00d7 P (K|m, e) \u00d7 P (a K 1 , \u03c6 K 1 , h K 1 |K, m, e) \u00d7 P (v K 1 |a K 1 , h K 1 , \u03c6 K 1 , K, m, e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "We now describe the component distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "Sentence Length \u01eb(m|l) determines the target sentence length. It is not needed during alignment, where sentence lengths are known, and is ignored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "Phrase Count P (K|m, e) specifies the number of target phrases. We use a simple, single parameter distribution, with \u03b7 = 8.0 throughout P (K|m, e) = P (K|m, l) \u221d \u03b7 K Word-to-Phrase Alignment Alignment is a Markov process that specifies the lengths of phrases and their alignment with source words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "P (a K 1 , h K 1 , \u03c6 K 1 |K, m, e) = K k=1 P (a k , h k , \u03c6 k |a k\u22121 , \u03c6 k\u22121 , e) = K k=1 p(a k |a k\u22121 , h k ; l) d(h k ) n(\u03c6 k ; e a k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "The actual word-to-phrase alignment (a k ) is a firstorder Markov process, as in HMM-based word-toword alignment (Vogel et al., 1996) . It necessarily depends on the hallucination variable",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "p(a j |a j\u22121 , h j ; l) = \uf8f1 \uf8f2 \uf8f3 1 a j = a j\u22121 , h j = 0 0 a j = a j\u22121 , h j = 0 a(a j |a j\u22121 ; l) h j = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
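{
"text": "As an illustration (ours, not from the paper), the transition rule above can be written as a small Python helper; the dictionary a_table standing in for the trained jump distribution a(a_j | a_{j-1}; l) is hypothetical.\n\ndef transition_prob(a_j, a_prev, h_j, a_table):\n    # Hallucinated phrase (h_j == 0): the alignment position must not move.\n    if h_j == 0:\n        return 1.0 if a_j == a_prev else 0.0\n    # Real source word (h_j == 1): use the Markov jump distribution a(a_j | a_prev; l),\n    # here a hypothetical dictionary keyed by (previous position, current position).\n    return a_table[(a_prev, a_j)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},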
{
"text": "This formulation allows target phrases to be inserted without disrupting the Markov dependencies of phrases aligned to actual source words. The phrase length model n(\u03c6; e) gives the probability that a word e produces a phrase with \u03c6 words in the target language; n(\u03c6; e) is defined for \u03c6 = 1, \u2022 \u2022 \u2022 , N . The hallucination process is a simple i.i.d. process, where d(0) = p 0 , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "d(1) = 1 \u2212 p 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "Word-to-Phrase Translation The translation of words to phrases is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "P (v K 1 |a K 1 , h K 1 , \u03c6 K 1 , K, m, e) = K k=1 p(v k |e a k , h k , \u03c6 k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "We introduce the notation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "v k = v k [1], . . . , v k [\u03c6 k ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "and a dummy variable x k (for phrase insertion) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "x k = e a k h k = 1 NULL h k = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "We define two models of word-to-phrase translation. This simplest is based on context-independent wordto-word translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "p(v k |e a k , h k , \u03c6 k ) = \u03c6 k j=1 t(v k [j] | x k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "We also define a model that captures foreign word context with bigram translation probabilities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "p(v k |e a k , h k , \u03c6 k ) = t(v k [1] | x k ) \u03c6 k j=2 t 2 (v k [j] | v k [j \u2212 1], x k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
{
"text": "Here, t(f |e) is the usual context independent wordto-word translation probability. The bigram translation probability t 2 (f |f \u2032 , e) specifies the likelihood that target word f is to follow f \u2032 in a phrase generated by source word e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},
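{
"text": "A minimal sketch (ours, not the authors' code) of the two word-to-phrase translation models above; the dictionaries t and t2, holding unigram and bigram translation probabilities, are assumed inputs and smoothing is omitted.\n\ndef phrase_translation_prob(v, x, t, t2=None):\n    # v: list of target words in the phrase; x: the source word, or 'NULL' when h_k = 0.\n    if t2 is None:\n        # Context-independent model: product of word-to-word probabilities.\n        p = 1.0\n        for w in v:\n            p *= t[(w, x)]\n        return p\n    # Bigram model: the first word uses t, later words condition on the previous target word.\n    p = t[(v[0], x)]\n    for j in range(1, len(v)):\n        p *= t2[(v[j], v[j-1], x)]\n    return p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Word and Phrase Alignment",
"sec_num": "2"
},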
{
"text": "The formulation of the WtoP alignment model was motivated by both the HMM word alignment model (Vogel et al., 1996) and IBM Model-4 with the goal of building on the strengths of each.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of the Model and Prior Work",
"sec_num": "2.1"
},
{
"text": "The relationship with the word-to-word HMM alignment model is straightforward. For example, constraining the phrase length component n(\u03c6; e) to permit only phrases of one word would give a word-to-word HMM alignment model. The extensions introduced are the phrase count, and the phrase length models, and the bigram translation distribution. The hallucination process is motivated by the use of NULL alignments into Markov alignment models as done by (Och and Ney, 2003) .",
"cite_spans": [
{
"start": 451,
"end": 470,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of the Model and Prior Work",
"sec_num": "2.1"
},
{
"text": "The phrase length model is motivated by Toutanova et al. (2002) who introduced 'stay' probabilities in HMM alignment as an alternative to word fertility. By comparison, Word-to-Phrase HMM alignment models contain detailed models of state occupancy, motivated by the IBM fertility model, which are more powerful than a single staying parameter. In fact, the WtoP model is a segmental Hidden Markov Model (Ostendorf et al., 1996) , in which states emit observation sequences.",
"cite_spans": [
{
"start": 40,
"end": 63,
"text": "Toutanova et al. (2002)",
"ref_id": "BIBREF14"
},
{
"start": 403,
"end": 427,
"text": "(Ostendorf et al., 1996)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of the Model and Prior Work",
"sec_num": "2.1"
},
{
"text": "Comparison with Model-4 is less straightforward. The main features of Model-4 are NULL source words, source word fertility, and the distortion model. The WtoP alignment model includes the first two of these. However distortion, which allows hypothesized words to be distributed throughout the target sentence, is difficult to incorporate into a model that supports efficient DP-based search. We preserve efficiency in the WtoP model by insisting that target words form connected phrases; this is not as general as Model-4 distortion. This weakness is somewhat offset by a more powerful (Markov) alignment process as well as by the phrase count distribution. Despite these differences, the WtoP alignment model and Model-4 allow similar alignments. For example, in Figure 1: Word-to-Word and Word-to-Phrase Links f 1 , f 3 , and f 4 to be generated by e 1 with a fertility of 3. Under the WtoP model, e 1 could generate f 1 and f 3 f 4 with phrase lengths 1 and 2, respectively: source words can generate more than one phrase. This alignment could also be generated via four single word foreign phrases. The balance between word-to-word and word-to-phrase alignments is set by the phrase count distribution parameter \u03b7. As \u03b7 increases, alignments with shorter phrases are favored, and for very large \u03b7 the model allows only word-to-word alignments (see Fig. 2 ). Although the WtoP alignment model is more complex than the word-to-word HMM alignment model, the Baum-Welch and Viterbi algorithms can still be used. Word-to-word alignments are generated by the Viterbi algorithm:\u00e2 = argmax a P (f , a|e); if e a k \u2192 v k , e a k is linked to all the words in v k .",
"cite_spans": [],
"ref_spans": [
{
"start": 1352,
"end": 1358,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Properties of the Model and Prior Work",
"sec_num": "2.1"
},
{
"text": "The bigram translation probability relies on word context, known to be helpful in translation (Berger et al., 1996) , to improve the identification of target phrases. As an example, f is the Chinese word for \"world trade center\". Table 1 shows how the likelihood of the correct English phrase is improved with bigram translation probabilities; this example is from the C\u2192E, N=4 system of Table 2 . There are of course much prior work in translation that incorporates phrases. Sumita et al. (2004) develop a model of phrase-to-phrase alignment, which while based on HMM alignment process, appears to be deficient. Marcu and Wong (2002) propose a model to learn lexical correspondences at the phrase level. To our knowledge, ours is the first nonsyntactic model of bitext alignment (as opposed to translation) that links words and phrases.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 476,
"end": 496,
"text": "Sumita et al. (2004)",
"ref_id": "BIBREF13"
},
{
"start": 613,
"end": 634,
"text": "Marcu and Wong (2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 388,
"end": 395,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Properties of the Model and Prior Work",
"sec_num": "2.1"
},
{
"text": "We now discuss estimation of the WtoP model parameters by the EM algorithm. Since the WtoP model can be treated as an HMM with a very complex state space, it is straightforward to apply Baum-Welch parameter estimation. We show the forward recursion as an example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},
{
"text": "Given a sentence pair (e l 1 , f m 1 ), the forward probability \u03b1 j (i, \u03c6) is defined as the probability of generating the first j target words with the added condition that the target words f j j\u2212\u03c6+1 form a phrase aligned to source word e i . It can be calculated recursively (omitting the hallucination process, for simplicity) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},
{
"text": "\u03b1 j (i, \u03c6) = i \u2032 ,\u03c6 \u2032 \u03b1 j\u2212\u03c6 (i \u2032 , \u03c6 \u2032 )a(i|i \u2032 , l) \u2022 \u03b7 \u2022 n(\u03c6; e i ) \u2022 t(f j\u2212\u03c6+1 |e i ) \u2022 j j \u2032 =j\u2212\u03c6+2 t 2 (f j \u2032 |e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},
{
"text": "This recursion is over a trellis of l(N + 1)m nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},
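{
"text": "As a rough sketch of this recursion (ours, not the authors' implementation), with the hallucination process omitted as in the text and the model parameters passed in as plain dictionaries (a_table for the jump distribution, n_table for phrase lengths, t and t2 for the translation tables); all of these names are hypothetical.\n\ndef forward(f, e, a_table, n_table, t, t2, N, eta=8.0):\n    m, l = len(f), len(e)\n    # alpha[j][(i, phi)]: probability of generating f[0:j] with the last phi target\n    # words forming a phrase aligned to source word e[i].\n    alpha = [dict() for _ in range(m + 1)]\n    alpha[0][(None, 0)] = 1.0  # empty prefix\n    for j in range(1, m + 1):\n        for i in range(l):\n            for phi in range(1, min(N, j) + 1):\n                phrase = f[j - phi:j]\n                # Emission: first word from t, remaining words from the bigram table.\n                emit = t[(phrase[0], e[i])]\n                for k in range(1, phi):\n                    emit *= t2[(phrase[k], phrase[k - 1], e[i])]\n                total = 0.0\n                for (i_prev, _), prob in alpha[j - phi].items():\n                    jump = 1.0 if i_prev is None else a_table[(i_prev, i)]\n                    total += prob * jump\n                alpha[j][(i, phi)] = total * eta * n_table[(phi, e[i])] * emit\n    return alpha",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},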
{
"text": "Models are trained from a flat-start. We begin with 10 iterations of EM to train Model-1, followed by 5 EM iterations to train Model-2 (Brown and others, 1993) . We initialize the parameters of the wordto-word HMM alignment model by collecting word alignment counts from the Model-2 Viterbi alignments, and refine the word-to-word HMM alignment model by 5 iterations of the Baum-Welch algorithm. We increase the order of the WtoP model (N ) from 2 to the final value in increments of 1, by performing 5 Baum Welch iterations at each step. At the final value of N , we introduce the bigram translation probability; we use Witten-Bell smoothing (1991) as a backoff strategy for t 2 , and other strategies are possible.",
"cite_spans": [
{
"start": 135,
"end": 159,
"text": "(Brown and others, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded Alignment Model Estimation",
"sec_num": "3"
},
{
"text": "We now investigate bitext word alignment performance. We start with the FBIS Chinese/English parallel corpus which consists of approx. 10M English/7.5M Chinese words. The Chinese side of the corpus is segmented into words by the LDC segmenter 1 . The alignment test set consists of 124 sentences from the NIST 2001 dry-run MT-eval 2 set that are manually word aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "We first analyze the distribution of word links within these manual alignments. Of the Chinese words which are aligned to more than one English words, 82% of these words align with consecutive English words (phrases). In the other direction, among all English words which are aligned to multiple Chinese words, 88% of these align to Chinese phrases. In this collection, at least, word-to-phrase alignments are plentiful. Alignment performance is measured by the Alignment Error Rate (AER) (Och and Ney, 2003) ",
"cite_spans": [
{
"start": 489,
"end": 508,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "AER(B; B \u2032 ) = 1 \u2212 2 \u00d7 |B \u2229 B \u2032 |/(|B \u2032 | + |B|)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "where B is a set reference word links, and B \u2032 are the word links generated automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
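{
"text": "For concreteness, the AER formula can be computed directly from the two link sets (a small sketch of ours, with links represented as (source position, target position) pairs):\n\ndef aer(reference_links, hypothesis_links):\n    # AER(B; B') = 1 - 2 * |B intersect B'| / (|B| + |B'|), with B the reference links.\n    B, B_prime = set(reference_links), set(hypothesis_links)\n    return 1.0 - 2.0 * len(B & B_prime) / (len(B) + len(B_prime))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},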
{
"text": "AER gives a general measure of word alignment quality. We are also interested in how the model performs over the word-to-word and word-to-phrase alignments it supports. We split the reference alignments into two subsets: B 1\u22121 contains word-toword reference links (e.g. 1\u21921 in Fig 1) ; and B 1\u2212N contains word-to-phrase reference links (e.g. 1\u21923, 1\u21924 in Fig 1) ; The automatic alignment B \u2032 is partitioned similarly. We define additional AERs:",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 283,
"text": "Fig 1)",
"ref_id": null
},
{
"start": 354,
"end": 360,
"text": "Fig 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "AER 1\u22121 = AER(B 1\u22121 , B \u2032 1\u22121 ), and AER 1\u2212N = AER(B 1\u2212N , B \u2032 1\u2212N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": ", which measure word-to-word and word-to-phrase alignment, separately. Table 2 presents the three AER measurements for the WtoP alignment models trained as described in Section 3. GIZA++ Model 4 alignment performance is also presented for comparison. We note first that the word-to-word HMM (N=1) alignment model is worse than Model 4, as expected. For the WtoP models in the C\u2192E direction, we see reduced AER for phrases lengths up to 4, although in the E\u2192C direction, AER is reduced only for phrases of length 2; performance for N > 2 is not reported.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "In introducing the bigram phrase translation (the bigram t-table), there is a tradeoff between wordto-word and word-to-phrase alignment quality. As mentioned, the bigram t-table increases the likelihood of word-to-phrase alignments. In both translation directions, this reduces the AER 1\u2212N . However, it also causes increases in AER 1\u22121 , primarily due to a drop in recall: fewer word-to-word alignments are produced. For C\u2192E, this is not severe enough to cause an overall AER increase; however, in E\u2192C, AER does increase. Fig. 2 (C\u2192E, N=4) shows how the 1-1 and 1-N alignment behavior is balanced by the phrase count parameter. As \u03b7 increases, the model favors alignments with more word-to-word links and fewer word-to-phrase links; the overall Alignment Error Rate (AER) suggests a good balance at \u03b7 = 8.0.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 540,
"text": "Fig. 2 (C\u2192E, N=4)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "After observing that the WtoP model performs as well as Model-4 over the FBIS C-E bitext, we investigated performance over these large bitexts : -\"NEWS\" containing non-UN parallel Chinese/English corpora from LDC (mainly FBIS, Xinhua, Hong Kong, Sinorama, and Chinese Treebank).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "-\"NEWS+UN01-02\" also including UN parallel corpora from the years 2001 and 2002.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "-\"ALL C-E\" refers to all the C-E bitext available from LDC as of his submission; this consists of the NEWS corpora with the UN bitext from all years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "Over all these collections, WtoP alignment performance (Table 3) is comparable to that of Model-4. We do note a small degradation in the E\u2192C WtoP alignments. It is quite possible that this one-to-many model suffers slightly with English as the source and Chinese as the target, since English sentences tend to be longer. Notably, simply increasing the amount of bitext used in training need not improve AER. However, larger aligned bitexts can give improved phrase pair coverage of the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 64,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "One of the desirable features of HMMs is that the Table 3 : AER Over Large C-E Bitexts.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
{
"text": "Forward-Backward steps can be run in parallel: bitext is partitioned; the Forward-Backward algorithm is run over the subsets on different CPUs; statistics are merged to reestimate model parameters. Partitioning the bitext also reduces the memory usage, since different cooccurrence tables can be kept for each partition. With the \"ALL C-E\" bitext collection, a single set of WtoP models (C\u2192E, N=4, bigram t-table) can be trained over 200M words of Chinese-English bitext by splitting training over 40 CPUs; each Forward-Backward process takes less than 2GB of memory and the training run finishes in five days. By contrast, the 96M English word NEWS+UN01-02 is about the largest C-E bitext over which we can train Model-4 with our GIZA++ configuration and computing infrastructure. Based on these and other experiments, in this paper we set a maximum value of N = 4 for F\u2192E; in E\u2192F, we set N=2 and omit the bigram phrase translation probability; \u03b7 is set to 8.0. We do not claim that this is optimal, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},
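{
"text": "A tiny illustration (ours) of the merge step in this parallelization: expected counts gathered on each bitext partition are summed before the M-step renormalization. The count dictionaries and the word-translation table used here are hypothetical stand-ins for the model's sufficient statistics.\n\nfrom collections import defaultdict\n\ndef merge_counts(partial_counts):\n    # Sum expected counts (e.g. word-translation counts) produced on different CPUs.\n    merged = defaultdict(float)\n    for counts in partial_counts:\n        for key, value in counts.items():\n            merged[key] += value\n    return merged\n\ndef reestimate_t_table(counts):\n    # M-step: normalize counts c(f, e) into t(f | e).\n    totals = defaultdict(float)\n    for (f, e), c in counts.items():\n        totals[e] += c\n    return {(f, e): c / totals[e] for (f, e), c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bitext Word Alignment",
"sec_num": "4"
},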
{
"text": "A common approach to phrase-based translation is to extract an inventory of phrase pairs (PPI) from bitext (Koehn et al., 2003) , For example, in the phraseextract algorithm (Och, 2002) , a word alignment a m 1 is generated over the bitext, and all word subsequences e i 2 i 1 and f j 2 j 1 are found that satisfy :",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 174,
"end": 185,
"text": "(Och, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a m 1 :\u00e2 j \u2208 [i 1 , i 2 ] iff j \u2208 [j 1 , j 2 ] .",
"eq_num": "(1)"
}
],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "The PPI comprises all such phrase pairs (e i 2 i 1 , f j 2 j 1 ). The process can be stated slightly differently. First, we define a set of alignments :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "A(i 1 , i 2 ; j 1 , j 2 ) = {a m 1 : a j \u2208 [i 1 , i 2 ] iff j \u2208 [j 1 , j 2 ]} . If\u00e2 m 1 \u2208 A(i 1 , i 2 ; j 1 , j 2 ) then (e i 2 i 1 , f j 2 j 1 ) form a phrase pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
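{
"text": "The membership test for A(i_1, i_2; j_1, j_2) can be written directly from Equation 1 (our sketch; the alignment is represented as a hypothetical dictionary mapping each aligned target position j to its source position a_j, with 1-indexed positions):\n\ndef in_alignment_set(a, i1, i2, j1, j2):\n    # The pair (e_{i1..i2}, f_{j1..j2}) is consistent with alignment a iff a_j falls\n    # inside [i1, i2] exactly for those target positions j inside [j1, j2].\n    for j, i in a.items():\n        inside_target = j1 <= j <= j2\n        inside_source = i1 <= i <= i2\n        if inside_target != inside_source:\n            return False\n    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},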
{
"text": "Viewed in this way, there are many possible alignments under which phrases might be paired, and the selection of phrase pairs need not be based on a single alignment. Rather than simply accepting a phrase pair (e i 2 i 1 , f j 2 j 1 ) if the unique MAP alignment satisfies Equation 1, we can assign a probability to phrases occurring as translation pairs :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "P (f , A(i 1 , i 2 ; j 1 , j 2 ) | e) = a : a m 1 \u2208A(i 1 ,i 2 ;j 1 ,j 2 ) P (f , a|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "For a fixed set of indices i 1 , i 2 , j 1 , j 2 , the quantity P (f , A(i 1 , i 2 ; j 1 , j 2 ) | e) can be computed efficiently using a modified Forward algorithm. Since P (f |e) can also be computed by the Forward algorithm, the phrase-to-phrase posterior distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": "P ( A(i 1 , i 2 ; j 1 , j 2 ) | f , e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
{
"text": ") is easily found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},
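{
"text": "In other words, the posterior is the ratio of the constrained and unconstrained Forward probabilities. A minimal sketch (ours), assuming those two quantities have already been computed for each candidate English span:\n\ndef phrase_posteriors(joint_probs, p_f_given_e):\n    # joint_probs maps a span (i1, i2) to P(f, A(i1, i2; j1, j2) | e) from the\n    # constrained Forward pass; dividing by P(f | e) from the unconstrained pass\n    # gives P(A(i1, i2; j1, j2) | f, e).\n    return {span: p / p_f_given_e for span, p in joint_probs.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Pair Induction",
"sec_num": "5"
},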
{
"text": "In the phrase-extract algorithm (Och, 2002) , the alignment\u00e2 is generated as follows: Model-4 is trained in both directions (e.g. F\u2192E and E\u2192F); two sets of word alignments are generated by the Viterbi algorithm for each set of models; and the two alignments are merged. This forms a static aligned bitext. Next, all foreign word sequences up to a given length (here, 5 words) are extracted from the test set. For each of these, a phrase pair is added to the PPI if the foreign phrase can be found aligned to an English phrase under Eq 1. We refer to the result as the Model-4 Viterbi Phrase-Extract PPI. Constructed in this way, the PPI is limited to phrase pairs which can be found in the Viterbi alignments. Some foreign phrases which do appear in the training bitext will not be included in the PPI because suitable English phrases cannot be found. To add these to the PPI we can use the phrase-tophrase posterior distribution to find English phrases as candidate translations. This adds phrases to the Viterbi Phrase-Extract PPI and increase the test set coverage. A somewhat ad hoc PPI Augmentation algorithm is given to the right.",
"cite_spans": [
{
"start": 32,
"end": 43,
"text": "(Och, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "Condition (A) extracts phrase pairs based on the geometric mean of the E\u2192F and F\u2192E posteriors (T g = 0.01 throughout). The threshold T p selects additional phrase pairs under a more forgiving criterion: as T p decreases, more phrase pairs are added and PPI coverage increases. Note that this algorithm is constructed specifically to improve a Viterbi PPI; it is certainly not the only way to extract phrase pairs under the phrase-to-phrase posterior distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "Once the PPI phrase pairs are set, the phrase translation probabilities are set based on the number of times each phrase pair is extracted from a sentence pair, i.e. from relative frequencies. For each foreign phrase v not in the Viterbi PPI :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "For all pairs (f m 1 , e l 1 ) and ) to the PPI if any of A, B, or C hold :",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 36,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "j 1 , j 2 s.t. f j 2 j 1 = v : For 1 \u2264 i 1 \u2264 i 2 \u2264 l, find f (i 1 , i 2 ) = P F \u2192E ( A(i 1 , i 2 ; j 1 , j 2 ) | e l 1 , f m 1 ) b(i 1 , i 2 ) = P E\u2192F ( A(i 1 , i 2 ; j 1 , j 2 ) | e l 1 , f m 1 ) g(i 1 , i 2 ) = f (1 1 , i 2 ) b(i 1 , i 2 ) (\u00ee 1 ,\u00ee 2 ) = argmax 1\u2264i 1 ,i 2 \u2264l g(i 1 , i 2 ) , and set u = e\u00ee 2 i 1 Add (u, v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "b(\u00ee 1 ,\u00ee 2 ) \u2265 T g and f (\u00ee 1 ,\u00ee 2 ) \u2265 T g (A) b(\u00ee 1 ,\u00ee 2 ) < T g and f (\u00ee 1 ,\u00ee 2 ) > T p (B) f (\u00ee 1 ,\u00ee 2 ) < T g and b(\u00ee 1 ,\u00ee 2 ) > T p (C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
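{
"text": "The span selection and acceptance logic above can be sketched as follows (our illustration); post_fe and post_ef are assumed to be dictionaries mapping candidate English spans (i1, i2) to the F\u2192E and E\u2192F phrase posteriors already computed for the fixed foreign phrase v.\n\nimport math\n\ndef augment_candidate(post_fe, post_ef, T_g=0.01, T_p=0.7):\n    # Pick the English span maximizing the geometric mean of the two posteriors,\n    # then accept it under condition (A), (B), or (C) from the text.\n    best_span, best_g = None, -1.0\n    for span in post_fe:\n        g = math.sqrt(post_fe[span] * post_ef[span])\n        if g > best_g:\n            best_span, best_g = span, g\n    f, b = post_fe[best_span], post_ef[best_span]\n    if (b >= T_g and f >= T_g) or (b < T_g and f > T_p) or (f < T_g and b > T_p):\n        return best_span\n    return None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},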
{
"text": "PPI Augmentation via Phrase-Posterior Induction HMM-based models are often used if posterior distributions are needed. Model-1 can also be used in this way (Venugopal et al., 2003) , although it is a relatively weak alignment model. By comparison, finding posterior distributions under Model-4 is difficult. The Word-to-Phrase alignment model appears not to suffer this tradeoff: it is a good model of word alignment under which statistics such as the phraseto-phrase posterior can be calculated.",
"cite_spans": [
{
"start": 156,
"end": 180,
"text": "(Venugopal et al., 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PPI Induction Strategies",
"sec_num": null
},
{
"text": "We evaluate the quality of phrase pairs extracted from the bitext through the translation performance of the Translation Template Model (TTM) (Kumar et al., 2005) , which is a phrase-based translation system implemented using weighted finite state transducers. Performance is measured by BLEU (Papineni and others, 2001 ).",
"cite_spans": [
{
"start": 142,
"end": 162,
"text": "(Kumar et al., 2005)",
"ref_id": "BIBREF3"
},
{
"start": 293,
"end": 319,
"text": "(Papineni and others, 2001",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "6"
},
{
"text": "We report performance on the NIST Chinese/English 2002 , 2003 and 2004 MT evaluation sets. These consist of 878, 919, and 901 sentences, respectively. Each Chinese sentence has 4 reference translations.",
"cite_spans": [
{
"start": 29,
"end": 54,
"text": "NIST Chinese/English 2002",
"ref_id": null
},
{
"start": 55,
"end": 61,
"text": ", 2003",
"ref_id": "BIBREF8"
},
{
"start": 62,
"end": 70,
"text": "and 2004",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese\u2192English Translation",
"sec_num": null
},
{
"text": "We evaluate two C\u2192E translation systems. The smaller system is built on the FBIS C-E bitext collection. The language model used for this system is a trigram word language model estimated with 21M (Stolcke, 2002 ).",
"cite_spans": [
{
"start": 196,
"end": 210,
"text": "(Stolcke, 2002",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese\u2192English Translation",
"sec_num": null
},
{
"text": "The larger system is based on alignments generated over all available C-E bitext (the \"ALL C-E\" collection of Section 4). The language model is an equal-weight interpolated trigram model trained over 373M English words taken from the English side of the bitext and the LDC Gigaword corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese\u2192English Translation",
"sec_num": null
},
{
"text": "We also evaluate our WtoP alignment models in Arabic-English translation. We report results on a small and a large system. In each, Arabic text is tokenized by the Buckwalter analyzer provided by LDC. We test our models on NIST Arabic/English 2002 , 2003 and 2004 MT evaluation sets that consists of 1043, 663 and 707 Arabic sentences, respectively. Each Arabic sentence has 4 reference translations. In the small system, the training bitext is from A-E News parallel text, with \u223c3.5M words on the English side. We follow the same training procedure and configurations as in Chinese/English system in both translation directions. The language model is an equal-weight interpolated trigram built over \u223c400M words from the English side of the bitext, including UN text, and the LDC English Gigaword collection. The large Arabic/English system employs the same language model. Alignments are generated over all A-E bitext available from LDC as of this submission; this consists of approx. 130M words on the English side.",
"cite_spans": [
{
"start": 223,
"end": 247,
"text": "NIST Arabic/English 2002",
"ref_id": null
},
{
"start": 248,
"end": 254,
"text": ", 2003",
"ref_id": "BIBREF8"
},
{
"start": 255,
"end": 263,
"text": "and 2004",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic\u2192English Translation",
"sec_num": null
},
{
"text": "We first look at translation performance of the small A\u2192E and C\u2192E systems, where alignment models are trained over the smaller bitext collections. The baseline systems (Table 4 , line 1) are based on Model-4 Viterbi Phrase-Extract PPIs.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "(Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "WtoP Model and Model-4 Comparison",
"sec_num": null
},
{
"text": "We compare WtoP alignments directly to Model-4 alignments by extracting PPIs from the WtoP alignments using the Viterbi Phrase-Extract procedure (Table 4 , line 3). In C\u2192E translation, performance is comparable to that of Model-4; in A\u2192E translation, performance lags slightly. As we add phrase pairs to the WtoP Viterbi Phrase-Extract PPI via the Phrase-Posterior Augmentation procedure (Table 4 , lines 4-7), we obtain a \u223c1% improvement in BLEU; the value of T p = 0.7 gives improvements across all sets. In C\u2192E translation, this yields good gains relative to Model-4, while in A\u2192E we match or improve the Model-4 performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "(Table 4",
"ref_id": "TABREF5"
},
{
"start": 388,
"end": 396,
"text": "(Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "WtoP Model and Model-4 Comparison",
"sec_num": null
},
{
"text": "The performance gains through PPI augmentation are consistent with increased PPI coverage of the test set. We tabulate the percentage of test set phrases that appear in each of the PPIs (the 'cvg' values in Table 4 ). The augmentation scheme is designed specifically to increase coverage, and we find that BLEU score improvements track the phrase coverage of the test set. This is further confirmed by the experiment of Table 4 , line 2 in which we take the PPI extracted from Model-4 Viterbi alignments, and add phrase pairs to it using the Phrase-Posterior augmentation scheme with T p = 0.7. We find that the augmentation scheme under the WtoP models can be used to improve the Model-4 PPI itself.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 214,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 420,
"end": 427,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "WtoP Model and Model-4 Comparison",
"sec_num": null
},
{
"text": "We also investigate C\u2192E and A\u2192E translation performance with PPIs extracted from large bitexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WtoP Model and Model-4 Comparison",
"sec_num": null
},
{
"text": "Performance of systems based on Model-4 Viterbi Phrase-Extract PPIs is shown in Table 4 , line 8. To train Model-4 using GIZA++, we split the bitexts into two (A-E) or three (C-E) partitions, and train models for each division separately; we find that memory usage is otherwise too great. These serve as a single set of alignments for the bitext, as if they had been generated under a single alignment model. When we translate with Viterbi Phrase-Extract PPIs taken from WtoP alignments created over all available bitext, we find comparable performance to the Model-4 baseline (Table 4 , line 9). Using the Phrase-Posterior augmentation scheme with T p = 0.7 yields further improvement (Table 4 , line 10). Pooling the sets to form two large C\u2192E and A\u2192E test sets, the A\u2192E system improvements are significant at a 95% level (Och, 2003) ; the C\u2192E systems are only equivalent.",
"cite_spans": [
{
"start": 824,
"end": 835,
"text": "(Och, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 577,
"end": 585,
"text": "(Table 4",
"ref_id": "TABREF5"
},
{
"start": 686,
"end": 694,
"text": "(Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "WtoP Model and Model-4 Comparison",
"sec_num": null
},
{
"text": "We have described word-to-phrase alignment models capable of good quality bitext word alignment. In Arabic-English and Chinese-English translation and alignment they compare well to Model-4, even with large bitexts. The model architecture was inspired by features of Model-4, such as fertility and distortion, but care was taken to ensure that dynamic programming procedures, such as EM and Viterbi alignment, could still be performed. There is practical value in this: training and alignment are easily parallelized. Working with HMMs also makes it straightforward to explore new modeling approaches. We show an augmentation scheme that adds to phrases extracted from Viterbi alignments; this improves translation with both the WtoP and the Model-4 phrase pairs, even though it would be infeasible to implement the scheme under Model-4 itself. We note that these models are still relatively simple, and we anticipate further alignment and translation improvement as the models are refined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.ldc.upenn.edu/Projects/Chinese 2 http://www.nist.gov/speech/tests/mt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments The TTM translation system was provided by Shankar Kumar. This work was funded by ONR MURI Grant N00014-01-1-0685.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, S. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown et al. 1993. The mathematics of machine translation: Parameter estimation. Computational Lin- guistics, 19:263-312.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical phrasebased translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase- based translation. In Proc. of HLT-NAACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A weighted finite state transducer translation template model for statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Natural Language Engineering",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kumar, Y. Deng, and W. Byrne. 2005. A weighted fi- nite state transducer translation template model for sta- tistical machine translation. Journal of Natural Lan- guage Engineering, 11(3).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of ACL, Hong Kong, China.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical Machine Translation: From Single Word Models to Alignment Templates",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 2002. Statistical Machine Translation: From Single Word Models to Alignment Templates. Ph.D. thesis, RWTH Aachen, Germany.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "From HMMs to segment models: a unified view of stochastic modeling for speech recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Digalakis",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kimball",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Trans",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Ostendorf, V. Digalakis, and O. Kimball. 1996. From HMMs to segment models: a unified view of stochas- tic modeling for speech recognition. IEEE Trans.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni et al. 2001. BLEU: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. SRILM -an extensible language mod- eling toolkit. In Proc. ICSLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "EBMT, SMT, Hybrid and More: ATR spoken language translation system",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sumita et al. 2004. EBMT, SMT, Hybrid and More: ATR spoken language translation system. In Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extentions to HMM-based statistical word alignment models",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ilhan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Toutanova, H. T. Ilhan, and C. Manning. 2002. Exten- tions to HMM-based statistical word alignment mod- els. In Proc. of EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Effective phrase translation extraction from alignment models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Venugopal, S. Vogel, and A. Waibel. 2003. Effective phrase translation extraction from alignment models. In Proc. of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "HMM based word alignment in statistical translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Vogel, H. Ney, and C. Tillmann. 1996. HMM based word alignment in statistical translation. In Proc. of the COLING.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression",
"authors": [
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1991,
"venue": "In IEEE Trans. Inform Theory",
"volume": "37",
"issue": "",
"pages": "1085--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. H. Witten and T. C. Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. In IEEE Trans. Inform Theory, volume 37, pages 1085-1094, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Balancing Word and Phrase Alignments",
"type_str": "figure"
},
"TABREF0": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "FBIS Bitext Alignment Error Rate.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>3950</td><td/><td/><td/><td/><td/><td/><td/><td>40</td></tr><tr><td/><td>3600</td><td/><td/><td/><td/><td/><td/><td/><td>38</td></tr><tr><td># of hypothesized links</td><td>2550 2900 3250</td><td/><td/><td/><td colspan=\"2\">1\u22121 Links 1\u2212N Links Total Links Overall AER</td><td/><td/><td>36 32 34</td><td>Overall AER</td></tr><tr><td/><td>2200</td><td/><td/><td/><td/><td/><td/><td/><td>30</td></tr><tr><td/><td>1850</td><td/><td/><td/><td/><td/><td/><td/><td>28</td></tr><tr><td/><td>1500 0</td><td>2 2</td><td>4 4</td><td>6 6</td><td>\u03b7 \u03b7</td><td>8 8</td><td>10 10</td><td>12 12</td><td>26 14</td></tr></table>"
},
"TABREF5": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Translation Analysis and Performance of PPI Extraction Procedures</td></tr><tr><td>words taken from the English side of the bitext; all</td></tr><tr><td>language models are built with the SRILM toolkit</td></tr><tr><td>using Kneser-Ney smoothing</td></tr></table>"
}
}
}
}