{ "paper_id": "D07-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:19:26.181214Z" }, "title": "Smooth Bilingual N-gram Translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "", "affiliation": {}, "email": "schwenk@lismi.fr" }, { "first": "Jos\u00e9", "middle": [ "A R" ], "last": "Fonollosa", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We address the problem of smoothing translation probabilities in a bilingual N-grambased statistical machine translation system. It is proposed to project the bilingual tuples onto a continuous space and to estimate the translation probabilities in this representation. A neural network is used to perform the projection and the probability estimation. Smoothing probabilities is most important for tasks with a limited amount of training material. We consider here the BTEC task of the 2006 IWSLT evaluation. Improvements in all official automatic measures are reported when translating from Italian to English. Using a continuous space model for the translation model and the target language model, an improvement of 1.5 BLEU on the test data is observed.", "pdf_parse": { "paper_id": "D07-1045", "_pdf_hash": "", "abstract": [ { "text": "We address the problem of smoothing translation probabilities in a bilingual N-grambased statistical machine translation system. It is proposed to project the bilingual tuples onto a continuous space and to estimate the translation probabilities in this representation. A neural network is used to perform the projection and the probability estimation. Smoothing probabilities is most important for tasks with a limited amount of training material. We consider here the BTEC task of the 2006 IWSLT evaluation. Improvements in all official automatic measures are reported when translating from Italian to English. Using a continuous space model for the translation model and the target language model, an improvement of 1.5 BLEU on the test data is observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of statistical machine translation (SMT) is to produce a target sentence e from a source sentence f . Among all possible target language sentences the one with the highest probability is chosen: where Pr(f |e) is the translation model and Pr(e) is the target language model. This approach is usually referred to as the noisy source-channel approach in statistical machine translation (Brown et al., 1993) .", "cite_spans": [ { "start": 393, "end": 413, "text": "(Brown et al., 1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "e * =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "During the last few years, the use of context in SMT systems has provided great improvements in translation. SMT has evolved from the original word-based approach to phrase-based translation systems (Och et al., 1999; Koehn et al., 2003) . A phrase is defined as a group of source wordsf that should be translated together into a group of target words\u1ebd. The translation model in phrase-based systems includes the phrase translation probabilities in both directions, i.e. 
P (\u1ebd|f ) and P (f |\u1ebd).", "cite_spans": [ { "start": 199, "end": 217, "text": "(Och et al., 1999;", "ref_id": "BIBREF12" }, { "start": 218, "end": 237, "text": "Koehn et al., 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of a maximum entropy approach simplifies the introduction of several additional models explaining the translation process : The feature functions h i are the system models and the \u03bb i weights are typically optimized to maximize a scoring function on a development set (Och and Ney, 2002) . The phrase translation probabilities P (\u1ebd|f ) and P (f |\u1ebd) are usually obtained using relative frequency estimates. Statistical learning theory, however, tells us that relative frequency estimates have several drawbacks, in particular high variance and low bias. Phrase tables may contain several millions of entries, most of which appear only once or twice, which means that we are confronted with a data sparseness problem. Surprisingly, there seems to be little work addressing the issue of smoothing of the phrase table probabilities.", "cite_spans": [ { "start": 276, "end": 295, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "e * =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, smoothing of relative frequency estimates was extensively investigated in the area of language modeling. A systematic comparison can be for instance found in (Chen and Goodman, 1999) . Language models and phrase tables have in common that the probabilities of rare events may be overestimated. However, in language modeling probability mass must be redistributed in order to account for the unseen n-grams. Generalization to unseen events is less important in phrase-based SMT systems since the system searches only for the best segmentation and the best matching phrase pair among the existing ones.", "cite_spans": [ { "start": 177, "end": 201, "text": "(Chen and Goodman, 1999)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are only aware of one work that performs a systematic comparison of smoothing techniques in phrase-based machine translation systems (Foster et al., 2006) . Two types of phrase-table smoothing were compared: black-box and glass-box methods. Black-methods do not look inside phrases but instead treat them as atomic objects. By these means, all the methods developed for language modeling can be used. Glass-box methods decompose P (\u1ebd|f ) into a set of lexical distributions P (e|f ). For instance, it was suggested to use IBM-1 probabilities (Och et al., 2004) , or other lexical translation probabilities (Koehn et al., 2003; Zens and Ney, 2004) . 
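As a rough illustration of such glass-box smoothing, the following sketch scores a phrase pair with IBM-1 lexical probabilities, including the NULL source word; the function and table names are hypothetical and the code only conveys the idea, not the exact formulation used in the cited systems.

def ibm1_lexical_score(src_words, tgt_words, t_table):
    # Glass-box smoothing sketch: score a phrase pair with IBM-1
    # lexical probabilities p(e|f), including a NULL source word.
    score = 1.0
    for e in tgt_words:
        # sum over all source words (and NULL) that may generate e
        s = t_table.get(('NULL', e), 1e-7)
        for f in src_words:
            s += t_table.get((f, e), 1e-7)
        score *= s / (len(src_words) + 1)
    return score

Such a lexical score is typically added as an additional feature function rather than replacing the relative frequency estimates.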
Some form of glass-box smoothing is now used in all state-of-the-art statistical machine translation systems.", "cite_spans": [ { "start": 136, "end": 157, "text": "(Foster et al., 2006)", "ref_id": "BIBREF7" }, { "start": 545, "end": 563, "text": "(Och et al., 2004)", "ref_id": "BIBREF13" }, { "start": 609, "end": 629, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF8" }, { "start": 630, "end": 649, "text": "Zens and Ney, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another approach related to phrase table smoothing is the so-called N-gram translation model (Mari\u00f1o et al., 2006) . In this model, bilingual tuples are used instead of the phrase pairs and n-gram probabilities are considered rather than relative frequencies. Therefore, smoothing is obtained using the standard techniques developed for language modeling. In addition, a context dependence of the phrases is introduced. On the other hand, some restrictions on the segmentation of the source sentence must be used. N-gram-based translation models were extensively compared to phrase-based systems on several tasks and typically achieve comparable performance.", "cite_spans": [ { "start": 93, "end": 114, "text": "(Mari\u00f1o et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we propose to investigate improved smoothing techniques in the framework of the Ngram translation model. Despite the undeniable success of n-graam back-off models, these techniques have several drawbacks from a theoretical point of view: the words are represented in a discrete space, the vocabulary. This prevents \"true interpolation\" of the probabilities of unseen n-grams since a change in this word space can result in an arbitrary change of the n-gram probability. An alternative approach is based on a continuous representation of the words (Bengio et al., 2003) . The basic idea is to convert the word indices to a continuous representation and to use a probability estimator operating in this space. Since the resulting distributions are smooth functions of the word representation, better generalization to unknown n-grams can be expected. Probability estimation and interpolation in a continuous space is mathematically well understood and numerous powerful algorithms are available that can perform meaningful interpolations even when only a limited amount of training material is available. This approach was successfully applied to language modeling in large vocabulary continuous speech recognition (Schwenk, 2007) and to language modeling in phrase-based SMT systems (Schwenk et al., 2006) .", "cite_spans": [ { "start": 561, "end": 582, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF1" }, { "start": 1227, "end": 1242, "text": "(Schwenk, 2007)", "ref_id": "BIBREF16" }, { "start": 1296, "end": 1318, "text": "(Schwenk et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate whether this approach is useful to smooth the probabilities involved in the bilingual tuple translation model. Reliable estimation of unseen n-grams is very important in this translation model. Most of the trigram tuples encountered in the development or test data were never seen in the training data. N-gram hit rates are reported in the results section of this paper. 
We report experimental results for the BTEC corpus as used in the 2006 evaluations of the international workshop on spoken language translation IWSLT (Paul, 2006) . This task provides a very limited amount of resources in comparison to other tasks like the translation of journal texts (NIST evaluations) or of parliament speeches (TC-STAR evaluations). Therefore, new techniques must be deployed to take the best advantage of the limited resources. Among the language pairs tested in this years evaluation, Italian to English gave the best BLEU results in this year evaluation. The better the translation quality is, the more it is challenging to outperform it without adding more data. We show that a new smoothing technique for the translation model achieves a significant improvement in the BLEU score for a stateof-the-art statistical translation system. This paper is organized as follows. In the next section we first describe the baseline statistical machine translation systems. Section 3 presents the architecture and training algorithms of the continuous space translation model and section 4 summarizes the experimental evaluation. The paper concludes with a discussion of future research directions.", "cite_spans": [ { "start": 551, "end": 563, "text": "(Paul, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The N -gram-based translation model has been derived from the finite-state perspective; more specifically, from the work of Casacuberta (2001) . However, different from it, where the translation model is implemented by using a finite-state transducer, the N -gram-based system implements a bilingual N -gram model. It actually constitutes a language model of bilingual units, referred to as tuples, which approximates the joint probability between source and target languages by using N -grams, such as described by the following equation:", "cite_spans": [ { "start": 124, "end": 142, "text": "Casacuberta (2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "p(e, f ) \u2248 K k=1 p((e, f ) k |(e, f ) k\u22121 , . . . 
, (e, f ) k\u22124 ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "where e refers to target, f to source and (e, f ) k to the k th tuple of a given bilingual sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "Bilingual units (tuples) are extracted from any word-to-word alignment according to the following constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "\u2022 a monotonic segmentation of each bilingual sentence pairs is produced,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "\u2022 no word inside the tuple is aligned to words outside the tuple, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "\u2022 no smaller tuples can be extracted without violating the previous constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "As a consequence of these constraints, only one segmentation is possible for a given sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "Two important issues regarding this translation model must be considered. First, it often occurs that a large number of single-word translation probabilities are left out of the model. This happens for all words that are always embedded in tuples containing two or more words, then no translation probability for an independent occurrence of these embedded words will exist. To overcome this problem, the tuple trigram model is enhanced by incorporating 1-gram translation probabilities for all the embedded words detected during the tuple extraction step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "These 1-gram translation probabilities are computed from the intersection of both the source-to-target and the target-to-source alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "The second issue has to do with the fact that some words linked to NULL end up producing tuples with NULL source sides. Since no NULL is actually expected to occur in translation inputs, this type of tuple is not allowed. Any target word that is linked to NULL is attached either to the word that precedes or the word that follows it. To determine this, an approach based on the IBM1 probabilities was used, as described in (Mari\u00f1o et al., 2006) .", "cite_spans": [ { "start": 424, "end": 445, "text": "(Mari\u00f1o et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "N-gram-based Translation Model", "sec_num": "2" }, { "text": "The following feature functions were used in the Ngram-based translation system:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "\u2022 A target language model. 
In the baseline system, this feature consists of a 4-gram back-off model of words, which is trained from the target side of the bilingual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "\u2022 A source-to-target lexicon model and a target-to-source lexicon model. These feature, which are based on the lexical parameters of the IBM Model 1, provide a complementary probability for each tuple in the translation table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "\u2022 A word bonus function. This feature introduces a bonus based on the number of target words contained in the partial-translation hypothesis. It is used to compensate for the system's preference for short output sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "All these models are combined in the decoder. Additionally, the decoder allows for a non-monotonic search with the following distorsion model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "\u2022 A word distance-based distorsion model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "P (t K 1 ) = exp(\u2212 K k=1 d k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "where d k is the distance between the first word of the k th tuple (unit), and the last word+1 of the (k \u2212 1) th tuple. Distance are measured in words referring to the units source side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "To reduce the computational cost we place limits on the search using two parameters: the distortion limit (the maximum distance measured in words that a tuple is allowed to be reordered, m) and the reordering limit (the maximum number of reordering jumps in a sentence, j). Tuples need to be extracted by an unfolding technique (Mari\u00f1o et al., 2006) . This means that the tuples are broken into smaller tuples, and these are sequenced in the order of the target words. In order not to lose the information on the correct order, the decoder performs a non-monotonic search. Figure 1 shows an example of tuple unfolding compared to the monotonic extraction. 
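To make the reordering constraints concrete, the sketch below computes the distance-based distortion score for a sequence of tuples under the two limits m and j; the tuple source spans are hypothetical and the code is only an illustration of the formula above, not the decoder implementation.

import math

def distortion_score(tuple_spans, m=5, j=3):
    # tuple_spans: hypothetical (first, last) source-word indices of
    # each tuple t_k in decoding order.
    # d_k is the distance between the first word of tuple k and the
    # word following the last word of tuple k-1.
    total, jumps, prev_end = 0, 0, -1
    for first, last in tuple_spans:
        d = abs(first - (prev_end + 1))
        if d > 0:
            jumps += 1
        if d > m or jumps > j:
            return None  # hypothesis pruned: limits exceeded
        total += d
        prev_end = last
    return math.exp(-total)

For a monotonic segmentation all distances are zero and the score is 1.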
The unfolding technique produces a different bilingual n-gram language model with reordered source words.", "cite_spans": [ { "start": 328, "end": 349, "text": "(Mari\u00f1o et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "In order to combine the models in the decoder suitably, an optimization tool based on the Simplex algorithm is used to compute log-linear weights for each model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.1" }, { "text": "The architecture of the neural network n-gram model is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 72, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "A standard fully-connected multi-layer perceptron is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "The inputs to the neural network are the indices of the n\u22121 previous units (words or tuples) in the vocabulary h j =w j\u2212n+1 , . . . , w j\u22122 , w j\u22121 and the outputs are the posterior probabilities of all units of the vocabulary: Figure 2 : Architecture of the continuous space LM. h j denotes the context w j\u2212n+1 , . . . , w j\u22121 . P is the size of one projection and H,N is the size of the hidden and output layer respectively. When shortlists are used the size of the output layer is much smaller than the size of the vocabulary.", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "P dimensional vectors N w j\u22121 P H N P (w j =1|h j ) w j\u2212n+1 w j\u2212n+2 P (w j =i|h j ) P (w j =N|h j ) c l o i M V d j p 1 = p N = p i =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (w j = i|h j ) \u2200i \u2208 [1, N ]", "eq_num": "(3)" } ], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "where N is the size of the vocabulary. The input uses the so-called 1-of-n coding, i.e., the ith unit of the vocabulary is coded by setting the ith element of the vector to 1 and all the other elements to 0. The ith line of the N \u00d7 P dimensional projection matrix corresponds to the continuous representation of the ith unit. Let us denote c l these projections, d j the hidden layer activities, o i the outputs, p i their softmax normalization, and m jl , b j , v ij and k i the hidden and output layer weights and the corresponding biases. 
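Before writing out the exact equations, the forward pass can be sketched in code as follows; the array names and dimensions are illustrative and do not correspond to the actual implementation.

import numpy as np

def forward(context_ids, C, M, b, V, k):
    # Continuous-space n-gram sketch: look up the continuous
    # representation of the n-1 context units, apply a tanh hidden
    # layer and return softmax posteriors over the output units.
    x = np.concatenate([C[i] for i in context_ids])  # projection layer
    d = np.tanh(M.dot(x) + b)                        # hidden layer
    o = V.dot(d) + k                                 # output layer
    e = np.exp(o - o.max())
    return e / e.sum()                               # softmax probabilities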
Using these notations, the neural network performs the following operations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d j = tanh l m jl c l + b j (4) o i = j v ij d j + k i (5) p i = e o i / N r=1 e or", "eq_num": "(6)" } ], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "The value of the output neuron p i corresponds directly to the probability P (w j = i|h j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "Training is performed with the standard backpropagation algorithm minimizing the following error function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "E = N i=1 t i log p i + \u03b2 \uf8eb \uf8ed jl m 2 jl + ij v 2 ij \uf8f6 \uf8f8 (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "where t i denotes the desired output, i.e., the probability should be 1.0 for the next unit in the training sentence and 0.0 for all the other ones. The first part of this equation is the cross-entropy between the output and the target probability distributions, and the second part is a regularization term that aims to prevent the neural network from over-fitting the training data (weight decay). The parameter \u03b2 has to be determined experimentally. Training is done using a re-sampling algorithm as described in (Schwenk, 2007) .", "cite_spans": [ { "start": 516, "end": 531, "text": "(Schwenk, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "It can be shown that the outputs of a neural network trained in this manner converge to the posterior probabilities. Therefore, the neural network directly minimizes the perplexity on the training data. Note also that the gradient is back-propagated through the projection-layer, which means that the neural network learns the projection of the units onto the continuous space that is best for the probability estimation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "In general, the complexity to calculate one probability with this basic version of the neural network n-gram model is dominated by the dimension of the output layer since the size of the vocabulary (10k to 64k) is usually much larger than the dimension of the hidden layer (200 to 500). Therefore, in previous applications of the continuous space n-gram model, the output was limited to the s most frequent units, s ranging between 2k and 12k (Schwenk, 2007) . This is called a short-list.", "cite_spans": [ { "start": 443, "end": 458, "text": "(Schwenk, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Continuous Space N-gram Models", "sec_num": "3" }, { "text": "Words Train (bitexts) 20k 155.4/166.3k Dev 489 5.2k Eval 500 6k ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sents", "sec_num": null }, { "text": "In this work we report results on the Basic Traveling Expression Corpus (BTEC) as used in the 2006 evaluations of the international workshop on spoken language translation (IWSLT). 
This corpus consists of typical sentences from phrase books for tourists in several languages (Takezawa et al., 2002) . We report results on the supplied development corpus of 489 sentences and the official test set of the IWSLT'06 evaluation. The main measure is the BLEU score, using seven reference translations. The scoring is case insensitive and punctuations are ignored. Details on the available data are summarized in Table 1 . We concentrated first on the translation from Italian to English. All participants in the IWSLT evaluation achieved much better performances for this language pair than for the other considered translation directions. This makes it more difficult to achieve additional improvements. A non-monotonic search was performed following a local reordering named in Section 2, setting m = 5 and j = 3. Also we used histogram pruning in the decoder, i.e. the maximum number of hypotheses in a stack is limited to 50.", "cite_spans": [ { "start": 275, "end": 298, "text": "(Takezawa et al., 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 607, "end": 614, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "4" }, { "text": "Italian contracted prepositions have been separated into preposition + article, such as 'alla'\u2192'a la', 'degli'\u2192'di gli' or 'dallo'\u2192'da lo', among others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language-dependent preprocessing", "sec_num": "4.1" }, { "text": "The training and development data for the bilingual back-off and neural network translation model were created as follows. Given the alignment of the training parallel corpus, we perform a unique segmentation of each parallel sentence following the criterion of unfolded segmentation seen in Section 2. This segmentation is used in a sequence as training text for building the language model. As an example, given the alignment and the unfold extraction of Figure 1 , we obtain the following training sentence:", "cite_spans": [], "ref_spans": [ { "start": 457, "end": 465, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": " how long#cu\u00e1nto does#NULL last#dura the#el flight#vuelo ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "The reference bilingual trigram back-off translation model was trained on these bilingual tuples us-ing the SRI LM toolkit (Stolcke, 2002) . Different smoothing techniques were tried, and best results were obtained using Good-Turing discounting.", "cite_spans": [ { "start": 123, "end": 138, "text": "(Stolcke, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "The neural network approach was trained on exactly the same data. A context of two tuples was used (trigram model). The training corpus contains about 21,500 different bilingual tuples. We decided to limit the output of the neural network to the 8k most frequent tuples (short-list). This covers about 90% of the requested tuple n-grams in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "Similar to previous applications, the neural network is not used alone but interpolation is performed to combine several n-gram models. 
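The combination is a simple linear mixture of the model probabilities, as sketched below; the weights shown are only placeholders for the EM-tuned values reported in the next paragraph.

def interpolate(p_backoff, p_neural, weights):
    # Linear interpolation sketch: mix the back-off model with
    # several neural network models whose weights sum to one.
    probs = [p_backoff] + list(p_neural)
    return sum(w * p for w, p in zip(weights, probs))

# e.g. interpolate(p_bo, [p1, p2, p3, p4], [0.33, 0.17, 0.17, 0.17, 0.16])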
First of all, the neural network and the reference back-off model are interpolated together -this always improved performance since both seem to be complementary. Second, four neural networks with different sizes of the continuous representation were trained and interpolated together. This usually achieves better generalization behavior than training one larger neural network. The interpolation coefficients were calculated by optimizing perplexity on the development data, using an EM procedure. The obtained values are 0.33 for the back-off translation model and about 0.16 for each neural network model respectively. This interpolation is used in all our experiments. For the sake of simplicity we will still call this the continuous space translation model. Each network was trained independently using early stopping on the development data. Convergence was achieved after about 10 iterations through the training data (less than 20 minutes of processing on a standard Linux machine). The other parameters are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "\u2022 Context of two tuples (trigram)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "\u2022 The dimension of the continuous representation of the tuples were c =120,140,150 and 200,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "\u2022 The dimension of the hidden layer was set to P = 200,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "\u2022 The initial learning rate was 0.005 with an exponential decay,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "\u2022 The weight decay coefficient was set to \u03b2 = 0.00005.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "N-gram models are usually evaluated using perplexity on some development data. In our case, i.e. using bilingual tuples as basic units (\"words\"), it is less obvious if perplexity is a useful measure. Nevertheless, we provide these numbers for completeness. The perplexity on the development data of the trigram back-off translation model is 227.0. This could be reduced to 170.4 using the neural network. It is also very informative to analyze the n-gram hit-rates of the back-off model on the development data: 10% of the probability requests are actually a true trigram, 40% a bigram and about 49% are finally estimated using unigram probabilities. This means that only a limited amount of phrase context is used in the standard N-gram-based translation model. This makes this an ideal candidate to apply the continuous space model since probabilities are interpolated for all possible contexts and never backed-up to shorter contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4.2" }, { "text": "The incorporation of the neural translation model is done using n-best list. Each hypothesis is composed of a sequence of bilingual tuples and the corresponding scores of all the feature functions. Figure 3 shows an example of such an n-best list. The neural trigram translation model is used to replace the scores of the trigram back-off translation model. This is followed by a re-optimization of the coefficients of all feature functions, i.e. 
maximization of the BLEU score on the development data using the numerical optimization tool CONDOR (Berghen and Bersini, 2005 ). An alternative would be to add a feature function and to combine both translation models under the log-linear model framework, using maximum BLEU training.", "cite_spans": [ { "start": 547, "end": 573, "text": "(Berghen and Bersini, 2005", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 198, "end": 206, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.3" }, { "text": "Another open question is whether it might by better to already use the continuous space translation model during decoding. The continuous space model has a much higher complexity than a backoff n-gram. However, this can be heavily optimized when rescoring n-best lists, i.e. by grouping together all calls in the whole n-best list with the same context, resulting in only one forward pass through the neural network. This is more difficult to perform when the continuous space translation model is used during decoding. Therefore, this was not investigated in this work. spiacente#sorry tutto occupato#it 's full spiacente#i 'm sorry tutto occupato#it 's full spiacente#i 'm afraid tutto occupato#it 's full spiacente#sorry tutto#all occupato#busy spiacente#sorry tutto#all occupato#taken Figure 3 : Example of sentences in the n-best list of bilingual tuples. The special character '#' is used to separate the source and target sentence words. Several words in one tuple a grouped together using ' .'", "cite_spans": [], "ref_spans": [ { "start": 789, "end": 797, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.3" }, { "text": "In all our experiments 1000-best lists were used. In order to evaluate the quality of these n-best lists, an oracle trigram back-off translation model was build on the development data. Rescoring the nbest lists with this translation model resulted in an increase of the BLEU score of about 10 points (see Table 2 ). While there is an decrease of about 6% for the position dependent word error rate (mWER), a smaller change in the position independent word error rate was observed (mPER). This suggests that most of the alternative translation hypothesis result in word reorderings and not in many alternative word choices. This is one of the major drawbacks of phrase-and N-gram-based translation systems: only translations observed in the training data can be used. There is no generalization to new phrase pairs. When the 1000-best lists are rescored with the neural network translation model the BLEU score increases by 1.5 points (42.34 to 43.87). Similar improvements were observed in the word error rates (see Table 2 ). For comparison, a 4-gram back-off translation model was also built, but no change of the BLEU score was observed. This suggests that careful smoothing is more important than increasing the context when estimating the translation probabilities in an N-gram-based statistical machine translation system.", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 313, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1017, "end": 1024, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.3" }, { "text": "In previous work, we have investigated the use of the neural network approach to modeling the target language for the IWSLT task (Schwenk et al., 2006) . 
We also applied this technique to this improved Ngram-based translation system. In our implementation, the neural network target 4-gram language model gives an improvement of 1.3 points BLEU on the development data (42.34 to 43.66), in comparison to 1.5 points for the neural translation model (see Table 3 ). The neural translation and target language model were also applied to the test data, using of course the same feature function coefficients as for the development data. The results are given in Table 4 for all the official measures of the IWSLT evaluation. The new smoothing method of the translation probabilities achieves improvement in all measures. It gives also an additional gain (again in all measures) when used together with a neural target language model. Surprisingly, neural TM and neural LM improvements almost add up: when both techniques are used together, the BLEU scores increases by 1.5 points (36.97 \u2192 38.50) . Remember that the reference Ngram-based translation system already uses a local reordering approach. ", "cite_spans": [ { "start": 129, "end": 151, "text": "(Schwenk et al., 2006)", "ref_id": "BIBREF15" }, { "start": 1076, "end": 1091, "text": "(36.97 \u2192 38.50)", "ref_id": null } ], "ref_spans": [ { "start": 453, "end": 460, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 658, "end": 665, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.3" }, { "text": "Phrase-based approaches are the de-facto standard in statistical machine translation. The phrases are extracted automatically from the word alignments of parallel texts, and the different possible translations of a phrase are weighted using relative frequency. This can be problematic when the data is sparse. However, there seems to be little work on possible improvements of the relative frequency estimates by some smoothing techniques. It is today common practice to use additional feature functions like IBM-1 scores to obtain some kind of smoothing (Och et al., 2004; Koehn et al., 2003; Zens and Ney, 2004) , but better estimation of the phrase probabilities is usually not addressed. An alternative way to represent phrases is to define bilingual tuples. Smoothing, and context dependency, is obtained by using an n-gram model on these tuples. In this work, we have extended this approach by using a new smoothing technique that operates on a continuous representation of the tuples. Our method is distinguished by two characteristics: better estimation of the numerous unseen n-grams, and a discriminative estimation of the tuple probabilities. Results are provided on the BTEC task of the 2006 IWSLT evaluation for the translation direction Italian to English. This task provides very limited amount of resources in comparison to other tasks. Therefore, new techniques must be deployed to take the best advantage of the limited resources. We have chosen the Italian to English task because it is challenging to enhance a good quality translation task (over 40 BLEU percentage). 
Using the continuous space model for the translation and target language model, an improvement of 2.5 BLEU on the development data and 1.5 BLEU on the test data was observed.", "cite_spans": [ { "start": 555, "end": 573, "text": "(Och et al., 2004;", "ref_id": "BIBREF13" }, { "start": 574, "end": 593, "text": "Koehn et al., 2003;", "ref_id": "BIBREF8" }, { "start": 594, "end": 613, "text": "Zens and Ney, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Despite these encouraging results, we believe that additional research on improved estimation of probabilities in N-gram-or phrase-based statistical machine translation systems is needed. In particular, the problem of generalization to new translations seems to be promising to us. This could be addressed by the so-called factored phrase-based model as implemented in the Moses decoder (Koehn et al., 2007) . In this approach words are decomposed into several factors. These factors are trans-lated and a target phrase is generated. This model could be complemented by a factored continuous tuple N-gram. Factored word language models were already successfully used in speech recognition (Bilmes and Kirchhoff, 2003; Alexandrescu and Kirchhoff, 2006) and an extension to machine translation seems to be promising.", "cite_spans": [ { "start": 387, "end": 407, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF9" }, { "start": 689, "end": 717, "text": "(Bilmes and Kirchhoff, 2003;", "ref_id": "BIBREF3" }, { "start": 718, "end": 751, "text": "Alexandrescu and Kirchhoff, 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The described smoothing method was explicitly developed to tackle the data sparseness problem in tasks like the BTEC corpus. It is well known from language modeling that careful smoothing is less important when large amounts of data are available. We plan to investigate whether this also holds for smoothing of the probabilities in phrase-or tuplebased statistical machine translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" } ], "back_matter": [ { "text": "This work has been partially funded by the European Union under the integrated project TC-STAR (IST-2002-FP6-506738), by the French Government under the project INSTAR (ANR JCJC06 143038) and the the Spanish government under a FPU grant and the project AVIVAVOZ (TEC2006-13964-C03).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Factored neural language models", "authors": [ { "first": "A", "middle": [], "last": "Alexandrescu", "suffix": "" }, { "first": "K", "middle": [], "last": "Kirchhoff", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Alexandrescu and K. Kirchhoff. 2006. Factored neu- ral language models. 
In HLT-NAACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural probabilistic language model", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "C", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "2", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. Journal of Ma- chine Learning Research, 3(2):1137-1155.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "CON-DOR, a new parallel, constrained extension of powell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm", "authors": [ { "first": "F", "middle": [], "last": "Vanden Berghen", "suffix": "" }, { "first": "H", "middle": [], "last": "Bersini", "suffix": "" } ], "year": 2005, "venue": "Journal of Computational and Applied Mathematics", "volume": "181", "issue": "", "pages": "157--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Vanden Berghen and H. Bersini. 2005. CON- DOR, a new parallel, constrained extension of pow- ell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm. Journal of Com- putational and Applied Mathematics, 181:157-175.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Factored language models and generalized backoff", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Bilmes", "suffix": "" }, { "first": "K", "middle": [], "last": "Kirchhoff", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. A. Bilmes and K. Kirchhoff. 2003. Factored language models and generalized backoff. In HLT-NAACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R:", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, S. Della Pietra, V. J. Della Pietra, and R: Mer- cer. 1993. The mathematics of statistical machine translation. 
Computational Linguistics, 19(2):263- 311.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Speech-to-speech translation based on finite-state transducers", "authors": [ { "first": "F", "middle": [], "last": "Casacuberta", "suffix": "" }, { "first": "D", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "C", "middle": [], "last": "Mart\u00ednez", "suffix": "" }, { "first": "S", "middle": [], "last": "Molau", "suffix": "" }, { "first": "F", "middle": [], "last": "Nevado", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "M", "middle": [], "last": "Pastor", "suffix": "" }, { "first": "D", "middle": [], "last": "Pic\u00f3", "suffix": "" }, { "first": "A", "middle": [], "last": "Sanchis", "suffix": "" }, { "first": "E", "middle": [], "last": "Vidal", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Vilar", "suffix": "" } ], "year": 2001, "venue": "International Conference on Acoustic, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Casacuberta, D. Llorens, C. Mart\u00ednez, S. Molau, F. Nevado, H. Ney, M. Pastor, D. Pic\u00f3, A. Sanchis, E. Vidal, and J.M. Vilar. 2001. Speech-to-speech translation based on finite-state transducers. Interna- tional Conference on Acoustic, Speech and Signal Pro- cessing, 1.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Goodman", "suffix": "" } ], "year": 1999, "venue": "CSL", "volume": "13", "issue": "4", "pages": "359--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. F. Chen and J. T. Goodman. 1999. An empirical study of smoothing techniques for language modeling. CSL, 13(4):359-394.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Phrasetable smoothing for statistical machine translation", "authors": [ { "first": "G", "middle": [], "last": "Foster", "suffix": "" }, { "first": "R", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "H", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "EMNLP06", "volume": "", "issue": "", "pages": "53--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Foster, R. Kuhn, and H. Johnson. 2006. Phrasetable smoothing for statistical machine translation. In EMNLP06, pages 53-61.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Statistical phrased-based machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Human Language Technology Conference (HLT-NAACL)", "volume": "", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrased-based machine translation. 
In Human Lan- guage Technology Conference (HLT-NAACL), pages 127-133.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceed- ings of ACL, demonstration session.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bilingual n-gram statistical machine translation", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Mari\u00f1o", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Banchs", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Crego", "suffix": "" }, { "first": "A", "middle": [], "last": "Gispert", "suffix": "" }, { "first": "P", "middle": [], "last": "Lambert", "suffix": "" }, { "first": "J", "middle": [ "A R" ], "last": "Fonollosa", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "4", "pages": "527--549", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.B. Mari\u00f1o, R.E. Banchs, J.M. Crego, A. de Gispert, P. Lambert, J.A.R. Fonollosa, and M. R. Costa-juss\u00e0. 2006. Bilingual n-gram statistical machine transla- tion. Computational Linguistics, 32(4):527-549, De- cember.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. 
In ACL, pages 295-302.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Copora", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Copora, pages 20-28.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A smorgasbord of features for statistical machine translation", "authors": [ { "first": "F.-J", "middle": [], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "A", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "D", "middle": [], "last": "Smith", "suffix": "" }, { "first": "K", "middle": [], "last": "Eng", "suffix": "" }, { "first": "V", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.-J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Ya- mada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev. 2004. A smorgasbord of features for statistical machine translation. In HLT- NAACL, pages 161-168.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Overview of the IWSLT 2006 campaign", "authors": [ { "first": "M", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2006, "venue": "IWSLT", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Paul. 2006. Overview of the IWSLT 2006 campaign. In IWSLT, pages 1-15.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Continuous space language models for the iwslt 2006 task. IWSLT", "authors": [ { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "J", "middle": [ "A R" ], "last": "Fonollosa", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "166--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Schwenk, M. R. Costa-juss\u00e0, and J. A. R. Fonollosa. 2006. Continuous space language models for the iwslt 2006 task. IWSLT, pages 166-173.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Continuous space language models", "authors": [ { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2007, "venue": "Computer Speech and Language", "volume": "21", "issue": "", "pages": "492--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Schwenk. 2007. Continuous space language models. 
Computer Speech and Language, 21:492-518.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "ICSLP, pages II", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke. 2002. SRILM -an extensible language mod- eling toolkit. In ICSLP, pages II: 901-904.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Toward a borad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2002, "venue": "LREC", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a borad-coverage bilin- gual corpus for speech translation of travel conversa- tions in the real world. In LREC, pages 147-152.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improvements in phrasebased statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "HLT/NACL", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney. 2004. Improvements in phrase- based statistical machine translation. In HLT/NACL, pages 257-264.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "i (e, f ))} (1)" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Comparing regular and unfolded tuples." }, "TABREF0": { "type_str": "table", "html": null, "content": "", "num": null, "text": "" }, "TABREF2": { "type_str": "table", "html": null, "content": "
: Comparison of different N-gram translation models on the development data.
", "num": null, "text": "" }, "TABREF4": { "type_str": "table", "html": null, "content": "
: Combination of a neural translation model (TM) and a neural language model (LM). BLEU scores on the development data.
", "num": null, "text": "" }, "TABREF6": { "type_str": "table", "html": null, "content": "
: Test set scores for the combination of a neural translation model (TM) and a neural language model (LM).
", "num": null, "text": "" } } } }