{ "paper_id": "C10-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:56:01.551854Z" }, "title": "Local lexical adaptation in Machine Translation through triangulation: SMT helping SMT", "authors": [ { "first": "Josep", "middle": [ "Maria" ], "last": "Crego", "suffix": "", "affiliation": {}, "email": "jmcrego@limsi.fr" }, { "first": "Aur\u00e9lien", "middle": [], "last": "Max", "suffix": "", "affiliation": {}, "email": "amax@limsi.fr" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "", "affiliation": {}, "email": "yvon@limsi.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a framework where auxiliary MT systems are used to provide lexical predictions to a main SMT system. In this work, predictions are obtained by means of pivoting via auxiliary languages, and introduced into the main SMT system in the form of a low order language model, which is estimated on a sentenceby-sentence basis. The linear combination of models implemented by the decoder is thus extended with this additional language model. Experiments are carried out over three different translation tasks using the European Parliament corpus. For each task, nine additional languages are used as auxiliary languages to obtain the triangulated predictions. Translation accuracy results show that improvements in translation quality are obtained, even for large data conditions.", "pdf_parse": { "paper_id": "C10-1027", "_pdf_hash": "", "abstract": [ { "text": "We present a framework where auxiliary MT systems are used to provide lexical predictions to a main SMT system. In this work, predictions are obtained by means of pivoting via auxiliary languages, and introduced into the main SMT system in the form of a low order language model, which is estimated on a sentenceby-sentence basis. The linear combination of models implemented by the decoder is thus extended with this additional language model. Experiments are carried out over three different translation tasks using the European Parliament corpus. For each task, nine additional languages are used as auxiliary languages to obtain the triangulated predictions. Translation accuracy results show that improvements in translation quality are obtained, even for large data conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Important improvements are yet to come regarding the performance of Statistical Machine Translation systems. Dependence on training data and limited modelling expressiveness are the focus of many research efforts, such as using monolingual corpora for the former and syntactic models for the latter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another promising approach consists in exploiting complementary sources of information in order to build better translations, as done by consensus-based system combination (e.g. (Matusov et al., 2008) ). This, however, requires to have several systems available for the same language pair. Considering that the same training data would be available to all systems, differences in translation modelling are expected to produce redundant and complementary hypotheses. Multisource translation (e.g. 
(Och and Ney, 2001; Schwartz, 2008) ) is a variant, involving source texts available in several languages, which can be translated by systems for different language pairs and whose outputs can be successfully combined into better translations (Schroeder et al., 2009) . One theoretical expectation of multisource translation is that it can successfully reduce the ambiguity of the original source text, but only under the rare condition that existing (accurate) translations are available. In contrast, pivot-based system combination (e.g. (Utiyama and Isahara, 2007; Wu and Wang, 2007) ) aims at compensating for the lack of training data for a given language pair by producing translation hypotheses obtained by pivoting via an intermediary language for which better systems are available.", "cite_spans": [ { "start": 178, "end": 200, "text": "(Matusov et al., 2008)", "ref_id": "BIBREF9" }, { "start": 496, "end": 515, "text": "(Och and Ney, 2001;", "ref_id": "BIBREF12" }, { "start": 516, "end": 531, "text": "Schwartz, 2008)", "ref_id": "BIBREF17" }, { "start": 738, "end": 762, "text": "(Schroeder et al., 2009)", "ref_id": "BIBREF16" }, { "start": 1035, "end": 1062, "text": "(Utiyama and Isahara, 2007;", "ref_id": "BIBREF21" }, { "start": 1063, "end": 1081, "text": "Wu and Wang, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These techniques generally produce a search space that differs from that of the direct translation systems. As such, they create a new translation system out of various systems, for which diagnosis becomes more difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper instead focusses on improving a single system, which should be state-of-the-art as regards data and models. We propose a framework in which information coming from external sources is used to boost lexical choices and guide the decoder into making more informed choices. 1 Complementary sources can be of different kinds: they can involve other automatic systems (for the same or different language pairs) and/or human knowledge. Furthermore, complementary information is injected at the lexical level, thus making targeted fine-grained lexical predictions useful. Importantly, those predictions are exploited at the sentence level 2 , so as to allow for efficient use of source contextual information.", "cite_spans": [ { "start": 282, "end": 283, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The second contribution of this paper is an instantiation of the proposed framework. Automatically pivoting via auxiliary languages is used to make complementary predictions that are exploited through language model adaptation by the decoder for a given language pair. For this apparently difficult condition, where predictions result from automatic translations involving two systems, we manage to report significant improvements, measured with respect to the target and the source text, under various configurations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. We first review related work in Section 2.1, and describe the distinctive characteristics of our approach in Section 2.2. Section 2.3 presents our instantiation of the framework based on lexical boosting via auxiliary language triangulation. 
Experiments involving three language pairs of varying complexity and different amounts of training data are described in Section 3. We finally conclude by discussing the prospects offered by our proposed framework in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 A framework for sentence-level lexical boosting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea of using more than one translation system to improve translation performance is not new and has been implemented in many different ways, which we briefly review here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "System combination An often-used strategy consists in combining the outputs of several systems for a fixed language pair, and in rescoring the resulting set of hypotheses taking into account all the available translations and scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "Various proposals have been made to efficiently perform such a combination, using auxiliary data structures such as n-best lists, word lattices or consensus networks (see for instance (Kumar and Byrne, 2004; Rosti et al., 2007; Matusov et al., 2008; Hildebrand and Vogel, 2008; Tromble et al., 2008) ), and have been shown to lead to measurable improvements. These techniques have proven extremely effective and have delivered very significant gains in several recent evaluation campaigns (Callison-Burch et al., 2008) .", "cite_spans": [ { "start": 176, "end": 199, "text": "(Kumar and Byrne, 2004;", "ref_id": "BIBREF7" }, { "start": 200, "end": 219, "text": "Rosti et al., 2007;", "ref_id": "BIBREF15" }, { "start": 220, "end": 241, "text": "Matusov et al., 2008;", "ref_id": "BIBREF9" }, { "start": 242, "end": 269, "text": "Hildebrand and Vogel, 2008;", "ref_id": "BIBREF6" }, { "start": 270, "end": 291, "text": "Tromble et al., 2008)", "ref_id": "BIBREF20" }, { "start": 435, "end": 464, "text": "(Callison-Burch et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "Multisource translation A related, yet more resourceful approach consists in trying to combine several systems providing translations from different sources into the same target, provided such multilingual sources are available. (Och and Ney, 2001) propose to select the most promising translation amongst the hypotheses produced by several Foreign\u2192English systems, where output selection is based on the translation scores. The intuition that if a system assigns a high figure of merit to the translation of a particular sentence, then this translation should be preferred, is implemented in the MAX combination heuristics, whose relative (lack of) success is discussed in (Schwartz, 2008) . A similar idea is explored in (Nomoto, 2004) , where the sole target language model score is used to rank competing outputs. (Schroeder et al., 2009) propose to combine the available sources prior to translation, in the form of a multilingual lattice, which is decoded with a multisource phrase table. 
(Chen et al., 2008) integrate the available auxiliary information in a different manner, and discuss how to improve the translation model of the primary system: the idea is to use the entries in the phrase table of the auxiliary system to filter out those accidental correspondences that pollute the main translation model. The most effective implementation of multisource translation to date, however, consists in using mono-source system combination techniques (Schroeder et al., 2009) .", "cite_spans": [ { "start": 230, "end": 249, "text": "(Och and Ney, 2001)", "ref_id": "BIBREF12" }, { "start": 676, "end": 692, "text": "(Schwartz, 2008)", "ref_id": "BIBREF17" }, { "start": 725, "end": 739, "text": "(Nomoto, 2004)", "ref_id": "BIBREF11" }, { "start": 820, "end": 844, "text": "(Schroeder et al., 2009)", "ref_id": "BIBREF16" }, { "start": 1000, "end": 1019, "text": "(Chen et al., 2008)", "ref_id": "BIBREF3" }, { "start": 1461, "end": 1485, "text": "(Schroeder et al., 2009)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "Translation through pivoting The use of auxiliary systems has also been proposed in another common situation, as a possible remedy to the lack of parallel data for a particular language pair, or for a particular domain. Assume, for instance, that one wishes to build a translation system for the pair A \u2192 B, for which the parallel data is sparse; assuming further that such parallel resources exist for pairs A \u2192 C and for C \u2192 B, it is then tempting to perform the translation indirectly through pivoting, by first translating from A to C, then from C to B. Direct implementations of this idea are discussed e.g. in (Utiyama and Isahara, 2007) . Pivoting can also intervene earlier in the process, for instance as a means to automatically generate the missing parallel resource, an idea that has also been considered to adapt existing translation systems to new domains (Bertoldi and Federico, 2009) . Pivoting can finally be used to fix or improve the translation model: (Cohn and Lapata, 2007) augment the phrase table of a baseline bilingual system with supplementary phrases obtained by pivoting into a third language.", "cite_spans": [ { "start": 616, "end": 643, "text": "(Utiyama and Isahara, 2007)", "ref_id": "BIBREF21" }, { "start": 873, "end": 902, "text": "(Bertoldi and Federico, 2009)", "ref_id": "BIBREF0" }, { "start": 975, "end": 998, "text": "(Cohn and Lapata, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "Triangulation in translation Triangulation techniques are somewhat more general and only require the availability of one auxiliary system (or one auxiliary parallel corpus). For instance, the authors of (Chen et al., 2008) propose to use the translation model of an auxiliary C \u2192 B system to filter out the phrase table of a primary A \u2192 B system.", "cite_spans": [ { "start": 201, "end": 220, "text": "(Chen et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2.1" }, { "text": "As in other works, we propose to make use of several MT systems (of any type) to improve translation performance, but contrary to these works we concentrate on improving one particular system. Our framework is illustrated in Figure 1 . 
The main system (henceforth, direct system), corresponding to configuration 1, is an SMT system, translating from German to English in the example. Auxiliary information may originate from various sources (2-6) and enter the decoder. A new model is dynamically built and is used to guide the exploration of the search space to the best hypothesis. Several auxiliary models can be used at once and can be weighted by standard optimization techniques using development data, so that bad sources are not used in practice, or by exploiting a priori information. In the implementation described in Section 2.3, this information is updated by the auxiliary source at each sentence. We now briefly describe various possible configurations to make some links to previous works explicit. Configuration 2 translates the same source text by means of another system for the same language pair, as would be done in system combination, except that here a new complete decoding is performed by the direct system. Configuration 3, which will be detailed in Section 2.3, uses translations obtained by triangulating via an auxiliary language (Spanish in the example). Using this two-step translation is common to pivot approaches, but our approach is different in that the result of the triangulation is only used as auxiliary information for the decoding of the direct system. Configurations 4 and 5 are instances of multisource translation, where a paraphrase or a translation of the source text is available. Lastly, configuration 6 illustrates the case where a human translator, with knowledge of the target language and of at least one of the available source languages, could influence the decoding by providing desired 3 words (e.g. only for source words or phrases that would be judged difficult to translate). This human supervision through a feedback text in real time is similar to the proposal of (Dymetman et al., 2003) .", "cite_spans": [ { "start": 2133, "end": 2156, "text": "(Dymetman et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Our framework", "sec_num": "2.2" }, { "text": "Given this framework, several questions arise, the most important underlying this work being whether the performance of SMT systems can be improved by using other SMT systems. Another point of interest is whether improvements made to auxiliary systems can yield improvements to the direct system, without the latter undergoing any modification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our framework", "sec_num": "2.2" }, { "text": "Auxiliary translations obtained by pivoting can be viewed as a source of adaptation data for the target language model of the direct system. Assuming we have computed n-best translation hypotheses of a sentence in the target language, we can then boost the likelihood of the words and phrases occurring in these hypotheses by deriving an auxiliary language model for each test sentence. This allows us to integrate this auxiliary information during the search and thus provides a tighter integration with the direct system. This idea has successfully been used in speech recognition, using for instance close captions (Placeway and Lafferty, 1996) or an imperfect translation (Paulik et al., 2005) to provide auxiliary in-domain adaptation data for the recognizer's language model. 
(Simard and Isabelle, 2009) proposed a similar approach in Machine Translation, in which the target side of an exact match in a translation memory is used to build, on a per-sentence basis, language models that are used in their decoder. This strategy can be implemented in a straightforward manner, by simply training a language model using the n-best list as an adaptation corpus. Being automatically generated, hypotheses in the n-best list are not entirely reliable: in particular, they may contain very unlikely target sequences at the junction of two segments. It is however straightforward to filter these out using the available phrase alignment information.", "cite_spans": [ { "start": 618, "end": 647, "text": "(Placeway and Lafferty, 1996)", "ref_id": "BIBREF14" }, { "start": 676, "end": 697, "text": "(Paulik et al., 2005)", "ref_id": "BIBREF13" }, { "start": 782, "end": 809, "text": "(Simard and Isabelle, 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical boosting via triangulation", "sec_num": "2.3" }, { "text": "This configuration is illustrated in Figure 2 : the direct system (configuration 1) makes use of predictions from pivoting through an auxiliary language (configuration 2), where n-best lists can be used to produce several hypotheses. In order to get an upper bound on the potential gains of this approach, we can run the artificial experiment (configuration 3) where a reference in the target language is used as a \"perfect\" source of information.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Lexical boosting via triangulation", "sec_num": "2.3" }, { "text": "Furthermore, we are interested in the performance of the simple pivot system alone (configuration 4), as it gives an indication of the quality of the data used for LM adaptation. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical boosting via triangulation", "sec_num": "2.3" }, { "text": "In this study, we used our own machine translation engine, which implements the n-gram-based approach to statistical machine translation (Mari\u00f1o et al., 2006 ). The translation model is implemented as a stochastic finite-state transducer trained using an n-gram language model of (source, target) pairs.", "cite_spans": [ { "start": 136, "end": 156, "text": "(Mari\u00f1o et al., 2006", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation engine", "sec_num": "3.1" }, { "text": "In addition to the bilingual n-gram model, our SMT system uses several additional models which are linearly combined following a discriminative modeling framework: two lexicalized reordering (Tillmann, 2004) models, a target-language model, two lexicon models, a 'weak' distance-based distortion model, a word bonus model and a translation unit bonus model. Coefficients in this linear combination are tuned over development data with the MERT optimization toolkit 4 , slightly modified to use our decoder's n-best lists.", "cite_spans": [ { "start": 185, "end": 201, "text": "(Tillmann, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Translation engine", "sec_num": "3.1" }, { "text": "For this study, we used 3-gram bilingual and 3-gram target language models built using modified Kneser-Ney smoothing (Chen and Goodman, 1996) ; model estimation was performed with the SRI language modeling toolkit. 5 Target language models were trained on the target side of the bitext corpora.", "cite_spans": [ { "start": 117, "end": 141, "text": "(Chen and Goodman, 1996)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Translation engine", "sec_num": "3.1" },
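In contrast to these static models, the auxiliary language model of Section 2.3 is re-estimated for every test sentence from the pivot hypotheses. The sketch below shows one possible way to do this in Python; it is a minimal illustration, not the authors' implementation (which relies on the SRI toolkit), and the function names, the whitespace tokenization and the uniform back-off distribution are simplifying assumptions.

```python
from collections import Counter
from math import log

def train_aux_lm(nbest_hypotheses, order=3):
    """Estimate a small n-gram model from the pivot n-best list, used
    here as a per-sentence adaptation corpus. All hypotheses are
    treated as equally likely, as stated in Section 3.3. Hypotheses
    are assumed to be whitespace-tokenized strings."""
    ngram_counts, context_counts, context_types = Counter(), Counter(), {}
    for hyp in nbest_hypotheses:
        tokens = ["<s>"] * (order - 1) + hyp.split() + ["</s>"]
        for i in range(order - 1, len(tokens)):
            context = tuple(tokens[i - order + 1:i])
            word = tokens[i]
            ngram_counts[context + (word,)] += 1
            context_counts[context] += 1
            context_types.setdefault(context, set()).add(word)
    return ngram_counts, context_counts, context_types

def aux_logprob(model, context, word, vocab_size=10000):
    """Witten-Bell discounting (the scheme used in Section 3.3),
    backing off to a uniform distribution for brevity."""
    ngram_counts, context_counts, context_types = model
    c = context_counts[context]
    if c == 0:
        return log(1.0 / vocab_size)     # unseen context: back off
    lam = c / (c + len(context_types[context]))  # mass kept for seen words
    p_ml = ngram_counts[context + (word,)] / c
    return log(lam * p_ml + (1.0 - lam) / vocab_size)
```

The resulting log-probability would then simply enter the decoder's linear combination as one extra feature, with its coefficient tuned by MERT like the other models.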
{ "text": "After preprocessing the corpora with standard tokenization tools, word-to-word alignments are performed in both directions, source-to-target and target-to-source. In our system implementation, the GIZA++ toolkit 6 is used to compute the word alignments. Then, the grow-diag-final-and heuristic is used to obtain the final alignments from which translation units are extracted. Convergent studies have shown that systems built according to these principles typically achieve a performance comparable to that of the widely used MOSES phrase-based system for the language pairs under study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation engine", "sec_num": "3.1" }, { "text": "We have used the Europarl corpus 7 for our main and auxiliary languages. The eleven languages are: Danish (da), German (de), English (en), Spanish (es), Finnish (fi), French (fr), Greek (el), Italian (it), Dutch (nl), Portuguese (pt) and Swedish (sv).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "We focussed on three translation tasks: one for which translation accuracy, as measured by automatic metrics, is rather high (fr \u2192 en), and two for which translation accuracy is lower (de \u2192 en) and (fr \u2192 de). This will allow us to check whether the improvements provided by our method carry over even in situations where the baseline is strong; conversely, it will allow us to assess whether the proposed techniques are applicable when the baseline is average or poor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "In order to measure the contribution of each of the auxiliary languages, we used a subset of the training corpus that is common to all language pairs, hereinafter referred to as the intersection data condition. We used the English side of all training language pairs to collect the same sentences in all languages, summing up to 320,304 sentence pairs. Some statistics on the data used in this study are reported in Table 1 . Finally, in order to assess the impact of the training data size on the results obtained, we also considered a much more challenging condition for the fr \u2192 de pair, where we used the entire Europarl data (V5) made available for the fifth Workshop on Statistical Machine Translation 8 for training, and tested our system on out-of-domain news data. The training corpus in this condition contains 43.6M French words and 37.2M German words.", "cite_spans": [], "ref_spans": [ { "start": 416, "end": 423, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "Development and test data for the first condition (intersection) were obtained by leaving out respectively 500 and 1000 sentences from the common subset (the same sentences for all languages), while the first 500 sentences of news-test2008 and the entire newstest2009 official test sets were used for the full data condition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" },
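Before turning to the results, the following minimal sketch summarizes how the triangulated hypotheses used throughout Section 3.3 could be produced. The two decoder wrappers src2aux_nbest and aux2trg_nbest are hypothetical stand-ins for the n-gram-based decoder described above, not a real API.

```python
def triangulate(src_sentence, src2aux_nbest, aux2trg_nbest, n=10, m=10):
    """Pivot src -> aux -> trg: translate the source sentence into the
    auxiliary language (n-best), re-translate each auxiliary hypothesis
    into the target language (m-best), and return the n x m target
    hypotheses that serve as the adaptation corpus for the auxiliary
    language model. (n, m) is tuned on development data in Section 3.3."""
    target_hypotheses = []
    for aux_hyp in src2aux_nbest(src_sentence, n):
        target_hypotheses.extend(aux2trg_nbest(aux_hyp, m))
    return target_hypotheses
```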
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "In this section, we report on the experiments carried out to assess the benefits of introducing an auxiliary language model to the linear combination of models implemented in our SMT system. Table 2 reports translation accuracy (BLEU) results for the main translation tasks considered in this work (f r \u2192 de), (f r \u2192 en) and (de \u2192 en), as well as for multiple intermediate tasks needed for pivoting via auxiliary systems.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "For each triplet of languages (src, aux, trg), columns 4 th to 6 th show BLEU scores for systems performing (src \u2192 aux), (aux \u2192 trg) and pivot translations using aux as the bridge language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "The last two columns display BLEU scores for the main translation tasks (f r \u2192 de), (f r \u2192 en) and (de \u2192 en). Column src-trg refers to the baseline (direct) systems, for which no additional lan- guage model is used; column +auxLM refers to the same system augmented with the additional language model. Additional language models are built from hypotheses obtained by means of pivot translations, using aux as auxiliary language. The last score is shown in the form of the difference (improvement) with respect to the score of the baseline system. This table additionally displays the BLEU results obtained when building the additional language models directly from the English reference translations (see last row of each translation task). These numbers provide an upper-bound of the expected improvements. Note finally that numbers in boldface correspond to the best numbers in their column for a given language pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "As detailed above, the additional language models are built using trg hypotheses obtained by pivoting via an auxiliary language: (src \u2192 aux) + (aux \u2192 trg). Hence, column pivot shows the quality (measured in terms of BLEU) of the hypotheses used to estimate the additional model. Note that we did not limit the language model to be estimated from the 1-best pivot hypotheses. Instead, we uses n-best translation hypotheses of the (src \u2192 aux) system and m-best hypotheses of the (aux \u2192 trg) system. Hence, n \u00d7 m target hypotheses were used as training data to estimate the additional models. Column +auxLM shows BLEU scores over the test set after performing four system optimizations on the development set to select the best combination of values used for n and m among: (1, 1), (10, 1), (10, 1) and (10, 10). All hypotheses used to estimate a language model are considered equally likely. Language models are learnt using Witten-Bell discounting. Approximately \u00b11.0 point must be added to BLEU scores shown in the last 2 columns for 95% confidence levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "As expected, pivot translations yield lower quality scores than the corresponding direct translations hypotheses. 
However, pivot hypotheses may contain better lexical predictions, which the additional model helps transfer into the baseline system, yielding higher-quality translations, as shown in many cases by the results of the +auxLM systems. The case of using Finnish as an auxiliary language is particularly remarkable. Even though pivot hypotheses obtained through Finnish have the lowest scores 9 , they help improve the baseline performance when used as additional language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "As expected, the translation results of the pair with the highest baseline (fr \u2192 en) improved less on average than those of the pairs with lower baselines. As can also be seen, the contribution of each auxiliary language varies for each of the three translation tasks. For instance, Danish (da) provides a clear improvement to (de \u2192 en) translations, while no gain is observed for (fr \u2192 en). No clear pattern seems to emerge, though, and the correlation between the quality of the pivot translation and the boost provided by using these pivot hypotheses remains to be better analyzed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "In order to assess whether the improvements obtained carry over to larger data conditions, we trained our (fr \u2192 de), (fr \u2192 es) and (es \u2192 de) systems over the entire EPPS data. Results are reported in the bottom part of Table 2 . As can be seen, the (fr \u2192 de) system is still improved by using the additional language model. However, the absolute value of the gain under the full condition (+0.61) is lower than that of the intersection data condition (+0.96).", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 225, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "In some cases, automatic metrics such as BLEU cannot show significant differences that can be revealed by fine-grained, focussed human evaluation (e.g. (Vilar et al., 2006) ). Furthermore, computing some similarity between a system's hypotheses and gold standard references puts a strong focus on the target side of translation, and does not allow one to evaluate translation performance with respect to the source words that were actually translated. We therefore use the evaluation methodology described in (Max et al., 2010) for a complementary measure of translation performance that focuses on the contrastive ability of two systems to adequately translate source words. Source words from the test corpus were first aligned with target words in the reference, by automatically aligning the union of the training and test corpus using GIZA++. 10 The test corpus was analyzed by the TreeTagger 11 so as to identify content words, which have a more direct impact on translation adequacy. 10 The obtained alignments are thus strongly influenced by alignments from the training corpus.
 Note that these alignments could be manually corrected.", "cite_spans": [ { "start": 151, "end": 171, "text": "(Vilar et al., 2006)", "ref_id": "BIBREF22" }, { "start": 492, "end": 510, "text": "(Max et al., 2010)", "ref_id": "BIBREF10" }, { "start": 830, "end": 832, "text": "10", "ref_id": null }, { "start": 901, "end": 903, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Contrastive evaluation of lexical translation", "sec_num": "3.4" }, { "text": "11 http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger Table 3 : Contrastive lexical evaluation results per part-of-speech between the baseline French\u2192English system and our systems using various auxiliary languages. '-' (resp. '+') values indicate numbers of words that only the baseline system (resp. our system) correctly translated with respect to the reference translation.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Contrastive evaluation of lexical translation", "sec_num": "3.4" }, { "text": "When source words are aligned to several target words, each target word should be individually searched for in the candidate translation, and words from the reference can only be matched once. Table 3 shows contrastive results per part-of-speech between the baseline fr\u2192en system and systems using various auxiliary languages. Values in the '-' row indicate the number of words that only the baseline system translated as in the reference translation, and values in the '+' row the number of words that only our corresponding system translated as in the reference. The most striking result is the contribution of Greek, which, while giving no gain in terms of BLEU, improved the translation of 82 content words. This could be explained by a decrease in the quality of the translation of grammatical words, which is consistent with the lower BLEU 3-gram and 4-gram precisions. On the contrary, Italian brings little improvement for content words, save for nouns. The mostly negative results on the translation of pronouns were expected, because their translation depends on their antecedent in English and is not the object of specific modelling in the systems. The translation of nouns and adjectives benefits the most from auxiliary translations.", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 272, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Contrastive evaluation of lexical translation", "sec_num": "3.4" }, { "text": "Figure 3 illustrates this evaluation by means of two examples. It should be noted that a recurrent type of improvement was that of avoiding missing words, which is here a direct result of their being boosted in the auxiliary hypotheses.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Contrastive evaluation of lexical translation", "sec_num": "3.4" },
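The contrastive lexical evaluation just described can be summarized by the following simplified sketch. It assumes, as a hypothetical input format, that each source content word comes with its part-of-speech tag and the list of reference words it is aligned to; reference words may only be matched once, which is enforced with a token budget per hypothesis.

```python
from collections import Counter

def contrastive_counts(aligned_content_words, base_hyp, aux_hyp):
    """Count, per part-of-speech, the source content words that only one
    of the two systems translated as in the reference. base_hyp and
    aux_hyp are token lists; aligned_content_words is a list of
    (pos_tag, [aligned reference words]) pairs, e.g.
    [("NOUN", ["airlines"]), ("ADJ", ["different"])]."""
    only_base, only_aux = Counter(), Counter()
    base_budget, aux_budget = Counter(base_hyp), Counter(aux_hyp)

    def consume(budget, words):
        # every aligned reference word must still be available
        if all(budget[w] > 0 for w in words):
            for w in words:
                budget[w] -= 1
            return True
        return False

    for pos, ref_words in aligned_content_words:
        in_base = consume(base_budget, ref_words)
        in_aux = consume(aux_budget, ref_words)
        if in_base and not in_aux:
            only_base[pos] += 1   # the '-' row of Table 3
        elif in_aux and not in_base:
            only_aux[pos] += 1    # the '+' row of Table 3
    return only_base, only_aux
```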
{ "text": "We have presented a framework where auxiliary MT systems are used to provide useful information to a main SMT system. Our experiments on auxiliary language triangulation have demonstrated the validity of this framework in a difficult configuration and have shown that improvements in translation quality could be obtained even under large training data conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4" }, { "text": "The fact that low-quality sources such as pivot translations can provide useful complementary information calls for a better understanding of the phenomena at play. Looking at our results on the contribution of auxiliary languages, it seems very likely that, in addition to improving the quality of an auxiliary source, gains can also be achieved by identifying what a source is good for. For example, in the studied language configurations, the predictions made by auxiliary triangulation for the translation of pronouns in the source text do not give access to useful information. On the contrary, triangulation with Greek when translating from French to English seems to give useful information regarding the translation of adjectives, a result which was quite unexpected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4" }, { "text": "Also, it would be interesting to use richer predictions than short n-grams, such as syntactic dependencies, but this would require significant changes to the decoders used. Using dynamic models at the discourse level rather than only at the sentence level would also be a useful improvement. Besides the improvements just mentioned, our future work includes working on several configurations of the framework described in Section 2.2, in particular investigating this new type of system combination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4" }, { "text": "ref #357 this concession to the unions ignores the reality that all airlines have different safety procedures which even differ between aircrafts within each airline .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4" }, { "text": "this concession unions ignores the fact that all airlines have different safety procedures which are even within each of the companies in accordance with the types of equipment . w.r.t. src cette concession aux syndicats ignore la r\u00e9alit\u00e9 selon laquelle toutes les compagnies a\u00e9riennes ont des proc\u00e9dures de s\u00e9curit\u00e9 diff\u00e9rentes qui diff\u00e8rent m\u00eame au sein de chacune des compagnies en fonction des types d ' appareils .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "bas", "sec_num": null }, { "text": "+aux this concession to the trade unions ignores the reality according to which all the airlines have different safety procedures which differ even within each of the companies in accordance with the types of equipment .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "bas", "sec_num": null }, { "text": "w.r.t. src cette concession aux syndicats ignore la r\u00e9alit\u00e9 selon laquelle toutes les compagnies a\u00e9riennes ont des proc\u00e9dures de s\u00e9curit\u00e9 diff\u00e9rentes qui diff\u00e8rent m\u00eame au sein de chacune des compagnies en fonction des types d ' appareils . Figure 3 : Example of automatic translations from French to English for the baseline system and when using Spanish as the auxiliary language. 
Bold marking indicates source/target words which were correctly translated according to the reference translation.", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 249, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "bas", "sec_num": null }, { "text": "We performed initial experiments where the complementary information was exploited during n-best list reranking (Max et al., 2010), but, except for the multisource condition, the list of hypotheses contained too little useful variation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We plan to experiment next on using predictions at the document level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The proposal as it is limits the hypotheses produced by the system to those that are attainable given its training data. It is conceivable, however, to find ways of introducing new knowledge in this framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/moses 5 http://www.speech.sri.com/projects/srilm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.fjoch.com/GIZA++.html 7 http://www.statmt.org/europarl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/wmt10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Given the agglutinative nature of morphological processes in Finnish, reflected in a much lower number of words per sentence and a higher number of types (see Table 1), BLEU scores for this language do not compare directly with the ones obtained for other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been partially funded by OSEO under the Quaero program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain adaptation for statistical machine translation with monolingual resources", "authors": [ { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2009, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertoldi, Nicola and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of WMT, Athens, Greece.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Further meta-evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Shaw Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2008, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Callison-Burch, Chris, Cameron Shaw Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. 
In Proceedings of WMT, Columbus, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Joshua", "middle": [ "T" ], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Stanley F. and Joshua T. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of ACL, Santa Cruz, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improving statistical machine translation efficiency by triangulation", "authors": [ { "first": "Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Eisele", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Yu, Andreas Eisele, and Martin Kay. 2008. Improving statistical machine translation efficiency by triangulation. In Proceedings of LREC, Marrakech, Morocco.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Machine translation by triangulation: Making effective use of multi-parallel corpora", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohn, Trevor and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proceedings of ACL, Prague, Czech Republic.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Towards interactive text understanding", "authors": [ { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Aur\u00e9lien", "middle": [], "last": "Max", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dymetman, Marc, Aur\u00e9lien Max, and Kenji Yamada. 2003. Towards interactive text understanding. In Proceedings of ACL, short paper session, Sapporo, Japan.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Combination of machine translation systems via hypothesis selection from combined n-best lists", "authors": [ { "first": "Almut", "middle": [ "Silja" ], "last": "Hildebrand", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hildebrand, Almut Silja and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. 
In Proceedings of AMTA, Honolulu, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Minimum Bayes-risk decoding for statistical machine translation", "authors": [ { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "William", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2004, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, Shankar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of NAACL-HLT, Boston, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "N-gram based machine translation", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Mari\u00f1o", "suffix": "" }, { "first": "Rafael", "middle": [ "E" ], "last": "Banchs", "suffix": "" }, { "first": "Josep", "middle": [ "Maria" ], "last": "Crego", "suffix": "" }, { "first": "Adria", "middle": [], "last": "De Gispert", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lambert", "suffix": "" }, { "first": "J", "middle": [ "A R" ], "last": "Fonollosa", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Costa-juss\u00e0", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "4", "pages": "527--549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mari\u00f1o, Jos\u00e9, Rafael E. Banchs, Josep Maria Crego, Adria de Gispert, Patrick Lambert, J.A.R. Fonollosa, and Martha Costa-juss\u00e0. 2006. N-gram based machine translation. Computational Linguistics, 32(4):527-549.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "System combination for machine translation of spoken and written language", "authors": [ { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "Gregor", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "Rafael", "middle": [ "E" ], "last": "Banchs", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dechelotte", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Muntsin", "middle": [], "last": "Kolss", "suffix": "" }, { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Mari\u00f1o", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Paulik", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions on Audio, Speech and Language Processing", "volume": "16", "issue": "7", "pages": "1222--1237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matusov, Evgeny, Gregor Leusch, Rafael E. Banchs, Nicola Bertoldi, Daniel Dechelotte, Marcello Federico, Muntsin Kolss, Young-Suk Lee, Jose Mari\u00f1o, Matthias Paulik, Salim Roukos, Holger Schwenk, and Hermann Ney. 2008. System combination for machine translation of spoken and written language. 
IEEE Transactions on Audio, Speech and Language Processing, 16(7):1222-1237, September.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Contrastive Lexical Evaluation of Machine Translation", "authors": [ { "first": "Aur\u00e9lien", "middle": [], "last": "Max", "suffix": "" }, { "first": "Josep", "middle": [ "M" ], "last": "Crego", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Max, Aur\u00e9lien, Josep M. Crego, and Fran\u00e7ois Yvon. 2010. Contrastive Lexical Evaluation of Machine Translation. In Proceedings of LREC, Valletta, Malta.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multi-engine machine translation with voted language model", "authors": [ { "first": "Tadashi", "middle": [], "last": "Nomoto", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nomoto, Tadashi. 2004. Multi-engine machine translation with voted language model. In Proceedings of ACL, Barcelona, Catalunya, Spain.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Statistical multi-source translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2001, "venue": "Proceedings of MT Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, Franz Josef and Hermann Ney. 2001. Statistical multi-source translation. In Proceedings of MT Summit, Santiago de Compostela, Spain.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Document driven machine translation enhanced automatic speech recognition", "authors": [ { "first": "Matthias", "middle": [], "last": "Paulik", "suffix": "" }, { "first": "Christian", "middle": [], "last": "F\u00fcgen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Schaaf", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Schultz", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "St\u00fcker", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2005, "venue": "Proceedings of InterSpeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paulik, Matthias, Christian F\u00fcgen, Thomas Schaaf, Tanja Schultz, Sebastian St\u00fcker, and Alex Waibel. 2005. Document driven machine translation enhanced automatic speech recognition. In Proceedings of InterSpeech, Lisbon, Portugal.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cheating with imperfect transcripts", "authors": [ { "first": "Paul", "middle": [], "last": "Placeway", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Placeway, Paul and John Lafferty. 1996. Cheating with imperfect transcripts. 
In Proceedings of ICSLP, Philadelphia, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combining outputs from multiple machine translation systems", "authors": [ { "first": "Antti-Veikko", "middle": [], "last": "Rosti", "suffix": "" }, { "first": "Necip", "middle": [], "last": "Fazil Ayan", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Spyros", "middle": [], "last": "Matsoukas", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosti, Antti-Veikko, Necip Fazil Ayan, Bin Xiang, Spyros Matsoukas, Richard Schwartz, and Bonnie J. Dorr. 2007. Combining outputs from multiple machine translation systems. In Proceedings of NAACL-HLT, Rochester, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Word lattices for multi-source translation", "authors": [ { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schroeder, Josh, Trevor Cohn, and Philipp Koehn. 2009. Word lattices for multi-source translation. In Proceedings of EACL, Athens, Greece.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multi-source translation methods", "authors": [ { "first": "Lane", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2008, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schwartz, Lane. 2008. Multi-source translation methods. In Proceedings of AMTA, Honolulu, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Phrase-based machine translation in a computer-assisted translation environment", "authors": [ { "first": "Michel", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Machine Translation Summit XII", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simard, Michel and Pierre Isabelle. 2009. Phrase-based machine translation in a computer-assisted translation environment. In Proceedings of Machine Translation Summit XII, Ottawa, Canada.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A unigram orientation model for statistical machine translation", "authors": [ { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 2004, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tillmann, Christoph. 2004. A unigram orientation model for statistical machine translation. 
In Proceedings of NAACL-HLT, Boston, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Lattice Minimum Bayes-Risk decoding for statistical machine translation", "authors": [ { "first": "Roy", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tromble, Roy, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice Minimum Bayes-Risk decoding for statistical machine translation. In Proceedings of EMNLP, Honolulu, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A comparison of pivot methods for phrase-based statistical machine translation", "authors": [ { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utiyama, Masao and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Proceedings of NAACL-HLT, Rochester, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Error Analysis of Statistical Machine Translation Output", "authors": [ { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Fernando D'haro", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vilar, David, Jia Xu, Luis Fernando d'Haro, and Hermann Ney. 2006. Error Analysis of Statistical Machine Translation Output. In Proceedings of LREC, Genoa, Italy.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Pivot language approach for phrase-based statistical machine translation", "authors": [ { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Hua and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. In Proceedings of ACL, Prague, Czech Republic.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Lexical boosting framework with various configurations for auxiliary predictions", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Architecture of a German\u2192English system for lexical boosting via triangulation through Spanish", "type_str": "figure", "num": null }, "TABREF1": { "content": "
[Table content lost in PDF extraction: the flattened cells mix per-language figures (el, es, fi, it, sv) with '-'/'+' contrastive word counts and BLEU-related values, and the original row/column structure cannot be reliably reconstructed.]