{ "paper_id": "P01-1050", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:06.065546Z" }, "title": "Towards a Unified Approach to Memory-and Statistical-Based Machine Translation", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", "postCode": "90292", "region": "CA" } }, "email": "marcu@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a set of algorithms that enable us to translate natural language sentences by exploiting both a translation memory and a statistical-based translation model. Our results show that an automatically derived translation memory can be used within a statistical framework to often find translations of higher probability than those found using solely a statistical model. The translations produced using both the translation memory and the statistical model are significantly better than translations produced by two commercial systems: our hybrid system translated perfectly 58% of the 505 sentences in a test collection, while the commercial systems translated perfectly only 40-42% of them.", "pdf_parse": { "paper_id": "P01-1050", "_pdf_hash": "", "abstract": [ { "text": "We present a set of algorithms that enable us to translate natural language sentences by exploiting both a translation memory and a statistical-based translation model. Our results show that an automatically derived translation memory can be used within a statistical framework to often find translations of higher probability than those found using solely a statistical model. The translations produced using both the translation memory and the statistical model are significantly better than translations produced by two commercial systems: our hybrid system translated perfectly 58% of the 505 sentences in a test collection, while the commercial systems translated perfectly only 40-42% of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the last decade, much progress has been made in the fields of example-based (EBMT) and statistical machine translation (SMT). EBMT systems work by modifying existing, human produced translation instances, which are stored in a translation memory (TMEM). Many methods have been proposed for storing translation pairs in a TMEM, finding translation examples that are relevant for translating unseen sentences, and modifying and integrating translation fragments to produce correct outputs. Sato (1992) , for example, stores complete parse trees in the TMEM and selects and generates new translations by performing similarity matchings on these trees. Veale and Way (1997) store complete sentences; new translations are generated by modifying the TMEM translation that is most similar to the input sentence. 
Others store phrases; new translations are produced by optimally partitioning the input into phrases that match examples from the TMEM (Maruyana and Watanabe, 1992) , or by finding all partial matches and then choosing the best possible translation using a multi-engine translation system (Brown, 1999) .", "cite_spans": [ { "start": 493, "end": 504, "text": "Sato (1992)", "ref_id": "BIBREF9" }, { "start": 654, "end": 674, "text": "Veale and Way (1997)", "ref_id": "BIBREF11" }, { "start": 945, "end": 974, "text": "(Maruyana and Watanabe, 1992)", "ref_id": "BIBREF6" }, { "start": 1099, "end": 1112, "text": "(Brown, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With a few exceptions (Wu and Wong, 1998) , most SMT systems are couched in the noisy channel framework (see Figure 1) . In this framework, the source language, let's say English, is assumed to be generated by a noisy probabilistic source. 1 Most of the current statistical MT systems treat this source as a sequence of words (Brown et al., 1993) . (Alternative approaches exist, in which the source is taken to be, for example, a sequence of aligned templates/phrases (Wang, 1998; or a syntactic tree .) In the noisy-channel framework, a monolingual corpus is used to derive a statistical language model that assigns a probability to a sequence of words or phrases, thus enabling one to distinguish between sequences of words that are grammatically correct and sequences that are not. A sentence-aligned parallel corpus is then used in order to build a probabilistic translation model that explains how the source can be turned into the target and that assigns a probability to every way in which a source e can be mapped into a target f. Once the parameters of the language and translation models are estimated using traditional maximum likelihood and EM techniques (Dempster et al., 1977) , one can take as input any string in the target language f, and find the source e of highest probability that could have generated the target, a process called decoding (see Figure 1 ). It is clear that EBMT and SMT systems have different strengths and weaknesses. If a sentence to be translated or a very similar one can be found in the TMEM, an EBMT system has a good chance of producing a good translation. However, if the sentence to be translated has no close matches in the TMEM, then an EBMT system is less likely to succeed. In contrast, an SMT system may be able to produce perfect translations even when the sentence given as input does not resemble any sentence from the training corpus. However, such a system may be unable to generate translations that use idioms and phrases that reflect long-distance dependencies and contexts, which are usually not captured by current translation models.", "cite_spans": [ { "start": 22, "end": 41, "text": "(Wu and Wong, 1998)", "ref_id": "BIBREF14" }, { "start": 240, "end": 241, "text": "1", "ref_id": null }, { "start": 326, "end": 346, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" }, { "start": 469, "end": 481, "text": "(Wang, 1998;", "ref_id": "BIBREF13" }, { "start": 1168, "end": 1191, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 109, "end": 118, "text": "Figure 1)", "ref_id": "FIGREF0" }, { "start": 1367, "end": 1375, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper advances the state-of-the-art in two respects. 
First, we show how one can use an existing statistical translation model (Brown et al., 1993) in order to automatically derive a statistical TMEM. Second, we adapt a decoding algorithm so that it can exploit information specific both to the statistical TMEM and the translation model. Our experiments show that the automatically derived translation memory can be used within the statistical framework to often find translations of higher probability than those found using solely the statistical model. The translations produced using both the translation memory and the statistical model are significantly better than translations produced by two commercial systems.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the work described in this paper we used a modified version of the statistical machine translation tool developed in the context of the 1999 Johns Hopkins' Summer Workshop (Al-Onaizan et al., 1999) , which implements IBM translation model 4 (Brown et al., 1993) .", "cite_spans": [ { "start": 176, "end": 201, "text": "(Al-Onaizan et al., 1999)", "ref_id": "BIBREF0" }, { "start": 245, "end": 265, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "The IBM Model 4", "sec_num": "2" }, { "text": "IBM model 4 revolves around the notion of word alignment over a pair of sentences (see Figure 2). The word alignment is a graphical representation of a hypothetical stochastic process by which a source string e is converted into a target string f. The probability of a given alignment a and target sentence f given a source sentence e is given by", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 93, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "The IBM Model 4", "sec_num": "2" }, { "text": "P(a, f | e) = \\prod_{i=1}^{l} n(\\phi_i | e_i) \\times \\prod_{i=1}^{l} \\prod_{k=1}^{\\phi_i} t(\\tau_{ik} | e_i) \\times \\prod_{i=1, \\phi_i > 0}^{l} d_1(\\pi_{i1} - c_{\\rho_i} | class(e_{\\rho_i}), class(\\tau_{i1})) \\times \\prod_{i=1}^{l} \\prod_{k=2}^{\\phi_i} d_{>1}(\\pi_{ik} - \\pi_{i(k-1)} | class(\\tau_{ik})) \\times \\binom{m - \\phi_0}{\\phi_0} p_0^{m - 2\\phi_0} p_1^{\\phi_0} \\times \\prod_{k=1}^{\\phi_0} t(\\tau_{0k} | NULL)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The IBM Model 4", "sec_num": "2" }, { "text": "where the factors delineated by \\times symbols correspond to hypothetical steps in the following generative process:
- Every English word e_i is first assigned a fertility \\phi_i, which corresponds to the number of French words into which e_i is going to be translated (the n factors).
- Every English word e_i then generates the \\phi_i French words \\tau_{i1}, \\ldots, \\tau_{i\\phi_i} into which e_i is translated (the t factors). For example, the English word \"no\" in Figure 2 is a word of fertility 2 that is translated into \"aucun\" and \"ne\".
- The rest of the factors denote distortion probabilities (d), which capture the probability that words change their position when translated from one language into another; the probability of some French words being generated from an invisible English NULL element (p_1), etc. 
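For concreteness, the following sketch (hypothetical Python, not the authors' implementation) shows how such an alignment probability can be assembled from parameter tables; word classes and several Model 4 details are deliberately simplified, so this illustrates the factorization rather than reimplementing the model faithfully.

from math import comb, ceil

def model4_score(e, f, a, n_table, t_table, d1, d_gt1, p1):
    # e, f: lists of words; a[j] in 0..len(e) maps French position j (0-based)
    # to the 1-based English position that generated it (0 = the NULL word).
    # Parameter tables are plain dicts; unseen events get a small floor value.
    l, m = len(e), len(f)
    floor = 1e-12
    phi = [a.count(i) for i in range(l + 1)]            # fertilities; phi[0] belongs to NULL
    prob = 1.0
    for i in range(1, l + 1):                           # fertility factors n(phi_i | e_i)
        prob *= n_table.get((phi[i], e[i - 1]), floor)
    for j in range(m):                                  # translation factors t(f_j | e_{a_j})
        src = 'NULL' if a[j] == 0 else e[a[j] - 1]
        prob *= t_table.get((f[j], src), floor)
    prev_center = 0                                     # distortion factors (classes ignored)
    for i in range(1, l + 1):
        positions = [j for j in range(m) if a[j] == i]
        if not positions:
            continue
        prob *= d1.get(positions[0] - prev_center, floor)          # head word of the cept
        for k in range(1, len(positions)):                          # remaining words of the cept
            prob *= d_gt1.get(positions[k] - positions[k - 1], floor)
        prev_center = ceil(sum(positions) / len(positions))
    p0 = 1.0 - p1                                       # NULL-insertion factors
    prob *= comb(m - phi[0], phi[0]) * p0 ** (m - 2 * phi[0]) * p1 ** phi[0]
    return prob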
See (Brown et al., 1993) or (Germann et al., 2001 ) for a detailed discussion of this translation model and a description of its parameters.", "cite_spans": [ { "start": 627, "end": 647, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" }, { "start": 651, "end": 672, "text": "(Germann et al., 2001", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 268, "end": 276, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The IBM Model 4", "sec_num": "2" }, { "text": "Companies that specialize in producing high-quality human translations of documentation and news often rely on translation memory tools to increase their productivity (Sprung, 2000) . Building a high-quality TMEM is an expensive process that requires many person-years of work. Since we are not in the fortunate position of having access to an existing TMEM, we decided to build one automatically.", "cite_spans": [ { "start": 166, "end": 180, "text": "(Sprung, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "We trained IBM translation model 4 on 500,000 English-French sentence pairs from the Hansard corpus. We then used the Viterbi alignment of each sentence, i.e., the alignment of highest probability, to extract tuples of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "\\langle e_i, e_{i+1}, \\ldots, e_{i+k} ; f_j, f_{j+1}, \\ldots, f_{j+l} ; a_j, a_{j+1}, \\ldots, a_{j+l} \\rangle", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": ", where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "e_i, e_{i+1}, \\ldots, e_{i+k} represents a contiguous English phrase, f_j, f_{j+1}, \\ldots, f_{j+l}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "represents a contiguous French phrase, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "a_j, a_{j+1}, \\ldots, a_{j+l}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "represents the Viterbi alignment between the two phrases. We selected only \"contiguous\" alignments, i.e., alignments in which the words in the English phrase generated only words in the French phrase and each word in the French phrase was generated either by the NULL word or a word from the English phrase. We extracted only tuples in which the English and French phrases contained at least two words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "For example, in the Viterbi alignment of the two sentences in Figure 2 , which was produced automatically, \"there\" and \".\" are words of fertility 0, NULL generates the French lexeme \".\", \"is\" generates \"est\", \"no\" generates \"aucun\" and \"ne\", and so on. 
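The 'contiguous' selection criterion just described can be made concrete with a short sketch (hypothetical Python, not the authors' code; the handling of NULL-generated words at phrase boundaries is simplified).

def extract_tuples(e, f, a):
    # e, f: lists of English and French words; a[j] gives the 1-based English
    # position that generated French word j, or 0 for the invisible NULL word.
    pairs = []
    m = len(f)
    for i1 in range(1, len(e) + 1):
        for i2 in range(i1, len(e) + 1):
            # French positions generated by English words inside the span [i1, i2]
            covered = [j for j in range(m) if i1 <= a[j] <= i2]
            if not covered:
                continue
            j1, j2 = min(covered), max(covered)
            # every French word inside [j1, j2] must come from NULL or from the span
            if any(a[j] != 0 and not (i1 <= a[j] <= i2) for j in range(j1, j2 + 1)):
                continue
            if i2 - i1 + 1 < 2 or j2 - j1 + 1 < 2:
                continue                 # keep only phrases of at least two words
            pairs.append((e[i1 - 1:i2], f[j1:j2 + 1], [a[j] for j in range(j1, j2 + 1)]))
    return pairs

Under this test, a span such as 'no one union' paired with 'aucun syndicat particulier ne' is accepted, whereas 'no one' is not, because 'syndicat' inside the corresponding French span is generated by 'union', which lies outside the English span.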
From this alignment we extracted the six tuples shown in Table 1 , because they were the only ones that satisfied all conditions mentioned above. For example, the pair \\langle no one ; aucun syndicat particulier ne \\rangle", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 290, "end": 297, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "does not occur in the translation memory because the French word \"syndicat\" is generated by the word \"union\", which does not occur in the English phrase \"no one\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "By extracting all tuples of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "\\langle e ; f ; a \\rangle", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "from the training corpus, we ended up with many duplicates and with French phrases that were paired with multiple English translations. We chose for each French phrase only one possible English translation equivalent. We tried out two distinct methods for choosing a translation equivalent, thus constructing two different probabilistic TMEMs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "The Frequency-based Translation MEMory (FTMEM) was created by associating with each French phrase the English equivalent that occurred most often in the collection of phrases that we extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a statistical translation memory", "sec_num": "3" }, { "text": "The Probability-based Translation MEMory (PTMEM) was created by associating with each French phrase the English equivalent that corresponded to the alignment of highest probability. (A schematic comparison of the two strategies is sketched below.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "e", "sec_num": null }, { "text": "In contrast to other TMEMs, our TMEMs explicitly encode not only the mutual translation pairs but also their corresponding word-level alignments, which are derived according to a certain translation model (in our case, IBM model 4). The mutual translations can be anywhere from two words long to complete sentences. Both methods yielded translation memories that contained around 11.8 million word-aligned translation pairs. Due to efficiency considerations and memory limitations -the software we wrote loads a complete TMEM into memory -we used in our experiments only a fraction of the TMEMs, those that contained phrases at most 10 words long. This yielded a working FTMEM of 4.1 million and a PTMEM of 5.7 million phrase translation pairs aligned at the word level using IBM statistical model 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "e", "sec_num": null }, { "text": "To evaluate the quality of both TMEMs we built, we randomly extracted 200 phrase pairs from each TMEM. 
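The two selection strategies might be implemented roughly as follows (an illustrative sketch, not the authors' code; the per-tuple alignment probability is an assumed field).

from collections import Counter, defaultdict

def build_tmems(tuples):
    # tuples: list of (english_phrase, french_phrase, alignment, alignment_prob)
    by_french = defaultdict(list)
    for eng, fre, ali, prob in tuples:
        by_french[tuple(fre)].append((tuple(eng), ali, prob))
    ftmem, ptmem = {}, {}
    for fre, candidates in by_french.items():
        counts = Counter(eng for eng, _, _ in candidates)
        ftmem[fre] = counts.most_common(1)[0][0]             # most frequent English equivalent
        ptmem[fre] = max(candidates, key=lambda c: c[2])[0]  # equivalent of the most probable alignment
    return ftmem, ptmem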
These phrases were judged by a bilingual speaker as e perfect translations if she could imagine contexts in which the aligned phrases could be mutual translations of each other; e almost perfect translations if the aligned phrases were mutual translations of each other and one phrase contained one single word with no equivalent in the other language 2 ; e incorrect translations if the judge could not imagine any contexts in which the aligned phrases could be mutual translations of each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "e", "sec_num": null }, { "text": "The results of the evaluation are shown in Table 2. A visual inspection of the phrases in our TMEMs and the judgments made by the evaluator suggest that many of the translations labeled as incorrect make sense when assessed in a larger context. For example, \"autres r\u00e9gions de le pays que\" and \"other parts of Canada than\" were judged as incorrect. However, when considered in a context in which it is clear that \"Canada\" and \"pays\" corefer, it would be reasonable to assume that the translation is correct. Table 3 shows a few examples of phrases from our FTMEM and their corresponding correctness judgments.", "cite_spans": [], "ref_spans": [ { "start": 508, "end": 515, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "e", "sec_num": null }, { "text": "Although we found our evaluation to be extremely conservative, we decided nevertheless to stick to it as it adequately reflects constraints specific to high-standard translation environments in which TMEMs are built manually and constantly checked for quality by specialized teams (Sprung, 2000) .", "cite_spans": [ { "start": 281, "end": 295, "text": "(Sprung, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "e", "sec_num": null }, { "text": "The results in Table 2 show that about 70% of the entries in our translation memory are correct or almost correct (very easy to fix). It is, though, an empirical question to what extend such TMEMs can be used to improve the performance of current translation systems. To determine this, we modified an existing decoding algorithm so that it can exploit information specific both to a statistical translation model and a statistical TMEM. The decoding algorithm that we use is a greedy one -see (Germann et al., 2001 ) for details. The decoder guesses first an English translation for the French sentence given as input and then attempts to improve it by exploring greedily alternative translations from the immediate translation space. We modified the greedy decoder described by Germann et al. (2001) so that it attempts to find good translation starting from two distinct points in the space of possible translations: one point corresponds to a word-for-word \"gloss\" of the French input; the other point corresponds to a translation that resembles most closely translations stored in the TMEM.", "cite_spans": [ { "start": 494, "end": 515, "text": "(Germann et al., 2001", "ref_id": "BIBREF4" }, { "start": 780, "end": 801, "text": "Germann et al. (2001)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Statistical decoding using both a statistical TMEM and a statistical translation model", "sec_num": "4" }, { "text": "As discussed by Germann et al. 
(2001) , the word-for-word gloss is constructed by aligning each French word f v with its most likely English translation e fk (e fk m l argmaxn t(e f v )). For example, in translating the French sentence \"Bien entendu , il parle de une belle victoire .\", the greedy decoder initially assumes that a good translation of it is \"Well heard , it talking a beautiful victory\" because the best translation of \"bien\" is \"well\", the best translation of \"entendu\" is \"heard\", and so on. A word-for-word gloss results (at best) in English words written in French word order.", "cite_spans": [ { "start": 16, "end": 37, "text": "Germann et al. (2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical decoding using both a statistical TMEM and a statistical translation model", "sec_num": "4" }, { "text": "The translation that resembles most closely translations stored in the TMEM is constructed by deriving a \"cover\" for the input sentence using phrases from the TMEM. The derivation attempts to cover with translation pairs from the TMEM as much of the input sentence as possible, using the longest phrases in the TMEM. The words in the input that are not part of any phrase extracted from the TMEM are glossed. For example, this approach may start the translation process from the phrase \"well , he is talking a beautiful victory\" if the TMEM contains the pairs If the input sentence is found \"as is\" in the translation memory, its translation is simply returned and there is no further processing. Otherwise, once an initial alignment is created, the greedy decoder tries to improve it, i.e., it tries to find an alignment (and implicitly a translation) of higher probability by modifying locally the initial alignment. The decoder attempts to find alignments and translations of higher probability by employing a set of simple operations, such as changing the translation of one or two words in the alignment under consideration, inserting into or deleting from the alignment words of fertility zero, and swapping words or segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical decoding using both a statistical TMEM and a statistical translation model", "sec_num": "4" }, { "text": "In a stepwise fashion, starting from the initial gloss or initial cover, the greedy decoder iterates exhaustively over all alignments that are one such simple operation away from the alignment under consideration. At every step, the decoder chooses the alignment of highest probability, until the probability of the current alignment can no longer be improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical decoding using both a statistical TMEM and a statistical translation model", "sec_num": "4" }, { "text": "We extracted from the test corpus a collection of 505 French sentences, uniformly distributed across the lengths 6, 7, 8, 9, and 10. For each French sentence, we had access to the humangenerated English translation in the test corpus, and to translations generated by two commercial systems. 
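Schematically, the decoding procedure of the previous section can be summarized as follows (hypothetical Python; the gloss table, the TMEM lookup, the neighborhood generator and the scoring function are illustrative stand-ins for the actual implementation).

def seed_from_gloss(french, gloss_word):
    # word-for-word gloss: each French word replaced by its most likely English
    # translation, leaving the words in French order
    return [gloss_word(f) for f in french]

def seed_from_tmem(french, tmem, gloss_word, max_len=10):
    # cover the input with the longest matching TMEM phrases; gloss the rest
    out, j = [], 0
    while j < len(french):
        for k in range(min(max_len, len(french) - j), 1, -1):
            phrase = tuple(french[j:j + k])
            if phrase in tmem:
                out.extend(tmem[phrase])
                j += k
                break
        else:
            out.append(gloss_word(french[j]))
            j += 1
    return out

def greedy_decode(french, seed, neighbors, model_prob):
    # hill-climb: at every step move to the highest-probability candidate that is
    # one simple operation away, until no operation improves the probability
    current, best = seed, model_prob(seed, french)
    while True:
        cands = [(model_prob(c, french), c) for c in neighbors(current, french)]
        if not cands:
            return current, best
        top_score, top = max(cands, key=lambda x: x[0])
        if top_score <= best:
            return current, best
        current, best = top, top_score

def translate(french, tmem, gloss_word, neighbors, model_prob):
    if tuple(french) in tmem:                  # exact TMEM hit: return it directly
        return tmem[tuple(french)]
    seeds = [seed_from_gloss(french, gloss_word),
             seed_from_tmem(french, tmem, gloss_word)]
    return max((greedy_decode(french, s, neighbors, model_prob) for s in seeds),
               key=lambda r: r[1])[0]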
We produced translations using three versions of the greedy decoder: one used only the statistical translation model, one used the translation model and the FTMEM, and one used the translation model and the PTMEM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We initially assessed how often the translations obtained from TMEM seeds had higher probability than the translations obtained from simple glosses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Sent. length | Found in FTMEM | Higher prob. from FTMEM | Same result | Higher prob. from gloss
6 | 33 | 9 | 43 | 16
7 | 27 | 9 | 48 | 17
8 | 29 | 16 | 42 | 14
9 | 31 | 15 | 28 | 27
10 | 31 | 9 | 43 | 18
All (%) | 30% | 12% | 40% | 18%
Tables 4 and 5 show that the translation memories significantly help the decoder find translations of high probability. In about 30% of the cases, the translations are simply copied from a TMEM and in about 13% of the cases the translations obtained from a TMEM seed have higher probability than the best translations obtained from a simple gloss. In 40% of the cases both seeds (the TMEM and the gloss) yield the same translation. Only in about 15-18% of the cases are the translations obtained from the gloss better than the translations obtained from the TMEM seeds. It appears that both TMEMs help the decoder find translations of higher probability consistently, across all sentence lengths. In a second experiment, a bilingual judge scored the human translations extracted from the automatically aligned test corpus; the translations produced by a greedy decoder that uses both TMEM and gloss seeds; the translations produced by a greedy decoder that uses only the statistical model and the gloss seed; and translations produced by two commercial systems (A and B). If an English translation had the very same meaning as the French original, it was considered semantically correct. If the meaning was just a little different, the translation was considered semantically incorrect. For example, \"this is rather provision disturbing\" was judged as a semantically correct translation of \"voil\u00e0 une disposition plut\u00f4t inqui\u00e9tante\", but \"this disposal is rather disturbing\" was judged as incorrect.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 275, "text": "Tables 4 and 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Found", "sec_num": null }, { "text": "If a translation was perfect from a grammatical perspective, it was considered to be grammatical. Otherwise, it was considered incorrect. For example, \"this is rather provision disturbing\" was judged as ungrammatical, although one may very easily make sense of it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Found", "sec_num": null }, { "text": "We decided to use such harsh evaluation criteria because, in previous experiments, we repeatedly found that harsh criteria can be applied consistently. To ensure consistency during evaluation, the judge used a specialized interface: once the correctness of a translation produced by a system S was judged, the same judgment was automatically recorded with respect to the other systems as well. 
This way, it became impossible for a translation to be judged as correct when produced by one system and incorrect when produced by another system. Table 6 , which summarizes the results, displays the percent of perfect translations (both semantically and grammatically) produced by a variety of systems. Table 6 shows that translations produced using both TMEM and gloss seeds are much better than translations that do not use TMEMs. The translation systems that use both a TMEM and the statistical model significantly outperform the two commercial systems. The figures in Table 6 also reflect the harshness of our evaluation metric: only 82% of the human translations extracted from the test corpus were considered perfect translations. A few of the errors were genuine, and could be explained by failures of the sentence alignment program that was used to create the corpus (Melamed, 1999) . Most of the errors were judged as semantic, reflecting directly the harshness of our evaluation metric.", "cite_spans": [ { "start": 1270, "end": 1285, "text": "(Melamed, 1999)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 542, "end": 549, "text": "Table 6", "ref_id": null }, { "start": 699, "end": 706, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Found", "sec_num": null }, { "text": "The approach to translation described in this paper is quite general. It can be applied in conjunction with other statistical translation models.
Sentence length | Humans | Greedy with FTMEM | Greedy with PTMEM | Greedy without TMEM | Commercial system A | Commercial system B
6 | 92 | 72 | 70 | 52 | 55 | 59
7 | 73 | 58 | 52 | 37 | 42 | 43
8 | 80 | 53 | 52 | 30 | 38 | 29
9 | 84 | 53 | 53 | 37 | 40 | 35
10 | 85 | 57 | 60 | 36 | 40 | 37
All (%) | 82% | 58% | 57% | 38% | 42% | 40%
Table 6 : Percent of perfect translations produced by various translation systems and algorithms.", "cite_spans": [], "ref_spans": [ { "start": 432, "end": 439, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "And it can be applied in conjunction with existing translation memories. To do this, one would simply have to train the statistical model on the translation memory provided as input, determine the Viterbi alignments, and enhance the existing translation memory with word-level alignments as produced by the statistical translation model. We suspect that using manually produced TMEMs can only increase the performance as such TMEMs undergo periodic checks for quality assurance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The work that comes closest to using a statistical TMEM similar to the one we propose here is that of Vogel and Ney (2000) , who automatically derive from a parallel corpus a hierarchical TMEM. The hierarchical TMEM consists of a set of transducers that encode a simple grammar. The transducers are automatically constructed: they reflect common patterns of usage at levels of abstraction higher than the word level. Vogel and Ney (2000) do not evaluate their TMEM-based system, so it is difficult to empirically compare their approach with ours. From a theoretical perspective, it appears though that the two approaches are complementary: Vogel and Ney (2000) identify abstract patterns of usage and then use them during translation. 
This may address the data sparseness problem that is characteristic to any statistical modeling effort and produce better translation parameters.", "cite_spans": [ { "start": 102, "end": 122, "text": "Vogel and Ney (2000)", "ref_id": "BIBREF12" }, { "start": 422, "end": 442, "text": "Vogel and Ney (2000)", "ref_id": "BIBREF12" }, { "start": 645, "end": 665, "text": "Vogel and Ney (2000)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In contrast, our approach attempts to stir the statistical decoding process into directions that are difficult to reach when one relies only on the parameters of a particular translation model. For example, the two phrases \"il est mort\" and \"he kicked the bucket\" may appear only in one sentence in an arbitrary large corpus. The parameters learned from the entire corpus will very likely associate very low probability to the words \"kicked\" and \"bucket\" being translated into \"est\" and \"mort\". Because of this, a statistical-based MT system will have trouble producing a translation that uses the phrase \"kick the bucket\", no matter what decoding technique it employs. However, if the two phrases are stored in the TMEM, producing such a translation becomes feasible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "If optimal decoding algorithms capable of searching exhaustively the space of all possible translations existed, using TMEMs in the style presented in this paper would never improve the performance of a system. Our approach works because it biases the decoder to search in subspaces that are likely to yield translations of high probability, subspaces which otherwise may not be explored. The bias introduced by TMEMs is a practical alternative to finding optimal translations, which is NP-complete (Knight, 1999) .", "cite_spans": [ { "start": 499, "end": 513, "text": "(Knight, 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "It is clear that one of the main strengths of the TMEM is its ability to encode contextual, longdistance dependencies that are incongruous with the parameters learned by current context poor, reductionist channel models. Unfortunately, the criterion used by the decoder in order to choose between a translation produced starting from a gloss and one produced starting from a TMEM is biased in favor of the gloss-based translation. It is possible for the decoder to produce a perfect translation using phrases from the TMEM, and yet, to discard the perfect translation in favor of an incorrect translation of higher probability that was obtained from a gloss (or from the TMEM). It would be desirable to develop alternative ranking techniques that would permit one to prefer in some instances a TMEM-based translation, even though that translation is not the best according to the probabilistic channel model. The examples in Table 7 shows though that this is not trivial: it is not always the case that the translation of high- est probability is the perfect one. The first French sentence in Table 7 is correctly translated with or without help from the translation memory. The second sentence is correctly translated only when the system uses a TMEM seed; and fortunately, the translation of highest probability is the one obtained using the TMEM seed. The translation obtained from the TMEM seed is also correct for the third sentence. 
But unfortunately, in this case, the TMEM-based translation is not the most probable.", "cite_spans": [], "ref_spans": [ { "start": 925, "end": 932, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 1093, "end": 1100, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For the rest of this paper, we use the terms source and target languages according to the jargon specific to the noisy-channel framework. In this framework, the source language is the language into which the machine translation system translates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, the translation pair \"final , le secr\u00e9taire de\" and \"final act , the secretary of\" were labeled as almost perfect because the English word \"act\" has no French equivalent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments. This work was supported by DARPA-ITO grant N66001-00-1-9814.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical machine translation. Final Report, JHU Summer Workshop", "authors": [ { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Curin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jahr", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "Franz-Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "David", "middle": [], "last": "Purdy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. Final Report, JHU Summer Workshop.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. 
Computational Linguistics, 19(2):263-311.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adding linguistic knowledge to a lexical example-based translation system", "authors": [ { "first": "D", "middle": [], "last": "Ralph", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1999, "venue": "Proceedings of TMI'99", "volume": "", "issue": "", "pages": "22--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph D. Brown. 1999. Adding linguistic knowledge to a lexical example-based translation system. In Proceedings of TMI'99, pages 22-32, Chester, Eng- land.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Maximum likelihood from incomplete data via the em algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical So- ciety, 39(Ser B):1-38.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Fast decoding and optimal decoding for machine translation", "authors": [ { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Jahr", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL'01", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulrich Germann, Mike Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceedings of ACL'01, Toulouse, France.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Decoding complexity in wordreplacement translation models", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Linguistics, 25(4).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Tree cover search algorithm for example-based translation", "authors": [ { "first": "H", "middle": [], "last": "Maruyana", "suffix": "" }, { "first": "H", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 1992, "venue": "Proceedings of TMI'92", "volume": "", "issue": "", "pages": "173--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Maruyana and H. Watanabe. 1992. Tree cover search algorithm for example-based translation. In Proceedings of TMI'92, pages 173-184.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bitext maps and alignment via pattern recognition", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "107--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. 
Computational Linguistics, 25(1):107-130.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the EMNLP and VLC", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och, Christoph Tillmann, and Herman Ney. 1999. Improved alignment models for sta- tistical machine translation. In Proceedings of the EMNLP and VLC, pages 20-28, University of Maryland, Maryland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CTM: an example-based translation aid system using the character-based match retrieval method", "authors": [ { "first": "S", "middle": [], "last": "Sato", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Sato. 1992. CTM: an example-based transla- tion aid system using the character-based match re- trieval method. In Proceedings of the 14th Inter- national Conference on Computational Linguistics (COLING'92), Nantes, France.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Translating Into Success: Cutting-Edge Strategies For Going Multilingual In A Global Age", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Sprung, editor. 2000. Translating Into Suc- cess: Cutting-Edge Strategies For Going Multilin- gual In A Global Age. John Benjamins Publishers.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Gaijin: A template-based bootstrapping approach to examplebased machine translation", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 1997, "venue": "Proceedings of \"New Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Veale and Andy Way. 1997. Gaijin: A template-based bootstrapping approach to example- based machine translation. In Proceedings of \"New Methods in Natural Language Processing\", Sofia, Bulgaria.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Construction of a hierarchical translation memory", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of COLING'00", "volume": "", "issue": "", "pages": "1131--1135", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel and Herman Ney. 2000. Construction of a hierarchical translation memory. In Proceedings of COLING'00, pages 1131-1135, Saarbr\u00fccken, Ger- many.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Grammar Inference and Statistical Machine Translation", "authors": [ { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye-Yi Wang. 1998. Grammar Inference and Statis- tical Machine Translation. Ph.D. thesis, Carnegie Mellon University. 
Also available as CMU-LTI Technical Report 98-160.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Machine translation with a stochastic grammatical channel", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hongsing", "middle": [], "last": "Wong", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL'98", "volume": "", "issue": "", "pages": "1408--1414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu and Hongsing Wong. 1998. Machine trans- lation with a stochastic grammatical channel. In Proceedings of ACL'98, pages 1408-1414, Mon- treal, Canada.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A syntaxbased statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL'01", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. In Proceedings of ACL'01, Toulouse, France.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "The noisy channel model." }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Example of Viterbi alignment produced by IBM model 4. six tuples shown in" }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "with the French phrase \"belle victoire\"." }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "e" }, "TABREF0": { "type_str": "table", "content": "
English | French | Alignment
no one union | aucun syndicat particulier ne | no -> aucun, ne ; one -> particulier ; union -> syndicat
is no one union | aucun syndicat particulier ne est | is -> est ; no -> aucun, ne ; one -> particulier ; union -> syndicat
there is no one union | aucun syndicat particulier ne est | is -> est ; no -> aucun, ne ; one -> particulier ; union -> syndicat
is no one union involved | aucun syndicat particulier ne est en cause | is -> est ; no -> aucun, ne ; one -> particulier ; union -> syndicat ; involved -> en cause
there is no one union involved | aucun syndicat particulier ne est en cause | is -> est ; no -> aucun, ne ; one -> particulier ; union -> syndicat ; involved -> en cause
there is no one union involved . | aucun syndicat particulier ne est en cause . | is -> est ; no -> aucun, ne ; one -> particulier ; union -> syndicat ; involved -> en cause ; NULL -> .
TMEM | Perfect | Almost perfect | Incorrect | Unable to judge
FTMEM | 62.5% | 8.5% | 27.0% | 2.0%
PTMEM | 57.5% | 7.5% | 33.5% | 1.5%
", "num": null, "html": null, "text": "one union involved . aucun syndicat particulier ne est en cause . is Examples of automatically constructed statistical translation memory entries." }, "TABREF1": { "type_str": "table", "content": "", "num": null, "html": null, "text": "" }, "TABREF3": { "type_str": "table", "content": "
", "num": null, "html": null, "text": "Examples of TMEM entries with correctness judgments." }, "TABREF4": { "type_str": "table", "content": "
Sent. length | Found in FTMEM | Higher prob. from FTMEM | Same result | Higher prob. from gloss
6 | 33 | 9 | 43 | 16
7 | 27 | 10 | 50 | 14
8 | 30 | 16 | 41 | 14
9 | 31 | 15 | 36 | 19
10 | 31 | 15 | 31 | 13
All (%) | 31% | 13% | 41% | 15%
", "num": null, "html": null, "text": "The utility of the FTMEM." }, "TABREF5": { "type_str": "table", "content": "", "num": null, "html": null, "text": "The utility of the PTMEM." }, "TABREF6": { "type_str": "table", "content": "
Translations | Does this translation use TMEM phrases? | Is this translation correct? | Is this the translation of highest probability?
monsieur le pr\u00e9sident , je aimerais savoir . | | |
mr. speaker , i would like to know . | yes | yes | yes
mr. speaker , i would like to know . | no | yes | yes
je ne peux vous entendre , brian . | | |
i cannot hear you , brian . | yes | yes | yes
i can you listen , brian . | no | no | no
alors , je termine l\u00e0 -dessus . | | |
therefore , i will conclude my remarks . | yes | yes | no
therefore , i conclude -over . | no | no | yes
", "num": null, "html": null, "text": ", je aimerais savoir . mr. speaker , i would like to know . yes yes yes mr. speaker , i would like to know . no yes yes je ne peux vous entendre , brian . i cannot hear you , brian ." }, "TABREF7": { "type_str": "table", "content": "", "num": null, "html": null, "text": "Example of system outputs, obtained with or without TMEM help." } } } }