{ "paper_id": "C04-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:19:16.404501Z" }, "title": "Example-based Machine Translation Based on Syntactic Transfer with Statistical Models", "authors": [ { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City\" Kyoto", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "country": "Japan" } }, "email": "kenji.imamura@atr.jp" }, { "first": "Hideo", "middle": [], "last": "Okuma", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City\" Kyoto", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "country": "Japan" } }, "email": "hideo.okuma@atr.jp" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City\" Kyoto", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "country": "Japan" } }, "email": "taro.watanabe@atr.jp" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City\" Kyoto", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "country": "Japan" } }, "email": "eiichiro.sumita@atr.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents example-based machine translation (MT) based on syntactic transfer, which selects the best translation by using models of statistical machine translation. Example-based MT sometimes generates invalid translations because it selects similar examples to the input sentence based only on source language similarity. The method proposed in this paper selects the best translation by using a language model and a translation model in the same manner as statistical MT, and it can improve MT quality over that of 'pure' example-based MT. A feature of this method is that the statistical models are applied after word reordering is achieved by syntactic transfer. This implies that MT quality is maintained even when we only apply a lexicon model as the translation model. In addition, translation speed is improved by bottom-up generation, which utilizes the tree structure that is output from the syntactic transfer.", "pdf_parse": { "paper_id": "C04-1015", "_pdf_hash": "", "abstract": [ { "text": "This paper presents example-based machine translation (MT) based on syntactic transfer, which selects the best translation by using models of statistical machine translation. Example-based MT sometimes generates invalid translations because it selects similar examples to the input sentence based only on source language similarity. The method proposed in this paper selects the best translation by using a language model and a translation model in the same manner as statistical MT, and it can improve MT quality over that of 'pure' example-based MT. A feature of this method is that the statistical models are applied after word reordering is achieved by syntactic transfer. This implies that MT quality is maintained even when we only apply a lexicon model as the translation model. 
In addition, translation speed is improved by bottom-up generation, which utilizes the tree structure that is output from the syntactic transfer.", "pdf_parse": { "paper_id": "C04-1015", "_pdf_hash": "", "abstract": [ { "text": "This paper presents example-based machine translation (MT) based on syntactic transfer, which selects the best translation by using models of statistical machine translation. Example-based MT sometimes generates invalid translations because it selects examples similar to the input sentence based only on source-language similarity. The method proposed in this paper selects the best translation by using a language model and a translation model in the same manner as statistical MT, and it can improve MT quality over that of 'pure' example-based MT. A feature of this method is that the statistical models are applied after word reordering is achieved by syntactic transfer. This implies that MT quality is maintained even when we only apply a lexicon model as the translation model. In addition, translation speed is improved by bottom-up generation, which utilizes the tree structure that is output from the syntactic transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In response to the ongoing expansion of bilingual corpora, many machine translation (MT) methods have been proposed that automatically acquire their knowledge or models from the corpora. Recently, two major approaches to such machine translation have emerged: example-based machine translation and statistical machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Example-based MT (Nagao, 1984) regards a bilingual corpus as a database and retrieves examples that are similar to an input sentence. Then, a translation is generated by modifying the target part of the examples while referring to translation dictionaries. Most example-based MT systems employ phrases or sentences as the unit for examples, so they can translate while considering case relations or idiomatic expressions. However, when some examples conflict during retrieval, example-based MT selects the best example, scored by the similarity between the input and the source part of the example. This implies that example-based MT does not check whether the translation of the given input sentence is correct or not. On the other hand, statistical MT employing IBM models (Brown et al., 1993) translates an input sentence by the combination of word transfer and word re-ordering. Therefore, when it is applied to a language pair in which the word order is quite different (e.g., English and Japanese, Figure 1), it becomes difficult to find a globally optimal solution due to the enormous search space (Watanabe and Sumita, 2003).", "cite_spans": [ { "start": 17, "end": 30, "text": "(Nagao, 1984)", "ref_id": "BIBREF9" }, { "start": 470, "end": 497, "text": "(Watanabe and Sumita, 2003)", "ref_id": "BIBREF16" }, { "start": 804, "end": 824, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1033, "end": 1042, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical MT could generate high-quality translations if it succeeded in finding a globally optimal solution. Thus, the models employed by statistical MT are good indicators of machine translation quality. Exploiting this property, Akiba et al. (2002) selected the best translation among those output by multiple MT engines.", "cite_spans": [ { "start": 244, "end": 263, "text": "Akiba et al. (2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents an example-based MT method based on syntactic transfer, which selects the best translation by using models of statistical MT. This method is composed of two modules (Figure 2). One is an example-based syntactic transfer module. This module constructs tree structures of the target language by parsing and mapping the input sentence while referring to transfer rules. The other is a statistical generation module, which selects the best word sequence of the target language in the same manner as statistical MT. Therefore, this method is a sequential combination of example-based and statistical MT. 
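To make this control flow concrete, the following minimal sketch chains the two modules in sequence. The function names and the stubbed internals are illustrative assumptions for this sketch, not the authors' implementation; the stub simply returns candidates for the running example of Figure 4.

```python
def example_based_transfer(source_sentence):
    # Module 1 (Section 2): parse the input with the source grammar, map
    # source nodes to target nodes with transfer rules, and read candidate
    # target word sequences off the target tree.  Stubbed here with
    # hand-written candidates for the Figure 4 example.
    return [["the", "bus", "will", "leave", "at", "11", "o'clock"],
            ["a", "bus", "leave", "at", "11", "o'clock"]]

def statistical_generation(source_sentence, candidates):
    # Module 2 (Section 3): rank candidates E by P(E) * P(F|E), as in
    # statistical MT; stubbed here as "take the first candidate".
    return candidates[0]

def translate(source_sentence):
    candidates = example_based_transfer(source_sentence)
    return " ".join(statistical_generation(source_sentence, candidates))

print(translate("basu wa 11 ji ni de masu"))  # -> the bus will leave at 11 o'clock
```
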
The proposed method has the following advantages.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 208, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 From the viewpoint of example-based MT, the quality of machine translation improves because the best translation is selected not only by judging the similarity between the input sentence and the source part of the examples but also by scoring translation correctness in terms of word transfer and word connection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 From the viewpoint of statistical MT, an appropriate translation can be obtained even if we use simple models because a global search is applied after word re-ordering by syntactic transfer. In addition, the search space becomes smaller because the example-based transfer generates syntactically correct candidates for the most appropriate translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: Section 2 describes the example-based syntactic transfer, Section 3 describes the statistical generation, Section 4 evaluates an experimental system that uses this method, and Section 5 compares our approach with other hybrid methods of example-based and statistical MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The example-based syntactic transfer used in this paper is a revised version of the Hierarchical Phrase Alignment-based Translator (HPAT; see (Imamura, 2002)). This section gives an overview with an example of Japanese-to-English machine translation.", "cite_spans": [ { "start": 147, "end": 162, "text": "(Imamura, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Example-based Syntactic Transfer", "sec_num": "2" }, { "text": "Transfer rules are automatically acquired from bilingual corpora by using hierarchical phrase alignment (HPA; (Imamura, 2001)). HPA parses bilingual sentences and acquires corresponding syntactic nodes of the source and target sentences. The transfer rules are created from their node correspondences. Figure 3 shows an example of the transfer rules. Variables, such as X and Y in Figure 3, denote non-terminal symbols that correspond between the source and target grammars. The set of transfer rules is regarded as a synchronized context-free grammar.", "cite_spans": [ { "start": 110, "end": 125, "text": "(Imamura, 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 303, "end": 311, "text": "Figure 3", "ref_id": null }, { "start": 382, "end": 388, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Transfer Rules", "sec_num": "2.1" }, { "text": "The difference between this approach and a conventional synchronized context-free grammar is that source examples are added to each transfer rule. A source example is an instance (i.e., a headword) of the variables that appeared in the training corpora. 
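One way to picture such a rule is as a small record that pairs the two grammar sides with the source examples attached to them. The encoding below is a sketch of our own; the field names and the variable notation are assumptions, not the HPAT rule format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TransferRule:
    """One synchronized context-free rule plus its source examples."""
    lhs: str                         # shared left-hand-side category
    source_rhs: List[str]            # source-grammar right-hand side
    target_rhs: List[str]            # target-grammar right-hand side
    examples: List[Tuple[str, str]]  # headword instances of (X, Y) seen in training

# Rule 1 of Figure 3:  VP -> X:PP Y:VP   =>   VP -> Y:VP X:PP
rule1 = TransferRule(
    lhs="VP",
    source_rhs=["X:PP", "Y:VP"],
    target_rhs=["Y:VP", "X:PP"],
    examples=[("furaito (flight)", "yoyaku-suru (reserve)")],
)
```
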
For example, the source example of Rule 1 in Figure 3 is obtained from a phrase pair of the Japanese verb phrase \"furaito (flight) wo yoyaku-suru (reserve)\" and the English verb phrase \"make a reservation for the flight.\"", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Transfer Rules", "sec_num": "2.1" }, { "text": "When an input sentence is given, the target tree structure is constructed in the following three steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Transfer Process", "sec_num": "2.2" }, { "text": "1. The input sentence is parsed by using the source grammar of the transfer rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Transfer Process", "sec_num": "2.2" }, { "text": "2. The nodes in the source tree are mapped to the target nodes by using transfer rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Transfer Process", "sec_num": "2.2" }, { "text": "3. If non-terminal symbols remain in the leaves of the target tree, candidates of translated words are inserted by referring to the translation dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Transfer Process", "sec_num": "2.2" }, { "text": "An example of the syntactic transfer process is shown in Figure 4 for the input sentence \"basu wa 11 ji ni de masu (The bus will leave at 11 o'clock).\" There are two points worthy of notice in this figure. First, nodes in which the word order is inverted are generated after transfer (cf. the VP node represented by a bold frame); word re-ordering is achieved by syntactic transfer. Second, words that do not correspond between the source and target sentences (e.g., the determiner 'a' or 'the') are automatically inserted or eliminated by the target grammar (cf. the NP node represented by a bold frame). Namely, transfer rules work in a manner similar to the functions of distortion, fertility, and NULL in IBM models.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 65, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Syntactic Transfer Process", "sec_num": "2.2" }, { "text": "Figure 3: Example of Transfer Rules. No. / Source Grammar \u21d2 Target Grammar / Source Example: 1. VP \u2192 X PP Y VP \u21d2 VP \u2192 Y VP X PP ((furaito (flight), yoyaku-suru (reserve)) ..); 2. VP \u2192 X PP Y VP \u21d2 VP \u2192 Y VP X ADVP ((soko (there), yuku (go)) ..); 3. VP \u2192 X PP Y VP \u21d2 VP \u2192 Y BEVP X NP ((hashi (bridge), aru (be)) ..); 4. S \u2192 X NP wa Y VP masu \u21d2 S \u2192 X NP Y VP ((kare (he), enso-suru (play)) ..); 5. S \u2192 X NP wa Y VP masu \u21d2 S \u2192 X NP will Y VP ((basu (bus), tomaru (stop)) ..)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Rules", "sec_num": "2.1" }, { "text": "Example-based transfer utilizes the source examples for disambiguation of mapping and parsing. Specifically, the semantic distance (Sumita and Iida, 1991) is calculated between the source examples and the headwords of the input sentence, and the transfer rules that contain the nearest example are used to construct the target tree structure. 
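A toy sketch of this nearest-example selection follows. It reuses the TransferRule record sketched in Section 2.1 and anticipates the thesaurus-based distance defined in the next sentence; the miniature hierarchy, the helper names, and the summation over variables are all our own assumptions rather than the paper's implementation.

```python
# Toy is-a hierarchy standing in for the thesaurus; the paper uses the
# hierarchy of Ruigo-shin-jiten, not these invented categories.
PARENT = {
    "ie": "place", "soko": "place", "hashi": "structure",
    "place": "thing", "structure": "thing",
    "kaeru": "motion", "yuku": "motion", "aru": "state",
    "motion": "event", "state": "event",
}

def ancestors(word):
    chain = [word]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def distance(w1, w2):
    """Steps from w1 up to the most specific common abstraction of w1 and w2."""
    a1, a2 = ancestors(w1), ancestors(w2)
    common = [node for node in a1 if node in a2]
    return a1.index(common[0]) if common else len(a1) + len(a2)

def nearest_rules(rules, headwords):
    """Keep every rule whose closest source example ties for the minimum
    total distance to the input headwords (one headword per variable)."""
    def rule_distance(rule):
        return min(sum(distance(h, e.split()[0]) for h, e in zip(headwords, ex))
                   for ex in rule.examples)
    best = min(rule_distance(r) for r in rules)
    return [r for r in rules if rule_distance(r) == best]
```

With the Figure 3 examples and the input headwords (ie, kaeru), this sketch keeps only Rule 2, matching the selection described in the following paragraph.
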
The semantic distance between words is defined as the distance from the leaf node to the most specific common abstraction (MSCA) in a thesaurus (Ohno and Hamanishi, 1984).", "cite_spans": [ { "start": 131, "end": 154, "text": "(Sumita and Iida, 1991)", "ref_id": "BIBREF13" }, { "start": 487, "end": 513, "text": "(Ohno and Hamanishi, 1984)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Usage of Source Examples", "sec_num": "2.3" }, { "text": "For example, if the input phrase \"ie (home) ni kaeru (return)\" is given, Rules 1 to 3 in Figure 3 are used for the syntactic transfer, and three target nodes are generated without any disambiguation. However, when we compare the source examples with the headwords of the variables X (ie) and Y (kaeru), only Rule 2 is used for the transfer because the semantic distance of the example (soko (there), yuku (go)) is the nearest. In the current implementation, all rules that contain examples of the same distance are used.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Usage of Source Examples", "sec_num": "2.3" }, { "text": "Consequently, example-based transfer achieves translation while considering case relations or idiomatic expressions based on the semantic distance from the source examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Usage of Source Examples", "sec_num": "2.3" }, { "text": "Statistical generation searches for the most appropriate sequence of target words from the target tree output from the example-based syntactic transfer. The most appropriate sequence is determined from the product of the translation model and the language model in the same manner as statistical MT. In other words, when F and E denote the channel target and channel source sequence, respectively, the output word sequence \u00ca that satisfies the following equation is searched for.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{E} = \\operatorname{argmax}_E P(E|F) = \\operatorname{argmax}_E P(E) P(F|E)", "eq_num": "(1)" } ], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "We only utilize the lexicon model as the translation model in this paper, similar to the models proposed by Vogel et al. (2003). Namely, when f and e denote the channel target and channel source word, respectively, the translation probability is computed by the following equation.", "cite_spans": [ { "start": 108, "end": 127, "text": "Vogel et al. (2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(F|E) = \\prod_j \\sum_i t(f_j|e_i)", "eq_num": "(2)" } ], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "The IBM models include other models, such as fertility, NULL, and distortion models. 
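As a concrete illustration of Equations (1) and (2), the sketch below scores candidate translations with a lexicon model and a bigram language model. Representing both models as plain dictionaries and flooring unseen probabilities are simplifications assumed here, not the paper's implementation.

```python
import math

def lexicon_log_prob(f_words, e_words, t_table, floor=1e-10):
    # Equation (2): P(F|E) = product over f_j of the sum over e_i of t(f_j|e_i).
    return sum(math.log(max(sum(t_table.get((f, e), 0.0) for e in e_words),
                            floor))
               for f in f_words)

def bigram_log_prob(e_words, bigram, floor=1e-10):
    # log P(E) under a word bigram model with sentence-boundary markers.
    seq = ["<s>"] + e_words + ["</s>"]
    return sum(math.log(max(bigram.get((prev, word), 0.0), floor))
               for prev, word in zip(seq, seq[1:]))

def best_translation(f_words, candidates, t_table, bigram):
    # Equation (1): E-hat = argmax_E P(E) * P(F|E), computed in log space.
    return max(candidates,
               key=lambda e: bigram_log_prob(e, bigram)
                             + lexicon_log_prob(f_words, e, t_table))
```

In the experiments of Section 4.1, the lexicon probabilities t(f|e) come from an IBM Model 4 lexicon learned with GIZA++, and the language model is a word bigram/trigram model.
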
As we described in Section 2.2, the quality of machine translation is maintained using only the lexicon model because syntactic correctness is already preserved by the example-based transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "For the language model, we utilize a standard word n-gram model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model and Language Model", "sec_num": "3.1" }, { "text": "We can construct word graphs by serializing the target tree structure, which allows us to select the best word sequence from the graphs. However, the tree structure already shares nodes transferred from the same input sub-sequence, so the cost of calculating probabilities is no greater if we calculate them while serializing the tree structure. We call this method bottom-up generation in this paper. Figure 5 shows a partial example of bottom-up generation when the target tree in Figure 4 is given. For each node, word sub-sequences and their probabilities (language and translation) are obtained from child nodes. Then, the probabilities of the combined word sequences are calculated, and the n-best sequences are selected. These n-best sequences and their probabilities are reused to calculate the probabilities of parent nodes. When the translation probability is calculated, the source word sub-sequence is obtained by tracing the transfer mapping, and the applied translation model is restricted to the source sub-sequence. In other words, the translation probability is locally calculated between the corresponding phrases. When the generation reaches the top node, the language probability is re-calculated with marks for start-of-sentence and end-of-sentence, and the n-best list is re-sorted. As a result, the translation \"The bus will leave at 11 o'clock\" is obtained from the tree of Figure 4.", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 420, "text": "Figure 5", "ref_id": null }, { "start": 492, "end": 500, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 1409, "end": 1417, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Bottom-up Generation", "sec_num": "3.2" }, { "text": "Bottom-up generation calculates the probabilities of shared nodes only once, so it effectively uses tree information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bottom-up Generation", "sec_num": "3.2" }, { "text": "In order to evaluate the effect of integrating models of statistical MT into example-based MT, we compared several variants of the statistical generation module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "The corpus used in the following experiments is the Basic Travel Expression Corpus (Takezawa et al., 2002; Kikui et al., 2003). This is a collection of Japanese sentences and their English translations based on expressions that are usually found in phrasebooks for foreign tourists. 
We divided it into subsets for training and testing as shown in Table 1.", "cite_spans": [ { "start": 83, "end": 106, "text": "(Takezawa et al., 2002;", "ref_id": "BIBREF14" }, { "start": 107, "end": 126, "text": "Kikui et al., 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Setting Bilingual Corpus", "sec_num": "4.1" }, { "text": "Transfer Rules: Transfer rules were acquired from the training set using hierarchical phrase alignment, and low-frequency rules (those that appeared only once) were removed. The number of rules was 24,310.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting Bilingual Corpus", "sec_num": "4.1" }, { "text": "We used a lexicon model of IBM Model 4 learned by GIZA++ (Och and Ney, 2003) and word bigram and trigram models learned by the CMU-Cambridge Statistical Language Modeling Toolkit (Clarkson and Rosenfeld, 1997).", "cite_spans": [ { "start": 50, "end": 76, "text": "GIZA++ (Och and Ney, 2003)", "ref_id": "BIBREF10" }, { "start": 175, "end": 205, "text": "(Clarkson and Rosenfeld, 1997)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model and Language Model", "sec_num": null }, { "text": "We compared the following four methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": null }, { "text": "[Figure 5: Example of Bottom-up Generation (TM and LM denote log probabilities of the translation and language models, respectively)] The best translation that had the same semantic distance was randomly selected from the tree that was output from the example-based transfer module. The translation words were selected in advance as those having the highest frequency in the training corpus. This is the baseline for translating a sentence when using only the example-based transfer.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Baseline (Example-based Transfer only)", "sec_num": null }, { "text": "The bottom-up generation selects the best translation from the outputs of the example-based transfer. We used a 100-best criterion in this experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Bottom-up", "sec_num": null }, { "text": "For all combinations that can be generated from the outputs of the example-based transfer, we calculated the translation and language probabilities and selected the best translation. Namely, a globally optimal solution was selected within the search space restricted by the example-based transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 All Search", "sec_num": null }, { "text": "In the same way as All Search, the best translation was searched for, but only the language model was used for calculating probabilities. The purpose of this experiment is to measure the influence of the translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 LM Only", "sec_num": null }, { "text": "From the test set, 510 sentences were evaluated by the following automatic and subjective evaluation metrics. 
The number of reference translations for automatic evaluation was 16 per sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "BLEU: Automatic evaluation by BLEU score (Papineni et al., 2002).", "cite_spans": [ { "start": 41, "end": 64, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "NIST: Automatic evaluation by NIST score (Doddington, 2002). Table 2 shows the MT quality and translation speed for each method. First, comparing the baseline with the statistical generations (Bottom-up and All Search), the MT quality of statistical generation improved on all evaluation metrics. Accordingly, the models of statistical MT are effective for improving the MT quality of example-based MT.", "cite_spans": [ { "start": 41, "end": 59, "text": "(Doddington, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Next, comparing Bottom-up with All Search, the MT quality of bottom-up generation was slightly lower. Bottom-up generation locally applies the translation model to a partial tree. In other words, the probability is calculated without word alignments that link to the outside of the tree. This result indicates that the results of bottom-up generation are not always equal to the globally optimal solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Comparing LM Only with the statistical generations, the MT quality measured by subjective ranks A+B+C significantly decreased. This is because the n-gram language model used here does not consider output length, and shorter translations are preferred. Although the language model was effective to some degree, it could not evaluate the equivalence of the translation and the input sentence. Therefore, we concluded that the translation model is necessary for improving MT quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Finally, focusing on translation speed, the worst-case time for bottom-up generation was dramatically shorter than that for All Search. Bottom-up generation effectively uses shared nodes of the target tree, so it can improve translation speed. Therefore, bottom-up generation is suitable for tasks that require real-time processing, such as spoken dialogue translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We incorporated models of statistical MT into example-based MT. However, several methods that obtain initial solutions for statistical MT by example-based techniques have already been proposed. For example, Marcu (2001) proposed a method in which initial translations are constructed by combining bilingual phrases from translation memory, and the translations are then modified by greedy decoding (Germann et al., 2001). 
Watanabe and Sumita (2003) proposed a decoding algorithm in which translations that are similar to the input sentence are retrieved from bilingual corpora and then modified by greedy decoding.", "cite_spans": [ { "start": 191, "end": 203, "text": "Marcu (2001)", "ref_id": "BIBREF8" }, { "start": 391, "end": 412, "text": "(Germann et al., 2001)", "ref_id": null }, { "start": 416, "end": 442, "text": "Watanabe and Sumita (2003)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The difference between our method and these methods is whether modification is applied. Our approach simply selects the best translation from candidates that are output from example-based MT. Because example-based MT can output appropriate translations to some degree, our method assumes that the candidates contain a globally optimal solution. This means that the upper bound of MT quality is limited by the example-based transfer, so we have to improve this stage in order to further improve MT quality. For instance, example-based MT can be improved by applying an optimization algorithm that uses an automatic evaluation of MT quality (Imamura et al., 2003).", "cite_spans": [ { "start": 648, "end": 670, "text": "(Imamura et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "This paper demonstrated that example-based MT can be improved by incorporating models of statistical MT. The example-based MT used in this paper is based on syntactic transfer, so word re-ordering is achieved in the transfer module. Using this feature, the best translation was selected by using only a lexicon model and an n-gram language model. In addition, bottom-up generation achieved faster translation speed by using the tree structure of the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [ { "text": "The authors would like to thank Kadokawa Publishers, who permitted us to use the hierarchy of Ruigo-shin-jiten. The research reported here is supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using language and translation models to select the best among outputs from multiple MT systems", "authors": [ { "first": "Yasuhiro", "middle": [], "last": "Akiba", "suffix": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING-2002", "volume": "", "issue": "", "pages": "8--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yasuhiro Akiba, Taro Watanabe, and Eiichiro Sumita. 2002. Using language and translation models to select the best among outputs from multiple MT systems. 
In Proceedings of COLING-2002, pages 8-14.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical language modeling using the CMU-Cambridge toolkit", "authors": [ { "first": "Philip", "middle": [], "last": "Clarkson", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1997, "venue": "Proceedings of EuroSpeech 97", "volume": "", "issue": "", "pages": "2707--2710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Clarkson and Ronald Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proceedings of EuroSpeech 97, pages 2707-2710.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the HLT Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the HLT Conference, San Diego, California. Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 228-235.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Feedback cleaning of machine translation rules using automatic evaluation", "authors": [ { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)", "volume": "", "issue": "", "pages": "447--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Imamura, Eiichiro Sumita, and Yuji Matsumoto. 2003. Feedback cleaning of machine translation rules using automatic evaluation. 
In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), pages 447-454.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hierarchical phrase alignment harmonized with parsing", "authors": [ { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS 2001)", "volume": "", "issue": "", "pages": "377--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Imamura. 2001. Hierarchical phrase alignment harmonized with parsing. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS 2001), pages 377-384.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT", "authors": [ { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2002)", "volume": "", "issue": "", "pages": "74--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Imamura. 2002. Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT. In Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2002), pages 74-84.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Creating corpora for speech-to-speech translation", "authors": [ { "first": "Genichiro", "middle": [], "last": "Kikui", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Toshiyuki", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "Seiichi", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EuroSpeech 2003", "volume": "", "issue": "", "pages": "381--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Genichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In Proceedings of EuroSpeech 2003, pages 381-384.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Towards a unified approach to memory- and statistical-based machine translation", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2001, "venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "386--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2001. Towards a unified approach to memory- and statistical-based machine translation. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 386-393.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A framework of mechanical translation between Japanese and English by analogy principle", "authors": [ { "first": "Makoto", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1984, "venue": "Artificial and Human Intelligence", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Makoto Nagao. 1984. A framework of mechanical translation between Japanese and English by analogy principle. 
In Artificial and Human Intelligence, pages 173-180, Amsterdam: North-Holland.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Ruigo-Shin-Jiten", "authors": [ { "first": "Susumu", "middle": [], "last": "Ohno", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Hamanishi", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susumu Ohno and Masato Hamanishi. 1984. Ruigo-Shin-Jiten. Kadokawa, Tokyo. In Japanese.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Experiments and prospects of example-based machine translation", "authors": [ { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Iida", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th ACL", "volume": "", "issue": "", "pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eiichiro Sumita and Hitoshi Iida. 1991. Experiments and prospects of example-based machine translation. In Proceedings of the 29th ACL, pages 185-192.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "Toshiyuki", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Fumiaki", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "Hirofumi", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Seiichi", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002)", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. 
Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), pages 147-152.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The CMU statistical machine translation system", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Tribble", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 9th Machine Translation Summit", "volume": "", "issue": "", "pages": "402--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical machine translation system. In Proceedings of the 9th Machine Translation Summit (MT Summit IX), pages 402-409.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Example-based decoding for statistical machine translation", "authors": [ { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Machine Translation Summit IX", "volume": "", "issue": "", "pages": "410--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taro Watanabe and Eiichiro Sumita. 2003. Example-based decoding for statistical machine translation. In Proceedings of Machine Translation Summit IX, pages 410-417.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Example of Word Alignment between English and Japanese", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Structure of Proposed Method", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Example of Syntactic Transfer Process (Bold frames are syntactic nodes mentioned in text)", "type_str": "figure", "num": null }, "TABREF3": { "text": "Corpus Size", "num": null, "type_str": "table", "content": "", "html": null }, "TABREF6": { "text": "MT Quality and Translation Speed vs. Generation Methods", "num": null, "type_str": "table", "content": "
", "html": null } } } }