{ "paper_id": "C04-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:21:09.085611Z" }, "title": "Improving Statistical Word Alignment with a Rule-Based Machine Translation System", "authors": [ { "first": "W", "middle": [ "U" ], "last": "Hua", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toshiba (China) Research", "location": { "addrLine": "& Development Center 5/F., Oriental Plaza, No.1, East Chang An Ave., Dong Cheng District", "postCode": "W2, 100738", "settlement": "Tower, Beijing", "country": "China" } }, "email": "wuhua@rdc.toshiba.com.cn" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toshiba (China) Research", "location": { "addrLine": "& Development Center 5/F., Oriental Plaza, No.1, East Chang An Ave., Dong Cheng District", "postCode": "W2, 100738", "settlement": "Tower, Beijing", "country": "China" } }, "email": "wanghaifeng@rdc.toshiba.com.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The main problems of statistical word alignment lie in the facts that a source word can only be aligned to one target word, and that an inappropriate target word may be selected because of the data sparseness problem. This paper proposes an approach to improve statistical word alignment with a rule-based translation system. This approach first uses the IBM statistical translation model to perform alignment in both directions (source to target and target to source), and then uses the translation information in the rule-based machine translation system to improve the statistical word alignment. The improved alignments allow the word(s) in the source language to be aligned to one or more words in the target language. 
Experimental results show a significant improvement in precision and recall of word alignment.", "pdf_parse": { "paper_id": "C04-1005", "_pdf_hash": "", "abstract": [ { "text": "The main problems of statistical word alignment lie in the facts that a source word can only be aligned to one target word, and that an inappropriate target word may be selected because of the data sparseness problem. This paper proposes an approach to improve statistical word alignment with a rule-based translation system. This approach first uses the IBM statistical translation model to perform alignment in both directions (source to target and target to source), and then uses the translation information in the rule-based machine translation system to improve the statistical word alignment. The improved alignments allow the word(s) in the source language to be aligned to one or more words in the target language. Experimental results show a significant improvement in precision and recall of word alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Bilingual word alignment is first introduced as an intermediate result in statistical machine translation (SMT) (Brown et al. 1993) . Besides being used in SMT, it is also used in translation lexicon building (Melamed 1996) , transfer rule learning (Menezes and Richardson 2001) , example-based machine translation (Somers 1999) , etc. In previous alignment methods, some researches modeled the alignments as hidden parameters in a statistical translation model (Brown et al. 1993; Och and Ney 2000) or directly modeled them given the sentence pairs (Cherry and Lin 2003) . Some researchers used similarity and association measures to build alignment links (Ahrenberg et al. 1998; Tufis and Barbu 2002) . 
In addition, Wu (1997) used a stochastic inversion transduction grammar to simultaneously parse the sentence pairs to get the word or phrase alignments.", "cite_spans": [ { "start": 112, "end": 131, "text": "(Brown et al. 1993)", "ref_id": null }, { "start": 209, "end": 223, "text": "(Melamed 1996)", "ref_id": "BIBREF5" }, { "start": 249, "end": 278, "text": "(Menezes and Richardson 2001)", "ref_id": "BIBREF7" }, { "start": 315, "end": 328, "text": "(Somers 1999)", "ref_id": "BIBREF9" }, { "start": 462, "end": 481, "text": "(Brown et al. 1993;", "ref_id": null }, { "start": 482, "end": 499, "text": "Och and Ney 2000)", "ref_id": "BIBREF8" }, { "start": 550, "end": 571, "text": "(Cherry and Lin 2003)", "ref_id": "BIBREF4" }, { "start": 657, "end": 680, "text": "(Ahrenberg et al. 1998;", "ref_id": "BIBREF0" }, { "start": 681, "end": 702, "text": "Tufis and Barbu 2002)", "ref_id": "BIBREF11" }, { "start": 718, "end": 727, "text": "Wu (1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally speaking, there are four cases in word alignment: word to word alignment, word to multi-word alignment, multi-word to word alignment, and multi-word to multi-word alignment. One of the most difficult tasks in word alignment is to find out the alignments that include multi-word units. For example, the statistical word alignment in IBM translation models (Brown et al. 1993 ) can only handle word to word and multi-word to word alignments.", "cite_spans": [ { "start": 365, "end": 383, "text": "(Brown et al. 1993", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some studies have been made to tackle this problem. Och and Ney (2000) performed translation in both directions (source to target and target to source) to extend word alignments. Their results showed that this method improved precision without loss of recall in English to German alignments. 
However, if the same unit is aligned to two different target units, this method is unlikely to make a selection. Some researchers used preprocessing steps to identify multi-word units for word alignment (Ahrenberg et al. 1998; Tiedemann 1999; Melamed 2000) . The methods obtained multi-word candidates based on continuous N-gram statistics. The main limitation of these methods is that they cannot handle separated phrases and multi-word units with low frequencies.", "cite_spans": [ { "start": 52, "end": 70, "text": "Och and Ney (2000)", "ref_id": "BIBREF8" }, { "start": 495, "end": 518, "text": "(Ahrenberg et al. 1998;", "ref_id": "BIBREF0" }, { "start": 519, "end": 534, "text": "Tiedemann 1999;", "ref_id": "BIBREF10" }, { "start": 535, "end": 548, "text": "Melamed 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to handle all four cases in word alignment, our approach uses both the alignment information in statistical translation models and the translation information in a rule-based machine translation system. It includes three steps. (1) A statistical translation model is employed to perform word alignment in two directions 1 (English to Chinese, Chinese to English). (2) A rule-based English to Chinese translation system is employed to obtain Chinese translations for each English word or phrase in the source language. (3) The translation information in step (2) is used to improve the word alignment results in step (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A critical reader may pose the question \"why not use a translation dictionary to improve statistical word alignment?\" Compared with a translation dictionary, the advantages of a rule-based machine translation system lie in two aspects: (1) It can recognize the multi-word units, particularly separated phrases, in the source language. 
Thus, our method is able to handle the multi-word alignments with higher accuracy, which will be described in our experiments. (2) It can perform word sense disambiguation and select appropriate translations while a translation dictionary can only list all translations for each word or phrase. Experimental results show that our approach improves word alignments in both precision and recall as compared with the state-of-the-art technologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical translation models (Brown, et al. 1993) only allow word to word and multi-word to word alignments. Thus, some multi-word units cannot be correctly aligned. In order to tackle this problem, we perform translation in two directions (English to Chinese and Chinese to English) as described in Och and Ney (2000) . The GIZA++ toolkit is used to perform statistical alignment. Thus, for each sentence pair, we can get two alignment results. We use S_1 and S_2 to represent the alignment sets with English as the source language and Chinese as the target language or vice versa. For alignment links in both sets, we use i for English words and j for Chinese words.", "cite_spans": [ { "start": 31, "end": 51, "text": "(Brown, et al. 1993)", "ref_id": null }, { "start": 302, "end": 320, "text": "Och and Ney (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "S_1 = {(A_j, j) | A_j = {a_j}, a_j \u2265 0} and S_2 = {(i, A_i) | A_i = {a_i}, a_i \u2265 0}, where a_x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "represents the index position of the source word aligned to the target word in position x. For example, if a Chinese word in position j is connected to an English word in position i, then a_j = i. 
If a Chinese word in position j is connected to English words in positions i_1 and i_2, then A_j = {i_1, i_2}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "That is, a_j = i for a single connection, and A_j = {i_1, i_2} for a multi-word connection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "We call an element in the alignment set an alignment link. If the link includes a word that has no translation, we call it a null link. If k words have null links, we treat them as k different null links, not just one link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "Based on S_1 and S_2 , we obtain their intersection set, union set and subtraction set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "Intersection: S = S_1 \u2229 S_2 Union: P = S_1 \u222a S_2 Subtraction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "F = P \u2212 S. Thus, the subtraction set contains two different alignment links for each English word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Word Alignment", "sec_num": "2" }, { "text": "We use the translation information in a rule-based English-Chinese translation system 3 to improve the statistical word alignment result. This translation system includes three modules: source language parser, source to target language transfer module, and target language generator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-Based Translation System", "sec_num": "3" }, { "text": "From the transfer phase, we get Chinese translation candidates for each English word. 
This information can be considered as another word alignment result, which is denoted as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-Based Translation System", "sec_num": "3" }, { "text": "S_3 = {(ph_k, C_k)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-Based Translation System", "sec_num": "3" }, { "text": "C_k is the set including the translation candidates for the k-th English word or phrase. The difference between S_3 and the common alignment set is that each English word or phrase in S_3 has one or more translation candidates. A translation example for the English sentence \"He is used to pipe smoking.\" is shown in Table 1 . From Table 1 , it can be seen that (1) the translation system can recognize English phrases (e.g. is used to); (2) the system can provide one or more translations for each source word or phrase;", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 1", "ref_id": null }, { "start": 332, "end": 339, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Rule-Based Translation System", "sec_num": "3" }, { "text": "(3) the translation system can perform word selection or word sense disambiguation. For example, the word \"pipe\" has several meanings such as \"tube\", \"tube used for smoking\" and \"wind instrument\". The system selects \"tube used for smoking\" and translates it into Chinese words \"\u70df\u6597\" and \"\u70df\u7b52\". The recognized translation candidates will be used to improve statistical word alignment in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-Based Translation System", "sec_num": "3" }, { "text": "As described in Section 2, we have two alignment sets for each sentence pair, from which we obtain the intersection set S and the subtraction set F . We will improve the word alignments in S and F with the translation candidates produced by the rule-based machine translation system. 
In the following sections, we will first describe how to calculate monolingual word similarity used in our algorithm. Then we will describe the algorithm used to improve word alignment results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.1 Word Alignment Improvement", "sec_num": "4" }, { "text": "This section describes the method for monolingual word similarity calculation. This method calculates word similarity by using a bilingual dictionary, which was first introduced by Wu and Zhou (2003) . The basic assumptions of this method are that the translations of a word can express its meanings and that two words are similar in meanings if they have mutual translations.", "cite_spans": [ { "start": 181, "end": 199, "text": "Wu and Zhou (2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Calculation", "sec_num": null }, { "text": "Given a Chinese word, we get its translations with a Chinese-English bilingual dictionary. The translations of a word are used to construct its feature vector. The similarity of two words is estimated through their feature vectors with the cosine measure as shown in (Wu and Zhou 2003) . Given a Chinese word or phrase w and a Chinese word set Z , the word similarity between them is calculated as shown in Equation (1).", "cite_spans": [ { "start": 267, "end": 285, "text": "(Wu and Zhou 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Calculation", "sec_num": null }, { "text": "sim(w, Z) = Max_{w' \u2208 Z} sim(w, w') (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity Calculation", "sec_num": null }, { "text": "As the word alignment links in the intersection set are more reliable than those in the subtraction set, we adopt two different strategies for the alignments in the intersection set S and the subtraction set . 
For alignments in S , we will modify them when they are inconsistent with the translation information in S_3 . For alignments in F , we classify them into two cases and make a selection between two different alignment links or modify them into a new link. In the intersection set S , there are only word to word alignment links, which include no multi-word units. The main alignment error type in this set is that some words should be combined into one phrase and aligned to the same word(s) in the target sentence. For example, for the sentence pair in Figure 1 , \"used\" is aligned to the Chinese word \"\u4e60\u60ef\", and \"is\" and \"to\" have null links in S . But in the translation set S_3 , \"is used to\" is a phrase. Thus, we combine the three alignment links into a new link. The words \"is\", \"used\" and \" to\" are all aligned to the Chinese word \"\u4e60\u60ef\", denoted as (is used to, \u4e60\u60ef). Figure 2 describes the algorithm employed to improve the word alignment in the intersection set S . Final word alignment set WA For each alignment link( in , do:", "cite_spans": [], "ref_spans": [ { "start": 764, "end": 772, "text": "Figure 1", "ref_id": null }, { "start": 1083, "end": 1091, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": ", i S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(1) If all of the following three conditions are satisfied, add the new alignment link", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "WA ph k \u2209 ) ( w ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "to WA . a) There is an element( , and the English word i is a constituent of the phrase . Figure 2 . 
Algorithm for the Intersection Set", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "In the subtraction set, there are two different links for each English word. Thus, we need to select one link or to modify the links according to the translation information in S_3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "For each English word i in the subtraction set, there are two cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "Case 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "In S_1 , there is a word to word alignment link (i, j) \u2208 S_1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "In S_2 , there is a word to word or word to multi-word alignment link (i, A_i) \u2208 S_2 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "Case 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "In S_1 , there is a multi-word to word alignment link (A_j, j) \u2208 S_1 with i \u2208 A_j . In S_2 , there is a word to word or word to multi-word alignment link (i, A_i) \u2208 S_2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "For Case 1, we first examine the translation set S_3 . 
If there is an element (i, C_i) \u2208 S_3 , we calculate the Chinese word similarity between j in (i, j) \u2208 S_1 and C_i with Equation (1) shown in Section 4.1. We also combine the words in A_i ((i, A_i) \u2208 S_2) into a phrase and get the word similarity between this new phrase and C_i . The alignment link with a higher similarity score is selected and added to WA .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "Input: Alignment sets S_1 and S_2 Translation unit (ph_k, C_k) \u2208 S_3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(1) For each sub-sequence 6 s of ph_k , get the sets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "T_1 = { t_1 | (s, t_1) \u2208 S_1 } and T_2 = { t_2 | (s, t_2) \u2208 S_2 }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(2) Combine words in T_1 and T_2 into phrases, respectively. combined with other words to form phrases. In this case, we modify the alignment links into a multi-word to multi-word alignment link. The algorithm is described in Figure 3 ", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "5 (i, A_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "represents both the word to word and word to multi-word alignment links. 
6 If a phrase consists of three words w_1 w_2 w_3 , the subsequences of this phrase are", "cite_spans": [ { "start": 73, "end": 74, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "w_1 , w_2 , w_3 , w_1 w_2 , w_2 w_3 , and w_1 w_2 w_3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "For example, given a sentence pair in Figure 4 , in S_1 , the word \"whipped\" is aligned to \"\u7a81\u7136\" and \"out\" is aligned to \"\u62bd\u51fa\". In S_2 , the word \"whipped\" is aligned to both \"\u7a81\u7136\" and \"\u62bd\u51fa\" and \"out\" has a null link. In S_3 , \"whipped out\" is a phrase and translated into \"\u8fc5\u901f\u62bd\u51fa\". And the word similarity between \"\u7a81\u7136\u62bd\u51fa\" and \"\u8fc5\u901f\u62bd\u51fa\" is larger than the threshold . If true, we combine the words in A_i ((i, A_i) \u2208 S_2) into a word or phrase and calculate the similarity between this new word or phrase and C_i in the same way as in Case 1. If the similarity is higher than a threshold", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 46, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(i, C_i) \u2208 S_3 ; (i, A_i) \u2208 S_2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "\u03b4 , we add the alignment link", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(i, A_i) into WA .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "If there is an element (ph_k, C_k) \u2208 S_3 and i is a constituent of ph_k , we combine the English words in A_j ((A_j, j) \u2208 S_1) into a phrase. 
If it is the same as the phrase ph_k and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "sim(j, C_k) > \u03b4 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": ", we add (A_j, j) into WA . Otherwise, we use the multi-word to multi-word alignment algorithm in Figure 3 to modify the links.", "cite_spans": [ { "start": 9, "end": 10, "text": "(", "ref_id": null } ], "ref_spans": [ { "start": 98, "end": 106, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "(A_j, j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "After applying the above two strategies, there are still some words not aligned. For each sentence pair, we use E and C to denote the sets of the source words and the target words that are not aligned, respectively. For each source word in E, we construct a link with each target word in C. We use L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "= {(i, j) | i \u2208 E , j \u2208 C}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "to denote the alignment candidates. For each candidate in L, we look it up in the translation set S_3 . If there is an element ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Improvement Algorithm", "sec_num": "4.2" }, { "text": "We did experiments on a sentence-aligned English-Chinese bilingual corpus in general domains. There are about 320,000 bilingual sentence pairs in the corpus, from which we randomly select 1,000 sentence pairs as testing data. 
The remainder is used as training data. The Chinese sentences in both the training set and the testing set are automatically segmented into words. The segmentation errors in the testing set are post-corrected. The testing set is manually annotated. It has a total of 8,651 alignment links, including 2,149 null links. Among them, 866 alignment links include multi-word units, which account for about 10% of the total links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing Set", "sec_num": "5.2" }, { "text": "There are several different evaluation methods for word alignment (Ahrenberg et al. 2000) . In our evaluation, we use evaluation metrics similar to those in Och and Ney (2000) . However, we do not classify alignment links into sure links and possible links. We consider each alignment as a sure link.", "cite_spans": [ { "start": 66, "end": 89, "text": "(Ahrenberg et al. 2000)", "ref_id": "BIBREF1" }, { "start": 157, "end": 175, "text": "Och and Ney (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "If we use S to indicate the alignments identified by the proposed methods and S to denote the reference alignments, the precision, recall and f-measure are calculated as described in Equation (2), (3) and (4). 
According to the definition of the alignment error rate (AER) in Och and Ney (2000) , AER can be calculated with Equation (5).", "cite_spans": [ { "start": 275, "end": 293, "text": "Och and Ney (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "precision = |S_G \u2229 S_C| / |S_C| (2) recall = |S_G \u2229 S_C| / |S_G| (3) fmeasure = 2 * |S_G \u2229 S_C| / (|S_G| + |S_C|) (4) AER = 1 \u2212 2 * |S_G \u2229 S_C| / (|S_G| + |S_C|) = 1 \u2212 fmeasure (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "In this paper, we give two different alignment results in Table 2 and Table 3 . Table 2 presents alignment results that include null links. Table 3 presents alignment results that exclude null links. The precision and recall in the tables are obtained to ensure the smallest AER for each method. In the above tables, the row \"Ours\" presents the result of our approach. The results are obtained by setting the word similarity thresholds to", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 77, "text": "Table 2 and Table 3", "ref_id": null }, { "start": 80, "end": 87, "text": "Table 2", "ref_id": null }, { "start": 140, "end": 147, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "\u03b4 1 = 0.1 and \u03b4 2 = 0.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": ". The Chinese-English dictionary used to calculate the word similarity has 66,696 entries. Each entry has two English translations on average. The row \"Dic\" shows the result of the approach that uses a bilingual dictionary instead of the rule-based machine translation system to improve statistical word alignment. The dictionary used in this method is the same translation dictionary used in the rulebased machine translation system. 
It includes 57,684 English words and each English word has about two Chinese translations on average. The rows \"IBM E-C\" and \"IBM C-E\" show the results obtained by IBM Model-4 when treating English as the source and Chinese as the target or vice versa. The row \"IBM Inter\" shows results obtained by taking the intersection of the alignments produced by \"IBM E-C\" and \"IBM C-E\". The row \"IBM Refined\" shows the results by refining the results of \"IBM Inter\" as described in Och and Ney (2000) .", "cite_spans": [ { "start": 908, "end": 926, "text": "Och and Ney (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "Generally, the results excluding null links are better than those including null links. This indicates that it is difficult to judge whether a word has counterparts in another language. This is because the translations of some source words can be omitted. Both the rule-based translation system and the bilingual dictionary provide no such information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "It can also be seen that our approach performs the best among others in both cases. Our approach achieves a relative error rate reduction of 26% and 25% when compared with \"IBM E-C\" and \"IBM C-E\" respectively 7 . Although the precision of our method is lower than that of the \"IBM Inter\" method, it achieves much higher recall, resulting in a 30% relative error rate reduction. Compared with the \"IBM refined\" method, our method also achieves a relative error rate reduction of 30%. In addition, our method is better than the \"Dic\" method, achieving a relative error rate reduction of 8.8%. In order to provide the detailed word alignment information, we classify word alignment results in Table 3 into two classes. The first class includes the alignment links that have no multiword units. 
The second class includes at least one multi-word unit in each alignment link. The detailed information is shown in Table 4 and Table 5 . In Table 5 , we do not include the method \"Inter\" because it has no multi-word alignment links. Table 5 . Multi-Word Alignment Results All of the methods perform better on single word alignment than on multi-word alignment. In Table 4 , the precision of our method is close to the \"IBM Inter\" approach, and the recall of our method is much higher, achieving a 47% relative error rate reduction. Our method also achieves a 37% relative error rate reduction over the \"IBM Refined\" method. Compared with the \"Dic\" method, our approach achieves much higher precision without loss of recall, resulting in a 12% relative error rate reduction.", "cite_spans": [], "ref_spans": [ { "start": 690, "end": 697, "text": "Table 3", "ref_id": null }, { "start": 907, "end": 927, "text": "Table 4 and Table 5", "ref_id": null }, { "start": 933, "end": 940, "text": "Table 5", "ref_id": null }, { "start": 1026, "end": 1033, "text": "Table 5", "ref_id": null }, { "start": 1157, "end": 1164, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": null }, { "text": "7 The error rate reductions in this paragraph are obtained from Table 2 . The error rate reductions in Table 3 are omitted.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 2", "ref_id": null }, { "start": 103, "end": 110, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our method also achieves much better results on multi-word alignment than other methods. However, our method only obtains one third of the correct alignment links. 
This indicates that multi-word units are the hardest to align.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Readers may pose the question \"why does the rule-based translation system perform better on word alignment than the translation dictionary?\" For single word alignment, the rule-based translation system can perform word sense disambiguation, and select the appropriate Chinese words as translation. On the contrary, the dictionary can only list all translations. Thus, the alignment precision of our method is higher than that of the dictionary method. Figure 5 shows alignment precision and recall values under different similarity values for single word alignment including null links. From the figure, it can be seen that our method consistently achieves higher precision as compared with the dictionary method. The t-score value (t=10.37, p=0.05) shows the improvement is statistically significant.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For multi-word alignment links, the translation system also outperforms the translation dictionary. The result is shown in Table 5 in Section 5.2. This is because (1) the translation system can automatically recognize English phrases with higher accuracy than the translation dictionary; (2) The translation system can detect separated phrases while the dictionary cannot. For example, for the sentence pairs in Figure 6 , the solid link lines describe the alignment result of the rule-based translation system while dashed lines indicate the alignment result of the translation dictionary. In example (1), the phrase \"be going to\" indicates the tense not the phrase \"go to\" as the dictionary shows. In example (2), our method detects the separated phrase \"turn \u2026 on\" while the dictionary does not. 
Thus, the dictionary method produces the wrong alignment link.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 5", "ref_id": null }, { "start": 412, "end": 420, "text": "Figure 6", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Figure 5. Recall-Precision Curves", "sec_num": null }, { "text": "This paper proposes an approach to improve statistical word alignment results by using a rule-based translation system. Our contribution is that, given a rule-based translation system that provides appropriate translation candidates for each source word or phrase, we select appropriate alignment links among the statistical word alignment results or modify them into new links. In particular, with such a translation system, we can identify both continuous and separated phrases in the source language and improve the multi-word alignment results. Experimental results indicate that our approach can achieve a precision of 85% and a recall of 71% for word alignment including null links in general domains. This result significantly outperforms both methods that use a bilingual dictionary to improve word alignment and methods that use only statistical translation models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Our future work mainly includes three tasks. First, we will further improve multi-word alignment results by using other natural language processing technologies. For example, we can use named entity recognition and transliteration technologies to improve person name alignment. Second, we will extract translation rules from the improved word alignment results and apply them back to our rule-based machine translation system.
Third, we will further analyze the effect of the translation system on the alignment results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "We use English-Chinese word alignment as a case study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the remainder of this paper, we will use the position number of a word to refer to the word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This system is developed based on the Toshiba English-Japanese translation system (Amano et al. 1989). It achieves above-average performance compared with the English-Chinese translation systems available on the market.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We define an operation \"combine\" on a set consisting of position numbers of words. We first sort the position numbers in the set in ascending order and then regard them as a phrase. For example, given the set {{2,3}, 1, 4}, the result of applying the combine operation is (1, 2, 3, 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Texts", "authors": [ { "first": "Lars", "middle": [], "last": "Ahrenberg", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Merkel", "suffix": "" }, { "first": "Mikael", "middle": [], "last": "Andersson", "suffix": "" } ], "year": 1998, "venue": "Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "29--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lars Ahrenberg, Magnus Merkel, and Mikael Andersson 1998.
A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Texts. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th Int. Conf. on Computational Linguistics, pp. 29-35.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Evaluation of word alignment systems", "authors": [ { "first": "Lars", "middle": [], "last": "Ahrenberg", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Merkel", "suffix": "" }, { "first": "Anna", "middle": [ "Sagvall" ], "last": "Hein", "suffix": "" }, { "first": "Jorg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2000, "venue": "Proc. of the Second Int. Conf. on Linguistic Resources and Evaluation", "volume": "", "issue": "", "pages": "1255--1261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lars Ahrenberg, Magnus Merkel, Anna Sagvall Hein and Jorg Tiedemann 2000. Evaluation of word alignment systems. In Proc. of the Second Int. Conf. on Linguistic Resources and Evaluation, pp. 1255-1261.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Toshiba Machine Translation System", "authors": [ { "first": "Shinya", "middle": [], "last": "Amano", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Hirakawa", "suffix": "" }, { "first": "Hiroyasu", "middle": [], "last": "Nogami", "suffix": "" }, { "first": "Akira", "middle": [], "last": "Kumano", "suffix": "" } ], "year": 1989, "venue": "Future Computing Systems", "volume": "2", "issue": "3", "pages": "227--246", "other_ids": {}, "num": null, "urls": [], "raw_text": "ShinYa Amano, Hideki Hirakawa, Hiroyasu Nogami, and Akira Kumano 1989. Toshiba Machine Translation System.
Future Computing Systems, 2(3):227-246.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Probability Model to Improve Word Alignment", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and Dekang Lin 2003. A Probability Model to Improve Word Alignment. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 88-95.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic Construction of Clean Broad-Coverage Translation Lexicons", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1996, "venue": "Proc. of the 2nd Conf. of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed 1996. Automatic Construction of Clean Broad-Coverage Translation Lexicons. In Proc. of the 2nd Conf. of the Association for Machine Translation in the Americas, pp. 125-134.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word-to-Word Models of Translational Equivalence among Words", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed 2000. Word-to-Word Models of Translational Equivalence among Words.
Computational Linguistics, 26(2):221-249.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Best-first Alignment Algorithm for Automatic Extraction of Transfer Mappings from Bilingual Corpora", "authors": [ { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Stephan", "middle": [ "D" ], "last": "Richardson", "suffix": "" } ], "year": 2001, "venue": "Proc. of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation", "volume": "", "issue": "", "pages": "39--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arul Menezes and Stephan D. Richardson 2001. A Best-first Alignment Algorithm for Automatic Extraction of Transfer Mappings from Bilingual Corpora. In Proc. of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation, pp. 39-46.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney 2000. Improved Statistical Alignment Models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, pp. 440-447.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Review Article: Example-Based Machine Translation", "authors": [ { "first": "Harold", "middle": [], "last": "Somers", "suffix": "" } ], "year": 1999, "venue": "Machine Translation", "volume": "14", "issue": "", "pages": "113--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harold Somers 1999. Review Article: Example-Based Machine Translation.
Machine Translation, 14:113-157.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Word Alignment - Step by Step", "authors": [ { "first": "Jorg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 1999, "venue": "Proc. of the 12th Nordic Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "216--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorg Tiedemann 1999. Word Alignment - Step by Step. In Proc. of the 12th Nordic Conf. on Computational Linguistics, pp. 216-227.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Lexical Token Alignment: Experiments, Results and Application", "authors": [ { "first": "Dan", "middle": [], "last": "Tufis", "suffix": "" }, { "first": "Ana", "middle": [ "Maria" ], "last": "Barbu", "suffix": "" } ], "year": 2002, "venue": "Proc. of the Third Int. Conf. on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "458--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Tufis and Ana Maria Barbu 2002. Lexical Token Alignment: Experiments, Results and Application. In Proc. of the Third Int. Conf. on Language Resources and Evaluation, pp. 458-465.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora.
Computational Linguistics, 23(3):377-403.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Optimizing Synonym Extraction Using Monolingual and Bilingual Resources", "authors": [ { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 2nd Int. Workshop on Paraphrasing", "volume": "", "issue": "", "pages": "72--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua Wu and Ming Zhou 2003. Optimizing Synonym Extraction Using Monolingual and Bilingual Resources. In Proc. of the 2nd Int. Workshop on Paraphrasing, pp. 72-79.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Figure 1. Multi-Word Alignment Example" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "" }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Figure 3. Multi-Word to Multi-Word Alignment Algorithm" }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "target words in the Chinese sentence into \"\u7a81\u7136\u62bd\u51fa\". The final alignment link should be (whipped out, \u7a81\u7136 \u62bd\u51fa)."
}, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": "Multi-Word to Multi-Word Alignment Example For Case 2, we first examine S to see whether there is an element(" }, "FIGREF7": { "uris": null, "type_str": "figure", "num": null, "text": "Alignment Comparison Examples" }, "TABREF0": { "type_str": "table", "text": ".", "html": null, "content": "
English Words    Chinese Translations
He               \u4ed6
is used to
", "num": null } } } }
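The back matter defines a "combine" operation on a set of word position numbers: sort the positions in ascending order and regard the result as one phrase, so that {{2,3}, 1, 4} becomes (1, 2, 3, 4). A minimal Python sketch of this operation; the function name `combine` follows the paper's wording, and the nested-group handling is our assumption about the intended input shape:

```python
def combine(items):
    """Flatten a collection of word position numbers (elements may be
    single positions or nested groups of positions), then sort them in
    ascending order, yielding the position tuple of a single phrase."""
    positions = []
    for item in items:
        if isinstance(item, (set, frozenset, list, tuple)):
            positions.extend(item)  # a nested group such as {2, 3}
        else:
            positions.append(item)  # a single position number
    return tuple(sorted(positions))

# The paper's example: {{2,3}, 1, 4} -> (1, 2, 3, 4)
# (a list is used here because Python sets cannot contain mutable sets)
print(combine([{2, 3}, 1, 4]))  # -> (1, 2, 3, 4)
```

Returning a sorted tuple makes the combined phrase hashable, so it can itself serve as one side of an alignment link.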
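The discussion compares methods by precision, recall, and relative error rate reduction of alignment links. A minimal sketch of these metrics, assuming alignment links are scored as sets of position pairs and that the error rate is 1 minus the F-measure; this is an assumption, as the paper's exact evaluation definitions are in its evaluation section, which is not part of this excerpt, and the function names are ours:

```python
def precision_recall(proposed, reference):
    """Precision and recall of proposed alignment links against a gold
    set; links are hashable pairs such as (source_pos, target_pos)."""
    proposed, reference = set(proposed), set(reference)
    correct = len(proposed & reference)
    return correct / len(proposed), correct / len(reference)

def f_measure(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def relative_error_reduction(f_baseline, f_new):
    """Relative reduction of the error rate, assuming error = 1 - F."""
    return ((1 - f_baseline) - (1 - f_new)) / (1 - f_baseline)

gold = {(1, 1), (2, 2), (3, 4)}
proposed = {(1, 1), (2, 2), (3, 3)}
p, r = precision_recall(proposed, gold)     # 2/3 precision, 2/3 recall
print(relative_error_reduction(0.5, 0.75))  # halving the error -> 0.5
```

Under this definition, a "47% relative error rate reduction" means the new method removes 47% of the baseline's residual error, which is how the figures in the discussion read.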