{ "paper_id": "P07-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:49:49.211019Z" }, "title": "Word Sense Disambiguation Improves Statistical Machine Translation", "authors": [ { "first": "Yee", "middle": [ "Seng" ], "last": "Chan", "suffix": "", "affiliation": {}, "email": "chanys@comp.nus.edu.sg" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "", "affiliation": {}, "email": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "", "affiliation": {}, "email": "chiang@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent research presents conflicting evidence on whether word sense disambiguation (WSD) systems can help to improve the performance of statistical machine translation (MT) systems. In this paper, we successfully integrate a state-of-the-art WSD system into a state-of-the-art hierarchical phrase-based MT system, Hiero. We show for the first time that integrating a WSD system improves the performance of a state-ofthe-art statistical MT system on an actual translation task. Furthermore, the improvement is statistically significant.", "pdf_parse": { "paper_id": "P07-1005", "_pdf_hash": "", "abstract": [ { "text": "Recent research presents conflicting evidence on whether word sense disambiguation (WSD) systems can help to improve the performance of statistical machine translation (MT) systems. In this paper, we successfully integrate a state-of-the-art WSD system into a state-of-the-art hierarchical phrase-based MT system, Hiero. We show for the first time that integrating a WSD system improves the performance of a state-ofthe-art statistical MT system on an actual translation task. Furthermore, the improvement is statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many words have multiple meanings, depending on the context in which they are used. Word sense disambiguation (WSD) is the task of determining the correct meaning or sense of a word in context. WSD is regarded as an important research problem and is assumed to be helpful for applications such as machine translation (MT) and information retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In translation, different senses of a word w in a source language may have different translations in a target language, depending on the particular meaning of w in context. Hence, the assumption is that in resolving sense ambiguity, a WSD system will be able to help an MT system to determine the correct translation for an ambiguous word. To determine the correct sense of a word, WSD systems typically use a wide array of features that are not limited to the local context of w, and some of these features may not be used by state-of-the-art statistical MT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To perform translation, state-of-the-art MT systems use a statistical phrase-based approach (Marcu and Wong, 2002; Och and Ney, 2004) by treating phrases as the basic units of translation. In this approach, a phrase can be any sequence of consecutive words and is not necessarily linguistically meaningful. 
Capitalizing on the strength of the phrase-based approach, Chiang (2005) introduced a hierarchical phrase-based statistical MT system, Hiero, which achieves significantly better translation performance than Pharaoh (Koehn, 2004a), a state-of-the-art phrase-based statistical MT system.", "cite_spans": [ { "start": 92, "end": 114, "text": "(Marcu and Wong, 2002;", "ref_id": "BIBREF14" }, { "start": 115, "end": 133, "text": "Och and Ney, 2004)", "ref_id": "BIBREF17" }, { "start": 366, "end": 379, "text": "Chiang (2005)", "ref_id": "BIBREF4" }, { "start": 522, "end": 536, "text": "(Koehn, 2004a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, some researchers investigated whether performing WSD will help to improve the performance of an MT system. Carpuat and Wu (2005) integrated the translation predictions from a Chinese WSD system (Carpuat et al., 2004) into a Chinese-English word-based statistical MT system using the ISI ReWrite decoder (Germann, 2003). Though they acknowledged that directly using English translations as word senses would be ideal, they instead predicted the HowNet sense of a word and then used the English gloss of the HowNet sense as the WSD model's predicted translation. They did not incorporate their WSD model or its predictions into their translation model; rather, they used the WSD predictions either to constrain the options available to their decoder, or to post-edit the output of their decoder. Based on their experiments, they reported the negative result that WSD decreased the performance of MT.", "cite_spans": [ { "start": 117, "end": 138, "text": "Carpuat and Wu (2005)", "ref_id": "BIBREF1" }, { "start": 204, "end": 226, "text": "(Carpuat et al., 2004)", "ref_id": "BIBREF2" }, { "start": 313, "end": 328, "text": "(Germann, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In another work (Vickrey et al., 2005), the WSD problem was recast as a word translation task. The translation choices for a word w were defined as the set of words or phrases aligned to w, as gathered from a word-aligned parallel corpus. The authors showed that they were able to improve their model's accuracy on two simplified translation tasks: word translation and blank-filling.", "cite_spans": [ { "start": 16, "end": 38, "text": "(Vickrey et al., 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, Cabezas and Resnik (2005) experimented with incorporating WSD translations into Pharaoh, a state-of-the-art phrase-based MT system. Their WSD system provided additional translations to the phrase table of Pharaoh, which fired a new model feature, so that the decoder could weigh the additional alternative translations against its own. However, they could not automatically tune the weight of this feature in the same way as the others.
They obtained a relatively small improvement, and no statistical significance test was reported to determine if the improvement was statistically significant.", "cite_spans": [ { "start": 10, "end": 35, "text": "Cabezas and Resnik (2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that the experiments in (Carpuat and Wu, 2005) did not use a state-of-the-art MT system, while the experiments in (Vickrey et al., 2005) were not done using a full-fledged MT system, and the evaluation was not on how well each source sentence was translated as a whole. The relatively small improvement reported by Cabezas and Resnik (2005) without a statistical significance test appears to be inconclusive. Considering the conflicting results reported by prior work, it is not clear whether a WSD system can help to improve the performance of a state-of-the-art statistical MT system.", "cite_spans": [ { "start": 29, "end": 51, "text": "(Carpuat and Wu, 2005)", "ref_id": "BIBREF1" }, { "start": 119, "end": 141, "text": "(Vickrey et al., 2005)", "ref_id": "BIBREF21" }, { "start": 319, "end": 344, "text": "Cabezas and Resnik (2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we successfully integrate a state-of-the-art WSD system into the state-of-the-art hierarchical phrase-based MT system, Hiero (Chiang, 2005). The integration is accomplished by introducing two additional features into the MT model which operate on the existing rules of the grammar, without introducing competing rules. These features are treated, both in feature-weight tuning and in decoding, on the same footing as the rest of the model, allowing the model to weigh the WSD predictions against other pieces of evidence so as to optimize translation accuracy (as measured by BLEU). The contribution of our work lies in showing for the first time that integrating a WSD system significantly improves the performance of a state-of-the-art statistical MT system on an actual translation task.", "cite_spans": [ { "start": 139, "end": 153, "text": "(Chiang, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next section, we describe our WSD system. Then, in Section 3, we describe the Hiero MT system and introduce the two new features used to integrate the WSD system into Hiero. In Section 4, we describe the training data used by the WSD system. In Section 5, we describe how the WSD translations provided are used by the decoder of the MT system. In Sections 6 and 7, we present and analyze our experimental results, before concluding in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior research has shown that using Support Vector Machines (SVM) as the learning algorithm for WSD achieves good results (Lee and Ng, 2002). For our experiments, we use the SVM implementation of Chang and Lin (2001), as it is able to handle multi-class problems and to output a classification probability for each class. Our implemented WSD classifier uses the knowledge sources of local collocations, parts-of-speech (POS), and surrounding words, following the successful approach of (Lee and Ng, 2002). For local collocations, we use 3 features: w\u22121w+1, w\u22121, and w+1, where w\u22121 (w+1) is the token immediately to the left (right) of the current ambiguous word occurrence w. For parts-of-speech, we use 3 features: P\u22121, P0, and P+1, where P0 is the POS of w, and P\u22121 (P+1) is the POS of w\u22121 (w+1). For surrounding words, we consider all unigrams (single words) in the surrounding context of w. These unigrams can be in a different sentence from w. We perform feature selection on surrounding words by including a unigram only if it occurs 3 or more times in some sense of w in the training data.", "cite_spans": [ { "start": 122, "end": 140, "text": "(Lee and Ng, 2002)", "ref_id": "BIBREF12" }, { "start": 197, "end": 218, "text": "(Chang and Lin, 2001)", "ref_id": "BIBREF3" }, { "start": 485, "end": 503, "text": "(Lee and Ng, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Word Sense Disambiguation", "sec_num": "2" },
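To make the three knowledge sources concrete, the following is a minimal sketch of the feature extraction step. The function and variable names are our own, hypothetical additions; in particular, `selected_unigrams` stands for the set of surrounding-word unigrams that survived the frequency cutoff (3 or more occurrences in some sense of w in the training data).

```python
def extract_wsd_features(tokens, pos_tags, idx, context_unigrams, selected_unigrams):
    """Features for the ambiguous word occurrence w = tokens[idx].

    tokens, pos_tags: the sentence containing w and its POS tags.
    context_unigrams: unigrams from the surrounding context of w
        (these may come from neighboring sentences as well).
    selected_unigrams: unigrams kept by the feature-selection step,
        precomputed over the training data.
    """
    def tok(i):
        return tokens[i] if 0 <= i < len(tokens) else "<PAD>"

    def pos(i):
        return pos_tags[i] if 0 <= i < len(pos_tags) else "<PAD>"

    features = {
        # local collocations: w-1 w+1, w-1, and w+1
        "colloc:w-1_w+1": tok(idx - 1) + "_" + tok(idx + 1),
        "colloc:w-1": tok(idx - 1),
        "colloc:w+1": tok(idx + 1),
        # parts-of-speech: P-1, P0, and P+1
        "pos:P-1": pos(idx - 1),
        "pos:P0": pos(idx),
        "pos:P+1": pos(idx + 1),
    }
    # surrounding words: binary features for the selected unigrams only
    for u in set(context_unigrams):
        if u in selected_unigrams:
            features["surr:" + u] = 1
    return features
```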
{ "text": "To measure the accuracy of our WSD classifier, we evaluate it on the test data of the SENSEVAL-3 Chinese lexical-sample task. We obtain accuracy that compares favorably to the best participating system in the task (Carpuat et al., 2004).", "cite_spans": [ { "start": 210, "end": 232, "text": "(Carpuat et al., 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Word Sense Disambiguation", "sec_num": "2" }, { "text": "Hiero (Chiang, 2005) is a hierarchical phrase-based model for statistical machine translation, based on a weighted synchronous context-free grammar (CFG) (Lewis and Stearns, 1968). A synchronous CFG consists of rewrite rules such as the following:", "cite_spans": [ { "start": 6, "end": 19, "text": "(Chiang, 2005", "ref_id": "BIBREF4" }, { "start": 153, "end": 178, "text": "(Lewis and Stearns, 1968)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X \u2192 \u03b3, \u03b1", "eq_num": "(1)" } ], "section": "Hiero", "sec_num": "3" }, { "text": "where X is a non-terminal symbol, \u03b3 (\u03b1) is a string of terminal and non-terminal symbols in the source (target) language, and there is a one-to-one correspondence between the non-terminals in \u03b3 and \u03b1 indicated by co-indexation. Hence, \u03b3 and \u03b1 always have the same number of non-terminal symbols. For instance, we could have the following grammar rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "X \u2192 X1 , go to X1 every month to (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "where boxed indices represent the correspondences between non-terminal symbols. Hiero extracts the synchronous CFG rules automatically from a word-aligned parallel corpus. To translate a source sentence, the goal is to find its most probable derivation using the extracted grammar rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" },
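As a toy illustration, such a rule can be represented and applied roughly as follows. The encoding is purely our own: strings like "X1" stand for co-indexed non-terminals, and "c1"-"c3" are stand-ins for the Chinese words of Rule (2), which are not reproduced in this extraction, so their order on the source side is an assumption.

```python
from dataclasses import dataclass

@dataclass
class SCFGRule:
    src: list  # the source (Chinese) side, gamma
    tgt: list  # the target (English) side, alpha

# An approximation of Rule (2); the Chinese word order is assumed.
rule2 = SCFGRule(src=["c1", "c2", "c3", "X1"],
                 tgt=["go", "to", "X1", "every", "month", "to"])

def expand(rule, substitutions):
    """Replace each co-indexed non-terminal on the target side with the
    translation produced by the corresponding sub-derivation."""
    out = []
    for sym in rule.tgt:
        out.extend(substitutions.get(sym, [sym]))
    return out

print(" ".join(expand(rule2, {"X1": ["the", "office"]})))
# -> go to the office every month to
```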
{ "text": "Hiero uses a general log-linear model (Och and Ney, 2002) where the weight of a derivation D for a particular source sentence and its translation is", "cite_spans": [ { "start": 326, "end": 345, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "w(D) = \u220f_i \u03c6_i(D)^{\u03bb_i} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "where \u03c6_i is a feature function and \u03bb_i is the weight for feature \u03c6_i. To ensure efficient decoding, the \u03c6_i are subject to certain locality restrictions. Essentially, they should be defined as products of functions defined on isolated synchronous CFG rules; however, it is possible to extend the domain of locality of the features somewhat. An n-gram language model adds a dependence on (n\u22121) neighboring target-side words (Wu, 1996; Chiang, 2007), making decoding much more difficult but still polynomial; in this paper, we add features that depend on the neighboring source-side words, which does not affect decoding complexity at all because the source string is fixed.", "cite_spans": [ { "start": 424, "end": 434, "text": "(Wu, 1996;", "ref_id": "BIBREF22" }, { "start": 435, "end": 448, "text": "Chiang, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "In principle we could add features that depend on arbitrary source-side context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hiero", "sec_num": "3" }, { "text": "To incorporate WSD into Hiero, we use the translations proposed by the WSD system to help Hiero obtain a better or more probable derivation during the translation of each source sentence. To achieve this, when a grammar rule R is considered during decoding, and we recognize that some of the terminal symbols (words) in \u03b1 are also chosen by the WSD system as translations for some terminal symbols (words) in \u03b3, we compute the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Features in Hiero for WSD", "sec_num": "3.1" }, { "text": "\u2022 P_wsd(t | s) gives the contextual probability of the WSD classifier choosing t as a translation for s, where t (s) is some substring of terminal symbols in \u03b1 (\u03b3). Because this probability only applies to some rules, and we don't want to penalize those rules, we must add another feature:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Features in Hiero for WSD", "sec_num": "3.1" }, { "text": "\u2022 P^ty_wsd = exp(\u2212|t|), where t is the translation chosen by the WSD system. This feature, with a negative weight, rewards rules that use translations suggested by the WSD module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Features in Hiero for WSD", "sec_num": "3.1" }, { "text": "Note that we can take the negative logarithm of the rule/derivation weights and think of them as costs rather than probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Features in Hiero for WSD", "sec_num": "3.1" },
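As a worked illustration of Equation (3) and the two features above, here is a minimal sketch; the dictionary layout and names are our own assumptions, not Hiero's internals. Note that exp(\u2212|t|) raised to a negative weight \u03bb equals exp(\u2212\u03bb|t|) > 1, which is why a negative weight rewards rules with matched translations.

```python
import math

def derivation_weight(feature_values, weights):
    """Equation (3): w(D) = prod_i phi_i(D) ** lambda_i."""
    w = 1.0
    for name, phi in feature_values.items():
        w *= phi ** weights[name]
    return w

def wsd_feature_values(matched_translations):
    """Values of the two WSD features for one rule.

    matched_translations: (translation, contextual_probability) pairs for
    the WSD translations matched against the rule's English side.
    """
    p_wsd = 1.0  # product of P_wsd(t | s) over matches
    p_ty = 1.0   # product of exp(-|t|) over matches
    for t, prob in matched_translations:
        p_wsd *= prob
        p_ty *= math.exp(-len(t.split()))
    return {"P_wsd": p_wsd, "Pty_wsd": p_ty}

values = wsd_feature_values([("every month", 0.7), ("go to", 0.4)])
values["lm"] = 0.05  # stand-in for the other model features
print(derivation_weight(values, {"P_wsd": 0.3, "Pty_wsd": -0.2, "lm": 1.0}))
```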
{ "text": "Our experiments were for Chinese to English translation. Hence, in the context of our work, a synchronous CFG grammar rule X \u2192 \u03b3, \u03b1 gathered by Hiero consists of a Chinese portion \u03b3 and a corresponding English portion \u03b1, where each portion is a sequence of words and non-terminal symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gathering Training Examples for WSD", "sec_num": "4" }, { "text": "Our WSD classifier suggests a list of English phrases (where each phrase consists of one or more English words) with associated contextual probabilities as possible translations for each particular Chinese phrase. In general, the Chinese phrase may consist of k Chinese words, where k = 1, 2, 3, ...; however, we limit k to 1 or 2 for the experiments reported in this paper. Future work can explore enlarging k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gathering Training Examples for WSD", "sec_num": "4" }, { "text": "Whenever Hiero is about to extract a grammar rule whose Chinese portion is a phrase of one or two Chinese words with no non-terminal symbols, we note the location (sentence and token offset) in the Chinese half of the parallel corpus from which the Chinese portion of the rule is extracted. The actual sentence in the corpus containing the Chinese phrase, together with the one sentence before and the one sentence after it, will serve as the context for one training example for the Chinese phrase, with the corresponding English phrase of the grammar rule as its translation. Hence, unlike traditional WSD where the sense classes are tied to a specific sense inventory, our \"senses\" here consist of the English phrases extracted as translations for each Chinese phrase. Since the extracted training data may be noisy, for each Chinese phrase, we remove English translations that occur only once. Furthermore, we only attempt WSD classification for those Chinese phrases with at least 10 training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gathering Training Examples for WSD", "sec_num": "4" }, { "text": "Using the WSD classifier described in Section 2, we classified the words in each Chinese source sentence to be translated. We first performed WSD on all single Chinese words which are either a noun, a verb, or an adjective. Next, we classified the Chinese phrases consisting of 2 consecutive Chinese words by simply treating the phrase as a single unit. When performing classification, we give as output the set of English translations with associated context-dependent probabilities, which are the probabilities of a Chinese word (phrase) translating into each English phrase, depending on the context of the Chinese word (phrase).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gathering Training Examples for WSD", "sec_num": "4" },
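The gathering procedure just described can be sketched as follows, under assumed data structures: the corpus as a list of Chinese sentences, and each extraction event recorded as a (Chinese phrase, English phrase, sentence index) triple while Hiero extracts lexical rules. The names are hypothetical.

```python
from collections import defaultdict

def gather_training_data(extracted_rules, corpus):
    """extracted_rules: (chinese_phrase, english_phrase, sent_idx) triples
    for rules whose Chinese portion is 1-2 words with no non-terminals.
    corpus: the Chinese half of the parallel corpus, one sentence per item."""
    examples = defaultdict(list)  # chinese phrase -> (context, translation)
    for zh, en, i in extracted_rules:
        # context = the sentence containing the phrase, plus one sentence
        # before and one sentence after it
        context = " ".join(corpus[max(i - 1, 0): i + 2])
        examples[zh].append((context, en))

    training = {}
    for zh, exs in examples.items():
        counts = defaultdict(int)
        for _, en in exs:
            counts[en] += 1
        # the extracted data may be noisy: drop translations seen only once
        exs = [(ctx, en) for ctx, en in exs if counts[en] > 1]
        # only attempt classification with at least 10 training examples
        if len(exs) >= 10:
            training[zh] = exs
    return training
```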
{ "text": "After WSD, the ith word c_i in every Chinese sentence may have up to 3 sets of associated translations provided by the WSD system: a set of translations for c_i as a single word, a second set of translations for c_{i-1}c_i considered as a single unit, and a third set of translations for c_ic_{i+1} considered as a single unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gathering Training Examples for WSD", "sec_num": "4" }, { "text": "The following tasks are done for each rule that is considered during decoding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "\u2022 identify Chinese words to suggest translations for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "\u2022 match suggested translations against the English side of the rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "\u2022 compute features for the rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "The WSD system is able to predict translations only for a subset of Chinese words or phrases. Hence, we must first identify which parts of the Chinese side of the rule have suggested translations available. Here, we consider substrings of length up to two, and we give priority to longer substrings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "Next, we want to know, for each Chinese substring considered, whether the WSD system supports the Chinese-English translation represented by the rule. If the rule is finally chosen as part of the best derivation for translating the Chinese sentence, then all the words in the English side of the rule will appear in the translated English sentence. Hence, we need to match the translations suggested by the WSD system against the English side of the rule. It is for these matching rules that the WSD features will apply.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "The translations proposed by the WSD system may be more than one word long. In order for a proposed translation to match the rule, we require two conditions. First, the proposed translation must be a substring of the English side of the rule. For example, the proposed translation \"every to\" would not match the chunk \"every month to\". Second, the match must contain at least one aligned Chinese-English word pair, but we do not make any other requirements about the alignment of the other Chinese or English words. 1 If there are multiple possible matches, we choose the longest proposed translation; in the case of a tie, we choose the proposed translation with the highest score according to the WSD model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "Define a chunk of a rule to be a maximal substring of terminal symbols on the English side of the rule. For example, in Rule (2), the chunks would be \"go to\" and \"every month to\". The matching conditions are illustrated in the sketch below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" },
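This is a minimal sketch of the two matching conditions and the tie-breaking policy; the representation of alignments (as a set of English positions within the chunk) is our own assumption.

```python
def best_match(proposals, chunk_words, aligned_positions):
    """Pick the best WSD proposal matching one English chunk.

    proposals: (translation, wsd_probability) pairs for a Chinese substring.
    chunk_words: the chunk's English words, e.g. ["every", "month", "to"].
    aligned_positions: indices (within the chunk) of the English words
        aligned to the Chinese substring.
    """
    candidates = []
    for t, prob in proposals:
        t_words = t.split()
        m = len(t_words)
        for start in range(len(chunk_words) - m + 1):
            # condition 1: the proposal is a substring of the chunk
            if chunk_words[start:start + m] != t_words:
                continue
            # condition 2: it covers at least one aligned word pair
            if any(start <= j < start + m for j in aligned_positions):
                candidates.append((t, prob))
                break
    if not candidates:
        return None
    # longest proposal wins; ties broken by the WSD model's score
    return max(candidates, key=lambda tp: (len(tp[0].split()), tp[1]))

# "every month" matches the chunk "every month to" via aligned position 0,
# while "every to" fails condition 1
print(best_match([("every month", 0.7), ("every to", 0.6)],
                 ["every", "month", "to"], {0}))
```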
{ "text": "Whenever we find a matching WSD translation, we mark the whole chunk on the English side as consumed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "Finally, we compute the feature values for the rule. In cost (negative log) space, the feature P_wsd(t | s) is the sum of the costs (according to the WSD model) of all the matched translations, and the feature P^ty_wsd is the sum of the lengths of all the matched translations. Figure 1 shows the pseudocode for the rule scoring algorithm in more detail, particularly with regard to resolving conflicts between overlapping matches. To illustrate the algorithm given in Figure 1, consider Rule (2). Hereafter, we will use symbols to represent the Chinese and English words in the rule: c_1, c_2, and c_3 will represent the Chinese words \" \", \" \", and \" \" respectively. Similarly, e_1, e_2, e_3, e_4, and e_5 will represent the English words go, to, every, month, and to respectively. Hence, Rule (2) has two chunks: e_1e_2 and e_3e_4e_5. When the rule is extracted from the parallel corpus, it has these alignments between the words of its Chinese and English portions: {c_1-e_3, c_2-e_4, c_3-e_1, c_3-e_2, c_3-e_5}, which means that c_1 is aligned to e_3, c_2 is aligned to e_4, and c_3 is aligned to e_1, e_2, and e_5.", "cite_spans": [], "ref_spans": [ { "start": 249, "end": 257, "text": "Figure 1", "ref_id": null }, { "start": 441, "end": 450, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "Input: rule R considered during decoding, with its own associated cost costR
Lc = list of symbols in the Chinese portion of R
WSDcost = 0
i = 1
while i \u2264 len(Lc):
    ci = ith symbol in Lc
    if ci is a Chinese word (i.e., not a non-terminal symbol):
        seenChunk = \u2205   // seenChunk is a global variable and is passed by reference to matchWSD
        if (ci is not the last symbol in Lc) and (ci+1 is a terminal symbol): ci+1 = (i+1)th symbol in Lc, else ci+1 = NULL
        if (ci+1 != NULL) and (ci, ci+1) as a single unit has WSD translations:
            WSDc = set of WSD translations for (ci, ci+1) as a single unit, with context-dependent probabilities
            WSDcost = WSDcost + matchWSD(ci, WSDc, seenChunk)
            WSDcost = WSDcost + matchWSD(ci+1, WSDc, seenChunk)
            i = i + 2
        else:
            WSDc = set of WSD translations for ci, with context-dependent probabilities
            WSDcost = WSDcost + matchWSD(ci, WSDc, seenChunk)
            i = i + 1
    else:
        i = i + 1
costR = costR + WSDcost

matchWSD(c, WSDc, seenChunk):
    // seenChunk is the set of chunks of R already examined for possible matching WSD translations
    cost = 0
    ChunkSet = set of chunks in R aligned to c
    for chunk_j in ChunkSet:
        if chunk_j not in seenChunk:
            seenChunk = seenChunk \u222a { chunk_j }
            E_chunk_j = set of English words in chunk_j aligned to c
            Candidate_wsd = \u2205
            for wsd_k in WSDc:
                if (wsd_k is a sub-sequence of chunk_j) and (wsd_k contains at least one word in E_chunk_j):
                    Candidate_wsd = Candidate_wsd \u222a { wsd_k }
            if Candidate_wsd is not empty:
                wsd_best = best matching translation in Candidate_wsd against chunk_j
                cost = cost + costByWSDfeatures(wsd_best)   // costByWSDfeatures sums up the cost of the two WSD features
    return cost

Figure 1: WSD translations affecting the cost of a rule R considered during decoding.", "cite_spans": [], "ref_spans": [ { "start": 841, "end": 849, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" },
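For concreteness, the loop of Figure 1 can be rendered in runnable form roughly as follows. The data structures are simplified assumptions (a rule reduced to its Chinese symbols plus a mapping from each Chinese word to its aligned English chunks), it reuses the `best_match` sketch given earlier, and `cost_of` stands in for costByWSDfeatures with placeholder weights; it is an approximation, not Hiero's implementation.

```python
import math

def cost_of(match, lam_wsd=1.0, lam_ty=-1.0):
    """Negative-log cost of the two WSD features for one matched translation.
    cost = -lam_wsd * log(P_wsd) + lam_ty * |t|; the weights are placeholders
    that MERT would tune (lam_ty < 0 turns the second term into a reward)."""
    t, prob = match
    return lam_wsd * -math.log(prob) + lam_ty * len(t.split())

def score_rule(src_symbols, chunks_for, rule_cost, wsd_lookup):
    """src_symbols: Chinese side, e.g. ["c1", "c2", "c3", "X1"]; symbols
    beginning with "X" are non-terminals (a toy convention).
    chunks_for: word -> list of (chunk_words, aligned_positions).
    wsd_lookup: word or 2-word tuple -> list of (translation, prob)."""
    def match_wsd(c, proposals, seen):
        cost = 0.0
        for chunk_words, aligned in chunks_for.get(c, []):
            key = tuple(chunk_words)
            if key in seen:
                continue          # this chunk was already consumed
            seen.add(key)
            best = best_match(proposals, chunk_words, aligned)
            if best is not None:
                cost += cost_of(best)
        return cost

    total, i = rule_cost, 0
    while i < len(src_symbols):
        c = src_symbols[i]
        if c.startswith("X"):     # skip non-terminal symbols
            i += 1
            continue
        seen = set()
        nxt = src_symbols[i + 1] if i + 1 < len(src_symbols) else None
        pair = (c, nxt) if nxt and not nxt.startswith("X") else None
        if pair is not None and pair in wsd_lookup:
            proposals = wsd_lookup[pair]      # the pair acts as one unit
            total += match_wsd(c, proposals, seen)
            total += match_wsd(nxt, proposals, seen)
            i += 2
        else:
            if c in wsd_lookup:
                total += match_wsd(c, wsd_lookup[c], seen)
            i += 1
    return total
```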
{ "text": "Although all words are aligned here, in general for a rule, some of its Chinese or English words may not be associated with any alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "In our experiment, c_1c_2 as a phrase has a list of translations proposed by the WSD system, including the English phrase \"every month\". matchWSD will first be invoked for c_1, which is aligned to only one chunk, e_3e_4e_5, via its alignment with e_3. Since \"every month\" is a sub-sequence of the chunk and also contains the word e_3 (\"every\"), it is noted as a candidate translation. Later, it is determined that the longest candidate translations are two words long. Since, among all the 2-word candidate translations, the translation \"every month\" has the highest translation probability as assigned by the WSD classifier, it is chosen as the best matching translation for the chunk. matchWSD is then invoked for c_2, which is aligned to only one chunk, e_3e_4e_5. However, since this chunk has already been examined by c_1, with which it is considered as a phrase, no further matching is done for c_2. Next, matchWSD is invoked for c_3, which is aligned to both chunks of R. The English phrases \"go to\" and \"to\" are among the list of translations proposed by the WSD system for c_3, and they are eventually chosen as the best matching translations for the chunks e_1e_2 and e_3e_4e_5, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating WSD during Decoding", "sec_num": "5" }, { "text": "As mentioned, our experiments were on Chinese to English translation. Similar to (Chiang, 2005), we trained the Hiero system on the FBIS corpus, used the NIST MT 2002 evaluation test set as our development set to tune the feature weights, and the NIST MT 2003 evaluation test set as our test data.", "cite_spans": [ { "start": 81, "end": 95, "text": "(Chiang, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Using the English portion of the FBIS corpus and the Xinhua portion of the Gigaword corpus, we trained a trigram language model using the SRI Language Modelling Toolkit (Stolcke, 2002). Following (Chiang, 2005), we used the version 11a NIST BLEU script with its default settings to calculate the BLEU scores (Papineni et al., 2002) based on case-insensitive n-gram matching, where n is up to 4. First, we performed word alignment on the FBIS parallel corpus using GIZA++ (Och and Ney, 2000) in both directions. The word alignments of both directions are then combined into a single set of alignments using the \"diag-and\" method of Koehn et al. (2003). Based on these alignments, synchronous CFG rules are then extracted from the corpus. While Hiero is extracting grammar rules, we gathered WSD training data by following the procedure described in Section 4.", "cite_spans": [ { "start": 163, "end": 178, "text": "(Stolcke, 2002)", "ref_id": "BIBREF20" }, { "start": 191, "end": 205, "text": "(Chiang, 2005)", "ref_id": "BIBREF4" }, { "start": 304, "end": 327, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 466, "end": 485, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" },
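Since all scores reported below are BLEU, here is a self-contained, simplified single-reference sketch of the metric (case-insensitive n-gram matching with n up to 4, following Papineni et al. (2002)); the official NIST v11a script additionally handles multiple references and corpus-level counts, so this is an illustration, not the evaluation script itself.

```python
import math
from collections import Counter

def bleu4(candidate, reference):
    """Simplified single-reference BLEU with n-grams up to 4."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    log_prec = 0.0
    for n in range(1, 5):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clipped n-gram matches against the reference
        clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total) / 4.0
    # brevity penalty for candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * math.exp(log_prec)
```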
{ "text": "Using the MT 2002 test set, we ran minimum error rate training (MERT) (Och, 2003) with the decoder to tune the weights for each feature. The weights obtained are shown in the row Hiero of Table 2. Using these weights, we ran Hiero's decoder to perform the actual translation of the MT 2003 test sentences and obtained a BLEU score of 29.73, as shown in the row Hiero of Table 1. This is higher than the score of 28.77 reported in (Chiang, 2005), perhaps due to differences in word segmentation, etc. Note that, compared with the MT systems used in (Carpuat and Wu, 2005) and (Cabezas and Resnik, 2005), the Hiero system we are using represents a much stronger baseline MT system upon which the WSD system must improve.", "cite_spans": [ { "start": 73, "end": 84, "text": "(Och, 2003)", "ref_id": "BIBREF18" }, { "start": 434, "end": 448, "text": "(Chiang, 2005)", "ref_id": "BIBREF4" }, { "start": 553, "end": 575, "text": "(Carpuat and Wu, 2005)", "ref_id": "BIBREF1" }, { "start": 580, "end": 605, "text": "(Cabezas and Resnik, 2005", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Hiero Results", "sec_num": "6.1" }, { "text": "We then added the WSD features of Section 3.1 into Hiero and reran the experiment. The weights obtained by MERT are shown in the row Hiero+WSD of Table 2. We note that a negative weight is learnt for P^ty_wsd. This means that, in general, the model prefers grammar rules having chunks that match WSD translations, which matches our intuition. Using the weights obtained, we translated the test sentences and obtained a BLEU score of 30.30, as shown in the row Hiero+WSD of Table 1. The improvement of 0.57 is statistically significant at p < 0.05 using the sign-test as described by Collins et al. (2005), with 374 (+1), 318 (\u22121) and 227 (0). Using the bootstrap-sampling test described in (Koehn, 2004b), the improvement is also statistically significant at p < 0.05. Though the improvement is modest, it is statistically significant, and this positive result is important in view of the negative findings in (Carpuat and Wu, 2005) that WSD does not help MT. Furthermore, note that Hiero+WSD has higher n-gram precisions than Hiero.", "cite_spans": [ { "start": 585, "end": 606, "text": "Collins et al. (2005)", "ref_id": "BIBREF6" }, { "start": 693, "end": 707, "text": "(Koehn, 2004b)", "ref_id": "BIBREF11" }, { "start": 908, "end": 929, "text": "(Carpuat and Wu, 2005", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Hiero+WSD Results", "sec_num": "6.2" }, { "text": "Ideally, the WSD system should be suggesting high-quality translations which are frequently part of the reference sentences. To determine this, we note the set of grammar rules used in the best derivation for translating each test sentence. From the rules of each test sentence, we tabulated the set of translations proposed by the WSD system and checked whether they are found in the associated reference sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "On the entire set of NIST MT 2003 evaluation test sentences, an average of 10.36 translations proposed by the WSD system were used for each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" },
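The significance claim of Section 6.2 can be checked from the reported counts alone. Below is a sketch of a standard two-sided sign test over the non-tied sentences (ties discarded), which we assume is what the Collins et al. (2005) comparison reduces to here; with 374 wins and 318 losses it gives p of roughly 0.037.

```python
from math import comb

def sign_test_p(plus, minus):
    """Two-sided sign test: p-value for `plus` wins vs `minus` losses,
    under the null hypothesis that each is equally likely."""
    n = plus + minus
    k = max(plus, minus)
    # probability of an outcome at least this extreme under p = 0.5
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(sign_test_p(374, 318))  # ~0.037 < 0.05, consistent with the reported result
```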
{ "text": "We have shown that WSD improves the translation performance of a state-of-the-art hierarchical phrase-based statistical MT system, and that this improvement is statistically significant. We have also demonstrated one way to integrate a WSD system into an MT system without introducing any rules that compete against existing rules, and where the feature-weight tuning and decoding place the WSD system on an equal footing with the other model components. For future work, an immediate step would be for the WSD classifier to provide translations for longer Chinese phrases. Also, different alternatives could be tried to match the translations provided by the WSD classifier against the chunks of rules. Finally, besides our proposed approach of integrating WSD into statistical MT via the introduction of two new features, we could explore other alternative ways of integration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In order to check this requirement, we extended Hiero to make word alignment information available to the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Yee Seng Chan is supported by a Singapore Millennium Foundation Scholarship (ref no. SMF-2004-1076). David Chiang was partially supported under the GALE program of the Defense Advanced Research Projects Agency, contract HR0011-06-C-0022.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Table 3: Number of WSD translations used and the proportion that match against the respective reference sentences. WSD translations longer than 4 words are very sparse (fewer than 10 occurrences) and thus are not shown. When limited to the set of 374 sentences which were judged by the Collins sign-test to have better translations from Hiero+WSD than from Hiero, a higher number (11.14) of proposed translations were used on average. Further, for the entire set of test sentences, 73.01% of the proposed translations are found in the reference sentences. This increased to a proportion of 73.22% when limited to the set of 374 sentences. These figures show that having more, and higher-quality, proposed translations contributed to the set of 374 sentences being better translations than their respective original translations from Hiero. Table 3 gives a detailed breakdown of these figures according to the number of words in each proposed translation. For instance, over all the test sentences, the WSD module gave 7087 translations of single-word length, and 77.31% of these translations match their respective reference sentences. We note that although the proportion of matching 2-word translations is slightly lower for the set of 374 sentences, the proportion increases for translations having more words. After the experiments in Section 6 were completed, we visually inspected the translation output of Hiero and Hiero+WSD to categorize the ways in which integrating WSD contributes to better translations. The first way in which WSD helps is when it enables the integrated Hiero+WSD system to output extra appropriate English words. For example, the translations for the Chinese sentence \". . .
\" are as follows.\u2022 Hiero: . . . or other bad behavior \", will be more aid and other concessions.\u2022 Hiero+WSD: . . . or other bad behavior \", will be unable to obtain more aid and other concessions.Here, the Chinese words \" \" are not translated by Hiero at all. By providing the correct translation of \"unable to obtain\" for \" \", the translation output of Hiero+WSD is more complete.A second way in which WSD helps is by correcting a previously incorrect translation. For example, for the Chinese sentence \". . . . . . \", the WSD system helps to correct Hiero's original translation by providing the correct translation of \"all ethnic groups\" for the Chinese phrase \" \":\u2022 Hiero: . . . , and people of all nationalities across the country, . . .\u2022 Hiero+WSD: . . . , and people of all ethnic groups across the country, . . .We also looked at the set of 318 sentences that were judged by the Collins sign-test to be worse translations. We found that in some situations, Hiero+WSD has provided extra appropriate English words, but those particular words are not used in the reference sentences. An interesting example is the translation of the Chinese sentence \" \".\u2022 Hiero: Australian foreign minister said that North Korea bad behavior will be more aid\u2022 Hiero+WSD: Australian foreign minister said that North Korea bad behavior will be unable to obtain more aid This is similar to the example mentioned earlier. In this case however, those extra English words provided by Hiero+WSD, though appropriate, do not", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 3", "ref_id": null }, { "start": 886, "end": 893, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using WSD techniques for lexical selection in statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Cabezas", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cabezas and P. Resnik. 2005. Using WSD techniques for lexical selection in statistical machine translation. Technical report, University of Maryland.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Word sense disambiguation vs. statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL05", "volume": "", "issue": "", "pages": "387--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Carpuat and D. Wu. 2005. Word sense disambiguation vs. statistical machine translation. In Proc. of ACL05, pages 387-394.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Augmenting ensemble classification for word sense disambiguation with a kernel PCA model", "authors": [ { "first": "M", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "W", "middle": [], "last": "Su", "suffix": "" }, { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2004, "venue": "Proc. of SENSEVAL-3", "volume": "", "issue": "", "pages": "88--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Carpuat, W. Su, and D. Wu. 2004. Augmenting ensemble classification for word sense disambiguation with a kernel PCA model. In Proc. 
of SENSEVAL-3, pages 88-92.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "LIBSVM: a library for support vector machines", "authors": [ { "first": "C", "middle": [ "C" ], "last": "Chang", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Lin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. C. Chang and C. J. Lin, 2001. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL05", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL05, pages 263-270.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2007. Hierarchical phrase-based translation. To appear in Computational Linguistics, 33(2).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Clause restructuring for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "I", "middle": [], "last": "Kucerova", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL05", "volume": "", "issue": "", "pages": "531--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins, P. Koehn, and I. Kucerova. 2005. Clause restructuring for statistical machine translation. In Proc. of ACL05, pages 531-540.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Greedy decoding for statistical machine translation in almost linear time", "authors": [ { "first": "U", "middle": [], "last": "Germann", "suffix": "" } ], "year": 2003, "venue": "Proc. of HLT-NAACL03", "volume": "", "issue": "", "pages": "72--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. Germann. 2003. Greedy decoding for statistical machine translation in almost linear time. In Proc. of HLT-NAACL03, pages 72-79.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proc. of HLT-NAACL03", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of HLT-NAACL03, pages 48-54.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Noun Phrase Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2003. Noun Phrase Translation. Ph.D.
thesis, University of Southern California.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pharaoh: A beam search decoder for phrase-based statistical machine translation models", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proc. of AMTA04", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2004a. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proc. of AMTA04, pages 115-124.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proc. of EMNLP04", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2004b. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP04, pages 388-395.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation", "authors": [ { "first": "Y", "middle": [ "K" ], "last": "Lee", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proc. of EMNLP02", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. K. Lee and H. T. Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proc. of EMNLP02, pages 41-48.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Syntax-directed transduction", "authors": [ { "first": "P", "middle": [ "M" ], "last": "Lewis", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Stearns", "suffix": "" } ], "year": 1968, "venue": "Journal of the ACM", "volume": "15", "issue": "3", "pages": "465--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. M. Lewis II and R. E. Stearns. 1968. Syntax-directed transduction. Journal of the ACM, 15(3):465-488.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proc. of EMNLP02", "volume": "", "issue": "", "pages": "133--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. of EMNLP02, pages 133-139.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improved statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of ACL00", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of ACL00, pages 440-447.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc.
of ACL02", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2002. Discriminative training and max- imum entropy models for statistical machine translation. In Proc. of ACL02, pages 295-302.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguis- tics, 30(4):417-449.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of ACL03", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och. 2003. Minimum error rate training in statistical ma- chine translation. In Proc. of ACL03, pages 160-167.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "BLEU: A method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL02", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. of ACL02, pages 311-318.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. of ICSLP02", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proc. of ICSLP02, pages 901-904.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Word-sense disambiguation for machine translation", "authors": [ { "first": "D", "middle": [], "last": "Vickrey", "suffix": "" }, { "first": "L", "middle": [], "last": "Biewald", "suffix": "" }, { "first": "M", "middle": [], "last": "Teyssier", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2005, "venue": "Proc. of EMNLP05", "volume": "", "issue": "", "pages": "771--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Vickrey, L. Biewald, M. Teyssier, and D. Koller. 2005. Word-sense disambiguation for machine translation. In Proc. of EMNLP05, pages 771-778.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A polynomial-time algorithm for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1996, "venue": "Proc. of ACL96", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Wu. 1996. A polynomial-time algorithm for statistical ma- chine translation. In Proc. 
of ACL96, pages 152-158.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
System      BLEU-4    1       2       3       4    (individual n-gram precisions)
Hiero       29.73     74.73   40.14   21.83   11.93
Hiero+WSD   30.30     74.82   40.40   22.45   12.42
", "text": ", we trained the Hiero system on the FBIS corpus, used the NIST MT 2002 evaluation test set as our development set to tune the feature weights, and the NIST MT 2003 evaluation test set as our test data. Using", "type_str": "table", "num": null, "html": null }, "TABREF1": { "content": "
Features
", "text": "BLEU scores", "type_str": "table", "num": null, "html": null }, "TABREF2": { "content": "", "text": "Weights for each feature obtained by MERT training. The first eight features are those used by Hiero in", "type_str": "table", "num": null, "html": null } } } }