|
{ |
|
"paper_id": "Y11-1016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:39:10.443019Z" |
|
}, |
|
"title": "Improving Sampling-based Alignment by Investigating the Distribution of N-grams in Phrase Translation Tables", |
|
"authors": [ |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Waseda University", |
|
"location": { |
|
"addrLine": "2-7 Hibikino, Wakamatsu-ku", |
|
"postCode": "808-0135", |
|
"settlement": "Kitakyushu", |
|
"region": "Fukuoka", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Adrien", |
|
"middle": [], |
|
"last": "Lardilleux", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "TLP Group, LIMSI-CNRS, BP 133", |
|
"institution": "", |
|
"location": { |
|
"postCode": "91403", |
|
"settlement": "Orsay Cedex", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "adrien.lardilleux@limsi.fr" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lepage", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Waseda University", |
|
"location": { |
|
"addrLine": "2-7 Hibikino, Wakamatsu-ku", |
|
"postCode": "808-0135", |
|
"settlement": "Kitakyushu", |
|
"region": "Fukuoka", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "yves.lepage@aoni.waseda.jp" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes an approach to improve the performance of sampling-based multilingual alignment on translation tasks by investigating the distribution of n-grams in the translation tables. This approach consists in enforcing the alignment of n-grams. The quality of phrase translation tables output by this approach and that of MGIZA++ is compared in statistical machine translation tasks. Significant improvements for this approach are reported. In addition, merging translation tables is shown to outperform state-of-the-art techniques.", |
|
"pdf_parse": { |
|
"paper_id": "Y11-1016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes an approach to improve the performance of sampling-based multilingual alignment on translation tasks by investigating the distribution of n-grams in the translation tables. This approach consists in enforcing the alignment of n-grams. The quality of phrase translation tables output by this approach and that of MGIZA++ is compared in statistical machine translation tasks. Significant improvements for this approach are reported. In addition, merging translation tables is shown to outperform state-of-the-art techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{

"text": "Phrase translation tables play an important role in the process of building machine translation systems. The quality of the translation table, which identifies the relations between words or phrases in the source language and those in the target language, is crucial for the quality of the output of most machine translation systems. Currently, the most widely used state-of-the-art tool to generate phrase translation tables is GIZA++ (Och and Ney, 2003) , which trains the ubiquitous IBM models (Brown et al., 1993) and the HMM introduced by (Vogel et al., 1996) , in combination with the Moses toolkit (Koehn et al., 2007) . MGIZA++, a multi-threaded word aligner based on GIZA++, was proposed by (Gao and Vogel, 2008) .",

"cite_spans": [

{

"start": 436,

"end": 455,

"text": "(Och and Ney, 2003)",

"ref_id": "BIBREF18"

},

{

"start": 497,

"end": 517,

"text": "(Brown et al., 1993)",

"ref_id": "BIBREF0"

},

{

"start": 544,

"end": 564,

"text": "(Vogel et al., 1996)",

"ref_id": "BIBREF22"

},

{

"start": 605,

"end": 625,

"text": "(Koehn et al., 2007)",

"ref_id": "BIBREF7"

},

{

"start": 700,

"end": 721,

"text": "(Gao and Vogel, 2008)",

"ref_id": "BIBREF4"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},
|
{ |
|
"text": "In this paper, we investigate a different approach to the production of phrase translation tables: the sampling-based approach (Lardilleux and Lepage, 2009b) . This approach is implemented in a free open-source tool called Anymalign. 1 Being in line with the associative alignment trend illustrated by (Gale and Church, 1991; Melamed, 2000; Moore, 2005) , it is much simpler than the models implemented in MGIZA++, which are in line with the estimating trend illustrated by (Brown et al., 1991; Och and Ney, 2003; Liang et al., 2006) . In addition, it is capable of aligning multiple languages simultaneously; but we will not use this feature here as we will restrain ourselves to bilingual experiments in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 157, |
|
"text": "(Lardilleux and Lepage, 2009b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 325, |
|
"text": "(Gale and Church, 1991;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 340, |
|
"text": "Melamed, 2000;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 353, |
|
"text": "Moore, 2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 494, |
|
"text": "(Brown et al., 1991;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 513, |
|
"text": "Och and Ney, 2003;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 533, |
|
"text": "Liang et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In sampling-based alignment, only those sequences of words sharing the exact same distribution (i.e., they appear exactly in the same sentences of the corpus) are considered for alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The key idea is to make more words share the same distribution by artificially reducing their frequency in multiple random subcorpora obtained by sampling. Indeed, the smaller a subcorpus, the less frequent its words, and the more likely they are to share the same distribution; hence the higher the proportion of words aligned in this subcorpus. In practice, the majority of these words turn out to be hapaxes, that is, words that occur only once in the input corpus. Hapaxes have been shown to safely align across languages (Lardilleux and Lepage, 2009a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 526, |
|
"end": 556, |
|
"text": "(Lardilleux and Lepage, 2009a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The subcorpus selection process is guided by a probability distribution which ensures a proper coverage of the input parallel corpus:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(k) = \u22121 k log(1 \u2212 k/n) (to be normalized)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "where k denotes the size (number of sentences) of a subcorpus and n the size of the complete input corpus. Note that this function is very close to 1/k^2: it gives much more credit to small subcorpora, which happen to be the most productive (Lardilleux and Lepage, 2009b) . Once the size of a subcorpus has been chosen according to this distribution, its sentences are randomly selected from the complete input corpus according to a uniform distribution. Then, from each subcorpus, sequences of words that share the same distribution are extracted to constitute alignments along with the number of times they were aligned. 2 Eventually, the list of alignments is turned into a full-fledged translation table, by calculating various features for each alignment. In the following, we use two translation probabilities and two lexical weights as proposed by (Koehn et al., 2003) , as well as the commonly used phrase penalty, for a total of five features.",

"cite_spans": [

{

"start": 241,

"end": 271,

"text": "(Lardilleux and Lepage, 2009b)",

"ref_id": "BIBREF11"

},

{

"start": 855,

"end": 875,

"text": "(Koehn et al., 2003)",

"ref_id": "BIBREF8"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},
|
{ |
|
"text": "One important feature of the sampling-based alignment method is that it is implemented with an anytime algorithm: the number of random subcorpora to be processed is not set in advance, so the alignment process can be interrupted at any moment. Contrary to many approaches, after a very short amount of time, quality is no more a matter of time, however quantity is: the longer the aligner runs (i.e. the more subcorpora processed), the more alignments produced, and the more reliable their associated translation probabilities, as they are calculated on the basis of the number of time each alignment was obtained. This is possible because high frequency alignments are quickly output with a fairly good estimation of their translation probabilities. As time goes, their estimation is refined, while less frequent alignments are output in addition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Intuitively, since the sampling-based alignment process can be interrupted without sacrificing the quality of alignments, it should be possible to allot more processing time for n-grams of similar lengths in both languages and less time to very different lengths. For instance, a source bigram is much less likely to be aligned with a target 9-gram than with a bigram or a trigram. The experiments reported in this paper make use of the anytime feature of Anymalign and of the possibility of allotting time freely.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper is organized as follows: Section 2 describes a preliminary experiment on the sampling-based alignment approach implemented in Anymalign baseline and provides the experimental results from which the problem is defined. In Section 3, we propose a variant in order to improve its performance on statistical machine translation tasks. Section 4 introduces standard normal distribution of time to bias the distribution of n-grams in phrase translation tables. Section 5 describes the effects of pruning on the translation quality. Section 6 presents the merge of two aligners' phrase translation tables. Finally, in Section 7, conclusions and possible directions for future work are presented.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to measure the performance of the sampling-based alignment approach implemented in Anymalign in statistical machine translation tasks, we conducted a preliminary experiment and compared with the standard alignment setting: symmetric alignments obtained from MGIZA++. Although Anymalign and MGIZA++ are both capable of parallel processing, for fair comparison in time, we run them as single processes in all our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminary Experiment", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A sample of the French-English parts of the Europarl parallel corpus was used for training, tuning and testing. A detailed description of the data used in the experiments is given in Table 1 . The training corpus is made of 100k sentences. The development set contains 500 sentences, and 1,000 sentences were used for testing. To perform the experiments, a standard statistical machine translation system was built for each different alignment setting, using the Moses decoder (Koehn et al., 2007) , MERT (Minimum Error Rate Training) to tune the parameters of translation tables , and the SRI Language Modeling toolkit (Stolcke, 2002) to build the target language model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 497, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 635, |
|
"text": "(Stolcke, 2002)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 190, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As for the evaluation of translations, four standard automatic evaluation metrics were used: mWER (Nie\u00dfen et al., 2000) , BLEU (Papineni et al., 2002) , NIST (Doddington, 2002) , and TER (Snover et al., 2006) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 119, |
|
"text": "(Nie\u00dfen et al., 2000)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 150, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 176, |
|
"text": "(Doddington, 2002)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 208, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "2.1" |
|
}, |
|
{

"text": "In a first setting, we evaluated the quality of translations output by the Moses decoder using the phrase table obtained by making MGIZA++'s alignments symmetric. In a second setting, this phrase table was simply replaced by that produced by Anymalign. Since Anymalign can be stopped at any time, for a fair comparison it was run for the same amount of time as MGIZA++: seven hours in total. The experimental results are shown in Table 2 . In order to investigate the differences between MGIZA++ and Anymalign phrase translation tables, we analyzed the distribution of n-grams of both aligners. The distributions are shown in Table 7 (a) and Table 7 (b). In Anymalign's phrase translation table, the number of alignments is 8 times that of 1 \u00d7 1 n-grams in MGIZA++ translation table, or twice the number of 1 \u00d7 2 n-grams or 2 \u00d7 1 n-grams in MGIZA++ translation table. Along the diagonal (m \u00d7 m n-grams), the number of alignments in Anymalign table is more than 10 times less than in MGIZA++ table. This confirms the results given in (Lardilleux et al., 2009 ) that the sampling-based approach excels in aligning unigrams, which makes it better at multilingual lexicon induction than, e.g., MGIZA++. However, its phrase tables do not reach the performance of symmetric alignments from MGIZA++ on translation tasks. This basically comes from the fact that Anymalign does not align enough long n-grams (Lardilleux et al., 2009) .",

"cite_spans": [

{

"start": 1033,

"end": 1057,

"text": "(Lardilleux et al., 2009",

"ref_id": "BIBREF9"

},

{

"start": 1399,

"end": 1424,

"text": "(Lardilleux et al., 2009)",

"ref_id": "BIBREF9"

}

],

"ref_spans": [

{

"start": 430,

"end": 437,

"text": "Table 2",

"ref_id": "TABREF1"

},

{

"start": 626,

"end": 633,

"text": "Table 7",

"ref_id": "TABREF6"

},

{

"start": 642,

"end": 649,

"text": "Table 7",

"ref_id": "TABREF6"

}

],

"eq_spans": [],

"section": "Problem Definition",

"sec_num": "2.2"

},
|
{ |
|
"text": "3 Anymalign1-N", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To solve the above-mentioned problem, we propose a method to force the sampling-based approach to align more n-grams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enforcing Alignment of N-grams", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Consider that we have a parallel input corpus, i.e., a list of (source, target) sentence pairs, for instance, in French and English. Groups of characters that are separated by spaces in these sentences are considered as words. Single words are referred to as unigrams, and sequences of two and three words are called bigrams and trigrams, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enforcing Alignment of N-grams", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "Theoretically, since the sampling-based alignment method excels at aligning unigrams, we could improve it by making it align bigrams, trigrams, or even longer n-grams as if they were unigrams. We do this by replacing spaces between words by underscore symbols and reduplicating words as many times as needed, which makes bigrams, trigrams, and longer n-grams appear as unigrams. Table 3 depicts how n-grams are forced into unigrams.",

"cite_spans": [],

"ref_spans": [

{

"start": 379,

"end": 386,

"text": "Table 3",

"ref_id": "TABREF2"

}

],

"eq_spans": [],

"section": "Enforcing Alignment of N-grams",

"sec_num": "3.1"

},
|
{ |
|
"text": "Similar works on the idea of enlarging n-grams have been reported in (Ma et al., 2007) , in which \"word packing\" is used to obtain 1-to-n alignments based on co-occurrence frequencies, and (Henr\u00edquez Q. et al., 2010) , in which collocation segmentation is performed on bilingual corpus to extract n-to-m alignments. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 86, |
|
"text": "(Ma et al., 2007)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 216, |
|
"text": "(Henr\u00edquez Q. et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enforcing Alignment of N-grams", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "It is thus possible to use various parallel corpora, with different segmentation schemes in the source and target parts. We refer to a parallel corpus where source n-grams and target m-grams are assimilated to unigrams as a unigramized n-m corpus. These corpora are then used as input to Anymalign to produce phrase translation subtables, as shown in Table 4 . Practically, we call Anymalign1-N the process of running Anymalign with all possible unigramized n-m corpora, with n and m both ranging from 1 to a given N. In total, Anymalign is thus run N \u00d7 N times. All phrase translation subtables are finally merged together into one large translation table, where translation probabilities are re-estimated given the complete set of alignments. ",

"cite_spans": [],

"ref_spans": [

{

"start": 351,

"end": 358,

"text": "Table 4",

"ref_id": "TABREF3"

}

],

"eq_spans": [],

"section": "Phrase Translation Subtables",

"sec_num": "3.2"

},
|
{ |
|
"text": "\u2022 \u2022 \u2022 N-grams unigrams TT 1 \u00d7 1 TT 1 \u00d7 2 TT 1 \u00d7 3 \u2022 \u2022 \u2022 TT 1 \u00d7 N bigrams TT 2 \u00d7 1 TT 2 \u00d7 2 TT 2 \u00d7 3 \u2022 \u2022 \u2022 TT 2 \u00d7 N trigrams TT 3 \u00d7 1 TT 3 \u00d7 2 TT 3 \u00d7 3 \u2022 \u2022 \u2022 TT 3 \u00d7 N \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 N-grams TT N \u00d7 1 TT N \u00d7 2 TT N \u00d7 3 \u2022 \u2022 \u2022 TT N \u00d7 N", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Translation Subtables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Although Anymalign is capable of directly producing alignments of sequences of words, we use it with a simple filter 3 so that it only produces (typographic) unigrams in output, i.e., n-grams and m-grams assimilated to unigrams in the input corpus. This choice was made because it is useless to produce alignment of sequences of words, since we are only interested in phrases in the subsequent machine translation tasks. Those phrases are already contained in our (typographic) unigrams: all we need to do to get the original segmentation is to remove underscores from the alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Translation Subtables", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "The same experimental process (i.e., replacing the translation table) as in the preliminary experiment was carried out on Anymalign1-N with an equal time distribution, that is, time uniformly distributed among the subtables. For a fair comparison, the same amount of time was given: seven hours in total. The results are shown in Table 6 . On the whole, MGIZA++ significantly outperforms Anymalign, by more than 4 BLEU points. The proposed approach, Anymalign1-N, produces better results than Anymalign in its basic version, with the best increase for Anymalign1-3 or Anymalign1-4 (+1.3 BP).",

"cite_spans": [],

"ref_spans": [

{

"start": 330,

"end": 337,

"text": "Table 6",

"ref_id": "TABREF5"

}

],

"eq_spans": [],

"section": "Evaluation Results with Equal Time Configuration",

"sec_num": "3.3"

},
|
{ |
|
"text": "The comparison of Table 7 (c) (see last page) and Table 7 (a) shows that Anymalign1-N delivers too many alignments outside of the diagonal (m \u00d7 m n-grams) and still not enough along the diagonal. Consequently, this number of alignments should be lowered. A way of doing so is by giving less time for alignments outside of the diagonal.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 25, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 57, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Equal Time Configuration", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In order to increase the number of phrase pairs along the diagonal of the translation table matrix and decrease this number outside the diagonal (Table 4) , we distribute the total alignment time among translation subtables according to the standard normal distribution:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 154, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Time Distribution among Subtables", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c6 (n, m) = 1 \u221a 2\u03c0 e \u2212 1 2 (n\u2212m) 2", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Time Distribution among Subtables", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The alignment time allotted to the subtable between source n-grams and target m-grams will thus be proportional to \u03c6 (n, m). Table 5 shows an example of alignment times allotted to each subtable up to 4-grams, for a total processing time of 7 hours.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 132, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Time Distribution among Subtables", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We performed a third evaluation using the standard normal distribution of time, as in previous experiments, again with a total amount of processing time (7 hours).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Standard Normal Time Distribution", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The comparison between MGIZA++, Anymalign in its standard use, and Anymalign1-N with standard normal time distribution is shown in Table 6 . Anymalign1-4 shows the best performance in terms of mWER and BLEU scores, while Anymalign1-3 gets the best results for the two other evaluation metrics. There is an increase in BLEU scores for almost all Anymalign1-N, from Anymalign1-3 to Anymalign1-10, when compared with the translation qualities of Anymalign1-N with equal time distribution. The greatest increase in BLEU is obtained for Anymalign1-10 (almost +2 BP). Anymalign1-4 shows the best translation qualities among all other settings, but gets a less significant improvement (+0.2 BP). Again, we investigated the number of entries in Anymalign1-N run with this normal time distribution. We compare the number of entries in Table 7 in Anymalign1-4 with (c) equal time distribution and (d) standard normal time distribution (see last page). The number of phrase pairs on the diagonal roughly doubled when using standard normal time distribution. We can see a significant increase in the number of phrase pairs of similar lengths, while the number of phrase pairs with different lengths tends to decrease slightly. This means that the standard normal time distribution allowed us to produce much more numerous useful alignments (a priori, phrase pairs with similar lengths), while maintaining the noise (phrase pairs with different lengths) to a low level, which is a neat advantage over the original method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 138, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 833, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Standard Normal Time Distribution", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Until now, we were concerned with the shape of phrase translation tables in standard configurations. However, (Johnson et al., 2007) have shown that substantially pruning the phrase translation tables can lead to slight but consistent improvements in translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 132, |
|
"text": "(Johnson et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "They use Fisher's exact significance test to eliminate a substantial number of phrase pairs. The significance of the association between a (source, target) phrase pair is evaluated and their probability of co-occurrence in the corpus is calculated. The hypergeometric distribution is used to compute the observed probability of joint occurrence C(s,t), withs a source phrase andt a target phrase:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "p h (C(s,t)) = C(s) C(s,t) N\u2212C(s) C(t)\u2212C(s,t) N C(t) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here, N is the number of sentences in the input parallel corpus. The p-value is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p-value(C(s,t)) = \u221e \u2211 k=C(s,t) p h (k)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Any phrase pair with a p-value greater than a given threshold will thus be filtered out. In practice, this mainly removes phrase pairs with different frequencies. A special case happens when a source phrase and a target phrase, hence the resulting phrase pair as well, occur only once in the corpus (called a 1-1-1 phrase pair in (Johnson et al., 2007) ). By considering a p-value of \u03b1 = log(N), \u03b1 + \u03b5 (where \u03b5 is very small) is the smallest threshold that results in none of the 1-1-1 phrase pairs being included, while \u03b1 \u2212 \u03b5 is the largest threshold that results in those pairs being included.", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 352, |
|
"text": "(Johnson et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We investigate the impact of pruning on Anymalign's translation tables in terms of n-gram distribution and final translation quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Table Pruning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In a fourth set of experiments, we thus compare the phrase translation tables of MGIZA++, and Anymalign1-N (standard normal time distribution), after applying this pruning. The \u03b1 \u2212 \u03b5 filter was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Pruning", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Evaluation results on machine translation tasks with pruned translation tables are given in Table 6. The phrase table size reduction brings gains in BLEU scores. Among all Anymalign1-N, Anymalign1-4 once again gets the highest BLEU score of 0.2511 and shows the best performance in all evaluation metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Pruning", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As an example, the number of entries in Anymalign1-4's translation table, after pruning, is shown in Table 7 (e). The largest difference when compared with the non-pruned translation table (Table 7 (d)) is visible in the cell corresponding to 1-1 entries: a substantial decrease of almost 200,000 entries is observed, which corresponds to a reduction of 76%. As a consequence, the most numerous entries are now 2-2 phrase pairs, which account for 19% of the total number of phrase pairs. On the whole, 54% of entries were filtered out from Anymalign1-4's translation table.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 202, |
|
"text": "(Table 7 (d))", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results with Pruning", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to check exactly how different the translation table of MGIZA++ and that of Anymalign are, we performed an additional set of experiments in which MGIZA++'s translation table is merged with that of Anymalign baseline and we used the union of the two translation tables. As for the feature scores in the translation tables for the intersection part of both aligners, i.e., entries in two translation tables share the same phrase pairs but with different feature scores, we adopted parameters computed either by MGIZA++ or by Anymalign for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging translation tables", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Evaluation results on machine translation tasks with merged translation tables are given in Table 6 . This setting outperforms MGIZA++ on BLEU scores. The translation table with Anymalign parameters for the intersection part is slightly behind the translation table with MGIZA++ parameters. This may indicate that the feature scores in Anymalign translation table need to be revised.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Merging translation tables", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a method to improve the translation quality of the sampling-based subsentential alignment approach for statistical machine translation tasks. Our approach is based on adapting the number of n-grams by investigating their distribution in phrase translation tables. Furthermore, we inspected the influence of pruning the translation tables, a technique described in (Johnson et al., 2007), and merging the translation tables from two aligners (i.e., Anymalign and MGIZA++). Adapting the number of n-grams leads to significantly better evaluation results than the original approach. Merging two translation tables outperforms MGIZA++ alone. As for future work, we plan to modify the computation of the feature scores in Anymalign's phrase translation tables to make them closer to those of MGIZA++.", |
|
"cite_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 404, |
|
"text": "(Johnson et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Contrary to the widely used terminology, in which it denotes a set of links between the source and target words of a sentence pair, we use \"alignment\" to mean a (source, target) phrase pair, i.e., an entry in the so-called [phrase] translation tables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Option -N 1 in the program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, Peter, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2), 263-311.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Aligning sentences in parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL'91)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, Peter, Jennifer Lai, and Robert Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL'91), pp. 169-176, Berkeley (California, USA), jun.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic evaluation of machine translation quality using N-gram co-occurrence statistics", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Doddington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Second International Conference on Human Language Technology Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "138--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Doddington, George. 2002. Automatic evaluation of machine translation quality using N-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pp. 138-145, San Diego. Morgan Kaufmann Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Identifying word correspondences in parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the fourth DARPA workshop on Speech and Natural Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gale, William and Kenneth Church. 1991. Identifying word correspondences in parallel texts. In Proceedings of the fourth DARPA workshop on Speech and Natural Language, pp. 152-157, Pacific Grove, feb.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Parallel implementations of word alignment tool", |
|
"authors": [ |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gao, Qin and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Association for Computational Linguistics, ed., Software Engineering, Testing, and Quality Assurance for Natural Language Processing, pp. 49-57, Columbus, Ohio.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Using collocation segmentation to augment the phrase table", |
|
"authors": [ |
|
{ |
|
"first": "Carlos", |
|
"middle": [ |
|
"A." |
|
], |
|
"last": "Henr\u00edquez Q.", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R." |
|
], |
|
"last": "Costa-juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vidas", |
|
"middle": [], |
|
"last": "Daudaravicius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafael", |
|
"middle": [ |
|
"E." |
|
], |
|
"last": "Banchs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [ |
|
"B." |
|
], |
|
"last": "Mari\u00f1o", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "98--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henr\u00edquez Q., A. Carlos, R. Marta Costa-juss\u00e0, Vidas Daudaravicius, E. Rafael Banchs, and B. Jos\u00e9 Mari\u00f1o. 2010. Using collocation segmentation to augment the phrase table. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10, pp. 98-102, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improving translation quality by discarding most of the phrasetable", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Howard" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "967--975", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johnson, J Howard, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 967-975, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), pp. 177-180, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koehn, Philipp, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 48-54, Edmonton.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Lexicons or phrase tables? An investigation in sampling-based multilingual alignment", |
|
"authors": [ |
|
{ |
|
"first": "Adrien", |
|
"middle": [], |
|
"last": "Lardilleux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chevelu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lepage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ghislain", |
|
"middle": [], |
|
"last": "Putois", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Gosme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the third workshop on example-based machine translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lardilleux, Adrien, Jonathan Chevelu, Yves Lepage, Ghislain Putois, and Julien Gosme. 2009. Lexicons or phrase tables? An investigation in sampling-based multilingual alignment. In Mikel Forcada and Andy Way, eds., Proceedings of the third workshop on example-based machine translation, pp. 45-52, Dublin, Ireland.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Hapax Legomena: their Contribution in Number and Efficiency to Word Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Adrien", |
|
"middle": [], |
|
"last": "Lardilleux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lepage", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Lecture notes in computer science", |
|
"volume": "5603", |
|
"issue": "", |
|
"pages": "440--450", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lardilleux, Adrien and Yves Lepage. 2009a. Hapax Legomena: their Contribution in Number and Efficiency to Word Alignment. Lecture Notes in Computer Science, 5603, 440-450.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sampling-based multilingual alignment", |
|
"authors": [ |
|
{ |
|
"first": "Adrien", |
|
"middle": [], |
|
"last": "Lardilleux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lepage", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Conference on Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "214--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lardilleux, Adrien and Yves Lepage. 2009b. Sampling-based multilingual alignment. In International Conference on Recent Advances in Natural Language Processing (RANLP 2009), pp. 214-218, Borovets, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Alignment by agreement", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang, Percy, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, pp. 104-111, New York City, jun.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Bootstrapping word alignment via word packing", |
|
"authors": [ |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Stroppa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "304--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma, Yanjun, Nicolas Stroppa, and Andy Way. 2007. Bootstrapping word alignment via word packing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 304-311, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Models of translational equivalence among words", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "2", |
|
"pages": "221--249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melamed, Dan. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2), 221-249, jun.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Association-based bilingual word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moore, Robert. 2005. Association-based bilingual word alignment. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pp. 1-8, Ann Arbor, jun.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "An evaluation tool for machine translation: Fast evaluation for machine translation research", |
|
"authors": [ |
|
{ |
|
"first": "Sonja", |
|
"middle": [], |
|
"last": "Nie\u00dfen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregor", |
|
"middle": [], |
|
"last": "Leusch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "39--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nie\u00dfen, Sonja, Franz Josef Och, Gregor Leusch, and Hermann Ney. 2000. An evaluation tool for machine translation: Fast evaluation for machine translation research. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC), pp. 39-45, Athens.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Och, Franz Josef. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL '03, pp. 160-167, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Och, Franz Josef and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19-51.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pp. 311-318, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A study of translation edit rate with targeted human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of Association for Machine Translation in the Americas (AMTA 2006)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "223--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA 2006), pp. 223-231, Cambridge, Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "SRILM-an extensible language modeling toolkit", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Seventh International Conference on Spoken Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "901--904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stolcke, A. 2002. SRILM-an extensible language modeling toolkit. In Seventh International Conference on Spoken Language Processing, volume 2, pp. 901-904, Denver, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "HMM-based word alignment in statistical translation", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (Coling'96)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "836--841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vogel, Stephan, Hermann Ney, and Christoph Tillman. 1996. HMM-based word alignment in statistical translation. In Proceedings of the 16th International Conference on Computational Linguistics (Coling'96), pp. 836-841, Copenhagen, aug.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"text": "Statistics on the French-English parallel corpus used for the training, development, and test sets.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>French</td><td>English</td></tr><tr><td>Train</td><td>sentences words words/sentence</td><td>100,000 3,986,438 38</td><td>100,000 2,824,579 27</td></tr><tr><td>Dev</td><td>sentences words words/sentence</td><td>500 18,120 36</td><td>500 13,261 26</td></tr><tr><td>Test</td><td>sentences words words/sentence</td><td>1,000 38,936 37</td><td>1,000 27,965 27</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"text": "Evaluation results on a statistical machine translation task using phrase tables obtained from MGIZA++ and Anymalign (baseline).", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td></tr><tr><td>MGIZA++</td><td>0.5714</td><td>0.2742</td><td>6.6747</td><td>0.6170</td></tr><tr><td>Anymalign</td><td>0.6186</td><td>0.2285</td><td>6.0764</td><td>0.6634</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"text": "Transforming n-grams into unigrams by inserting underscores and reduplicating words for both the French part and English part of the input parallel corpus.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>n</td><td>French</td><td>English</td></tr><tr><td>1</td><td>le debat est clos .</td><td>the debate is closed .</td></tr><tr><td>2</td><td>le debat debat est est clos clos .</td><td>the debate debate is is closed closed .</td></tr><tr><td>3</td><td>le debat est debat est clos est clos .</td><td>the debate is debate is closed is closed .</td></tr><tr><td>4</td><td>le debat est clos debat est clos .</td><td>the debate is closed debate is closed .</td></tr><tr><td>5</td><td>le debat est clos .</td><td>the debate is closed .</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"text": "List of n-gram translation subtables (TT) generated from the training corpus. These subtables are then merged together into a single translation table.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Target</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"text": "Alignment time in seconds allotted to each unigramized parallel corpus of Anymalign1-4. The sum of the figures in all cells amounts to seven hours (25,200 seconds).", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td colspan=\"4\">Target unigrams bigrams trigrams 4-grams</td></tr><tr><td>Source</td><td>unigrams bigrams trigrams 4-grams</td><td>3,072 1,863 416 34</td><td>1,863 3,072 1,863 416</td><td>416 1,863 3,072 1,863</td><td>34 416 1,863 3,072</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"text": "Evaluation results.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td></tr><tr><td>MGIZA++</td><td>0.5714</td><td>0.2742</td><td>6.6747</td><td>0.6170</td></tr><tr><td>Anymalign</td><td>0.6186</td><td>0.2285</td><td>6.0764</td><td>0.6634</td></tr><tr><td>Anymalign1-N</td><td colspan=\"4\">equal time distribution</td><td colspan=\"4\">std. norm. distribution</td><td colspan=\"4\">pruning</td></tr><tr><td/><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td></tr><tr><td>Anymalign1-1</td><td>0.6818</td><td>0.1984</td><td>5.6353</td><td>0.7188</td><td>0.6818</td><td>0.1984</td><td>5.6353</td><td>0.7188</td><td>0.6871</td><td>0.1953</td><td>5.6042</td><td>0.7258</td></tr><tr><td>Anymalign1-2</td><td>0.6121</td><td>0.2406</td><td>6.2789</td><td>0.6536</td><td>0.6121</td><td>0.2404</td><td>6.2674</td><td>0.6535</td><td>0.6102</td><td>0.2425</td><td>6.3093</td><td>0.6515</td></tr><tr><td>Anymalign1-3</td><td>0.6075</td><td>0.2403</td><td>6.3009</td><td>0.6507</td><td>0.6079</td><td>0.2441</td><td>6.2928</td><td>0.6517</td><td>0.6117</td><td>0.2413</td><td>6.2501</td><td>0.6561</td></tr><tr><td>Anymalign1-4</td><td>0.6142</td><td>0.2423</td><td>6.2087</td><td>0.6583</td><td>0.6071</td><td>0.2442</td><td>6.2844</td><td>0.6526</td><td>0.5978</td><td>0.2511</td><td>6.3985</td><td>0.6435</td></tr><tr><td>Anymalign1-5</td><td>0.6099</td><td>0.2376</td><td>6.2331</td><td>0.6551</td><td>0.6134</td><td>0.2436</td><td>6.2426</td><td>0.6548</td><td>0.6076</td><td>0.2457</td><td>6.3120</td><td>0.6504</td></tr><tr><td>Anymalign1-6</td><td>0.6193</td><td>0.2349</td><td>6.1574</td><td>0.6634</td><td>0.6165</td><td>0.2403</td><td>6.1595</td><td>0.6589</td><td>0.6104</td><td>0.2459</td><td>6.2687</td><td>0.6545</td></tr><tr><td>Anymalign1-7</td><td>0.6157</td><td>0.2371</td><td>6.2107</td><td>0.6559</td><td>0.6136</td><td>0.2405</td><td>6.2124</td><td>0.6564</td><td>0.6079</td><td>0.2419</td><td>6.2569</td><td>0.6516</td></tr><tr><td>Anymalign1-8</td><td>0.6353</td><td>0.2253</td><td>5.9777</td><td>0.6794</td><td>0.6151</td><td>0.2366</td><td>6.1639</td><td>0.6597</td><td>0.6060</td><td>0.2446</td><td>6.2986</td><td>0.6496</td></tr><tr><td>Anymalign1-9</td><td>0.6279</td><td>0.2296</td><td>6.0261</td><td>0.6722</td><td>0.6136</td><td>0.2402</td><td>6.1928</td><td>0.6564</td><td>0.6078</td><td>0.2461</td><td>6.2974</td><td>0.6493</td></tr><tr><td>Anymalign1-10</td><td>0.6475</td><td>0.2182</td><td>5.8534</td><td>0.6886</td><td>0.6192</td><td>0.2361</td><td>6.1803</td><td>0.6587</td><td>0.6076</td><td>0.2459</td><td>6.3079</td><td>0.6490</td></tr><tr><td>Merge</td><td>mWER</td><td>BLEU</td><td>NIST</td><td>TER</td></tr><tr><td>Anymalign param.</td><td>0.5671</td><td>0.2747</td><td>6.7101</td><td>0.6128</td></tr><tr><td>MGIZA++ param.</td><td>0.5685</td><td>0.2754</td><td>6.7060</td><td>0.6142</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"text": "Distribution of phrase pairs in translation tables. (a) Distribution of phrase pairs in MGIZA++'s translation table. Anymalign1-4 with equal time for each of the n \u00d7 m n-gram alignments.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Target</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |