{
"paper_id": "N07-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:48:38.608553Z"
},
"title": "Combining Outputs from Multiple Machine Translation Systems",
"authors": [
{
"first": "Antti-Veikko",
"middle": [
"I"
],
"last": "Rosti",
"suffix": "",
"affiliation": {},
"email": "arosti@bbn.com"
},
{
"first": "Necip",
"middle": [],
"last": "Fazil",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies",
"location": {
"addrLine": "10 Moulton Street",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": "",
"affiliation": {},
"email": "bxiang@bbn.com"
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": "",
"affiliation": {},
"email": "smatsouk@bbn.com"
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {},
"email": "schwartz\u00a3@bbn.com"
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": "",
"affiliation": {},
"email": "bonnie\u00a3@umiacs.umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from \u00a4-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.",
"pdf_parse": {
"paper_id": "N07-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from \u00a4-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, machine translation systems based on new paradigms have emerged. These systems employ more than just the surface-level information used by the state-of-the-art phrase-based translation systems. For example, hierarchical (Chiang, 2005) and syntax-based (Galley et al., 2006) systems have recently improved in both accuracy and scalability.",
"cite_spans": [
{
"start": 237,
"end": 251,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 269,
"end": 290,
"text": "(Galley et al., 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combined with the latest advances in phrase-based translation systems, it has become more attractive to take advantage of the various outputs in forming consensus translations (Frederking and Nirenburg, 1994; Bangalore et al., 2001; Jayaraman and Lavie, 2005; Matusov et al., 2006) .",
"cite_spans": [
{
"start": 176,
"end": 208,
"text": "(Frederking and Nirenburg, 1994;",
"ref_id": "BIBREF4"
},
{
"start": 209,
"end": 232,
"text": "Bangalore et al., 2001;",
"ref_id": "BIBREF0"
},
{
"start": 233,
"end": 259,
"text": "Jayaraman and Lavie, 2005;",
"ref_id": "BIBREF6"
},
{
"start": 260,
"end": 281,
"text": "Matusov et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "System combination has been successfully applied in state-of-the-art speech recognition evaluation systems for several years (Fiscus, 1997) . Even though the underlying modeling techniques are similar, many systems produce very different outputs with approximately the same accuracy. One of the most successful approaches is consensus network decoding (Mangu et al., 2000) which assumes that the confidence of a word in a certain position is based on the sum of confidences from each system output having the word in that position. This requires aligning the system outputs to form a consensus network and -during decoding -simply finding the highest scoring path through this network. The alignment of speech recognition outputs is fairly straightforward due to the strict constraint in word order. However, machine translation outputs do not have this constraint as the word order may be different between the source and target languages. MT systems employ various re-ordering (distortion) models to take this into account.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "(Fiscus, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 352,
"end": 372,
"text": "(Mangu et al., 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Three MT system combination methods are presented in this paper. They operate on the sentence, phrase and word level. The sentence-level combination is based on selecting the best hypothesis out of the merged N-best lists. This method does not generate new hypotheses -unlike the phrase and word-level methods. The phrase-level combination is based on extracting sentence-specific phrase translation tables from system outputs with alignments to source and running a phrasal decoder with this new translation table. This approach is similar to the multi-engine MT framework proposed in (Frederking and Nirenburg, 1994) which is not capable of re-ordering. The word-level combination is based on consensus network decoding. Translation edit rate (TER) (Snover et al., 2006) is used to align the hypotheses and minimum Bayes risk decoding under TER (Sim et al., 2007) is used to select the alignment hypothesis. All combination methods use weights which may be tuned using Powell's method (Brent, 1973) on \u00a4 -best lists. Both sentence and phrase-level combination methods can generate \u00a4 best lists which may also be used as new system outputs in the word-level combination.",
"cite_spans": [
{
"start": 586,
"end": 618,
"text": "(Frederking and Nirenburg, 1994)",
"ref_id": "BIBREF4"
},
{
"start": 751,
"end": 772,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 847,
"end": 865,
"text": "(Sim et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 987,
"end": 1000,
"text": "(Brent, 1973)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments on combining six machine translation system outputs were performed. Three systems were phrasal, two hierarchical and one syntaxbased. The systems were evaluated on NIST MT05 and the newsgroup portion of the GALE 2006 dryrun sets. The outputs were evaluated on both TER and BLEU. As the target evaluation metric in the GALE program was human-mediated TER (HTER) (Snover et al., 2006) , it was found important to improve both of these automatic metrics.",
"cite_spans": [
{
"start": 373,
"end": 394,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. Section 2 describes the evaluation metrics and a generic discriminative optimization technique used in tuning of the various system combination weights. Sentence, phrase and word-level system combination methods are presented in Sections 3, 4 and 5. Experimental results on Arabic and Chinese to English newswire and newsgroup test data are presented in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The official metric of the 2006 DARPA GALE evaluation was human-mediated translation edit rate (HTER). HTER is computed as the minimum translation edit rate (TER) between a system output and a targeted reference which preserves the meaning and fluency of the sentence (Snover et al., 2006) . The targeted reference is generated by human posteditors who make edits to a reference translation so as to minimize the TER between the reference and the MT output without changing the meaning of the reference. Computing the HTER is very time consuming due to the human post-editing. It is desirable to have an automatic evaluation metric that correlates well with the HTER to allow fast evaluation of the MT systems during development. Correlations of different evaluation metrics have been studied (Snover et al., 2006) ",
"cite_spans": [
{
"start": 268,
"end": 289,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 793,
"end": 814,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 ! # \" % $ ' & ( ) \" 1 0 3 2 5 4 6 \" 7 0 3 8 5 9 A @ \u00a4 B C E D F D H G (1) where \u00a4 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "is the total number of words in the ref- is the average number of words in the reference translations and the final TER is computed using the minimum number of edits. The NIST BLEU-4 is a variant of BLEU (Papineni et al., 2002) and is computed as",
"cite_spans": [
{
"start": 204,
"end": 227,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I Q P \u00a2 R S \u00a6 \u00a9 T U & W V 3 X \u1ef2 C a c b d e g f i h ( q p F r t s e \u00a6 \u00a9 \u00a3 T u w v \u00a6 \u00a9 \u00a3 T",
"eq_num": "(2)"
}
],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "where s e \u00a6 \u1e8d \u00a3 T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "is the precision of y -grams in the hypothesis given the reference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "and v \u00a6 \u1e8d \u00a3 T C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
{
"text": "is a brevity penalty. The y -gram counts from multiple references are accumulated in estimating the precisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
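{
"text": "As a concrete illustration of these two metrics, the following minimal Python sketch scores a toy hypothesis with TER and BLEU. It assumes the sacrebleu package as a stand-in scorer; the paper's own TER and NIST BLEU-4 tools are not shown here.\n\nfrom sacrebleu.metrics import BLEU, TER\n\nhyps = ['the cat sat on the mat']    # one system output per segment\nrefs = [['the cat sat on a mat']]    # one list per reference stream\n\n# TER: edit operations (including shifts) over reference words, cf. Equation (1)\nprint(TER().corpus_score(hyps, refs).score)\n# BLEU: n-gram precisions combined with a brevity penalty, cf. Equation (2)\nprint(BLEU().corpus_score(hyps, refs).score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},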
{
"text": "All system combination methods presented in this paper may be tuned to directly optimize either one of these automatic evaluation metrics. The tuning uses \u00a4 -best lists of hypotheses with various feature scores. The feature scores may be combined with tunable weights forming an arbitrary scoring function. As the derivatives of this function are not usually available, Brent's modification of Powell's method (Brent, 1973 ) may be used to find weights that optimize the appropriate evaluation metric in the re-scored \u00a4 -best list. The optimization starts at a random initial point in the s -dimensional parameter space, first searching through an initial set of basis vectors. As searching repeatedly through the set of basis vectors is inefficient, the direction of the vectors is gradually moved toward a larger positive change in the evaluation metric. To improve the chances of finding a global optimum, the algorithm is repeated with varying initial values. The modified Powell's method has been previously used in optimizing the weights of a standard feature-based MT decoder in (Och, 2003) where a more efficient algorithm for log-linear models was proposed. However, this is specific to log-linear models and cannot be easily extended for more complicated functions.",
"cite_spans": [
{
"start": 410,
"end": 422,
"text": "(Brent, 1973",
"ref_id": "BIBREF1"
},
{
"start": 1086,
"end": 1097,
"text": "(Och, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},
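{
"text": "A minimal sketch of this tuning loop, assuming scipy's implementation of Powell's method; nbest (per-segment lists of hypothesis/feature-vector pairs) and corpus_metric (e.g., a corpus BLEU scorer) are hypothetical stand-ins, so this illustrates the procedure rather than the authors' implementation.\n\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef rescore_and_pick(weights, nbest):\n    # nbest: per segment, a list of (hypothesis, feature_vector) pairs\n    return [max(seg, key=lambda h: np.dot(weights, h[1]))[0] for seg in nbest]\n\ndef tune(nbest, refs, corpus_metric, dim, restarts=5):\n    # negate the metric so that minimization maximizes it\n    objective = lambda w: -corpus_metric(rescore_and_pick(w, nbest), refs)\n    best = None\n    for _ in range(restarts):  # random restarts to improve the chance of a global optimum\n        res = minimize(objective, np.random.uniform(-1, 1, dim), method='Powell')\n        if best is None or res.fun < best.fun:\n            best = res\n    return best.x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Discriminative Tuning",
"sec_num": "2"
},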
{
"text": "The first combination method is based on re-ranking a merged \u00a4 -best list. A confidence score from each system is assigned to each unique hypothesis in the merged list. The confidence scores for each hypothesis are used to produce a single score which, combined with a 5-gram language model score, determines a new ranking of the hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-Level Combination",
"sec_num": "3"
},
{
"text": "Generalized linear models (GLMs) have been applied for confidence estimation in speech recognition (Siu and Gish, 1999 ). The logit model, which models the log odds of an event as a linear function of the features, can be used in confidence estimation. The confidence 5. System-specific bias.",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Siu and Gish, 1999",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a2 \u00a1 \u00a4 \u00a3 for a system \u00a5 generating a hypothesis \u00a6 may be modeled as (p F r \u00a1 \u00a4 \u00a3 C \u00a7 \u00a1 \u00a4 \u00a3 \u00a9 d f i h \u00a1 \u00a1 \u00a4 \u00a3",
"eq_num": "(3)"
}
],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "If the system \u00a5 did not generate the hypothesis \u00a6 , the confidence is set to zero. To prevent overflow in exponentiating the summation in Equation 3, the features have to be scaled. In the experiments, feature scaling factors were estimated from the tuning data to limit the feature values between D C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": ". The same scaling factors have to be applied to the features obtained from the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
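{
"text": "A sketch of the logit model as reconstructed in Equation (3), with hypothetical feature values and weights; the scaling mirrors the fixed-range normalization described above, and the same factors must be reapplied to test-data features.\n\nimport math\n\ndef logit_confidence(features, weights, bias=0.0):\n    # the log odds are a linear function of the scaled features\n    z = bias + sum(w * f for w, f in zip(weights, features))\n    return 1.0 / (1.0 + math.exp(-z))   # inverse logit, in (0, 1)\n\ndef fit_scaling(rows):\n    # per-dimension scaling factors estimated on the tuning data\n    return [max(abs(r[k]) for r in rows) or 1.0 for k in range(len(rows[0]))]\n\ndef scale(row, factors):\n    return [x / s for x, s in zip(row, factors)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},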
{
"text": "The total confidence score of hypothesis \u00a6 is obtained from the system confidences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "\u00a1 \u00a3 as \u00a3 ! \u00a4 \" \u00a3 \u00a4 $ # \" & % C \u00a4 $ # ' ) ( d \u00a1 f i h \u00a1 \u00a4 \u00a3 \" v \" 0 2 1 V \u00a1 \u00a1 \u00a4 \u00a3 \" 4 3 C \u00a4 \u00a3 ' ) ( d \u00a1 f i h \u00a1 \u00a4 \u00a3 (4) where \u00a4 \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "is the number of systems generating the hypothesis \u00a6 (i.e., the number of non-zero",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "\u00a1 \u00a4 \u00a3 for \u00a6 ) and \u00a4 #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
{
"text": "is the number of systems. The weights through 3 are constrained to sum to one; i.e., there are three free parameters. These weights can balance the total confidence between the number of systems generating the hypothesis (votes), and the sum, maximum and average of the system confidences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},
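{
"text": "The interpolation of Equation (4) in a short sketch; the weight ordering (votes, sum, maximum, average) follows the reconstruction above and is an assumption about the exact form.\n\ndef total_confidence(c, n_systems, mu):\n    # c: the non-zero confidences c_ij of the systems that produced hypothesis j\n    # mu: four interpolation weights constrained to sum to one\n    votes = len(c) / n_systems\n    return (mu[0] * votes + mu[1] * sum(c)\n            + mu[2] * max(c) + mu[3] * sum(c) / len(c))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Confidence Estimation",
"sec_num": "3.1"
},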
{
"text": "The second feature in the GLM is the sentence posterior estimated from the \u00a4 -best list. A sentence posterior may simply be estimated from an \u00a4 -best list by scaling the system scores for all hypotheses to sum to one. When combining several systems based on different translation paradigms and feature sets, the system scores may not be comparable. The total scores may be scaled to obtain more consistent sentence posteriors. The scaled posterior estimated from an \u00a4 -best list may be written as ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Posterior Estimation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "5 \u00a1 \u00a4 \u00a3 & W V w X Y 7 6 \u00a1 8 9 \u00a1 \u00a4 \u00a3 \u00a7 (p F r Y ' d @ f i h & W V w X \u00a66 \u00a1 A \u00a1 @ u 5 u",
"eq_num": "(5)"
}
],
"section": "Sentence Posterior Estimation",
"sec_num": "3.2"
},
{
"text": ". The scaling factors may be tuned to optimize the evaluation metric in the same fashion as the logit model weights in Section 3.1. Equation 4 may be used to assign total posteriors for each unique hypothesis and the weights may be tuned using Powell's method on \u00a4 -best lists as described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a6",
"sec_num": null
},
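{
"text": "A sketch of the scaled posterior of Equation (5): a softmax over one system's total N-best scores with a system-specific scaling factor; the max-subtraction is a standard numerical safeguard rather than part of the paper.\n\nimport math\n\ndef scaled_posteriors(scores, eta):\n    # scores: total decoder scores of one system's N-best hypotheses\n    m = max(eta * s for s in scores)           # stabilize the exponentials\n    exps = [math.exp(eta * s - m) for s in scores]\n    z = sum(exps)\n    return [e / z for e in exps]               # sums to one over the N-best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Posterior Estimation",
"sec_num": "3.2"
},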
{
"text": "The hypothesis confidence may be log-linearly combined with a 5-gram language model (LM) score to yield the final score as follows Testing the sentence-level combination has the same steps as the tuning apart from all estimation steps; i.e., steps 1, 3, 5 and 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Re-ranking",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a3 (p F r \u00a3 \" C B E D G F \u00a3 \" I H Q P \u00a3",
"eq_num": "(6)"
}
],
"section": "Hypothesis Re-ranking",
"sec_num": "3.3"
},
{
"text": "The phrase-level combination is based on extracting a new phrase translation table from each system's target-to-source phrase alignments and re-decoding the source sentence using this new translation table and a language model. In this work, the target-tosource phrase alignments were available from the individual systems. If the alignments are not available, they can be automatically generated; e.g., using GIZA++ (Och and Ney, 2003) . The phrase translation table is generated for each source sentence using confidence scores derived from sentence posteriors with system-specific total score scaling factors and similarity scores based on the agreement among the phrases from all systems.",
"cite_spans": [
{
"start": 417,
"end": 436,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Level Combination",
"sec_num": "4"
},
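{
"text": "A schematic of assembling the sentence-specific phrase table; the data layout (per-system lists of aligned source-target phrase pairs plus a sentence posterior per system) is assumed for illustration, and the similarity scores of Section 4.1 are omitted.\n\nfrom collections import defaultdict\n\ndef build_phrase_table(system_phrase_pairs, posteriors):\n    # system_phrase_pairs[i]: (source_phrase, target_phrase) pairs from system i\n    # posteriors[i]: sentence posterior assigned to system i's output\n    table = defaultdict(float)\n    for pairs, post in zip(system_phrase_pairs, posteriors):\n        for src, tgt in pairs:\n            table[(src, tgt)] += post   # agreement across systems accumulates confidence\n    return table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Level Combination",
"sec_num": "4"
},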
{
"text": "Each phrase has an initial confidence based on the sentence posterior ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(p F r Y d \u00a1 \u00a7 \u00a1 \u00a9 \u00a2 \u00a1 \u00a6 \u00a3 \" \" % C \u00a1 \u00a7 ! f # \" \u00a1 $ \u00a9 d \u00a1 \u00a7 % & \u00a6 ' f # \" \u00a1 ( \u00a9 \u00a2 \u00a1 \u00a6 \u00a3 \" v \" 0 1 V \u00a1 d \u00a1 \u00a9 \u00a2 \u00a1 \u00a4 \u00a3 u",
"eq_num": "(7)"
}
],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "where \u00a1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "are system weights and \u00a9 are similarity score weights. The parameters through v interpolate between the sum, average and maximum of the similarity scores. These interpolation weights and the system weights \u00a1 are constrained to sum to one. The number of tunable combination weights, in addition to normal decoder weights, is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "\u00a4 2 # \" \u00a4 \" C where \u00a4 #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "is the number of systems and \u00a4 is the number of similarity levels; i.e., \u00a4 # \u00a7 1 C free system weights, \u00a4 similarity score weights and two free interpolation weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Confidence Estimation",
"sec_num": "4.1"
},
{
"text": "The phrasal decoder used in the phrase-level combination is based on standard beam search (Koehn, 2004) . The decoder features are: a trigram language model score, number of target phrases, number of target words, phrase distortion, phrase distortion computed over the original translations and phrase translation confidences estimated in Section 4.1. The total score for a hypothesis is computed as a log-linear combination of these features. The feature weights and combination weights (system and similarity) may be tuned using Powell's method on \u00a4 -best lists as described in Section 2. The phrase-level combination tuning can be summarized as follows: 5. Estimate new decoder and combination weights as described above.",
"cite_spans": [
{
"start": 90,
"end": 103,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Decoding",
"sec_num": "4.2"
},
{
"text": "Testing the phrase-level combination is performed by following steps 1 through 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Decoding",
"sec_num": "4.2"
},
{
"text": "The third combination method is based on confusion network decoding. In confusion network decoding, the words in all hypotheses are aligned against each other to form a graph with word alternatives (including nulls) for each alignment position. Each aligned word is assigned a score relative to the votes or word confidence scores (Fiscus, 1997; Mangu et al., 2000) derived from the hypotheses. The decoding is carried out by picking the words with the highest scores along the graph. In speech recognition, this results in minimum expected word error rate (WER) hypothesis (Mangu et al., 2000) or equivalently minimum Bayes risk (MBR) under WER with uniform target sentence posterior distribution (Sim et al., 2007) . In machine translation, aligning hypotheses is more complicated compared to speech recognition since the target words do not necessarily appear in the same order. So far, confusion networks have been applied in MT system combination using three different alignment procedures: WER (Bangalore et al., 2001) , GIZA++ alignments (Matusov et al., 2006) and TER (Sim et al., 2007) . WER alignments do not allow shifts, GIZA++ alignments require careful training and are not always reliable. TER alignments do not guarantee that similar but lexically different words are aligned correctly but TER does not require training new models and allows shifts (Snover et al., 2006) . This work extends the approach proposed in (Sim et al., 2007) .",
"cite_spans": [
{
"start": 331,
"end": 345,
"text": "(Fiscus, 1997;",
"ref_id": "BIBREF3"
},
{
"start": 346,
"end": 365,
"text": "Mangu et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 574,
"end": 594,
"text": "(Mangu et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 698,
"end": 716,
"text": "(Sim et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 1000,
"end": 1024,
"text": "(Bangalore et al., 2001)",
"ref_id": "BIBREF0"
},
{
"start": 1045,
"end": 1067,
"text": "(Matusov et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1076,
"end": 1094,
"text": "(Sim et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 1365,
"end": 1386,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 1432,
"end": 1450,
"text": "(Sim et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Level Combination",
"sec_num": "5"
},
{
"text": "Due to the varying word order in the MT hypotheses, the decision of confusion network skeleton is essential. The skeleton determines the general word order of the combined hypothesis. One option would be to use the output from the system with the best performance on some development set. However, it was found that this approach did not always yield better combination output compared to the best single system on all evaluation metrics. Instead of using a single system output as the skeleton, the hypothesis that best agrees with the other hypotheses on average may be used. In this paper, the minimum average TER score of one hypothesis against all other hypotheses was used as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 \u00a2 \u00a1 r 0 \u00a4 \u00a3 \u00a1 ' ( d \u00a3 f i h \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a3 \u00a1",
"eq_num": "(8)"
}
],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
{
"text": "This may be viewed as the MBR hypothesis under TER given uniform target sentence posterior distribution (Sim et al., 2007) . It is also possible to compute the MBR hypothesis under BLEU.",
"cite_spans": [
{
"start": 104,
"end": 122,
"text": "(Sim et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
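{
"text": "A sketch of the skeleton choice of Equation (8); ter() is a stand-in for a sentence-level TER scorer, since the selection only needs pairwise TER between the rank-1 outputs.\n\ndef choose_skeleton(rank1_hyps, ter):\n    # pick the hypothesis with the minimum average TER against all others\n    def avg_ter(cand):\n        others = [h for h in rank1_hyps if h is not cand]\n        return sum(ter(cand, h) for h in others) / len(others)\n    return min(rank1_hyps, key=avg_ter)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},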
{
"text": "Finding the MBR hypothesis requires computing the TER against all hypotheses to be aligned. It was found that aligning more than one hypothesis (\u00a4 C E D ) from each system to the skeleton improves the combination outputs. However, only the rank-1 hypotheses were considered as skeletons due to the complexity of the TER alignment. The confidence score assigned to each word was chosen to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
{
"text": "C \u00a1 3 \u00a6C \" \u00a3 \u00a2 \u00a5 \u00a4 y \u00a7 \u00a6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
{
"text": "where the \u00a2 \u00a4 y \u00a7 \u00a6 was based on the rank of the aligned hypothesis in the system's \u00a4 best. This was found to yield better scores than simple votes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Network Generation",
"sec_num": "5.1"
},
{
"text": "The word-level combination method described so far does not require any tuning. To allow a variety of outputs with different degrees of confidence to be combined, system weights may be used. A confusion network may be represented as a standard word lattice with all paths traveling via all nodes. The links in this lattice represent the alternative words (including nulls) at the corresponding position in the string. Confusion network decoding may be viewed as finding the highest scoring path through this lattice with summing all word scores along the path. The standard lattice decoding algorithms may also be used to generate \u00a4 -best lists from the confusion network. The simplest way to introduce system weights is to accumulate system-specific scores along the paths and combine these scores linearly with the weights. The system weights may be tuned using Powell's method on \u00a4 -best lists as described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
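{
"text": "A sketch of weighted confusion network decoding under the lattice view described above; the network representation (one dict per slot mapping a word, or the empty string for null, to per-system scores) is an assumed layout for illustration.\n\ndef decode(network, system_weights):\n    # network: one dict per slot, {word: [score contributed by each system]}\n    out = []\n    for slot in network:\n        best = max(slot, key=lambda w: sum(\n            wt * s for wt, s in zip(system_weights, slot[w])))\n        if best:                       # drop null words\n            out.append(best)\n    return ' '.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},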
{
"text": "The word-level combination tuning can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "1. Extract 10-best lists from the MT outputs;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "2. Align each 10-best against each rank-1 hypothesis using TER;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "3. Choose the skeleton (Equation 8);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "4. Generate a confusion network lattice with the current system weights;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "5. Generate \u00a4 -best list hypothesis and score files from the lattice; 6. Estimate system weights as described above; Testing the word-level combination has the same steps as the tuning apart from steps 6 and 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tunable System Weights",
"sec_num": "5.2"
},
{
"text": "Six systems trained on all data available for GALE 2006 evaluation were used in the experiments to demonstrate the performance of all three system combination methods on Arabic and Chinese to English MT tasks. Three systems were phrase-based (A, C and E), two hierarchical (B and D) and one syntax-based (F). The phrase-based systems used different sets of features and re-ordering approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The hierarchical systems used different rule sets. All systems were tuned on NIST MT02 evaluation sets with four references. Systems A and B were tuned to minimize TER, the other systems were tuned to maximize BLEU. As discussed in Section 2, the system combination tuning metric was chosen so that gains were observed in both TER and BLEU on development test sets. NIST MT05 comprising only newswire data (1056 Arabic and 1082 Chinese sentences) with four reference translations and the newsgroup portion of the GALE 2006 dry-run (203 Arabic and 126 Chinese sentences) with one reference translation were used as the test sets. It was found that minimizing TER on Arabic also resulted in higher BLEU scores compared to the best single system. However, minimizing TER on Chinese resulted in significantly lower BLEU. So, TER was used in tuning the combination weights on Arabic and BLEU on Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The sentence and phrase-level combination weights were tuned on NIST MT03 evaluation sets. On the tuning sets, both methods yield about 0.5%-1.0% gain in TER and BLEU. The mixed-case TER and BLEU scores on both test sets are shown in Table 1 for Arabic and Table 2 for Chinese (phrcomb represents phrase and sentcomb sentence-level combination). The phrase-level combination seems to outperform the sentence-level combination in terms of both metrics on Arabic although gains over the best single system are modest, if any. On Chinese, the sentence-level combination yields higher BLEU scores than the phrase-level combination. The combination BLEU scores on the newsgroup data are not higher than the best system, though.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 264,
"text": "Table 1 for Arabic and Table 2",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The word-level combination was evaluated in three settings. First, simple confusion network decoding with six systems without system weights was performed (no weights 6 in the tables). Second, system weights were trained for combining six systems (TER/BLEU 6 in the tables). Finally, all six system outputs as well as the sentence and phrase-level combination outputs were combined with system weights (TER/BLEU 8 in the tables). The 6-way combination weights were tuned on merged NIST MT03 and MT04 evaluation sets and the 8-way combination weights were tuned only on NIST MT04 since the sentence and phraselevel combination methods were already tuned on NIST MT03. The word-level combination yields about 2.0%-3.0% gain in TER and 2.0%-4.0% gain in BLEU on the tuning sets. The test set results show that the simple confusion network decoding without system weights yields very good scores, mostly better than either sentence or phrase-level combination. The system weights seem to yield even higher BLEU scores but not always lower TER scores on both languages. Despite slightly hurting the TER score on Arabic, the TER 8 combination result was considered the best due to the highest BLEU and significantly lower TER compared to any single system. Similarly, the BLEU 8 was considered the best combination result on Chinese. Internal HTER experiments showed that BLEU 8 yielded lower scores after post-editing even though no weights 6 had lower automatic TER score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Three methods for machine translation system combination were presented in this paper. The sentencelevel combination was based on re-ranking a merged \u00a4 -best list using generalized linear models with features derived from each system's output. The combination yields slight gains on the tuning set. However, the gains were very small, if any, on the test sets. The re-ranked \u00a4 -best lists were used successfully in the word-level combination method as new system outputs. Various other features may be explored in this framework although the tuning may be limited by the chosen optimization method in the higher dimensional parameter space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The phrase-level combination was based on deriving a new phrase translation table from the alignments to source provided in all system outputs. The phrase translation scores were based on the level of agreement between the system outputs and sentence posterior estimates. A standard phrasal decoder with the new phrase table was used to produce the final combination output. The handling of the alignments from non-phrasal decoders may not be optimal, though. The phrase-level combination yields fairly good gains on the tuning sets. However, the performance does not seem to generalize to the test sets used in this work. As usual, the phrasal decoder can generate \u00a4 -best lists which were used successfully in the word-level combination method as new system outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The word-level combination method based on consensus network decoding seems to be very robust and yield good gains over the best single system even without any tunable weights. The decision of the skeleton is crucial. Minimum Bayes Risk decoding under translation edit rate was used to select the skeleton. Compared to the best possible skeleton decision -according to an oracle experiment -further gains might be obtained by using better decision approach. Also, the alignment may be improved by taking the target-to-source alignments into account and allowing synonyms to align. The confusion network decoding at the word level does not necessarily retain coherent phrases as no language model constraints are taken into account. LM re-scoring might alleviate this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "This paper has provided evidence that outputs from six very different MT systems, tuned for two different evaluation metrics, may be combined to yield better outputs in terms of different evaluation metrics. The focus of the future work will be to address the individual issues in the combination methods mentioned above. It would also be interesting to investigate how much different systems contribute to the overall gain achieved via system combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work was supported by DARPA/IPTO Contract No. HR0011-06-C-0022 under the GALE program (approved for public release, distribution unlimited). The authors would like to thank ISI and University of Edinburgh for sharing their MT system outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Computing consensus translation from multiple machine translation systems",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Bordel",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "351--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, German Bordel, and Giuseppe Ric- cardi. 2001. Computing consensus translation from multiple machine translation systems. In Proc. ASRU, pages 351-354.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for Minimization Without Derivatives",
"authors": [
{
"first": "Richard",
"middle": [
"P"
],
"last": "Brent",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard P. Brent. 1973. Algorithms for Minimization Without Derivatives. Prentice-Hall.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263-270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER)",
"authors": [
{
"first": "Jonathan",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output vot- ing error reduction (ROVER). In Proc. ASRU, pages 347-354.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three heads are better than one",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frederking",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. ANLP",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proc. ANLP, pages 95- 100.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Scalable inferences and training of context-rich syntax translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. COL-ING/ACL",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inferences and training of context-rich syntax translation models. In Proc. COL- ING/ACL, pages 961-968.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multiengine machine translation guided by explicit word matching",
"authors": [
{
"first": "Shyamsundar",
"middle": [],
"last": "Jayaraman",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyamsundar Jayaraman and Alon Lavie. 2005. Multi- engine machine translation guided by explicit word matching. In Proc. EAMT.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation mod- els. In Proc. AMTA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Finding consensus in speech recognition: Word error minimization and other applications of confusion networks",
"authors": [
{
"first": "Lidia",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "4",
"pages": "373--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: Word error minimization and other applications of confusion net- works. Computer Speech and Language, 14(4):373- 400.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. EACL",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Matusov, Nicola Ueffing, and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypothe- ses alignment. In Proc. EACL, pages 33-40.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum error rate training in sta- tistical machine translation. In Proc. ACL, pages 160- 167.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proc. ACL, pages 311-318.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Consensus network decoding for statistical machine translation system combination",
"authors": [
{
"first": "Khe Chai",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "William",
"middle": [
"J"
],
"last": "Byrne",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J",
"F"
],
"last": "Gales",
"suffix": ""
},
{
"first": "Hichem",
"middle": [],
"last": "Sahbi",
"suffix": ""
},
{
"first": "Phil",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khe Chai Sim, William J. Byrne, Mark J.F. Gales, Hichem Sahbi, and Phil C. Woodland. 2007. Con- sensus network decoding for statistical machine trans- lation system combination. In Proc. ICASSP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluation of word confidence for speech recognition systems",
"authors": [
{
"first": "Manhung",
"middle": [],
"last": "Siu",
"suffix": ""
},
{
"first": "Herbert",
"middle": [],
"last": "Gish",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech and Language",
"volume": "13",
"issue": "4",
"pages": "299--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manhung Siu and Herbert Gish. 1999. Evaluation of word confidence for speech recognition systems. Computer Speech and Language, 13(4):299-319.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciula",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciula, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. AMTA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "erence translation . In the case of multiple references, the edits are counted against all references, \u00a4",
"num": null
},
"TABREF3": {
"text": "is the number of words in hypothesis \u00a6 . The number of words is commonly used in LM rescoring to balance the LM scores between hypotheses of different lengths. The number of free parameters in the sentence-level combination method is given by",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>where</td><td>P \u00a3</td></tr><tr><td colspan=\"2\">\u00a4 # \" ber of systems and</td><td>\u00a4 # is the number of features; i.e., \" \u00a1 where \u00a4 is the num-#</td></tr><tr><td colspan=\"3\">\u00a4 # system score scaling factors ( polation weights (Equation 4) for the scaling factor 6 ) , three free inter-\u00a1</td></tr><tr><td colspan=\"3\">estimation, terpolation weights (Equation 4) for the hypothesis \u00a4 # GLM weights ( ), three free in-\u00a1</td></tr><tr><td colspan=\"3\">confidence estimation and two free LM re-scoring</td></tr><tr><td colspan=\"3\">weights (Equation 6). All parameters may be tuned</td></tr><tr><td colspan=\"3\">using Powell's method on</td><td>\u00a4 -best lists as described</td></tr><tr><td colspan=\"2\">in Section 2.</td></tr><tr><td colspan=\"3\">The tuning of the sentence-level combination</td></tr><tr><td colspan=\"3\">method may be summarized as follows:</td></tr><tr><td colspan=\"3\">1. Merge individual</td><td>\u00a4 -best lists to form a large</td></tr><tr><td colspan=\"3\">\u00a4 -best list with unique hypotheses;</td></tr><tr><td colspan=\"3\">2. Estimate total score scaling factors as described</td></tr><tr><td colspan=\"3\">in Section 3.2;</td></tr><tr><td colspan=\"3\">3. Collect GLM feature scores for each unique hy-</td></tr><tr><td colspan=\"2\">pothesis;</td></tr><tr><td colspan=\"3\">4. Estimate GLM feature scaling factors as de-</td></tr><tr><td colspan=\"3\">scribed in Section 3.1;</td></tr><tr><td colspan=\"3\">5. Scale the GLM features;</td></tr><tr><td colspan=\"3\">6. Estimate GLM weights, combination weights</td></tr><tr><td colspan=\"3\">and LM re-scoring weights as described above;</td></tr><tr><td colspan=\"3\">7. Re-rank the merged</td></tr></table>"
},
"TABREF7": {
"text": "Mixed-case TER and BLEU scores on Chinese NIST MT05 (newswire) and the newsgroups portion of the GALE 2006 dry-run data.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}