{
"paper_id": "N12-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:04:29.358561Z"
},
"title": "Insertion and Deletion Models for Statistical Machine Translation",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {
"postCode": "D-52056",
"settlement": "Aachen",
"country": "Germany"
}
},
"email": "huck@cs.rwth-aachen.de"
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {
"postCode": "D-52056",
"settlement": "Aachen",
"country": "Germany"
}
},
"email": "ney@cs.rwth-aachen.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate insertion and deletion models for hierarchical phrase-based statistical machine translation. Insertion and deletion models are designed as a means to avoid the omission of content words in the hypotheses. In our case, they are implemented as phrase-level feature functions which count the number of inserted or deleted words. An English word is considered inserted or deleted based on lexical probabilities with the words on the foreign language side of the phrase. Related techniques have been employed before by Och et al. (2003) in an n-best reranking framework and by Mauser et al. (2006) and Zens (2008) in a standard phrase-based translation system. We propose novel thresholding methods in this work and study insertion and deletion features which are based on two different types of lexicon models. We give an extensive experimental evaluation of all these variants on the NIST Chinese\u2192English translation task.",
"pdf_parse": {
"paper_id": "N12-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate insertion and deletion models for hierarchical phrase-based statistical machine translation. Insertion and deletion models are designed as a means to avoid the omission of content words in the hypotheses. In our case, they are implemented as phrase-level feature functions which count the number of inserted or deleted words. An English word is considered inserted or deleted based on lexical probabilities with the words on the foreign language side of the phrase. Related techniques have been employed before by Och et al. (2003) in an n-best reranking framework and by Mauser et al. (2006) and Zens (2008) in a standard phrase-based translation system. We propose novel thresholding methods in this work and study insertion and deletion features which are based on two different types of lexicon models. We give an extensive experimental evaluation of all these variants on the NIST Chinese\u2192English translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In hierarchical phrase-based translation (Chiang, 2005) , we deal with rules X \u2192 \u03b1, \u03b2, \u223c where \u03b1, \u03b2 is a bilingual phrase pair that may contain symbols from a non-terminal set, i.e. \u03b1 \u2208 (N \u222a V F ) + and \u03b2 \u2208 (N \u222aV E ) + , where V F and V E are the source and target vocabulary, respectively, and N is a non-terminal set which is shared by source and target. The left-hand side of the rule is a non-terminal symbol X \u2208 N , and the \u223c relation denotes a oneto-one correspondence between the non-terminals in \u03b1 and in \u03b2. Let J \u03b1 denote the number of terminal symbols in \u03b1 and I \u03b2 the number of terminal symbols in \u03b2. Indexing \u03b1 with j, i.e. the symbol \u03b1 j , 1 \u2264 j \u2264 J \u03b1 , denotes the j-th terminal symbol on the source side of the phrase pair \u03b1, \u03b2 , and analogous with \u03b2 i , 1 \u2264 i \u2264 I \u03b2 , on the target side.",
"cite_spans": [
{
"start": 41,
"end": 55,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
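{
"text": "As a small illustration (ours, not from the paper) of this notation, a rule could be represented as follows; the Rule class and the 'X~' non-terminal marker are hypothetical:\nfrom dataclasses import dataclass\n\n@dataclass\nclass Rule:\n    alpha: list  # source side: terminals and linked non-terminals like 'X~1'\n    beta: list   # target side: terminals and linked non-terminals\n\n    def src_terminals(self):\n        # the J_alpha terminal symbols alpha_j of the source side\n        return [s for s in self.alpha if not s.startswith('X~')]\n\n    def tgt_terminals(self):\n        # the I_beta terminal symbols beta_i of the target side\n        return [s for s in self.beta if not s.startswith('X~')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},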
{
"text": "With these notational conventions, we now define our insertion and deletion models, each in both source-to-target and target-to-source direction. We give phrase-level scoring functions for the four features. In our implementation, the feature values are precomputed and written to the phrase table. The features are then incorporated directly into the loglinear model combination of the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "Our insertion model in source-to-target direction t s2tIns (\u2022) counts the number of inserted words on the target side \u03b2 of a hierarchical rule with respect to the source side \u03b1 of the rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t s2tIns (\u03b1, \u03b2) = I \u03b2 i=1 J\u03b1 j=1 p(\u03b2 i |\u03b1 j ) < \u03c4 \u03b1 j",
"eq_num": "(1)"
}
],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "Here, [\u2022] denotes a true or false statement: The result is 1 if the condition is true and 0 if the condition is false. The model considers an occurrence of a target word e an insertion iff no source word f exists within the phrase where the lexical translation probability p(e|f ) is greater than a corresponding threshold \u03c4 f . We employ lexical translation probabilities from two different types of lexicon models, a model which is extracted from word-aligned training data and-given the word alignment matrix-relies on pure relative frequencies, and the IBM model 1 lexicon (cf. Section 2). For \u03c4 f , previous authors have used a fixed heuristic value which was equal for all f \u2208 V f . In Section 3, we describe how such a global threshold can be computed and set in a reasonable way based on the characteristics of the model. We also propose several novel thresholding techniques with distinct thresholds \u03c4 f for each source word f . In an analogous manner to the source-to-target direction, the insertion model in target-to-source direction t t2sIns (\u2022) counts the number of inserted words on the source side \u03b1 of a hierarchical rule with respect to the target side \u03b2 of the rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t_{\\mathrm{t2sIns}}(\\alpha, \\beta) = \\sum_{j=1}^{J_\\alpha} \\prod_{i=1}^{I_\\beta} \\left[ p(\\alpha_j | \\beta_i) < \\tau_{\\beta_i} \\right]",
"eq_num": "(2)"
}
],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "Target-to-source lexical translation probabilities p(f |e) are thresholded with values \u03c4 e which may be distinct for each target word e. The model considers an occurrence of a source word f an insertion iff no target word e exists within the phrase with p(f |e) greater than or equal to \u03c4 e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "Our deletion model, compared to the insertion model, interchanges the connection of the direction of the lexical probabilities and the order of source and target in the sum and product of the term. The source-to-target deletion model thus differs from the target-to-source insertion model in that it employs a source-to-target word-based lexicon model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "The deletion model in source-to-target direction t s2tDel (\u2022) counts the number of deleted words on the source side \u03b1 of a hierarchical rule with respect to the target side \u03b2 of the rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t s2tDel (\u03b1, \u03b2) = J\u03b1 j=1 I \u03b2 i=1 p(\u03b2 i |\u03b1 j ) < \u03c4 \u03b1 j",
"eq_num": "(3)"
}
],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "It considers an occurrence of a source word f a deletion iff no target word e exists within the phrase with p(e|f ) greater than or equal to \u03c4 f . The target-to-source deletion model t t2sDel (\u2022) correspondingly considers an occurrence of a target word e a deletion iff no source word f exists within the phrase with p(f |e) greater than or equal to \u03c4 e :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t_{\\mathrm{t2sDel}}(\\alpha, \\beta) = \\sum_{i=1}^{I_\\beta} \\prod_{j=1}^{J_\\alpha} \\left[ p(\\alpha_j | \\beta_i) < \\tau_{\\beta_i} \\right]",
"eq_num": "(4)"
}
],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},
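{
"text": "As a minimal sketch (ours, under assumed data structures), the source-to-target feature values of Eqs. (1) and (3) can be precomputed per phrase pair as follows; p_e_given_f is an assumed lookup table of lexical probabilities and tau_f holds the per-word thresholds of Section 3. The target-to-source features of Eqs. (2) and (4) follow by swapping the roles of α and β and using p(f|e) with thresholds tau_e:\ndef s2t_insertion_count(alpha_terms, beta_terms, p_e_given_f, tau_f):\n    '''Eq. (1): number of target words e with p(e|f) < tau_f[f]\n    for every source word f of the phrase.'''\n    return sum(\n        all(p_e_given_f.get((e, f), 0.0) < tau_f[f] for f in alpha_terms)\n        for e in beta_terms\n    )\n\ndef s2t_deletion_count(alpha_terms, beta_terms, p_e_given_f, tau_f):\n    '''Eq. (3): number of source words f with p(e|f) < tau_f[f]\n    for every target word e of the phrase.'''\n    return sum(\n        all(p_e_given_f.get((e, f), 0.0) < tau_f[f] for e in beta_terms)\n        for f in alpha_terms\n    )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion and Deletion Models",
"sec_num": "1"
},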
{
"text": "We restrict ourselves to the description of the source-to-target direction of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Models",
"sec_num": "2"
},
{
"text": "Given a word-aligned parallel training corpus, we are able to estimate single-word based translation probabilities p RF (e|f ) by relative frequency (Koehn et al., 2003) . With N (e, f ) denoting counts of aligned cooccurrences of target word e and source word f , we can compute",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Lexicon from Word-Aligned Data",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_{\\mathrm{RF}}(e|f) = \\frac{N(e, f)}{\\sum_{e'} N(e', f)}",
"eq_num": "(5)"
}
],
"section": "Word Lexicon from Word-Aligned Data",
"sec_num": "2.1"
},
{
"text": "If an occurrence of e has multiple aligned source words, each of the alignment links contributes with a fractional count. We denote this model as relative frequency (RF) word lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Lexicon from Word-Aligned Data",
"sec_num": "2.1"
},
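{
"text": "A sketch (ours) of how p_RF(e|f) could be computed under an assumed data layout; aligned_corpus yields (src_words, tgt_words, links), with links a set of (j, i) alignment points:\nfrom collections import defaultdict\n\ndef train_rf_lexicon(aligned_corpus):\n    '''Estimate p_RF(e|f) by relative frequency (Eq. 5). If a target\n    word occurrence has multiple aligned source words, each link\n    contributes a fractional count.'''\n    counts = defaultdict(float)    # N(e, f)\n    marginal = defaultdict(float)  # sum over e' of N(e', f)\n    for src, tgt, links in aligned_corpus:\n        fanout = defaultdict(int)  # alignment links per target position\n        for j, i in links:\n            fanout[i] += 1\n        for j, i in links:\n            w = 1.0 / fanout[i]\n            counts[(tgt[i], src[j])] += w\n            marginal[src[j]] += w\n    return {(e, f): c / marginal[f] for (e, f), c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Lexicon from Word-Aligned Data",
"sec_num": "2.1"
},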
{
"text": "The IBM model 1 lexicon (IBM-1) is the first and most basic one in a sequence of probabilistic generative models (Brown et al., 1993) . For IBM-1, several simplifying assumptions are made, so that the probability of a target sentence e I 1 given a source sentence f J 0 (with f 0 = NULL) can be modeled as",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathrm{Pr}(e_1^I | f_0^J) = \\frac{1}{(J+1)^I} \\prod_{i=1}^{I} \\sum_{j=0}^{J} p_{\\mathrm{ibm1}}(e_i | f_j)",
"eq_num": "(6)"
}
],
"section": "IBM Model 1",
"sec_num": "2.2"
},
{
"text": "The parameters of IBM-1 are estimated iteratively by means of the Expectation-Maximization algorithm with maximum likelihood as training criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2.2"
},
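{
"text": "For concreteness, a small sketch (ours) of the sentence-level IBM-1 score of Eq. (6), computed in log space; p_ibm1 is an assumed lookup table, and f_words is expected to contain the empty word NULL at position 0:\nimport math\n\ndef ibm1_log_prob(e_words, f_words, p_ibm1):\n    '''log Pr(e_1^I | f_0^J) under IBM model 1 (Eq. 6).'''\n    J = len(f_words) - 1  # source length, excluding NULL\n    logp = -len(e_words) * math.log(J + 1)\n    for e in e_words:\n        s = sum(p_ibm1.get((e, f), 0.0) for f in f_words)\n        logp += math.log(max(s, 1e-12))  # floor to avoid log(0)\n    return logp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2.2"
},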
{
"text": "We introduce thresholding methods for insertion and deletion models which set thresholds based on the characteristics of the lexicon model that is applied. For all the following thresholding methods, we disregard entries in the lexicon model with probabilities that are below a fixed floor value of 10 \u22126 . Again, we restrict ourselves to the description of the source-totarget direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
{
"text": "individual \u03c4 f is a distinct value for each f , computed as the arithmetic average of all entries p(e|f ) of any e with the given f in the lexicon model. global The same value \u03c4 f = \u03c4 is used for all f . We compute this global threshold by averaging over the individual thresholds. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
{
"text": "histogram n \u03c4 f is a distinct value for each f . \u03c4 f is set to the value of the n + 1-th largest probability p(e|f ) of any e with the given f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
{
"text": "1 Concrete values from our experiments are: 0.395847 for the source-to-target RF lexicon, 0.48127 for the target-to-source RF lexicon. 0.0512856 for the source-to-target IBM-1, and 0.0453709 for the target-to-source IBM-1. Mauser et al. (2006) mention that they chose their heuristic thresholds for use with IBM-1 between 10 \u22121 and 10 \u22124 .",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "Mauser et al. (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
{
"text": "all All entries with probabilities larger than the floor value are not thresholded. This variant may be considered as histogram \u221e. We only apply it with RF lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
{
"text": "median \u03c4 f is a median-based distinct value for each f , i.e. it is set to the value that separates the higher half of the entries from the lower half of the entries p(e|f ) for the given f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},
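{
"text": "The following sketch (ours, under an assumed lexicon layout) shows how these threshold variants could be computed; lexicon maps each f to a dict {e: p(e|f)}:\nimport statistics\n\ndef compute_thresholds(lexicon, method='individual', n=10, floor=1e-6):\n    '''Per-word thresholds tau_f for the variants of Section 3; lexicon\n    entries with probabilities below the floor value are disregarded.'''\n    tau = {}\n    for f, dist in lexicon.items():\n        probs = sorted((p for p in dist.values() if p >= floor), reverse=True)\n        if not probs:\n            continue\n        if method in ('individual', 'global'):\n            tau[f] = sum(probs) / len(probs)  # arithmetic average of p(e|f)\n        elif method == 'histogram':\n            # the (n+1)-th largest probability; fall back to the floor\n            tau[f] = probs[n] if len(probs) > n else floor\n        elif method == 'median':\n            tau[f] = statistics.median(probs)\n        elif method == 'all':\n            tau[f] = floor  # nothing above the floor is thresholded\n    if method == 'global':\n        g = sum(tau.values()) / len(tau)  # average of individual thresholds\n        tau = {f: g for f in tau}\n    return tau",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding Methods",
"sec_num": "3"
},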
{
"text": "We present empirical results obtained with the different insertion and deletion model variants on the Chinese\u2192English 2008 NIST task. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "To set up our systems, we employ the open source statistical machine translation toolkit Jane (Vilar et al., 2010; Vilar et al., 2012) , which is freely available for non-commercial use. Jane provides efficient C++ implementations for hierarchical phrase extraction, optimization of log-linear feature weights, and parsing-based decoding algorithms. In our experiments, we use the cube pruning algorithm (Huang and Chiang, 2007) to carry out the search. We work with a parallel training corpus of 3.0M Chinese-English sentence pairs (77.5M Chinese / 81.0M English running words). The counts for the RF lexicon models are computed from a symmetrized word alignment (Och and Ney, 2003) , the IBM-1 models are produced with GIZA++. When extracting phrases, we apply several restrictions, in particular a maximum length of 10 on source and target side for lexical phrases, a length limit of five (including non-terminal symbols) for hierarchical phrases, and no more than two gaps per phrase. The models integrated into the baseline are: phrase translation probabilities and RF lexical translation probabilities on phrase level, each for both translation directions, length penalties on word and phrase level, binary features marking hierarchical phrases, glue rule, and rules with non-terminals at the boundaries, source-to-target and target-to-source phrase length ratios, four binary features marking phrases that have been seen more than one, two, three or five times, respectively, and an n-gram language model. The language model is a 4-gram with modified Kneser-Ney smoothing which was trained with the SRILM toolkit (Stolcke, 2002) on a large collection of English data including the target side of the parallel corpus and the LDC Gigaword v3.",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Vilar et al., 2010;",
"ref_id": "BIBREF14"
},
{
"start": 115,
"end": 134,
"text": "Vilar et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 404,
"end": 428,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 664,
"end": 683,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 1620,
"end": 1635,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
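{
"text": "As a hypothetical configuration sketch (key names are ours, not Jane's actual syntax), the extraction restrictions described above amount to:\nEXTRACTION_LIMITS = {\n    'max_lexical_phrase_length': 10,      # source and target side\n    'max_hierarchical_phrase_length': 5,  # including non-terminal symbols\n    'max_gaps_per_phrase': 2,             # non-terminals per phrase\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},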
{
"text": "Model weights are optimized against BLEU (Papineni et al., 2002) with standard Minimum Error Rate Training (Och, 2003) , performance is measured with BLEU and TER (Snover et al., 2006) . We employ MT06 as development set, MT08 is used as unseen test set. The empirical evaluation of all our setups is presented in ",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 107,
"end": 118,
"text": "(Och, 2003)",
"ref_id": "BIBREF10"
},
{
"start": 163,
"end": 184,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "With the best model variant, we obtain a significant improvement (90% confidence) of +1.0 points BLEU over the baseline on MT08. A consistent trend towards one of the variants cannot be observed. The results on the test set with RF lexicons or IBM-1, insertion or deletion models, and (in most of the cases) with all of the thresholding methods are roughly at the same level. For comparison we also give a result with an unaligned word count model (+0.4 BLEU). Huck et al. (2011) recently reported substantial improvements over typical hierarchical baseline setups by just including phrase-level IBM-1 scores. When we add the IBM-1 models directly, our baseline is outperformed by +1.7 BLEU. We tried to get improvements with insertion and deletion models over this setup again, but the positive effect was largely diminished. In one of our strongest setups, which includes discriminative word lexicon models (DWL), triplet lexicon models and a discriminative reordering model (discrim. RO) , insertion models still yield a minimal gain, though.",
"cite_spans": [
{
"start": 461,
"end": 479,
"text": "Huck et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "Our results with insertion and deletion models for Chinese\u2192English hierarchical machine translation are twofold. On the one hand, we achieved significant improvements over a standard hierarchical baseline. We were also able to report a slight gain by adding the models to a very strong setup with discriminative word lexicons, triplet lexicon models and a discriminative reordering model. On the other hand, the positive impact of the models was mainly noticeable when we exclusively applied lexical smoothing with word lexicons which are simply extracted from word-aligned training data, which is however the standard technique in most state-ofthe-art systems. If we included phrase-level lexical scores with IBM model 1 as well, the systems barely benefited from our insertion and deletion models. Compared to an unaligned word count model, insertion and deletion models perform well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was achieved as part of the Quaero Programme, funded by OSEO, French State agency for innovation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics",
"authors": [
{
"first": "Peter",
"middle": [
"F."
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A."
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J."
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L."
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathemat- ics of Statistical Machine Translation: Parameter Es- timation. Computational Linguistics, 19(2):263-311, June.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Hierarchical Phrase-Based Model for Statistical Machine Translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the 43rd Annual Meeting of the Assoc. for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proc. of the 43rd Annual Meeting of the Assoc. for Computa- tional Linguistics (ACL), pages 263-270, Ann Arbor, MI, USA, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Forest Rescoring: Faster Decoding with Integrated Language Models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the Annual Meeting of the Assoc. for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest Rescoring: Faster Decoding with Integrated Language Models. In Proc. of the Annual Meeting of the Assoc. for Com- putational Linguistics (ACL), pages 144-151, Prague, Czech Republic, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Lexicon Models for Hierarchical Phrase-Based Machine Translation",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Saab",
"middle": [],
"last": "Mansour",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Wiesler",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the Int. Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "191--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Huck, Saab Mansour, Simon Wiesler, and Her- mann Ney. 2011. Lexicon Models for Hierarchi- cal Phrase-Based Machine Translation. In Proc. of the Int. Workshop on Spoken Language Translation (IWSLT), pages 191-198, San Francisco, CA, USA, December.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discriminative Reordering Extensions for Hierarchical Phrase-Based Machine Translation",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Peitz",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the 16th Annual Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Huck, Stephan Peitz, Markus Freitag, and Her- mann Ney. 2012. Discriminative Reordering Exten- sions for Hierarchical Phrase-Based Machine Transla- tion. In Proc. of the 16th Annual Conference of the Eu- ropean Association for Machine Translation (EAMT), Trento, Italy, May.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical Phrase-Based Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Human Language Technology Conf. / North American Chapter of the Assoc. for Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proc. of the Human Language Technology Conf. / North American Chapter of the Assoc. for Computational Linguistics (HLT-NAACL), pages 127-133, Edmonton, Canada, May/June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The RWTH Statistical Machine Translation System for the IWSLT",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "Mauser",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Sa\u0161a",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne Mauser, Richard Zens, Evgeny Matusov, Sa\u0161a Hasan, and Hermann Ney. 2006. The RWTH Statisti- cal Machine Translation System for the IWSLT 2006",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluation",
"authors": [],
"year": null,
"venue": "Proc. of the Int. Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "103--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaluation. In Proc. of the Int. Workshop on Spoken Language Translation (IWSLT), pages 103-110, Ky- oto, Japan, November.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Syntax for Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Eng",
"suffix": ""
},
{
"first": "Viren",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2003. Syn- tax for Statistical Machine Translation. Technical re- port, Johns Hopkins University 2003 Summer Work- shop on Language Engineering, Center for Language and Speech Processing, Baltimore, MD, USA, August.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Minimum Error Rate Training for Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Annual Meeting of the Assoc. for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training for Statistical Machine Translation. In Proc. of the An- nual Meeting of the Assoc. for Computational Linguis- tics (ACL), pages 160-167, Sapporo, Japan, July.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 40th Annual Meeting of the Assoc. for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a Method for Automatic Evalu- ation of Machine Translation. In Proc. of the 40th An- nual Meeting of the Assoc. for Computational Linguis- tics (ACL), pages 311-318, Philadelphia, PA, USA, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Study of Translation Edit Rate with Targeted Human Annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Conf. of the Assoc. for Machine Translation in the Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annota- tion. In Conf. of the Assoc. for Machine Translation in the Americas (AMTA), pages 223-231, Cambridge, MA, USA, August.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SRILM -an Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Int. Conf. on Spoken Language Processing (ICSLP)",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an Extensible Lan- guage Modeling Toolkit. In Proc. of the Int. Conf. on Spoken Language Processing (ICSLP), volume 3, Denver, CO, USA, September.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Jane: Open Source Hierarchical Translation, Extended with Reordering and Lexicon Models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL 2010 Joint Fifth Workshop on Statistical Machine Translation and Metrics MATR",
"volume": "",
"issue": "",
"pages": "262--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilar, Daniel Stein, Matthias Huck, and Hermann Ney. 2010. Jane: Open Source Hierarchical Transla- tion, Extended with Reordering and Lexicon Models. In ACL 2010 Joint Fifth Workshop on Statistical Ma- chine Translation and Metrics MATR, pages 262-270, Uppsala, Sweden, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Jane: an advanced freely available hierarchical machine translation toolkit",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {
"DOI": [
"10.1007/s10590-011-9120-y"
]
},
"num": null,
"urls": [],
"raw_text": "David Vilar, Daniel Stein, Matthias Huck, and Hermann Ney. 2012. Jane: an advanced freely available hier- archical machine translation toolkit. Machine Trans- lation, pages 1-20. http://dx.doi.org/10.1007/s10590- 011-9120-y.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phrase-based Statistical Machine Translation: Models, Search, Training",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens. 2008. Phrase-based Statistical Machine Translation: Models, Search, Training. Ph.D. thesis, RWTH Aachen University, Aachen, Germany, Febru- ary.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "http://www.itl.nist.gov/iad/mig/tests/",
"content": "<table><tr><td>mt/2008/</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}