{ "paper_id": "C14-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:21:23.631464Z" }, "title": "Confusion Network for Arabic Name Disambiguation and Transliteration in Statistical Machine Translation", "authors": [ { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM T. J. Watson Research Center", "location": { "addrLine": "1101 Kitchawan Road Yorktown Heights", "postCode": "10598", "region": "NY", "country": "USA" } }, "email": "ysuklee@us.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Arabic words are often ambiguous between name and non-name interpretations, frequently leading to incorrect name translations. We present a technique to disambiguate and transliterate names even if name interpretations do not exist or have relatively low probability distributions in the parallel training corpus. The key idea comprises named entity classing at the preprocessing step, decoding of a simple confusion network created from the name class label and the input word at the statistical machine translation step, and transliteration of names at the post-processing step. Human evaluations indicate that the proposed technique leads to a statistically significant translation quality improvement of highly ambiguous evaluation data sets without degrading the translation quality of a data set with very few names.", "pdf_parse": { "paper_id": "C14-1042", "_pdf_hash": "", "abstract": [ { "text": "Arabic words are often ambiguous between name and non-name interpretations, frequently leading to incorrect name translations. We present a technique to disambiguate and transliterate names even if name interpretations do not exist or have relatively low probability distributions in the parallel training corpus. The key idea comprises named entity classing at the preprocessing step, decoding of a simple confusion network created from the name class label and the input word at the statistical machine translation step, and transliteration of names at the post-processing step. Human evaluations indicate that the proposed technique leads to a statistically significant translation quality improvement of highly ambiguous evaluation data sets without degrading the translation quality of a data set with very few names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Arabic person and location names are often ambiguous between name and non-name interpretations, as noted in (Hermjakob et al., 2008; Zayed et al., 2013) . 
(1) and(2) illustrate such ambiguities for Iraqi Arabic, where the ambiguous names and their translations are in bold-face and the Buckwalter transliteration of Arabic is provided in parentheses: 1", "cite_spans": [ { "start": 108, "end": 132, "text": "(Hermjakob et al., 2008;", "ref_id": "BIBREF9" }, { "start": 133, "end": 152, "text": "Zayed et al., 2013)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u202b\u0628\u202c \u202b\u0627\ufedf\ufee4\ufeaa\u0631\ufeb3\ufe94\u202c \u202b\ufbfe\ufee2\u202c \u202b\ufe91\ufeb8\ufed8\ufe94\u202c \u202b\ufeb3\ufe8e\ufedb\ufee6\u202c \u202b\u0627\ufee7\ufef2\u202c \u202b\ufea7\ufec0\ufeae\u0627\u0621\u202c (Any sAkn b$qp ym Almdrsp bxDrA') I live in an apartment near the school in Khadraa In this paper, we propose a technique for disambiguating and transliterating Arabic names in an end-to-end statistical machine translation system. The key idea lies in name classing at the preprocessing step, decoding of a simple confusion network created from the class label $name, and the input word at the machine translation step, and transliteration of names by a character-based phrase transliteration model at the post-processing step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While propose confusion network decoding to handle multiple speech recognition outputs for phrase translation and Dyer et al. (2008) generalize lattice decoding algorithm to tackle word segmentation ambiguities for hierarchical phrase-based translation, the current proposal is the first to deploy a confusion network for name disambiguation and translation. The character-based phrase transliteration model captures the asymmetry between Arabic and English vowel systems by treating English vowels as spontaneous words attachable to the neighboring target phrases for phrase (a sequence of characters) acquisition.", "cite_spans": [ { "start": 114, "end": 132, "text": "Dyer et al. (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Confusion network decoding enables the system to choose between name and other translations of the source word on the basis of the decoding cost computed from all of the decoder feature functions which incorporate name tag scores into translation model scores. Probabilistic choice between name versus non-name interpretations makes the technique robust to name classing errors, without stipulating the frequency threshold of the names to be transliterated in order to avoid translation quality degradation (Hermjakob et al., 2008; Li et al., 2013) . A tight integration of named entity detection and classing into the machine translation system, coupled with a generative approach to name transliteration, enables the system to produce reliable name translations even when name interpretations do not exist or have relatively low distributions in the parallel corpus, distinguishing the current proposal from Hermjakob et al. (2008) .", "cite_spans": [ { "start": 507, "end": 531, "text": "(Hermjakob et al., 2008;", "ref_id": "BIBREF9" }, { "start": 532, "end": 548, "text": "Li et al., 2013)", "ref_id": "BIBREF17" }, { "start": 910, "end": 933, "text": "Hermjakob et al. 
(2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we give an overview of the translation system. In Section 3, we discuss the model training and confusion network decoding. In Section 4, we detail name transliteration model. We present the experimental results in Section 5. We discuss related work in Section 6 and conclude the paper in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Arabic name disambiguation and transliteration techniques are incorporated into an end-to-end phrase translation system (Och and Ney, 2002; Koehn et al., 2003; Koehn et al., 2007) . Our phrase translation system builds on Tillmann (2003) for translation model training and an in-house implementation of Ney and Tillmann (2003) for beam search phrase decoding.", "cite_spans": [ { "start": 120, "end": 139, "text": "(Och and Ney, 2002;", "ref_id": null }, { "start": 140, "end": 159, "text": "Koehn et al., 2003;", "ref_id": "BIBREF13" }, { "start": 160, "end": 179, "text": "Koehn et al., 2007)", "ref_id": "BIBREF14" }, { "start": 222, "end": 237, "text": "Tillmann (2003)", "ref_id": "BIBREF24" }, { "start": 303, "end": 326, "text": "Ney and Tillmann (2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "Iraqi Arabic to English end-to-end phrase translation systems are trained on DARPA TransTac data (Hewavitharana et al., 2013) , comprising 766,410 sentence pairs (~6.8 million morpheme tokens in Arabic, ~7.3 million word tokens in English; ~55k unique vocabulary in Arabic and ~35k unique vocabulary in English). The data consist of sub-corpora of several domains including military combined operations, medical, humanitarian aid, disaster relief, etc., and have been created primarily for speechto-speech translations. The process flow of Arabic to English translation incorporating the proposed technique is shown in Figure 1 . 
The components relevant to name disambiguation and transliteration are in bold face.", "cite_spans": [ { "start": 97, "end": 125, "text": "(Hewavitharana et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 619, "end": 627, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "Given the input sentence (3), the spelling normalizer normalizes \u202b\u0622\ufee7\ufef2\u202c to \u202b.\u0627\ufee7\ufef2\u202c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "(3) \u202b\ufe91\ufea8\ufec0\ufeae\u0627\u0621\u202c \u202b\u0627\ufedf\ufee4\ufeaa\u0631\ufeb3\ufe94\u202c \u202b\ufbfe\ufee2\u202c \u202b\ufe91\ufeb8\ufed8\ufe94\u202c \u202b\ufeb3\ufe8e\ufedb\ufee6\u202c \u202b\u0622\ufee7\ufef2\u202c (|ny sAkn b$qp ym Almdrsp bxDrA')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "The morpheme segmenter segments a word into morphemes (Lee et al., 2003; Lee, 2004; Habash and Sadat, 2006) as in (4), where # indicates that the morpheme is a prefix.", "cite_spans": [ { "start": 54, "end": 72, "text": "(Lee et al., 2003;", "ref_id": "BIBREF15" }, { "start": 73, "end": 83, "text": "Lee, 2004;", "ref_id": "BIBREF16" }, { "start": 84, "end": 107, "text": "Habash and Sadat, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "(4) \u202b\u0628\u202c \u202b\ufeb3\ufe8e\ufedb\ufee6\u202c \u202b\u0627\ufee7\ufef2\u202c # \u202b\u0627\u0644\u202c \u202b\ufbfe\ufee2\u202c \u202b\ufeb7\ufed8\ufe94\u202c # \u202b\u0628\u202c \u202b\ufee3\ufeaa\u0631\ufeb3\ufe94\u202c # \u202b\ufea7\ufec0\ufeae\u0627\u0621\u202c (Any sAkn b# $qp ym Al# mdrsp b# xDrA')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "Part-of-speech tagging is applied to the morphemes, identifying a name with the tag NOUN_PROP. The input word tagged as NOUN_PROP is classified as name, denoted by the label $name in (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "(5) \u202b)\ufea7\ufec0\ufeae\u0627\u0621(_\u202a$name\u202c\u202c \u202b\u0628\u202c \u202b\ufeb3\ufe8e\ufedb\ufee6\u202c \u202b\u0627\ufee7\ufef2\u202c # \u202b\u0627\u0644\u202c \u202b\ufbfe\ufee2\u202c \u202b\ufeb7\ufed8\ufe94\u202c # \u202b\u0628\u202c \u202b\ufee3\ufeaa\u0631\ufeb3\ufe94\u202c # (Any sAkn b# $qp ym Al# mdrsp b# $name_(xDrA'))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "The token \u202b)\ufea7\ufec0\ufeae\u0627\u0621(_\u202a$name\u202c\u202c is decomposed into the class label $name and the source word \u202b,\ufe91\ufea8\ufec0\ufeae\u0627\u0621\u202c creating a simple confusion network for decoding. The beam search phrase decoder computes the translation costs for all possible input phrases including the phrase pair \"$name | $name\", 2 using all of the decoder feature functions. 
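A minimal sketch of this decomposition is given below. The helper name build_confusion_slots, the token handling and the zero placeholder arc costs are illustrative assumptions rather than the system's actual implementation; in the real decoder the $name arc additionally carries the -log probability of the NOUN_PROP tag, which is added to the translation model costs (Section 3.3).

```python
def build_confusion_slots(tokens):
    """Return, for each source token, the list of (surface, extra_cost) arcs
    the beam-search phrase decoder may traverse at that position."""
    slots = []
    for tok in tokens:
        if tok.startswith("$name_(") and tok.endswith(")"):
            word = tok[len("$name_("):-1]
            # Two parallel arcs: the class label $name (translated via the
            # "$name | $name" phrase pair and transliterated afterwards) and
            # the original source word (translated from the baseline phrase table).
            slots.append([("$name", 0.0), (word, 0.0)])
        else:
            slots.append([(tok, 0.0)])
    return slots

slots = build_confusion_slots(
    "Any sAkn b# $qp ym Al# mdrsp b# $name_(xDrA')".split())
# Last slot: [("$name", 0.0), ("xDrA'", 0.0)] -- the decoder scores both paths
# with all of its feature functions and keeps the lower-cost one.
```

The design point is that every name-classed token yields exactly two competing arcs in the same slot, so the choice between name and non-name readings is deferred to the decoder's overall cost rather than fixed at preprocessing time.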
Assuming that the translation cost for $name being translated into $name is the lowest, the decoder produces the translation (6), where the name classed source word \u202b\ufea7\ufec0\ufeae\u0627\u0621\u202c retains its Arabic spelling .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "(6) I live in an apartment near the school in \u202b\ufea7\ufec0\ufeae\u0627\u0621\u202c The Arabic word \u202b\ufea7\ufec0\ufeae\u0627\u0621\u202c in (6) is transliterated into khadraa by the NAME/OOV transliteration module. And the system produces the final translation output (7). We use an in-house implementation of the maximum entropy part-of-speech tagger described in Adwait (1996) We train translation and language models with name classing to obtain proper translation and language model probabilities of the class label $name. We extend the baseline phrase beam search decoder to handle a relatively simple confusion network (CN hereafter) and incorporate the name part-of-speech tagging scores into the decoder feature functions.", "cite_spans": [ { "start": 306, "end": 319, "text": "Adwait (1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "End-to-end Translation System Overview", "sec_num": "2" }, { "text": "For any name classed input word, $name_(\u01a1\u01da \u0191 \u04bb) in (5), we would like to have the name translation, $name \u2192 $name, always available in addition to other translations of the input word obtainable from the parallel training corpus. In order to estimate $name distributions without obfuscating the distributions of other training vocabulary, we apply name classing only to words that occur less than 3 times in the training corpus and part-ofspeech tagged with NOUN_PROP. The reasons are three-fold: 1) we need to keep all non-name translations of the training vocabulary, 2) typical low frequency words include names and typos, 3) even with $name classing on low frequency words only, the overall $name count is high enough for a robust probability estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "After name classing of words occurring less than 3 times, $name occurs 6,944 times (122 th most frequent token) in Arabic and 9,707 times (108 th most frequent token) in English. We train both phrase translation and distortion models on the name classed parallel corpus. Note that the frequency restriction applies only to model training. During decoding, any word labeled with $name may be name transliterated regardless of its frequency in the training corpus, differentiating the current technique from (Li et al., 2013) .", "cite_spans": [ { "start": 506, "end": 523, "text": "(Li et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "To properly capture the name and non-name ambiguities, we interpolate two types language models: 1) 5gram language model trained on the English side of the parallel corpus without name classing (LM1), 2) 5-gram language model trained on the English side of the parallel corpus and additional monolingual corpora with name classing (LM2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "3.2" }, { "text": "Each language model is smoothed with modified Kneser-Ney (Chen and Goodman, 1998) . 
The two sets of language models are interpolated, as in (8), where \u03b1 is set to 0.1. We find the optimal interpolation weight on the basis of BLEU scores of the development test data set containing about 30k word tokens in Arabic and about 43k word tokens in English.", "cite_spans": [ { "start": 57, "end": 81, "text": "(Chen and Goodman, 1998)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "3.2" }, { "text": "(8) \u03b1 \u2022 LM1 + (1-\u03b1) \u2022 LM2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "3.2" }, { "text": "The confusion network containing the class label $name and the source word is handled by an extension of the baseline phrase decoder. The baseline decoder utilizes 11 feature functions including those in (9) 3through 14, where f denotes the source phrase and e , the target phrase, and s, the source sentence, t, the target sentence and a, a word alignment. We use the in-house implementation of the simplex algorithm in Zhao et al. (2009) for decoder parameter optimization. We augment the baseline decoder in two ways: First, we incorporate the maximum entropy part-ofspeech tagging scores of names into the translation scores in (9), (12) and (13). We simply add the name part-of-speech tag cost, i.e. -log probability, to the translation model costs. Second, the decoder can activate more than one edge from one source word position to another, as shown in Figure 2 . 5 The name classed input is split into two tokens $name and xDrA', leading to two separate decoding paths. The choice between the two paths depends on the overall decoding cost of each path, computed from all of the decoder feature functions.", "cite_spans": [ { "start": 421, "end": 439, "text": "Zhao et al. (2009)", "ref_id": "BIBREF28" }, { "start": 872, "end": 873, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 861, "end": 869, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Confusion Network Decoding", "sec_num": "3.3" }, { "text": "Since the decoding path to $name is always available when the input word is classed as $name at the pre-processing step, the technique can discover the name interpretation of an input word even if the name interpretation is absent in the parallel training corpus. Even when the input word occurs as a name in the training corpus but has a lower name translation probability than non-name translations in the baseline phrase table, it can be correctly translated into a name as long as the word is labeled as $name and the decoder feature functions support the $name path in the given context. When a non-name token is mistakenly labeled as $name, the confusion network decoder can recover from the mistake if the non-name path receives a lower decoding cost than the $name path. 6 If the input token is name classed and the correct name translation also exists in the baseline phrase table with a high probability, either path will lead to the correct translation, and the decoder chooses the path with the lower translation cost. All instances of un-translated input words, which include names and OOVs, are transliterated in the postprocessing step. Character-based phrase transliteration models are trained on 9,737 unique name pairs. 965 name pairs are obtained from a name lexicon and the remaining 8,772 name pairs are automatically derived from the parallel training corpus as follows: 1) Take each side of the parallel corpus, i.e. Iraqi Arabic or English. 
2) Mark names manually or automatically. 3) Apply word alignment to the namemarked parallel corpus in both directions. 4) Extract name pairs aligned in both directions. For name marking, we used the manual mark-up that was provided in the original data. 5-gram character language models are trained on about 120k entries of names in English. In addition to about 9.7k names from the English side of the parallel names, about 110k entries are collected from wiki pages, English Gigaword 5 th Edition (LDC2011T07), and various name lexicons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Network Decoding", "sec_num": "3.3" }, { "text": "Short vowels are optional in written Arabic, whereas all vowels have to be obligatorily specified in English for a word to be valid (Stalls and Knight, 1998; Al-Onaizan and Knight, 2002b) . We model the asymmetrical nature of vowels between the two languages by treating all instances of unaligned English vowels -a, e, i, o, u -as spontaneous words which can be attached to the left or to the right of an aligned English character for phrase extractions. An example GIZA++ (Och and Ney, 2003) character alignment is shown in Figure 3 . Arabic name is written left to right to illustrate the monotonicity of the alignments.", "cite_spans": [ { "start": 132, "end": 157, "text": "(Stalls and Knight, 1998;", "ref_id": "BIBREF23" }, { "start": 158, "end": 187, "text": "Al-Onaizan and Knight, 2002b)", "ref_id": null }, { "start": 474, "end": 493, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 526, "end": 534, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Phrase Extraction with English Vowels as Spontaneous Words", "sec_num": "4.1" }, { "text": "In Figure 3 , solid lines indicate the automatic machine alignments. English vowels in rectangular boxes indicate null alignments by the aligner. The dotted lines indicate the potential attachment sites of the unaligned vowels for phrase extractions. The first instance of unaligned a (denoted by a 1 ) may be a part of the phrases containing the preceding consonant sequence g h, or the following consonant d. The second instance of unaligned a (denoted by a 2 ) may be a part of the phrases containing the preceding consonant d or the following consonant r. 7", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Figure 3. Automatic Character Alignment between Arabic and English names", "sec_num": null }, { "text": "We use exact match accuracy 8 to evaluate transliteration qualities. Systems are tested on 500 unique name pairs including 52 names unseen in the training corpus. Experimental results are shown in Table 1 . 9 Note that using English vowels as spontaneous words dramatically improves the accuracy from 21.6% to 89.2%.", "cite_spans": [ { "start": 207, "end": 208, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 197, "end": 204, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4.2" }, { "text": "Decoding is carried out by the baseline phrase decoder discussed in Section 3.3, using the same decoder feature functions except for the distortion models. Using only phrase translation and language model probabilities for decoding results in 74.4% accuracy on SYSTEM4, much lower than 90% accuracy with all decoder feature functions. The same language model is used for all experiments. 
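For reference, the exact match criterion of footnote 8 amounts to the check sketched below; the function name and the toy name pairs are illustrative assumptions, and any case or whitespace normalization is left to the evaluation setup.

```python
def exact_match_accuracy(pairs):
    """pairs: list of (hypothesis, reference) transliteration strings."""
    correct = sum(hyp == ref for hyp, ref in pairs)
    return correct / len(pairs)

toy_pairs = [("khadraa", "khadraa"), ("jasim", "jassim")]
print(exact_match_accuracy(toy_pairs))  # 0.5 on this toy set
```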
For the end-to-7 Attachment of unaligned English vowels takes place after phrase extractions and should be distinguished from a heuristic alignment of unaligned English vowels to Arabic characters before phrase extractions. 8 A transliteration is correct if and only if it exactly matches the truth, i.e. gold standard. 9 GIZA++ word aligner is trained with 5 iterations of IBM MODEL 1, 5 iterations of HMM, 5 iterations of IBM MODEL 3 and 5 iterations of IBM MODEL 4. HMM word aligner (Vogel et al., 1996) ", "cite_spans": [ { "start": 874, "end": 894, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4.2" }, { "text": "End-to-end translation quality experiments are carried out on 3 evaluation data sets shown in Table 2 . TransTac.eval has a low out-of-vocabulary (OOV) and a low name ratios, and has been used as the test data for system development among DARPA BOLT-C 11 program partners. TransTac.oov has a high OOV and a high name ratios, and has been created in-house for OOV detection system developement. Tran-sTac.name has a low OOV and a high name ratios, and was used for the TransTac 2008 name translation evaluations. ", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "End-to-end Translation System Experimental Results", "sec_num": "5" }, { "text": "End-to-end translation system evaluation results are shown in Table 3 . Bold-faced and italicized scores indicate that the system's translation quality is statistically significantly better than all other systems with over 95% confidence, i.e. two-tailed P value < 0.05 in paired t-tests. The system baseline is trained without name classing and decoded by the baseline decoder without name classing. The system OOVTranslit is trained and decoded the same way as the baseline except that all instances of un-translated OOVs are transliterated at the post-processing step. The system name_t is trained without name classing and decoded by the baseline decoder with name classing. 12 The system CN is trained with name classing and decoded by the CN decoder with name classing. 13 We evaluate the systems, using automatic BLEU (Papineni et al., 2002) , and 6-point scale human evaluations. Lowercased BLEU scores are computed with 1 reference translation up to 4-grams. Scoring criteria for human evaluations are as follows. 0: exceptionally poor; 1: poor; 2: not good enough; 3: good enough; 4: very good; 5: excellent. Human evaluations are conducted on a subset of the automatic evaluation data containing names. 14 We exclude the input sentences for which all systems produce the same translation output. This leaves 201 sentences from TransTac.eval, 197 sentences from Tran-sTac.oov, 64 sentences from TransTac.name.", "cite_spans": [ { "start": 679, "end": 681, "text": "12", "ref_id": null }, { "start": 776, "end": 778, "text": "13", "ref_id": null }, { "start": 825, "end": 848, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Systems, Metrics and Results", "sec_num": "5.1" }, { "text": "We observe that human evaluation scores are relatively consistent with BLEU scores on two data sets, TransTac.eval and TransTac.oov. TransTac.eval contains very few names. Therefore, incorrect name classing at the pre-processing step hurts the translation quality for the system name_t. 
The CN decoder can improve the translation quality by recovering from a name classing error by choosing the non-name path. Transliteration of OOVs (OOVTranslit) can improve the translation quality if any of the OOVs are names. Human evaluations capture the behaviors of the CN decoder and OOVTranslit by giving a slightly higher (statistically insignificant) score to OOVTranslit, 3.22, and the CN decoder, 3.20, than to the baseline, 3.16. All three systems, baseline, OOVTranslit and CN, however, received the same BLEU scores, 33.35. This seems to reflect the fact humans can easily capture the spelling variation of names whereas the automatic evaluation with 1 reference cannot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "5.2" }, { "text": "Transtac.oov has a high OOV and a high name ratios and all OOVs are names. Therefore, name classing improves the translation quality as long as the correctly classed names out-number the incorrectly classed ones, explaining the higher translation quality of name_t than the baseline. OOVTranslit improves the translation quality over the baseline because all OOVs are names. The CN decoder out-performs all three other systems by correctly disambiguating non-OOV names and transliterating name OOVs. BLEU scores and human evaluation scores show the same pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "5.2" }, { "text": "For TransTac.name with a high name and a low OOV ratios, however, human evaluation and BLEU scores show the opposite pattern, although none of the BLEU scores are statistically significantly better than others (note the small evaluation data size of 79 segments and 514 tokens). Since most names in this data set are known to the translation vocabulary and is highly ambiguous, we expect the CN decoder to out-perform all other systems. This expectation is borne out in the human evaluations, but not in BLEU scores. Our analysis indicates that the apparent inconsistency between BLEU and human evaluation scores is primarily due to spelling variations of a name, which are not captured by BLEU with just one reference, cf. (Li et al., 2013) . Out of the human evaluated 64 names in TransTac.name, the baseline system produced the same spelling as the reference 34 times (53.13%), which contrasts with 28 times (43.75%) by the CN decoder. Overall, the CN decoder produced 62 correct name translations, about 20% more than 49 correct translations by the baseline system. Table 4 shows the names for which the reference spelling agrees with the baseline system, but disagrees with the CN decoding followed by transliteration. To verify that the inconsistency between BLEU and human evaluation scores is due to name spelling variations which humans capture but automatic metrics does not, we recomputed BLEU scores after normalizing spellings of the system outputs to be consistent with the reference translation spelling. The recomputed BLEU scores are denoted by TransTac.name_spnorm in Table 3 , which shows that the recomputed BLEU scores are indeed consistent with the human evaluation scores. 15 Also note that the translation quality improvement by transliterating OOV names is well captured in human evaluation scores, 3.19 in the baseline vs. 
3.36 in the system OOVTranslit, but not in BLEU scores, 35.03 in both baseline and OOV-Translit.", "cite_spans": [ { "start": 724, "end": 741, "text": "(Li et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 1070, "end": 1077, "text": "Table 4", "ref_id": null }, { "start": 1586, "end": 1593, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Result Analysis", "sec_num": "5.2" }, { "text": "We point out that the same name is often spelled differently in various parts of our training corpus and even in the same reference translation, e.g. al-aswad vs. aswad, jassim vs. jasim, risha vs. rasha, mahadi vs. mehdi vs. mahdi, etc., as had been noted in Al-Onaizan and Knight (2002b) , Huang et al. (2008) .", "cite_spans": [ { "start": 150, "end": 289, "text": "al-aswad vs. aswad, jassim vs. jasim, risha vs. rasha, mahadi vs. mehdi vs. mahdi, etc., as had been noted in Al-Onaizan and Knight (2002b)", "ref_id": null }, { "start": 292, "end": 311, "text": "Huang et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "Al- Onaizan and Knight (2002a) propose an Arabic named entity translation algorithm that performs at near human translation accuracy when evaluated as an independent name translation module. Hassan et al. (2007) propose to improve named entity translation by exploiting comparable and parallel corpora. Hermjakob et al. (2008) present a method to learn when to transliterate Arabic names. They search for name translation candidates in large lists of English words/phrases. Therefore, they cannot accurately translate a name if the correct English name is missing in the word lists. Their restriction of named entity transliteration to rare words cannot capture name interpretations of frequent words, e.g. \u202b\ufebb\ufe92\ufe8e\u062d\u202c (Sabah/morning), if the name interpretations are absent in the parallel corpus. Li et al. (2013) propose a Name-aware machine translation approach which tightly integrates high accuracy name processing into a Chinese-English MT model. Similar to Hermjakob et al. (2008) , they restrict the use of name translation to names occurring less than 5 times in the training data. They train the translation model by merging the name-replaced parallel data with the original parallel data to prevent the quality degradation of high frequency names.", "cite_spans": [ { "start": 4, "end": 30, "text": "Onaizan and Knight (2002a)", "ref_id": null }, { "start": 191, "end": 211, "text": "Hassan et al. (2007)", "ref_id": "BIBREF8" }, { "start": 303, "end": 326, "text": "Hermjakob et al. (2008)", "ref_id": "BIBREF9" }, { "start": 794, "end": 810, "text": "Li et al. (2013)", "ref_id": "BIBREF17" }, { "start": 960, "end": 983, "text": "Hermjakob et al. (2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Onish et al. (2010) present a lattice decoding for paraphrase translations, which can handle OOV phrases as long as their paraphrases are found in the training corpus. They build the paraphrase lattices of the input sentence, which are given to the Moses lattice decoder. They deploy the source-side language model of paraphrases as a decoding feature. 
Stalls and Knight (1998) propose a back-transliteration technique to recover original spelling in Roman script given a foreign name or a loanword in Arabic text, which consist of three models: a model to convert an Arabic string to English phone sequences, a model to convert English phone sequences to English phrases, a language model to rescore the English phrases. They use weighted finite state transducers for decoding. Al-Onaizan and Knight (2002b) propose a spelling-based source-channel model for transliteration (Brown et al., 1993) , which directly maps English letter sequences into Arabic letter sequences, and therefore overcomes Stalls and Knight's major drawback that needs a manual lexicon of English pronunciations. Sherif and Kondrak (2007) propose a substring-based transliteration technique inspired by phrase based translation models and show that substring (i.e. phrase) models out-perform letter (i.e. word) models of Al-Onaizan and Knight (2002b) . Their approach is most similar to the current approach in that we both adopt phrase-based translation models for transliteration. The current approach and Sherif and Kondrak (2007) , however, diverge in most technical details including word alignments, phrase extraction heuristics and decoding, although it is not clear how they estimate transliteration probabilities. Crucially, we use the same set of decoder feature functions (excluding distortion models) as the end-to-end phrase translation system including lexical weights for phrases and a sentence in both directions and word/phrase penalties, whereas Sherif and Kondrak (2007) use only transliteration and language models for substring transducer. We noted in Section 4 that inclusion of all decoder feature functions improves the accuracy by 15.6% absolute, compared with using just translation and language models for decoding.", "cite_spans": [ { "start": 353, "end": 377, "text": "Stalls and Knight (1998)", "ref_id": "BIBREF23" }, { "start": 875, "end": 895, "text": "(Brown et al., 1993)", "ref_id": "BIBREF4" }, { "start": 1087, "end": 1112, "text": "Sherif and Kondrak (2007)", "ref_id": "BIBREF22" }, { "start": 1295, "end": 1324, "text": "Al-Onaizan and Knight (2002b)", "ref_id": null }, { "start": 1482, "end": 1507, "text": "Sherif and Kondrak (2007)", "ref_id": "BIBREF22" }, { "start": 1938, "end": 1963, "text": "Sherif and Kondrak (2007)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We proposed a confusion network decoding to disambiguate Arabic names between name and nonname interpretations of an input word and character-based phrase transliteration models for NAME/OOV transliteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Name classing at the pre-processing step, coupled with name transliteration at the post-processing step, enables the system to accurately translate OOV names. Robust TM/LM probability estimations of names on the class label $name enable the system to correctly translate names even when the name interpretation of an in-vocabulary word is absent from the training data. Confusion network decoding can recover from name classing errors by choosing an alternative decoding path supported by decoder feature functions, obviating the need for stipulating a count threshold of an input token for name translation. 
The character-based phrase transliteration system achieves 90% exact match accuracy on 500 unique name pairs, utilizing all of the phrase decoder feature functions except for distortion models. We capture the asymmetries of English and Arabic vowel systems by treating any instance of an unaligned English vowel as a spontaneous word that can be attached to the preceding or following target phrases for phrase acquisition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although we proposed the confusion network decoding and character-based phrase transliteration models in the contexts of Arabic name disambiguation and transliteration tasks, the techniques are language independent and may be applied to any languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The source phrase $name translates to the target phrase $name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Bi-directional word alignment symmetrization methods, as defined inOch and Ney (2003), include union, intersection and refined. 11 BOLT stands for Broad Operational Language Translation and BOLT-C focuses on speech-to-speech translation with dialog management.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We ensure that any name classed input word $name is translated into $name by adding $name to the translation vocabulary, and the input word for $name is transliterated in the post-processing stage.13 We also evaluated another system, called name_st, which is trained with name classing and decoded with name classing using the baseline decoder. BLEU scores on TransTac.eval and TransTac.oov indicated that model training and decoding with name classing (name_st) is only slightly better than model training without name classing and decoding with name classing (name_t). 14 For TransTac.eval data, we selected the sentences containing words tagged as name, i.e. NOUN_PROP, by the automatic part-ofspeech tagger. The name ratio around 0.5% inTable 2is computed on the basis of human annotations on the reference translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The spellings of the CN decoder output are normalized as follows: 38 instances of names, 2 instances of 's to is, 2 instances of the city of arar to arar city and 1 instance of talk with to speak to. Only name spelling normalizations were necessary for other system outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been funded by the Defense Advanced Research Projects Agency BOLT program, Contract No. HR0011-12-C-0015. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the view of DARPA. We would like to thank Lazkin Tahir for his tireless effort on human evaluations. 
We also thank anonymous reviewers for their helpful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translating Named Entities Using Monolingual and Bilingual Resources", "authors": [ { "first": "K", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40 th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "400--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al-Onaizan and K. Knight. 2002. Translating Named Entities Using Monolingual and Bilingual Resources. In Proceedings of the 40 th Annual Meeting of the Association for Computational Linguistics, pages 400-408.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Machine Transliteration of Names in Arabic Text", "authors": [ { "first": "Y", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Association for Computational Linguistics Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Al-Onaizan and K. Knight. 2002. Machine Transliteration of Names in Arabic Text. In Proceedings of the Association for Computational Linguistics Workshop on Computational Approaches to Semitic Languages.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Distortion models for Statistical Machine Translation", "authors": [ { "first": "Y", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "K", "middle": [], "last": "Papineni", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21 st International Conference on Computational Linguistics and the 44 th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "529--536", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Al-Onaizan and K. Papineni. 2006. Distortion models for Statistical Machine Translation. In Proceedings of the 21 st International Conference on Computational Linguistics and the 44 th Annual Meeting of the Associa- tion for Computational Linguistics, pages 529-536.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Speech translation by confusion network decoding", "authors": [ { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "1297--1300", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Bertoldi, R. Zens, and M. Federico.2007. Speech translation by confusion network decoding. 
In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1297-1300.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, S. Della Pietra, V. Della Pietra and R. Mercer. 1993. The mathematics of statistical machine transla- tion: Parameter estimation. In Computational Linguistics, 19(2), pages 263-311.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An Empirical Study of Smoothing Techniques for Language Modeling", "authors": [ { "first": "S", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Chen and J. Goodman. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. TR-10- 98. Computer Science Group. Harvard University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generalizing Word Lattice Translation", "authors": [ { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "S", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceeding of the 46 th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1012--1020", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Dyer, S. Muresan, and P. Resnik. 2008. Generalizing Word Lattice Translation. In Proceeding of the 46 th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1012- 1020.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Arabic Preprocessing Schemes for Statistical Machine Translation", "authors": [ { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "F", "middle": [], "last": "Sadat", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "49--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Habash and F. Sadat. 2006. Arabic Preprocessing Schemes for Statistical Machine Translation, In Proceed- ings of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 49-52.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving Named Entity Translation by Exploiting Comparable and Parallel Corpora", "authors": [ { "first": "A", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "H", "middle": [], "last": "Fahmy", "suffix": "" }, { "first": "H", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2007, "venue": "Proceeding RANLP'07", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Hassan, H. Fahmy, and H. Hassan. 2007. Improving Named Entity Translation by Exploiting Comparable and Parallel Corpora. 
In Proceeding RANLP'07, pages 1-6.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Name Translation in Statistical Machine Translation:Learning When to Transliterate", "authors": [ { "first": "U", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "H", "middle": [], "last": "Daume", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46 th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "389--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. Hermjakob, K. Knight, and H. Daume III. 2008. Name Translation in Statistical Machine Transla- tion:Learning When to Transliterate. In Proceedings of the 46 th Annual Meeting of the Association for Compu- tational Linguistics, pages 389-397.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Incremental Topic-Based Translation Model Adaptation for Conversational Spoken Language Translation", "authors": [ { "first": "S", "middle": [], "last": "Hewavitharana", "suffix": "" }, { "first": "D", "middle": [], "last": "Mehay", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananthakrishnan", "suffix": "" }, { "first": "P", "middle": [], "last": "Natarajan", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51 st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "697--701", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Hewavitharana, D. Mehay, S. Ananthakrishnan, and P. Natarajan. 2013. Incremental Topic-Based Translation Model Adaptation for Conversational Spoken Language Translation. In Proceedings of the 51 st Annual Meet- ing of the Association for Computational Linguistics, pages 697-701.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "When Harry Met Harri, and : Cross-lingual Name Spell", "authors": [ { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Emami", "suffix": "" }, { "first": "I", "middle": [], "last": "Zitouni", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Huang, A. Emami, and I. Zitouni. 2008. When Harry Met Harri, and : Cross-lingual Name Spell-", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Proceedings of the Empirical Methods in Natural Language Processing", "authors": [ { "first": "", "middle": [], "last": "Ing Normalization", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "391--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "ing Normalization. In Proceedings of the Empirical Methods in Natural Language Processing, pages 391-399.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical Phrase-Based Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. Josef Och, and D. Marcu. 2003. Statistical Phrase-Based Translation. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Lan- guage Technology -Volume 1, pages 127-133.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "H", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "B", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "W", "middle": [], "last": "Shen", "suffix": "" }, { "first": "C", "middle": [], "last": "Moran", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "A", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "E", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45 th Annual Meeting of the Association of Computational Linguistics on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical ma- chine translation. In Proceedings of the 45 th Annual Meeting of the Association of Computational Linguistics on Interactive Poster and Demonstration Sessions, pages 177-180.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Proceedings of the 41 st Annual Meeting of Association for Computational Linguistics", "authors": [ { "first": "Y", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "O", "middle": [], "last": "Emam", "suffix": "" }, { "first": "H", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2003, "venue": "", "volume": "1", "issue": "", "pages": "399--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Lee, K. Papineni, S. Roukos, O. Emam and H. Hassan. 2003. In Proceedings of the 41 st Annual Meeting of Association for Computational Linguistics -Volume 1, pages 399-406.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Morphological Analysis for Statistical Machine Translation", "authors": [ { "first": "Y", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics: Short Papers", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Lee. 2004. Morphological Analysis for Statistical Machine Translation. 
In Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics: Short Pa- pers, pages 57-60.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Name-aware Machine Translation", "authors": [ { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "H", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Q", "middle": [], "last": "Li", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "604--614", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Li, J. Zheng, H. Ji, Q. Li and W. Wang. 2013. Name-aware Machine Translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 604-614.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Och and H. Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. In Computational Linguistics 29(1), pages 19-51. MIT Press.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Paraphrase Lattice for Statistical Machine Translation", "authors": [ { "first": "T", "middle": [], "last": "Onish", "suffix": "" }, { "first": "M", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48 th Annual Meeting of the Association for Computational Linguistics Short Papers", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Onish, M. Utiyama and E. Sumita. 2010. Paraphrase Lattice for Statistical Machine Translation. In Proceed- ings of the 48 th Annual Meeting of the Association for Computational Linguistics Short Papers, pages 1-5.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BLEU: a Method for Automatic Evaluation of Machine Translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40 th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40 th Annual Meeting of the Association for Computational Linguistics, pages 311-318.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Maximum Entropy Model for Part-Of-Speech Tagging", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ratnaparkhi. 
1996. A Maximum Entropy Model for Part-Of-Speech Tagging. In Proceedings of the Empiri- cal Methods in Natural Language Processing, pages 133-142.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Substring-Based Transliteration", "authors": [ { "first": "T", "middle": [], "last": "Sherif", "suffix": "" }, { "first": "G", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45 th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "944--951", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Sherif and G. Kondrak. 2007. Substring-Based Transliteration. In Proceedings of the 45 th Annual Meeting of the Association for Computational Linguistics, pages 944-951.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Translating Names and Technical Terms in Arabic Text", "authors": [ { "first": "B", "middle": [ "G" ], "last": "Stalls", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. G. Stalls and K. Knight. 1998. Translating Names and Technical Terms in Arabic Text. In Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A Projection Extension Algorithm for Statistical Machine Translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann. 2003. A Projection Extension Algorithm for Statistical Machine Translation. In Proceedings of the Empirical Methods in Natural Language Processing, pages 1-8.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word Reordering and a Dynamic Programming Beam-Search Algorithm for Statistical MT", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "97--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann and H. Ney. 2003. Word Reordering and a Dynamic Programming Beam-Search Algorithm for Sta- tistical MT. Computational Linguistics 29(1), pages 97-133. MIT Press.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "HMM-based word alignment in statistical machine translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16 th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, H. Ney and C. Tillmann. 1996. HMM-based word alignment in statistical machine translation. 
In Proceedings of the 16th International Conference on Computational Linguistics, Volume 2, pages 836-841.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An Approach for Extracting and Disambiguating Arabic Person's Names Using Clustered Dictionaries and Scored Patterns", "authors": [ { "first": "O", "middle": [], "last": "Zayed", "suffix": "" }, { "first": "S", "middle": [], "last": "El-Beltagy", "suffix": "" }, { "first": "O", "middle": [], "last": "Haggag", "suffix": "" } ], "year": 2013, "venue": "Natural Language Processing and Information Systems Lecture Notes in Computer Science", "volume": "7934", "issue": "", "pages": "201--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Zayed, S. El-Beltagy and O. Haggag. 2013. An Approach for Extracting and Disambiguating Arabic Person's Names Using Clustered Dictionaries and Scored Patterns. In Natural Language Processing and Information Systems, Lecture Notes in Computer Science, Vol. 7934, pages 201-212.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding", "authors": [ { "first": "B", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "S", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics Short Papers", "volume": "", "issue": "", "pages": "21--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Zhao and S. Chen. 2009. A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding. In Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics Short Papers, pages 21-24.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improvements in phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney. 2004. Improvements in phrase-based statistical machine translation.
In Proceedings of Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics, pages 257-264.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "I live in an apartment near the school in khadraa" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Process Flow of Arabic to English Phrase Translation Decoding" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "(Koehn et al., 2003) (13) Lexical weights p_w(t|s,a) & p_w(s|t,a) (14) Word and phrase penalties (Zens and Ney, 2004). Lexical weight p_w(s|t,a) is computed according to (15), where j = 1, ..., n source word positions and i = 1, ..., m target word positions within a phrase, N = source phrase length, and w(e|f) = the lexical probability distribution. Lexical weight p_w(t|s,a) in (13) is computed according to (16), where K = number of phrases in the input sentence and k = the kth phrase." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Confusion Network Decoding Paths for Name Classed Input. Footnote 4: Estimated in the manner described in Koehn et al. (2003). Footnote 5: Arabic is represented by the Buckwalter transliteration scheme. Footnote 6: The decoding scores are computed as costs on the basis of the -log likelihood of various component models; therefore, a smaller decoding cost indicates a higher translation quality." }, "TABREF1": { "html": null, "num": null, "content": "
[Figure: Process Flow of Arabic to English Phrase Translation Decoding. Arabic Input → Pre-processing (Spelling normalization, Morpheme Segmentation) → SMT Decoding → Post-processing (De-tokenization) → English Output]
for name classing. The part-of-speech tagger is trained on the combination of LDC-released Arabic Treebank data containing about 3 million morpheme tokens from MSA (modern standard Arabic) and in-house annotated TransTac Iraqi Arabic data containing about 63k morpheme tokens. F-score of the tagger on proper noun tags, NOUN_PROP, is about 93% on 2,044 MSA name tokens derived from Arabic Treebank: Part 3 v 3.2 (LDC2010T08), and about 81.4% on 2,631 Iraqi Arabic name tokens derived from the DARPA TransTac corpus.
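Since the class labels produced by this tagger feed a simple two-arc confusion network (the $name label alongside the original word, as in the decoding-paths figure above), a minimal sketch of that construction may help. The function and variable names below are illustrative assumptions rather than the paper's implementation, and the arc scores derived from name tag scores are omitted.

```python
# Minimal sketch (illustrative, not the paper's code): turn POS-tagged input
# into a simple confusion network. Each position carries one arc by default;
# positions tagged as proper nouns (NOUN_PROP) get a second, parallel $name
# arc, so the decoder can choose between the name and non-name reading by
# decoding cost.

def build_confusion_network(tagged_tokens, name_tag="NOUN_PROP", class_label="$name"):
    """tagged_tokens: list of (token, pos_tag) pairs produced by the tagger."""
    network = []
    for token, tag in tagged_tokens:
        arcs = [token]
        if tag == name_tag:
            # Keep both readings; tagger scores would normally be attached to
            # these arcs and combined with the translation model features.
            arcs.append(class_label)
        network.append(arcs)
    return network

# Example with placeholder tokens:
# build_confusion_network([("w1", "NOUN"), ("w2", "NOUN_PROP")])
# -> [['w1'], ['w2', '$name']]
```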
", "type_str": "table", "text": "" }, "TABREF2": { "html": null, "num": null, "content": "
Table 1
Systems | Character Alignments | Symmetrization | Target spontaneous words | Accuracy
SYSTEM1 | GIZA++ | Union | None | 21.6%
SYSTEM2 | HMM | Refined | None | 86.8%
SYSTEM3 | GIZA++ | Union | All English vowels: a, e, i, o, u | 89.2%
SYSTEM4 | GIZA++ & HMM | Union | All English vowels: a, e, i, o, u | 90.0%
Character alignment example: غ د ر ا ن ا و ي ↔ gha da ranawi
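The "Target spontaneous words" column refers to treating English vowels as tokens that the character aligner may leave unaligned and later attach to a neighbouring character phrase. A minimal sketch of how such target-side tokens could be flagged is given below; the function name and the assumed spelling of the example name are illustrative, not the paper's code.

```python
# Minimal sketch (illustrative, not the paper's code): split an English name
# into character tokens and flag the vowels as "spontaneous" tokens, which a
# character aligner could leave unaligned and attach to neighbouring phrases.

ENGLISH_VOWELS = set("aeiou")

def char_tokens_with_spontaneous_flag(name):
    """Return (character, is_spontaneous_vowel) pairs for an English name."""
    return [(c, c in ENGLISH_VOWELS) for c in name.lower() if not c.isspace()]

# Example with an assumed spelling of the name from the alignment example above:
print(char_tokens_with_spontaneous_flag("ghadaranawi"))
# [('g', False), ('h', False), ('a', True), ('d', False), ('a', True),
#  ('r', False), ('a', True), ('n', False), ('a', True), ('w', False),
#  ('i', True)]
```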
", "type_str": "table", "text": "is trained with 15 iterations of IBM MODEL 1 and 6 iterations of HMM. evaluations in Section 5, we use SYSTEM4. Exact match accuracy of SYSTEM4 on the 52 unseen name pairs is 46%. Name transliteration accuracy on 500 names according to various phrase extraction techniques" } } } }