{
"paper_id": "N13-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:41:25.468730Z"
},
"title": "Dialectal Arabic to English Machine Translation: Pivoting through Modern Standard Arabic",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": "",
"affiliation": {},
"email": "habash@ccls.columbia.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Modern Standard Arabic (MSA) has a wealth of natural language processing (NLP) tools and resources. In comparison, resources for dialectal Arabic (DA), the unstandardized spoken varieties of Arabic, are still lacking. We present ELISSA, a machine translation (MT) system for DA to MSA. ELISSA employs a rule-based approach that relies on morphological analysis, transfer rules and dictionaries in addition to language models to produce MSA paraphrases of DA sentences. ELISSA can be employed as a general preprocessor for DA when using MSA NLP tools. A manual error analysis of ELISSA's output shows that it produces correct MSA translations over 93% of the time. Using ELISSA to produce MSA versions of DA sentences as part of an MSA-pivoting DA-to-English MT solution, improves BLEU scores on multiple blind test sets between 0.6% and 1.4%.",
"pdf_parse": {
"paper_id": "N13-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Modern Standard Arabic (MSA) has a wealth of natural language processing (NLP) tools and resources. In comparison, resources for dialectal Arabic (DA), the unstandardized spoken varieties of Arabic, are still lacking. We present ELISSA, a machine translation (MT) system for DA to MSA. ELISSA employs a rule-based approach that relies on morphological analysis, transfer rules and dictionaries in addition to language models to produce MSA paraphrases of DA sentences. ELISSA can be employed as a general preprocessor for DA when using MSA NLP tools. A manual error analysis of ELISSA's output shows that it produces correct MSA translations over 93% of the time. Using ELISSA to produce MSA versions of DA sentences as part of an MSA-pivoting DA-to-English MT solution, improves BLEU scores on multiple blind test sets between 0.6% and 1.4%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Much work has been done on Modern Standard Arabic (MSA) natural language processing (NLP) and machine translation (MT), especially Statistical MT (SMT). MSA has a wealth of resources in terms of morphological analyzers, disambiguation systems, and parallel corpora. In comparison, research on dialectal Arabic (DA), the unstandardized spoken varieties of Arabic, is still lacking in NLP in general and MT in particular. In this paper we present ELISSA, our DA-to-MSA MT system, and show how it can help improve the translation of highly dialectal Arabic text into English by pivoting on MSA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The ELISSA approach can be summarized as follows. First, ELISSA uses different techniques to identify dialectal words and multi-word constructions (phrases) in a source sentence. Then, ELISSA produces MSA paraphrases for the selected words and phrase using a rule-based component that depends on the existence of a dialectal morphological analyzer, a list of morphosyntactic transfer rules, and DA-MSA dictionaries. The resulting MSA is in a lattice form that we pass to a language model for nbest decoding. The output of ELISSA, whether a top-1 choice sentence or n-best sentences, is passed to an MSA-English SMT system to produce the English translation sentence. ELISSA-based MSA-pivoting for DA-to-English SMT improves BLEU scores (Papineni et al., 2002) on three blind test sets between 0.6% and 1.4%. A manual error analysis of translated words shows that ELISSA produces correct MSA translations over 93% of the time.",
"cite_spans": [
{
"start": 736,
"end": 759,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows: Section 2 motivates the use of ELISSA to improve DA-English SMT with an example. Section 3 discusses some of the challenges associated with processing Arabic and its dialects. Section 4 presents related work. Section 5 details ELISSA and its approach and Section 6 presents results evaluating ELISSA under a variety of conditions. Table 1 shows a motivating example of how pivoting on MSA can dramatically improve the translation quality of a statistical MT system that is trained on mostly MSA-to-English parallel corpora. In this example, we use Google Translate's online Arabic-English SMT system. 1 The table is divided into two parts. The top part shows a dialectal (Levantine) sentence, its reference translation to English, and its Google Translate translation. The Google Translate translation clearly struggles with most of the DA words, which were probably unseen in the training data (i.e., out-of-vocabulary -OOV) and were con-DA source bhAlHAlh hAy mA Hyktbwlw \u03c2HyT AlSfHh Al\u0161xSyh tb\u03c2w wlA bdn yAh yb\u03c2tln kwmyntAt l\u00c2nw mAxbrhwn AymtA rH yrwH \u03c2Albld.",
"cite_spans": [
{
"start": 650,
"end": 651,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 380,
"end": 387,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this case, they will not write on his profile wall and they do not want him to send them comments because he did not tell them when he will go to the country.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Reference",
"sec_num": null
},
{
"text": "Bhalhalh Hi Hictpoulo Ahat Profile Tbau not hull Weah Abatln Comintat Anu Mabarhun Oamta welcomed calls them Aalbuld.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Google Translate",
"sec_num": null
},
{
"text": "Human DA-to-MSA fy h\u00f0h AlHAlh ln yktbwA lh \u03c2l\u00fd HA\u0177T SfHth Al\u0161xSyh wlA yrydwnh \u00c2n yrsl lhm t\u03c2lyqAt l\u00c2nh lm yxbrhm mt\u00fd sy\u00f0hb Al\u00fd Albld.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Google Translate",
"sec_num": null
},
{
"text": "In this case it would not write to him on the wall of his own and do not want to send their comments because he did not tell them when going to the country. sidered proper nouns (transliterated and capitalized). The lack of DA-English parallel corpora suggests pivoting on MSA can improve the translation quality. In the bottom part of the table, we show a human MSA translation of the DA sentence above and its Google translation. We see that the results are quite promising. The goal of ELISSA is to model this DA-MSA translation automatically. In Section 5.4, we revisit this example to discuss ELISSA's performance on it. We show its output and its corresponding Google translation in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 689,
"end": 696,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Google Translate",
"sec_num": null
},
{
"text": "Contemporary Arabic is in fact a collection of varieties: MSA, the official language of the Arab World, which has a standard orthography and is used in formal settings; and DAs, the commonly used informal native varieties, which have no standard orthographies but have an increasing presence on the web. Arabic, in general, is a morphologically complex language which has rich inflectional morphology, expressed both templatically and affixationally, and several classes of attachable clitics. For example, the Arabic word w+s+y-ktb-wn+hA 2 'and they will write it' has two proclitics (+ w+ 'and' and + s+ 'will'), one prefix -y-'3rd 2 Arabic transliteration throughout the paper is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007) : (in alphabetical order) Abt\u03b8jHxd\u00f0rzs\u0161SDT\u010e\u03c2\u03b3fqklmnhwy and the additional symbols: ' , \u00c2 ,\u01cd ,\u0100 ,\u0175 ,\u0177 ,h , \u00fd .",
"cite_spans": [
{
"start": 634,
"end": 635,
"text": "2",
"ref_id": null
},
{
"start": 731,
"end": 752,
"text": "(Habash et al., 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges for Processing Arabic and its Dialects",
"sec_num": "3"
},
{
"text": "person', one suffix --wn 'masculine plural' and one pronominal enclitic + +hA 'it/her'. DAs differ from MSA phonologically, morphologically and to a lesser degree syntactically. The morphological differences are most noticeably expressed in the use of clitics and affixes that do not exist in MSA. For instance, the Levantine Arabic equivalent of the MSA example above is w+H+yktb-w+hA 'and they will write it'. The optionality of vocalic diacritics helps hide some of the differences resulting from vowel changes; compare the diacritized forms: Levantine wHayikitbuwhA and MSA wasayaktubuwnahA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges for Processing Arabic and its Dialects",
"sec_num": "3"
},
{
"text": "All of the NLP challenges of MSA (e.g., optional diacritics and spelling inconsistency) are shared by DA. However, the lack of standard orthographies for the dialects and their numerous varieties pose new challenges. Additionally, DAs are rather impoverished in terms of available tools and resources compared to MSA, e.g., there is very little parallel DA-English corpora and almost no MSA-DA parallel corpora. The number and sophistication of morphological analysis and disambiguation tools in DA is very limited in comparison to MSA (Duh and Kirchhoff, 2005; Habash and Rambow, 2006; Abo Bakr et al., 2008; Habash, 2010; Salloum and Habash, 2011; Habash et al., 2013) . MSA tools cannot be effectively used to handle DA, e.g., Habash and Rambow (2006) report that over onethird of Levantine verbs cannot be analyzed using an MSA morphological analyzer.",
"cite_spans": [
{
"start": 536,
"end": 561,
"text": "(Duh and Kirchhoff, 2005;",
"ref_id": "BIBREF8"
},
{
"start": 562,
"end": 586,
"text": "Habash and Rambow, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 587,
"end": 609,
"text": "Abo Bakr et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 610,
"end": 623,
"text": "Habash, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 624,
"end": 649,
"text": "Salloum and Habash, 2011;",
"ref_id": "BIBREF28"
},
{
"start": 650,
"end": 670,
"text": "Habash et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 730,
"end": 754,
"text": "Habash and Rambow (2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges for Processing Arabic and its Dialects",
"sec_num": "3"
},
{
"text": "Dialectal Arabic NLP. Several researchers have explored the idea of exploiting existing MSA rich resources to build tools for DA NLP (Chiang et al., 2006) . Such approaches typically expect the presence of tools/resources to relate DA words to their MSA variants or translations. Given that DA and MSA do not have much in terms of parallel corpora, rule-based methods to translate DA-to-MSA or other methods to collect word-pair lists have been explored. For example, Abo Bakr et al. 2008introduced a hybrid approach to transfer a sentence from Egyptian Arabic into MSA. This hybrid system consisted of a statistical system for tokenizing and tagging, and a rule-based system for constructing diacritized MSA sentences. Moreover, Al-Sabbagh and Girju (2010) described an approach of mining the web to build a DA-to-MSA lexicon. In the context of DA-to-English SMT, Riesa and Yarowsky (2006) presented a supervised algorithm for online morpheme segmentation on DA that cut the OOV words by half.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Chiang et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 865,
"end": 890,
"text": "Riesa and Yarowsky (2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Machine Translation for Closely Related Languages. Using closely related languages has been shown to improve MT quality when resources are limited. Haji\u010d et al. (2000) argued that for very close languages, e.g., Czech and Slovak, it is possible to obtain a better translation quality by using simple methods such as morphological disambiguation, transfer-based MT and word-for-word MT. Zhang (1998) introduced a Cantonese-Mandarin MT that uses transformational grammar rules. In the context of Arabic dialect translation, Sawaf (2010) built a hybrid MT system that uses both statistical and rule-based approaches for DA-to-English MT. In his approach, DA is normalized into MSA using a dialectal morphological analyzer. In previous work, we presented a rule-based DA-MSA system to improve DA-to-English MT (Salloum and Habash, 2011; Salloum and Habash, 2012) . Our approach used a DA morphological analyzer (ADAM) and a list of hand-written morphosyntactic transfer rules. This use of \"resource-rich\" related languages is a specific variant of the more general approach of using pivot/bridge languages (Utiyama and Isahara, 2007; Kumar et al., 2007) . In the case of MSA and DA variants, it is plausible to consider the MSA variants of a DA phrase as monolingual paraphrases (Callison-Burch et al., 2006; Du et al., 2010) . Also related is the work by Nakov and Ng (2011) , who use morphological knowledge to generate paraphrases for a morphologically rich language, Malay, to extend the phrase table in a Malay-to-English SMT system.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "Haji\u010d et al. (2000)",
"ref_id": "BIBREF16"
},
{
"start": 386,
"end": 398,
"text": "Zhang (1998)",
"ref_id": "BIBREF34"
},
{
"start": 806,
"end": 832,
"text": "(Salloum and Habash, 2011;",
"ref_id": "BIBREF28"
},
{
"start": 833,
"end": 858,
"text": "Salloum and Habash, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 1102,
"end": 1129,
"text": "(Utiyama and Isahara, 2007;",
"ref_id": "BIBREF32"
},
{
"start": 1130,
"end": 1149,
"text": "Kumar et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 1275,
"end": 1304,
"text": "(Callison-Burch et al., 2006;",
"ref_id": "BIBREF4"
},
{
"start": 1305,
"end": 1321,
"text": "Du et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 1352,
"end": 1371,
"text": "Nakov and Ng (2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Pivoting on MSA or acquiring more DA-English data? Zbib et al. (2012) demonstrated an approach to cheaply obtaining DA-English data. They used Amazon's Mechanical Turk (MTurk) to create a DA-English parallel corpus of 1.5M words and added it to a 150M MSA-English parallel corpus to create the training corpus of their SMT system. They also used MTurk to translate their dialectal test set to MSA in order to compare the MSA-pivoting approach to the direct translation from DA to English approach. They showed that even though pivoting on MSA (produced by Human translators in an oracle experiment) can reduce OOV rate to 0.98% from 2.27% for direct translation (without pivoting), it improves by 4.91% BLEU while direct translation improves by 6.81% BLEU over their 12.29% BLEU baseline (direct translation using the 150M MSA system). They concluded that simple vocabulary coverage is not sufficient and the domain mismatch is a more important problem. The approach we take in this paper is orthogonal to such efforts to build parallel data. We plan to study interactions between the two types of solutions in the future.",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "Zbib et al. (2012)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Our work is most similar to Sawaf (2010)'s MSApivoting approach. In his approach, DA is normalized into MSA using character-based DA normalization rules, a DA morphological analyzer, a DA normalization decoder that relies on language models, and a lexicon. Similarly, we use some character normalization rules, a DA morphological analyzer, and DA-MSA dictionaries. In contrast, we use hand-written morphosyntactic transfer rules that focus on translating DA morphemes and lemmas to their MSA equivalents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In our previous work (Salloum and Habash, 2011; Salloum and Habash, 2012) , we applied our approach to tokenized Arabic and our DA-MSA transfer component used feature transfer rules only. We did not use a language model to pick the best path; instead we kept the ambiguity in the lattice and passed it to our SMT system. In contrast, in this paper, we run ELISSA on untokenized Arabic, we use feature, lemma, and surface form transfer rules, and we pick the best path of the generated MSA lattice through a language model. Certain aspects of our approach are similar to Riesa and Yarowsky (2006) 's, in that we use morphological analysis for DA to help DA-English MT; but unlike them, we use a rule-based approach to model DA morphology.",
"cite_spans": [
{
"start": 21,
"end": 47,
"text": "(Salloum and Habash, 2011;",
"ref_id": "BIBREF28"
},
{
"start": 48,
"end": 73,
"text": "Salloum and Habash, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 570,
"end": 595,
"text": "Riesa and Yarowsky (2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "ELISSA is a DA-to-MSA MT System. ELISSA uses a rule-based approach (with some statistical components) that relies on the existence of a DA morphological analyzer, a list of hand-written transfer rules, and DA-MSA dictionaries to create a mapping of DA to MSA words and construct a lattice of possible sentences. ELISSA uses a language model to rank and select the generated sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELISSA",
"sec_num": "5"
},
{
"text": "ELISSA supports untokenized (raw) input only. ELISSA supports three types of output: top-1 choice, an n-best list or a map file that maps source words/phrases to target phrases. The top-1 and nbest lists are determined using an untokenized MSA language model to rank the paths in the MSA translation output lattice. This variety of output types makes it easy to plug ELISSA with other systems and to use it as a DA preprocessing tool for other MSA systems, e.g., MADA (Habash and Rambow, 2005) or AMIRA (Diab et al., 2007) .",
"cite_spans": [
{
"start": 468,
"end": 493,
"text": "(Habash and Rambow, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 503,
"end": 522,
"text": "(Diab et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELISSA",
"sec_num": "5"
},
{
"text": "ELISSA's approach consists of three major steps preceded by a preprocessing and normalization step, that prepares the input text to be handled (e.g., UTF-8 cleaning, Alif/Ya normalization, word-lengthening normalization), and followed by a post-processing step, that produces the output in the desired form (e.g., encoding choice). The three major steps are Selection, Translation, and Language Modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELISSA",
"sec_num": "5"
},
{
"text": "In the first step, ELISSA identifies which words or phrases to paraphrase and which words or phrases to leave as is. ELISSA provides different methods (techniques) for selection, and can be configured to use different subsets of them. In Section 6 we use the term \"selection mode\" to denote a subset of selection methods. Selection methods are classified into Word-based selection and Phrase-based selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "Word-based selection. Methods of this type fall in the following categories: a. User token-based selection: The user can mark specific words for selection using the tag '/DIA' (stands for 'dialect') after each word to select.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "b. User type-based selection: The user can specify a list of words to select from, e.g., OOVs. Also the user can provide a list of words and their frequencies and specify a cut-off threshold to prevent selecting a frequent word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "c. Morphology-based word selection: ELISSA uses ADAM (Salloum and Habash, 2011) to select words that have DA analyses only (DIAONLY) or DA/MSA analyses (DIAMSA).",
"cite_spans": [
{
"start": 53,
"end": 79,
"text": "(Salloum and Habash, 2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "d. Dictionary-based selection: ELISSA selects words based on their existence in the DA side of our DA-MSA dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "e. All: ELISSA selects every word in an input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "Phrase-based selection. This selection type uses hand-written rules to identify dialectal multi-word constructions that are mappable to single or multiword MSA constructions. The current count of these rules is 25. Table 2 presents some rule categories and related examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "In the current version of ELISSA, words can be selected using either the phrase-based selection method or a word-based selection method, but not both. Phrase-based selection has precedence. We evaluate different settings for selection step in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "In this step, ELISSA translates the selected words and phrases to their MSA equivalent paraphrases. The specific type of selection determines the type of the translation, e.g., phrase-based selected words are translated using phrase-based translation rules. The MSA paraphrases are then used to form an MSA lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "Word-based translation. This category has two types of translation techniques: surface translation that uses DA-to-MSA surface-to-surface (S2S) transfer rules (TRs) and deep (morphological) translation that uses the classic rule-based machine translation flow: analysis, transfer and generation. The",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "Aljy\u0161 AlwTny btA\u03c2nA jy\u0161nA AlwTny",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "'the-army the-national ours' 'our-army the-national' Verb +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "HDrlhA yAhn HDrhm lhA flipped direct and indirect objects 'he-prepared-for-her them' 'he-prepared-them for-her' Special dialectal expressions bdw AyAhA yrydhA 'his-desire her'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "'he-desires-her' Negation + verb wmA Hyktbwlw wln yktbwA lh 'and-not they-will-write-to-him'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "'and-will-not they-write to-him' Negation + agent noun fm\u0161 lAqyh flA tjd 'so-not finding'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "'so-not she-finds' Negation + closed-class words mA \u03c2dkm lys ldykm 'not with-you'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "'not with-you' Figure 1 : An example illustrating the analysis-transfer-generation steps to translate a dialectal multi-word expression into its MSA equivalent phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "dialectal morphological analysis step uses ADAM (Salloum and Habash, 2011) to get a list of dialectal analyses. The morphosyntactic transfer step uses lemma-to-lemma (L2L) and features-tofeatures (F2F) transfer rules to change lemmas, clitics or features, and even split up the dialectal word into multiple MSA word analyses (such as splitting negation words and indirect objects). The MSA morphological generation step uses the general tokenizer/generator TOKAN (Habash, 2007) to generate untokenized surface form words. For more details, see Salloum and Habash (2011) .",
"cite_spans": [
{
"start": 48,
"end": 74,
"text": "(Salloum and Habash, 2011)",
"ref_id": "BIBREF28"
},
{
"start": 463,
"end": 477,
"text": "(Habash, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 544,
"end": 569,
"text": "Salloum and Habash (2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "Phrase-based translation. Unlike the wordbased translation techniques which map single DA words to single or multi-word MSA sequences, this technique uses hand-written multi-word transfer rules that map multi-word DA constructions to single or multi-word MSA constructions. In the current system, there are 47 phrase-based transfer rules. Many of the word-based morphosyntactic transfer rules are re-used for phrase-based translation. Figure 1 shows an example of a phrase-based morphological translation of the two-word DA sequence wmA rAHwlA 'And they did not go to her'. If these two words were spelled as a single word, wmArAHwlA, we would still get the same result using the word-based translation technique only. Table 2 shows some rule categories along with selection and translation examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 435,
"end": 443,
"text": "Figure 1",
"ref_id": null
},
{
"start": 719,
"end": 726,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Selection Examples Translation Examples Dialectal Idafa",
"sec_num": null
},
{
"text": "The language model (LM) component uses the SRILM lattice-tool for weight assignment and nbest decoding (Stolcke, 2002 (bhAlHAlh hAy) 1 (mA Hyktbwlw) 2 \u03c2HyT 3 (AlSfHh Al\u0161xSyh tb\u03c2w) 4 wlA (bdn yAh) 5 yb\u03c2tln 6 kwmyntAt 7 l\u00c2nw 8 mAxbrhwn 9 AymtA 10 (rH yrwH) 11 \u03c2Albld 12 .",
"cite_spans": [
{
"start": 103,
"end": 117,
"text": "(Stolcke, 2002",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "5.3"
},
{
"text": "In this case, they will not write on his profile wall and they do not want him to send them comments because he did not tell them when he will go to the country. (fy h\u00f0h AlHAlh) 1 (ln yktbwA lh) 2 (\u03c2ly HA\u0177T) 3 (SfHth Al\u0161xSyh) 4 wlA (yrydwnh An) 5 (yrsl Alyhm) 6 t\u03c2lyqAt 7 lAnh 8 (lm yxbrhm) 9 mty 10 sy\u00f0hb 11 (Aly Albld) 12 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Reference",
"sec_num": null
},
{
"text": "In this case it would not write to him on the wall of his own and do not want to send them comments that he did not tell them when going to the country. kenized Arabic words of Arabic Gigaword (Parker et al., 2009) . Users can specify their own LM file and/or interpolate it with our default LM. This is useful for adapting ELISSA's output to the Arabic side of the training data.",
"cite_spans": [
{
"start": 193,
"end": 214,
"text": "(Parker et al., 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Google Translate",
"sec_num": null
},
{
"text": "We revisit our motivating example in Section 2 and show automatic MSA-pivoting through ELISSA. Table 3 is divided into two parts. The first part is copied from Table 1 for convenience. The second part shows ELISSA's output on the dialectal sentence and its Google Translate translation. The produced MSA is not perfect, but is clearly an improvement over doing nothing as far as usability for MT into English.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Revisiting our Motivating Example",
"sec_num": "5.4"
},
{
"text": "In this section, we present two evaluations of ELISSA. The first is an extrinsic evaluation of ELISSA as part of MSA-pivoting for DA-to-English SMT. And the second is an intrinsic evaluation of the quality of ELISSA's MSA output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "We use the open-source Moses toolkit (Koehn et al., 2007) to build a phrase-based SMT system trained on mostly MSA data (64M words on the Arabic side) obtained from several LDC corpora including some limited DA data. Our system uses a standard phrase-based architecture. The parallel corpus is word-aligned using GIZA++ (Och and Ney, 2003) . Phrase translations of up to 10 words are extracted in the Moses phrase table. The language model for our system is trained on the English side of the bitext augmented with English Gigaword (Graff and Cieri, 2003) . We use a 5-gram language model with modified Kneser-Ney smoothing. Feature weights are tuned to maximize BLEU on the NIST MTEval 2006 test set using Minimum Error Rate Training (Och, 2003) . This is only done on the baseline systems. The English data is tokenized using simple punctuation-based rules. The Arabic side is segmented according to the Arabic Treebank (ATB) tokenization scheme (Maamouri et al., 2004) using the MADA+TOKAN morphological analyzer and tokenizer v3.1 (Habash and Rambow, 2005; Roth et al., 2008) . The Arabic text is also Alif/Ya normalized. MADA-produced Arabic lemmas are used for word alignment.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF17"
},
{
"start": 320,
"end": 339,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 532,
"end": 555,
"text": "(Graff and Cieri, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 735,
"end": 746,
"text": "(Och, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 948,
"end": 971,
"text": "(Maamouri et al., 2004)",
"ref_id": "BIBREF20"
},
{
"start": 1035,
"end": 1060,
"text": "(Habash and Rambow, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 1061,
"end": 1079,
"text": "Roth et al., 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1.1"
},
{
"text": "We use the same development (dev) and test sets used by Salloum and Habash (2011) (we will call them speech-dev and speech-test, respectively) and we compare to them in the next sections. We also evaluate on two web-crawled blind test sets: the Levantine test set presented in Zbib et al. (2012) (we will call it web-lev-test) and the Egyptian Dev-MT-v2 development data of the DARPA BOLT program (we will call it web-egy-test). The speech-dev set has 1,496 sentences with 32,047 untokenized Arabic words. The speech-test set has 1,568 sentences with 32,492 untokenized Arabic words. The web-levtest set has 2,728 sentences with 21,179 untokenized Arabic words. The web-egy-test set has 1,553 sentences with 21,495 untokenized Arabic words. The two speech test sets contain multi-dialect (e.g., Iraqi, Levantine, Gulf, and Egyptian) broadcast conversational (BC) segments (with three reference translations), and broadcast news (BN) segments (with only one reference, replicated three times). The web-egy-test has two references while the web-levtest has only one reference. Results are presented in terms of BLEU (Papineni et al., 2002) . All evaluation results are case insensitive.",
"cite_spans": [
{
"start": 1114,
"end": 1137,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1.1"
},
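BLEU, the metric used throughout, combines clipped n-gram precisions with a brevity penalty. A simplified single-reference sketch follows; the official evaluation additionally handles multiple references, smoothing variants, and tokenization details:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(hyps, refs, max_n=4):
    """Simplified single-reference corpus BLEU (no smoothing)."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    total = [0] * max_n     # total hypothesis n-grams per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts = Counter(ngrams(h, n))
            r_counts = Counter(ngrams(r, n))
            clipped[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            total[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

An identical hypothesis and reference yield a score of 1.0; a fully disjoint pair yields 0.0.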
{
"text": "We experimented with different method combinations in the selection and translation components in ELISSA. We use the term selection mode and translation mode to denote a certain combination of methods in selection or translation, respectively. Due to limited space, we only present the best selection mode variation experiments. Other selection modes were tried but they proved to be consistently lower than the rest. The 'F2F+L2L; S2S' wordbased translation mode (using morphological transfer of features and lemmas along with surface form transfer) showed to be consistently better than other method combinations across all selection modes. In this paper we only use 'F2F+L2L; S2S' word-based translation mode. Phrase-based translation mode is used when phrase-based selection mode is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the Development Set",
"sec_num": "6.1.2"
},
{
"text": "To rank paraphrases in the generated MSA lattice, we combine two 5-gram untokenized Arabic language models: one is trained on Arabic Gigaword data and the other is trained the Arabic side of our SMT training data. The use of the latter LM gave frequent dialectal phrases a higher chance to appear in ELISSA's output; thus, making the output \"more dialectal\" but adapting it to our SMT input. Experiments showed that using both LMs is better than using each one alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the Development Set",
"sec_num": "6.1.2"
},
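The two-LM ranking can be sketched as a log-linear combination of the LMs' log-probability scores. The scoring functions and weights below are hypothetical stand-ins for the paper's two 5-gram Arabic LMs (the candidate lattice is flattened into a list for simplicity):

```python
def score_with_lms(sentence, lm_a, lm_b, w_a=0.5, w_b=0.5):
    """Weighted sum of per-LM log-probabilities (log-linear combination).
    lm_a and lm_b are functions mapping a sentence to a log-probability."""
    return w_a * lm_a(sentence) + w_b * lm_b(sentence)

def rank_paraphrases(paraphrases, lm_a, lm_b):
    """Pick the top-1 MSA paraphrase from a (flattened) candidate lattice."""
    return max(paraphrases, key=lambda s: score_with_lms(s, lm_a, lm_b))
```

Giving the in-domain LM a nonzero weight is what lets frequent dialectal phrases survive in the output, as described above.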
{
"text": "In all the experiments, we run the DA sentence through ELISSA to generate a top-1 MSA translation, which we then tokenize through MADA before sending to the MSA-English SMT system. Our baseline is to not run ELISSA at all; instead, we send the DA sentence through MADA before applying the MSA-English MT system. Table 4 summarizes the experiments and results on the dev set. The rows of the table are the different systems (baseline and ELISSA's experiments). All differences in BLEU scores from the baseline are statistically significant above the 95% level. Statistical significance is computed using paired bootstrap re-sampling (Koehn, 2004) . The name of the system in ELISSA's experiments denotes the combination of selection method. ELISSA's experiments are grouped into three groups: simple selection, frequency-based selection, and phrase-based selection. Simple selection group consists of five systems: OOV, ADAM, OOV U ADAM, DICT, and OOV U ADAM U DICT. The OOV selection mode identifies the untokenized OOV words. In the ADAM selection mode, or the morphological selection mode, we use ADAM to identify dialectal words. Experiments showed that ADAM's DI-AMSA mode (selecting words that have at least one dialectal analysis) is slightly better than ADAM's DIAONLY mode (selecting words that have only dialectal analyses and no MSA ones). The OOV U ADAM selection mode is the union of the OOVs and ADAM selection modes. In DICT selection mode, we select dialectal words that exist in our DA-MSA dictionaries. The OOV U ADAM U DICT selection mode is the union of the OOVs, ADAM, and DICT selection modes. The results show that combining the output of OOV selection method and ADAM selection method is the best. DICT selection method hurts the performance of the system when used because dictionaries usually have frequent dialectal words that the SMT system already knows how to handle. 
In the frequency-based selection group, we exclude from word selection all words with number of occurrences in the training data that is above a certain threshold. This threshold was determined empirically to be 50. The string '-(Freq >= 50)' means that all words with frequencies of 50 or more should not be selected. The results show that excluding frequent dialectal words improves the best simple selection system. It also shows that using DICT selection improves the best system if frequent words are excluded.",
"cite_spans": [
{
"start": 632,
"end": 645,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on the Development Set",
"sec_num": "6.1.2"
},
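The selection-mode union with the frequency cutoff can be sketched as below. The predicate functions (`oov_check`, `adam_dialectal`) and the dictionary set are hypothetical stand-ins for the paper's OOV detector, ADAM analyzer, and DA-MSA dictionaries:

```python
def select_words(tokens, train_freq, oov_check, adam_dialectal, dict_words,
                 freq_threshold=50):
    """Select dialectal words for translation: OOVs are always selected;
    ADAM- or DICT-flagged words are selected only if their training-data
    frequency is below the threshold (the '-(Freq >= 50)' setting)."""
    selected = set()
    for w in tokens:
        if oov_check(w):
            selected.add(w)  # OOVs are always handled by ELISSA
        elif train_freq.get(w, 0) < freq_threshold and (
                adam_dialectal(w) or w in dict_words):
            selected.add(w)  # low-frequency in-vocabulary dialectal words
    return selected
```

Frequent in-vocabulary words are left to the SMT system, which already handles them well.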
{
"text": "In the last system group, phrase+word-based selection, phrase-based selection is used to select phrases and add them on top of the best performers of the previous two groups. Phrase-based trans- Table 4 : Results for the speech-dev set in terms of BLEU. The 'Diff.' column shows result differences from the baseline. The rows of the table are the different systems (baseline and ELISSA's experiments). The name of the system in ELISSA's experiments denotes the combination of selection method. In all ELISSA's experiments, all wordbased translation methods are tried. Phrase-based translation methods are used when phrase-based selection is used (i.e., the last three rows). The best system is in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on the Development Set",
"sec_num": "6.1.2"
},
{
"text": "lation is also added to word-based translation. Results show that selecting and translating phrases improve the three best performers of word-based selection. The best performer, shown in the last raw, suggests using phrase-based selection and restricted word-based selection. The restriction is to include OOV words and selected low frequency words that have at least one dialectal analysis or appear in our dialectal dictionaries. Comparing the best performer to the OOV selection mode system shows that translating low frequency in-vocabulary dialectal words and phrases to their MSA paraphrases can improve the English translation. This is a similar conclusion to our previous work in Salloum and Habash (2011) .",
"cite_spans": [
{
"start": 689,
"end": 714,
"text": "Salloum and Habash (2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the Development Set",
"sec_num": "6.1.2"
},
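Phrase-based selection can be sketched as a greedy longest-match scan against a DA-MSA phrase dictionary. This is a simplified illustration (the token strings and dictionary below are made up for the example; the paper's component is more elaborate):

```python
def select_phrases(tokens, phrase_dict, max_len=4):
    """Greedy longest-match selection of multi-word dialectal phrases.
    Returns (start, end, msa_paraphrase) spans over the token list."""
    i, spans = 0, []
    while i < len(tokens):
        matched = False
        # try the longest candidate first, multi-word phrases only
        for n in range(min(max_len, len(tokens) - i), 1, -1):
            cand = tuple(tokens[i:i + n])
            if cand in phrase_dict:
                spans.append((i, i + n, phrase_dict[cand]))
                i += n
                matched = True
                break
        if not matched:
            i += 1
    return spans
```

Selected spans are then handed to phrase-based translation, while the remaining words go through word-based selection as before.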
{
"text": "We run the system settings that performed best on the dev set along with the OOV selection mode system on the three blind test set. Results and their differences from the baseline are reported in Table 5 . We see that OOV selection mode system always improves over the baseline for all test sets. Also, the best performer on the dev is the best performer for all test sets. The improvements of the best performer over the OOV selection mode system on all test sets confirm that translating low frequency invocabulary dialectal words and phrases to their MSA paraphrases can improve the English translation. Its improvements over the baseline for the three test sets are: 0.95% absolute BLEU (or 2.5% relative) for the speech-test, 1.41% absolute BLEU (or 15.4% rela-tive) for the web-lev-test, and 0.61% absolute BLEU (or 3.2% relative) for the web-egy-test.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results on the Blind Test Sets",
"sec_num": "6.1.3"
},
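The relation between the absolute and relative gains quoted above is simple arithmetic. For instance, a 0.95 absolute gain reported as 2.5% relative implies a baseline near 38 BLEU on speech-test (the baseline values here are inferred from the reported pairs, not read from Table 5):

```python
def relative_gain(baseline_bleu, absolute_gain):
    """Relative improvement (%) implied by an absolute BLEU gain over a baseline."""
    return 100.0 * absolute_gain / baseline_bleu
```

The much larger relative gain on web-lev-test reflects its much lower baseline (a single-reference, web-crawled Levantine set).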
{
"text": "We next examine an example in some detail. Table 6 shows a dialectal sentence along with its ELISSA's translation, English references, the output of the baseline system and the output of our best system. The example shows a dialectal word hAlmbl\u03b3 'this-amount/sum', which is not translated by the baseline (although it appears in the training data, but quite infrequently such that all of its phrase table occurrences have restricted contexts, making it effectively an OOV). The dialectal proclitic + hAl+ 'this-' comes sometimes in the dialectal construction: 'hAl+NOUN DEM' (as in this example: hAlmbl\u03b3 h\u00f0A 'this-amount/sum this'). ELISSA's selection component captures this multi-word expression and its translation component produces the following paraphrases: h\u00f0A Almbl\u03b3 'this amount/sum' (h\u00f0A is used with masculine singular nouns), h\u00f0h Almbl\u03b3 'this amount/sum' (h\u00f0h is used with feminine singular or irrational plural nouns), and h\u0175lA' Almbl\u03b3 'these amount/sum' (h\u0175lA' is used with rational plural nouns). ELISSA's language modeling component picks the first MSA paraphrase, which perfectly fits the context and satisfies the gender/number/rationality agreement (note that the word Almbl\u03b3 is an irrational masculine singular noun). For more on Arabic morpho-syntactic agreement patterns, see Alkuhlani and Habash (2011) . Finally, the best system translation for the selected phrase is 'this sum'. We can see how both the accuracy and fluency of the sentence have improved.",
"cite_spans": [
{
"start": 1299,
"end": 1326,
"text": "Alkuhlani and Habash (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Case Study",
"sec_num": "6.1.4"
},
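The demonstrative-selection pattern described above (gender, number, and rationality agreement with the head noun) can be sketched as a small rule function, using the paper's transliterations; this is a simplified sketch, not ELISSA's actual generation component:

```python
def choose_demonstrative(gender, number, rational):
    """Pick an MSA demonstrative agreeing with the head noun in gender,
    number, and rationality (simplified sketch of the pattern above)."""
    if number == "plural" and rational:
        return "h\u0175lA'"  # used with rational plural nouns
    if gender == "fem" or number == "plural":
        return "h\u00f0h"    # feminine singular or irrational plural nouns
    return "h\u00f0A"        # masculine singular nouns
```

For Almbl\u03b3, an irrational masculine singular noun, the function yields h\u00f0A, matching the paraphrase the language model selected.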
{
"text": "fmA mA AtSwr hAlmbl\u03b3 h\u00f0A y\u03c2ny. ELISSA's output fmA mA AtSwr h\u00f0A Almbl\u03b3 y\u03c2ny. Table 6 : An example of handling dialectal words/phrases using ELISSA and its effect on the accuracy and fluency of the English translation. Words of interest are bolded.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "DA sentence",
"sec_num": null
},
{
"text": "We conducted a manual error analysis comparing ELISSA's input (the original dev set) to its output using our best system settings from the experiments above. Out of 708 affected sentences, we randomly selected 300 sentences (42%). Out of the 482 handled tokens, 449 (93.15%) tokens have good MSA translations, and 33 (6.85%) tokens have wrong MSA translations. Most of the wrong translations are due to spelling errors, proper nouns, and weak input sentence fluency (especially due to speech effect). This analysis clearly validates ELISSA's MSA output. Of course, a correct MSA output can still be mistranslated by the MT system we used above if it is not in the vocabulary of the MT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA-to-MSA Translation Quality",
"sec_num": "6.2"
},
{
"text": "We presented ELISSA, a tool for DA-MSA translation. ELISSA employs a rule-based MT approach that relies on morphological analysis, transfer rules and dictionaries in addition to language models to produce MSA paraphrases of dialectal sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Using ELISSA to produce MSA versions of dialectal sentences as part of an MSA-pivoting DA-to-English MT solution, improves BLEU scores on three blind test sets by: 0.95% absolute BLEU (or 2.5% relative) for a speech multi-dialect (Iraqi, Levantine, Gulf, Egyptian) test set, 1.41% absolute BLEU (or 15.4% relative) for a web-crawled Levantine test set, and 0.61% absolute BLEU (or 3.2% relative) for a web-crawled Egyptian test set. A manual error analysis of translated selected words shows that our system produces correct MSA translations over 93% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In the future, we plan to extend ELISSA's coverage of phenomena in the handled dialects and to new dialects. We also plan to automatically learn additional rules from limited available data (DA-MSA or DA-English). We also would like to do additional MT experiments where we use ELISSA to preprocess the training data, comparable to experiments done by Sawaf (2010) . We are interested in studying how our approach can be combined with solutions that simply add more dialectal training data since the two directions are complementary in that they address linguistic normalization and domain coverage. Finally, we look forward to experimenting with ELISSA as a preprocessing system for a variety of dialect NLP applications similar to Chiang et al. (2006) 's work on dialect parsing, for example.",
"cite_spans": [
{
"start": 352,
"end": 364,
"text": "Sawaf (2010)",
"ref_id": "BIBREF30"
},
{
"start": 733,
"end": 753,
"text": "Chiang et al. (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "ELISSA will be publicly available. Please contact the authors for more information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "The system was used on February 21, 2013.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This paper is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-12-C-0014. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "So I do not I do not think this cost I mean. So I do not imagine this sum I mean Baseline So i don't think hAlmblg this means. Best system So i don't think this sum i mean",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "So I do not I do not think this cost I mean. So I do not imagine this sum I mean Baseline So i don't think hAlmblg this means. Best system So i don't think this sum i mean.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Hybrid Approach for Converting Written Egyptian Colloquial Dialect into Diacritized Arabic",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Hitham Abo Bakr",
"suffix": ""
},
{
"first": "Ibrahim",
"middle": [],
"last": "Shaalan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ziedan",
"suffix": ""
}
],
"year": 2008,
"venue": "The 6th International Conference on Informatics and Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitham Abo Bakr, Khaled Shaalan, and Ibrahim Ziedan. 2008. A Hybrid Approach for Converting Written Egyptian Colloquial Dialect into Diacritized Arabic. In The 6th International Conference on Informatics and Systems, INFOS2008. Cairo University.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mining the Web for the Induction of a Dialectical Arabic Lexicon",
"authors": [
{
"first": "Rania",
"middle": [],
"last": "Al",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Sabbagh",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC. European Language Resources Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rania Al-Sabbagh and Roxana Girju. 2010. Mining the Web for the Induction of a Dialectical Arabic Lexicon. In Nicoletta Calzolari, Khalid Choukri, Bente Mae- gaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, LREC. Eu- ropean Language Resources Association.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Corpus for Modeling Morpho-Syntactic Agreement in Arabic: Gender, Number and Rationality",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Alkuhlani",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL'11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Alkuhlani and Nizar Habash. 2011. A Corpus for Modeling Morpho-Syntactic Agreement in Ara- bic: Gender, Number and Rationality. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL'11), Portland, Ore- gon, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved statistical machine translation using paraphrases",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Burch",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, and Miles Os- borne. 2006. Improved statistical machine transla- tion using paraphrases. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 17-24.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Owen Rambow, and Safiullah Shareef",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the European Chapter of ACL (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Mona Diab, Nizar Habash, Owen Ram- bow, and Safiullah Shareef. 2006. Parsing Arabic Dialects. In Proceedings of the European Chapter of ACL (EACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automated Methods for Processing Arabic Text: From Tokenization to Base Phrase Chunking",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "Arabic Computational Morphology: Knowledge-based and Empirical Methods",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2007. Automated Methods for Processing Arabic Text: From Tokenization to Base Phrase Chunking. In Antal van den Bosch and Abdelhadi Soudi, editors, Arabic Computational Morphology: Knowledge-based and Empirical Methods. Kluwer/Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Facilitating translation using source language paraphrase lattices",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Andy",
"middle": [
"Way"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP'10",
"volume": "",
"issue": "",
"pages": "420--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Du, Jie Jiang, and Andy Way. 2010. Facil- itating translation using source language paraphrase lattices. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Process- ing, EMNLP'10, pages 420-429, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "POS tagging of dialectal Arabic: a minimally supervised approach",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, Semitic '05",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh and Katrin Kirchhoff. 2005. POS tagging of dialectal Arabic: a minimally supervised approach. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, Semitic '05, pages 55-62, Ann Arbor, Michigan.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "English Gigaword",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
}
],
"year": 2003,
"venue": "LDC Catalog No.: LDC2003T05. Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Graff and Christopher Cieri. 2003. English Gi- gaword, LDC Catalog No.: LDC2003T05. Linguistic Data Consortium, University of Pennsylvania.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "573--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash and Owen Rambow. 2005. Arabic Tok- enization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of the 43rd Annual Meeting of the Association for Com- putational Linguistics (ACL'05), pages 573-580, Ann Arbor, Michigan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MAGEAD: A Morphological Analyzer and Generator for the Arabic Dialects",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash and Owen Rambow. 2006. MAGEAD: A Morphological Analyzer and Generator for the Ara- bic Dialects. In Proceedings of the 21st International Conference on Computational Linguistics and 44th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter",
"authors": [],
"year": 2007,
"venue": "Arabic Computational Morphology: Knowledge-based and Empirical Methods",
"volume": "",
"issue": "",
"pages": "681--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 681-688, Sydney, Australia. Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On Arabic Transliteration. In A. van den Bosch and A. Soudi, editors, Arabic Computational Mor- phology: Knowledge-based and Empirical Methods. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Morphological Analysis and Disambiguation for Dialectal Arabic",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Ramy Eskander, and Abdelati Hawwari. 2012. A Morphological Analyzer for Egyptian Ara- bic. In Proceedings of the Twelfth Meeting of the Spe- cial Interest Group on Computational Morphology and Phonology, pages 1-9, Montr\u00e9al, Canada. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskan- der, and Nadi Tomeh. 2013. Morphological Analysis and Disambiguation for Dialectal Arabic. In Proceed- ings of the 2013 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies (NAACL-HLT), Atlanta, GA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Arabic Morphological Representations for Machine Translation",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2007,
"venue": "Arabic Computational Morphology: Knowledge-based and Empirical Methods",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash. 2007. Arabic Morphological Representa- tions for Machine Translation. In A. van den Bosch and A. Soudi, editors, Arabic Computational Mor- phology: Knowledge-based and Empirical Methods. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introduction to Arabic Natural Language Processing",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan & Claypool Publish- ers.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Machine Translation of Very Close Languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hric",
"suffix": ""
},
{
"first": "Vladislav",
"middle": [],
"last": "Kubon",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th Applied Natural Language Processing Conference (ANLP'2000)",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Jan Hric, and Vladislav Kubon. 2000. Ma- chine Translation of Very Close Languages. In Pro- ceedings of the 6th Applied Natural Language Pro- cessing Conference (ANLP'2000), pages 7-12, Seat- tle.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Moses: open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Christo- pher Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Christopher Dyer, Ondrej Bo- jar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meet- ing of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Re- public.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving word alignment with bridge languages",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "42--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar, Franz J. Och, and Wolfgang Macherey. 2007. Improving word alignment with bridge lan- guages. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 42-50, Prague, Czech Re- public.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Wigdan",
"middle": [],
"last": "Mekki",
"suffix": ""
}
],
"year": 2004,
"venue": "NEMLAR Conference on Arabic Language Resources and Tools",
"volume": "",
"issue": "",
"pages": "102--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools, pages 102-109, Cairo, Egypt.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Translating from Morphologically Complex Languages: A Paraphrase-Based Approach",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Meeting of the Association for Computational Linguistics (ACL'2011)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov and Hwee Tou Ng. 2011. Translat- ing from Morphologically Complex Languages: A Paraphrase-Based Approach. In Proceedings of the Meeting of the Association for Computational Linguis- tics (ACL'2011), Portland, Oregon, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Minimum Error Rate Training for Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training for Statistical Machine Translation. In Proceedings of the 41st Annual Conference of the Association for Computational Linguistics, pages 160-167, Sapporo, Japan.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Eval- uation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics, pages 311-318, Philadelphia, PA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Arabic Gigaword Fourth Edition",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2009,
"venue": "LDC catalog number No. LDC2009T30",
"volume": "",
"issue": "",
"pages": "1--58563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2009. Arabic Gigaword Fourth Edi- tion. LDC catalog number No. LDC2009T30, ISBN 1-58563-532-4.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Minimally Supervised Morphological Segmentation with Applications to Machine Translation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (AMTA06)",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Riesa and David Yarowsky. 2006. Minimally Su- pervised Morphological Segmentation with Applica- tions to Machine Translation. In Proceedings of the 7th Conference of the Association for Machine Trans- lation in the Americas (AMTA06), pages 185-192, Cambridge,MA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT, Short Papers",
"volume": "",
"issue": "",
"pages": "117--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic Morphological Tag- ging, Diacritization, and Lemmatization Using Lex- eme Models and Feature Ranking. In Proceedings of ACL-08: HLT, Short Papers, pages 117-120, Colum- bus, Ohio.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dialectal to Standard Arabic Paraphrasing to Improve Arabic-English Statistical Machine Translation",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the First Workshop on Algorithms and Resources for Modelling of Dialects and Language Varieties",
"volume": "",
"issue": "",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wael Salloum and Nizar Habash. 2011. Dialectal to Standard Arabic Paraphrasing to Improve Arabic- English Statistical Machine Translation. In Proceed- ings of the First Workshop on Algorithms and Re- sources for Modelling of Dialects and Language Va- rieties, pages 10-21, Edinburgh, Scotland.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Elissa: A Dialectal to Standard Arabic Machine Translation System",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012): Demonstration Papers",
"volume": "",
"issue": "",
"pages": "385--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wael Salloum and Nizar Habash. 2012. Elissa: A Di- alectal to Standard Arabic Machine Translation Sys- tem. In Proceedings of the 24th International Confer- ence on Computational Linguistics (COLING 2012): Demonstration Papers, pages 385-392, Mumbai, In- dia.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Arabic dialect handling in hybrid machine translation",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sawaf",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference of the Association for Machine Translation in the Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Sawaf. 2010. Arabic dialect handling in hybrid machine translation. In Proceedings of the Confer- ence of the Association for Machine Translation in the Americas (AMTA), Denver, Colorado.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SRILM an Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM an Extensible Language Modeling Toolkit. In Proceedings of the International Conference on Spoken Language Processing.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A comparison of pivot methods for phrase-based statistical machine translation",
"authors": [
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "484--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masao Utiyama and Hitoshi Isahara. 2007. A compar- ison of pivot methods for phrase-based statistical ma- chine translation. In HLT-NAACL, pages 484-491.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Machine Translation of Arabic Dialects",
"authors": [
{
"first": "Rabih",
"middle": [],
"last": "Zbib",
"suffix": ""
},
{
"first": "Erika",
"middle": [],
"last": "Malchiodi",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Stallard",
"suffix": ""
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
},
{
"first": "Omar",
"middle": [
"F"
],
"last": "Zaidan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "49--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabih Zbib, Erika Malchiodi, Jacob Devlin, David Stallard, Spyros Matsoukas, Richard Schwartz, John Makhoul, Omar F. Zaidan, and Chris Callison-Burch. 2012. Machine Translation of Arabic Dialects. In Pro- ceedings of the 2012 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 49- 59, Montr\u00e9al, Canada, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Dialect MT: a case study between Cantonese and Mandarin",
"authors": [
{
"first": "Xiaoheng",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, ACL '98",
"volume": "",
"issue": "",
"pages": "1460--1464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoheng Zhang. 1998. Dialect MT: a case study be- tween Cantonese and Mandarin. In Proceedings of the 36th Annual Meeting of the Association for Computa- tional Linguistics and 17th International Conference on Computational Linguistics, ACL '98, pages 1460- 1464, Montreal, Canada.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": "A motivating example for DA-to-English MT by pivoting (bridging) on MSA. The top half of the table displays a DA sentence, its human reference translation and the output of Google Translate. The bottom half of the table shows the result of human translation into MSA of the DA sentence before sending it to Google Translate.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Examples of some types of phrase-based selection and translation rules.",
"num": null,
"content": "<table><tr><td>DA Phrase</td><td/><td/><td colspan=\"2\">wmA rAHwlA 'And they did not go to her'</td></tr><tr><td>Analysis</td><td/><td>Word 1</td><td/><td>Word 2</td></tr><tr><td/><td colspan=\"4\">Proclitics [Lemma &amp; Features] [Lemma &amp; Features] [Lemma &amp; Features]</td><td>Enclitic</td></tr><tr><td/><td>w+</td><td>mA</td><td>rAHw</td><td>+l</td><td>+A</td></tr><tr><td/><td>conj+</td><td>[neg]</td><td>[rAH PV subj:3MP]</td><td>+prep</td><td>+pron3F S</td></tr><tr><td/><td>and+</td><td>not</td><td>they go</td><td>+to</td><td>+her</td></tr><tr><td>Transfer</td><td/><td>Word 1</td><td>Word 2</td><td>Word 3</td></tr><tr><td/><td colspan=\"4\">Proclitics [Lemma &amp; Features] [Lemma &amp; Features] [Lemma &amp; Features]</td><td>Enclitic</td></tr><tr><td/><td>conj+</td><td>[ lam ]</td><td>[\u00f0ahab IV subj:3MP]</td><td>[\u01cdl\u00fd ]</td><td>+pron3F S</td></tr><tr><td/><td>and+</td><td>did not</td><td>they go</td><td>to</td><td>+her</td></tr><tr><td>Generation</td><td>w+</td><td>lm</td><td colspan=\"2\">y\u00f0hbwA\u01cdly</td><td>+hA</td></tr><tr><td>MSA Phrase</td><td/><td/><td colspan=\"2\">wlm y\u00f0hbwA\u01cdlyhA 'And they did not go to her'</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Revisiting our motivating example, but with ELISSA-based DA-to-MSA middle step. ELISSA's output is Alif/Ya normalized. Parentheses are added for illustrative reasons to highlight how multi-word DA constructions are selected and translated. Superscript indices link the selected words and phrases with their MSA translations.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "Phrase; ((OOV U ADAM U DICT) -(Freq >= 50)) 39.13 0.95 10.54 1.41 19.59 0.61",
"num": null,
"content": "<table><tr><td>Test Set</td><td colspan=\"2\">speech-test web-lev-test web-egy-test</td></tr><tr><td/><td colspan=\"2\">BLEU Diff. BLEU Diff. BLEU Diff.</td></tr><tr><td>Baseline</td><td>38.18 0.00</td><td>9.13 0.00 18.98 0.00</td></tr><tr><td>Select: OOV</td><td>38.76 0.58</td><td>9.65 0.62 19.19 0.21</td></tr><tr><td>Select:</td><td/><td/></tr></table>",
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Results for the three blind test sets (table columns) in terms of BLEU. The 'Diff.' columns show result differences from the baselines. The rows of the table are the different systems (baselines and ELISSA's experiments). The best systems are in bold.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}