{ "paper_id": "N13-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:40:06.828524Z" }, "title": "Automatic Morphological Enrichment of a Morphologically Underspecified Treebank", "authors": [ { "first": "Sarah", "middle": [], "last": "Alkuhlani", "suffix": "", "affiliation": {}, "email": "salkuhlani@ccls.columbia.edu" }, { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "", "affiliation": {}, "email": "habash@ccls.columbia.edu" }, { "first": "Ryan", "middle": [], "last": "Roth", "suffix": "", "affiliation": {}, "email": "ryanr@ccls.columbia.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we study the problem of automatic enrichment of a morphologically underspecified treebank for Arabic, a morphologically rich language. We show that we can map from a tagset of size six to one with 485 tags at an accuracy rate of 94%-95%. We can also identify the unspecified lemmas in the treebank with an accuracy over 97%. Furthermore, we demonstrate that using our automatic annotations improves the performance of a state-of-the-art Arabic morphological tagger. Our approach combines a variety of techniques from corpus-based statistical models to linguistic rules that target specific phenomena. These results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically.", "pdf_parse": { "paper_id": "N13-1049", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we study the problem of automatic enrichment of a morphologically underspecified treebank for Arabic, a morphologically rich language. We show that we can map from a tagset of size six to one with 485 tags at an accuracy rate of 94%-95%. We can also identify the unspecified lemmas in the treebank with an accuracy over 97%. Furthermore, we demonstrate that using our automatic annotations improves the performance of a state-of-the-art Arabic morphological tagger. Our approach combines a variety of techniques from corpus-based statistical models to linguistic rules that target specific phenomena. These results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Collections of manually-annotated morphological and syntactic analyses of sentences, or treebanks, are an important resource for building statistical parsing models or for syntax-aware approaches to applications such as machine translation. Rich treebank annotations have also been used for a variety of natural language processing (NLP) applications such as tokenization, diacritization, part-of-speech (POS) tagging, morphological disambiguation, base phrase chunking, and semantic role labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The development of a treebank with rich annotations is demanding in time and money, especially for morphologically complex languages. Consequently, the richer the annotation, the slower the annotation process and the smaller the size of the treebank. 
As such, a tradeoff is usually made between the size of the treebank and the richness of its annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate the possibility of automatically enriching the morphologically underspecified Columbia Arabic Treebank (CATiB) with the more complex POS tags and lemmas used in the Penn Arabic Treebank (PATB) (Maamouri et al., 2004) . We employ a variety of techniques that range from corpus-based statistical models to handwritten rules based on linguistic observations. Our best method reaches accuracy rates of 94%-95% on full POS tag identification. We can also identify the unspecified lemmas in CATiB with an accuracy over 97%. 37% of our POS tag errors are due to gold tree or gold POS errors. A learning curve experiment to evaluate the dependence of our method on annotated data shows that while the quality of some components may drop sharply with less data (12% absolute reduction in accuracy when using 1/32 of the data or some 10K annotated words), the overall effect is much smaller (2% absolute drop). These results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically.", "cite_spans": [ { "start": 223, "end": 246, "text": "(Maamouri et al., 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is structured as follows: Section 2 presents related work; Section 3 details various language background facts about Arabic and its treebanking; Section 4 explains our approach; and Section 5 presents and discusses our results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Arabic Treebanking There has been a lot of work on building treebanks for different languages. In the case of Modern Standard Arabic (MSA), there are three efforts that vary in terms of richness and representation choice. The Penn Arabic Treebank (PATB) (Maamouri et al., 2004; Maamouri et al., 2009b; Maamouri et al., 2009a) , the Prague Arabic Dependency Treebank (PADT) (Smr\u017e and Haji\u010d, 2006; Smr\u017e et al., 2008) and the Columbia Arabic Treebank (CATiB) . The PATB uses a phrase structure representation, while the other two use different dependency representations. The PATB and PADT representations are quite detailed. The PATB not only provides tokenization, complex POS tags (485 tags in our data set), and syntactic structure; it also provides empty categories, diacritization, lemma choices, glosses and some semantic tags. In comparison, CATiB only provides tokenization, six POS tags and eight dependency relations. The tradeoff is speed: CATiB's complete POS and syntax annotation rate is 540 tokens/hour (and annotator training takes two months), a much higher speed than reported for complete (POS and syntax) annotation in PATB (around 250-300 tokens/hour and 6-12 months for annotator training) and PADT (around 75 tokens/hour) .
An important recent addition to the family of Arabic treebanks is the Quran Treebank, which targets the Classical Arabic language of the Quran, not MSA (Dukes and Buckwalter, 2010) .", "cite_spans": [ { "start": 251, "end": 274, "text": "(Maamouri et al., 2004;", "ref_id": "BIBREF17" }, { "start": 275, "end": 298, "text": "Maamouri et al., 2009b;", "ref_id": "BIBREF19" }, { "start": 299, "end": 322, "text": "Maamouri et al., 2009a)", "ref_id": null }, { "start": 370, "end": 392, "text": "(Smr\u017e and Haji\u010d, 2006;", "ref_id": "BIBREF28" }, { "start": 393, "end": 411, "text": "Smr\u017e et al., 2008)", "ref_id": "BIBREF29" }, { "start": 1395, "end": 1423, "text": "(Dukes and Buckwalter, 2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Treebank Enrichment There have been a number of efforts on developing treebanks with rich representations and on treebank enrichment for many languages, such as Danish, English, German, Italian and Spanish (Oepen et al., 2002; Hinrichs et al., 2004; M\u00fcller, 2010) . Additionally, there has been some work on Arabic treebank enrichment that built on the PATB by manually extending its already rich annotations or automatically converting them to new formalisms. The Arabic Propbank (Propositional Bank) (Palmer et al., 2008) and the OntoNotes project (Hovy et al., 2006) both annotate for Arabic semantic information. Alkuhlani and Habash (2011) add annotations marking functional gender and number, and rationality; and Abdul-Mageed and Diab (2012) annotate the sentence level with sentiment labels. Tounsi et al. (2009) automatically converted the PATB to a lexical functional grammar (LFG) representation. A similar technique was used to build an initial version of CATiB. We use this CATiB version of PATB to evaluate our approach in this paper. Also related to this is the work on automatic enrichment of specific features, e.g., Habash et al. (2007a) demonstrated that nominal case can be determined for gold syntactic analyses at high accuracy. We replicate their results and improve upon them. Unlike them, however, we handle all the morphological features in the PATB, not just case.", "cite_spans": [ { "start": 202, "end": 222, "text": "(Oepen et al., 2002;", "ref_id": "BIBREF25" }, { "start": 223, "end": 245, "text": "Hinrichs et al., 2004;", "ref_id": "BIBREF14" }, { "start": 246, "end": 259, "text": "M\u00fcller, 2010)", "ref_id": "BIBREF22" }, { "start": 498, "end": 519, "text": "(Palmer et al., 2008)", "ref_id": "BIBREF26" }, { "start": 546, "end": 565, "text": "(Hovy et al., 2006)", "ref_id": "BIBREF15" }, { "start": 613, "end": 640, "text": "Alkuhlani and Habash (2011)", "ref_id": "BIBREF1" }, { "start": 733, "end": 744, "text": "Diab (2012)", "ref_id": "BIBREF0" }, { "start": 796, "end": 816, "text": "Tounsi et al. (2009)", "ref_id": "BIBREF31" }, { "start": 1137, "end": 1158, "text": "Habash et al. (2007a)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Morphological Disambiguation There has been a lot of work on Arabic POS tagging and morphological disambiguation (Diab et al., 2004; Habash and Rambow, 2005; Smith et al., 2005; Haji\u010d et al., 2005; Roth et al., 2008; Habash et al., 2013) . These approaches are intended to apply to raw text and determine the appropriate in-context morphological reading for each word.
In contrast, in this paper, we are starting from a partially disambiguated and relatively rich representation: we have tokenization, general POS tags and syntactic dependency information.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Diab et al., 2004;", "ref_id": "BIBREF4" }, { "start": 133, "end": 157, "text": "Habash and Rambow, 2005;", "ref_id": "BIBREF7" }, { "start": 158, "end": 177, "text": "Smith et al., 2005;", "ref_id": "BIBREF27" }, { "start": 178, "end": 197, "text": "Haji\u010d et al., 2005;", "ref_id": "BIBREF13" }, { "start": 198, "end": 216, "text": "Roth et al., 2008;", "ref_id": null }, { "start": 217, "end": 237, "text": "Habash et al., 2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, morphological information (beyond tokenization) has been shown to be useful for many NLP applications. Marton et al. (2011) demonstrated that morphology helps Arabic parsing. Using morphological features such as case has also improved parsing for Russian, Turkish and Hindi (Eryigit et al., 2008; Nivre, 2009) . Other work has shown value for morphology in the context of Arabic named entity recognition (Benajiba et al., 2009) . These results support the value of our goal of enriching resources with morphological information, which then can be used to improve different NLP applications.", "cite_spans": [ { "start": 112, "end": 132, "text": "Marton et al. (2011)", "ref_id": "BIBREF21" }, { "start": 283, "end": 304, "text": "Eryigit et al., 2008;", "ref_id": "BIBREF6" }, { "start": 305, "end": 317, "text": "Nivre, 2009)", "ref_id": "BIBREF24" }, { "start": 412, "end": 435, "text": "(Benajiba et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we present some relevant general linguistic facts about Arabic and then discuss the specifics of the tagsets we work with in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "Arabic Linguistic Facts The Arabic language poses many challenges for NLP. Arabic is a morphologically complex language which includes rich inflectional and cliticizational morphology, e.g., the word وسيكتبونها w+s+y-ktb-wn+hA 'and they will write it' has two proclitics, one prefix, one suffix and one pronominal enclitic. Additionally, Arabic has a high degree of ambiguity due to the absence of diacritics and inconsistent spelling of letters such as Alif أ Â and Ya ي y. The Buckwalter Arabic Morphological Analyzer (BAMA) (Buckwalter, 2004) , which is used in the PATB, produces an average of 12 analyses per word.", "cite_spans": [ { "start": 543, "end": 561, "text": "(Buckwalter, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "In this paper, we work with gold tokenized Arabic as it appears in the PATB and CATiB treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "As such, the words are partially disambiguated with regard to possible tokenizable clitics and Alif/Ya spelling forms. That said, there is still a lot of ambiguity remaining, especially because diacritics are not marked. Words in the treebank may be ambiguous in terms of their POS, lemmas and inflectional features.
The inflectional features include gender, number, person, case, state, mood, voice, aspect and the presence of the determiner ال+ Al+ 'the', which is not tokenized off in the treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "Arabic has a well-known discrepancy in form and function that appears most commonly in the form of irregular plurals, called Broken Plurals, which, although functionally plural, have singular suffixes. We will not discuss form and function discrepancy in this paper except as needed. For more on this, see Habash (2010) .", "cite_spans": [ { "start": 308, "end": 321, "text": "Habash (2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "The Buckwalter Tagset The Buckwalter POS tagset is perhaps one of the most commonly used tagsets for Arabic NLP research. The tagset's popularity is in part due to its use in the PATB. Buckwalter tags can be used for tokenized and untokenized text. The untokenized tags are produced by BAMA (Buckwalter, 2004) and consist of 485 tags. The tokenized tags, which are used in the PATB, are derived from the untokenized tags and can reach thousands of tags. Both variants use the same basic 70 or so sub-tag symbols (such as DET 'determiner', NSUFF 'nominal suffix', ADJ 'adjective' and ACC 'accusative') (Maamouri et al., 2009a) . These sub-tags are combined to form around 170 morpheme tags such as NSUFF_FEM_SG 'feminine singular nominal suffix' and CASE_DEF_ACC 'accusative definite'. The word tags are constructed out of one or more morpheme tags, e.g., DET+NOUN_PROP+CASE_DEF_NOM for the word الصين Al+Siyn+u 'China'.", "cite_spans": [ { "start": 291, "end": 309, "text": "(Buckwalter, 2004)", "ref_id": "BIBREF3" }, { "start": 601, "end": 625, "text": "(Maamouri et al., 2009a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "CATiB Trees and POS Tags CATiB uses the same basic tokenization scheme used by PATB and PADT. However, the CATiB POS tagset is much smaller. Whereas in practice PATB uses 485 Buckwalter tags specifying every aspect of Arabic word morphology such as definiteness, gender, number, person, mood, voice and case, CATiB uses 6 POS tags: NOM (non-proper nominals including nouns, pronouns, adjectives and adverbs), PROP (proper nouns), VRB (verbs), VRB-PASS (passive-voice verbs), PRT (particles such as prepositions or conjunctions) and PNX (punctuation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Background", "sec_num": "3" }, { "text": "[CATiB tree example: VRB trsl (BW: IV3FS+IV+IVSUFF_MOOD:I; lemma: >arsal 'send'), MOD PRT s+ (BW: FUT_PART; lemma: sa+ 'will'), SBJ PROP AlSyn (BW: DET+NOUN_PROP+CASE_DEF_NOM; lemma: Siyn 'China'), OBJ NOM qmrA (BW: NOUN+CASE_INDEF_ACC; lemma: qamar 'moon'), MOD NOM ...]", "type_str": "table", "text": "Accuracy of enriching CATiB trees with Buckwalter (BW) tags and lemmas on the development set. Reduced Buckwalter is similar to Buckwalter, but ignores case, mood and state.
The difference between the two metrics highlights the errors from case, mood and state.", "num": null, "html": null }, "TABREF3": { "content": "", "type_str": "table", "text": "Accuracy of enriching CATiB trees with Buckwalter (BW) tags and lemmas on the blind test set.", "num": null, "html": null }, "TABREF5": { "content": "
Size | Tokens | Full BW | Reduced BW | Diff | Lemma
1/32 | 10.6K | 93.24 | 95.81 | 2.57 | 95.68
1/16 | 21.3K | 93.67 | 96.28 | 2.61 | 96.27
1/8 | 42.6K | 94.14 | 96.79 | 2.65 | 96.94
1/4 | 85.3K | 94.56 | 97.26 | 2.70 | 97.22
1/2 | 170.7K | 94.96 | 97.66 | 2.70 | 97.61
1 | 341.1K | 95.27 | 98.00 | 2.73 | 97.81
", "type_str": "table", "text": "Accuracy of enriching CATiB trees with Buckwalter (BW) tags and lemmas using TADA only for different training sizes on the development set.", "num": null, "html": null } } } }