{ "paper_id": "C12-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:24:07.267030Z" }, "title": "The Floating Arabic Dictionary: An Automatic Method for Updating a Lexical Database through the Detection and Lemmatization of Unknown Words", "authors": [ { "first": "Mohammed", "middle": [], "last": "Attia", "suffix": "", "affiliation": { "laboratory": "", "institution": "The British University", "location": { "settlement": "Dubai", "country": "UAE (" } }, "email": "mattia@computing.dcu.ie" }, { "first": "Younes", "middle": [], "last": "Samih", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich-Heine-Universit\u00e4t", "location": { "country": "Germany (" } }, "email": "samih@phil.uni-duesseldorf.de" }, { "first": "Khaled", "middle": [], "last": "Shaalan", "suffix": "", "affiliation": { "laboratory": "", "institution": "The British University", "location": { "settlement": "Dubai", "country": "UAE (" } }, "email": "khaled.shaalan@buid.ac.ae" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": { "country": "Ireland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Unknown words, or out of vocabulary words (OOV), cause a significant problem to morphological analysers, syntactic parses, MT systems and other NLP applications. Unknown words make up 29 % of the word types in in a large Arabic corpus used in this study. With today's corpus sizes exceeding 10 9 words, it becomes impossible to manually check corpora for new words to be included in a lexicon. We develop a finite-state morphological guesser and integrate it with a machine-learning-based pre-annotation tool in a pipeline architecture for extracting unknown words, lemmatizing them, and giving them a priority weight for inclusion in a lexical database. The processing is performed on a corpus of contemporary Arabic of 1,089,111,204 words. Our method is tested on a manually-annotated gold standard and yields encouraging results despite the complexity of the task. Our work shows the usability of a highly non-deterministic morphological guesser in a practical and complex application.", "pdf_parse": { "paper_id": "C12-1006", "_pdf_hash": "", "abstract": [ { "text": "Unknown words, or out of vocabulary words (OOV), cause a significant problem to morphological analysers, syntactic parses, MT systems and other NLP applications. Unknown words make up 29 % of the word types in in a large Arabic corpus used in this study. With today's corpus sizes exceeding 10 9 words, it becomes impossible to manually check corpora for new words to be included in a lexicon. We develop a finite-state morphological guesser and integrate it with a machine-learning-based pre-annotation tool in a pipeline architecture for extracting unknown words, lemmatizing them, and giving them a priority weight for inclusion in a lexical database. The processing is performed on a corpus of contemporary Arabic of 1,089,111,204 words. Our method is tested on a manually-annotated gold standard and yields encouraging results despite the complexity of the task. 
Our work shows the usability of a highly non-deterministic morphological guesser in a practical and complex application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Due to its complexity and semi-algorithmic nature (it employs numerous rules and constraints on inflection, derivation and cliticization), Arabic morphology has long been a challenge for computational processing and analysis (Kiraz, 2001; Beesley 2003). A lexicon is an indispensable part of a morphological analyser (Dichy and Farghaly, 2003; Attia, 2006; Buckwalter, 2004; Beesley, 2001): the coverage of the lexical database is a key factor in the coverage of the morphological analyser, and limitations in the lexicon cascade through to higher levels of processing. Moreover, out-of-vocabulary words (OOVs) have a negative impact on the performance of parsers (Attia et al., 2010) and MT applications (Huang et al. 2010). This is why an automatic method for updating a lexical database and dealing with unknown words is crucially important.", "cite_spans": [ { "start": 228, "end": 241, "text": "(Kiraz, 2001;", "ref_id": "BIBREF14" }, { "start": 242, "end": 255, "text": "Beesley 2003)", "ref_id": "BIBREF5" }, { "start": 321, "end": 347, "text": "(Dichy and Farghaly, 2003;", "ref_id": "BIBREF10" }, { "start": 348, "end": 360, "text": "Attia, 2006;", "ref_id": "BIBREF1" }, { "start": 361, "end": 378, "text": "Buckwalter, 2004;", "ref_id": "BIBREF6" }, { "start": 379, "end": 393, "text": "Beesley, 2001)", "ref_id": "BIBREF4" }, { "start": 681, "end": 701, "text": "(Attia et al., 2010)", "ref_id": "BIBREF2" }, { "start": 722, "end": 741, "text": "(Huang et al. 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present the first attempt, to the best of our knowledge, to address the lemmatization (rather than stemming) of Arabic unknown words. The problem with lemmatizing unknown words is that they cannot be matched against a morphological lexicon. Furthermore, the specific problem with lemmatizing Arabic words is the richness and complexity of Arabic derivational and inflectional morphological processes. For the purposes of this paper, unknown words are words not found by the SAMA morphological analyser (Maamouri et al., 2010) but accepted by the Microsoft Spell Checker. We develop a rule-based finite-state morphological guesser and use a machine-learning-based disambiguator, MADA (Roth et al., 2008), in a pipeline-based approach to lemmatization.", "cite_spans": [ { "start": 505, "end": 528, "text": "(Maamouri et al., 2010)", "ref_id": "BIBREF16" }, { "start": 686, "end": 705, "text": "(Roth et al., 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We test our method against a manually created gold standard of 1,310 types (unique words) and show a significant improvement over the baseline. Furthermore, we devise a novel algorithm for weighting and prioritizing new words for inclusion in a lexicon depending on three factors: the number of form variations of the lemmas, the cumulative frequency of the forms, and the type of POS (part of speech) tag. This paper is structured as follows. The remainder of the introduction provides more details on the complexity of the lemmatization process in Arabic, why dealing with unknown words is important, previous work on the topic, and the data used in our experiments. 
Section 2 presents the methodology we follow in extracting and analysing unknown words. Section 3 provides details on the morphological guesser we develop to help deal with the problem. Section 4 presents and discusses the evaluation results, and Section 5 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Arabic is an inflectionally rich language, with nouns specified for number, gender and case, and verbs specified for tense, number, gender, person, voice and mood. These inflectional processes entail complex alterations on base forms. Arabic is also a clitic language. Clitics are morphemes that have the syntactic characteristics of a word but are morphologically bound to other words (Crystal, 1980). In Arabic, many coordinating conjunctions, the definite article, many prepositions and particles, and a class of pronouns are all clitics that attach themselves either to the start or end of words, and subsequently change the base form according to alteration rules which include assimilation and deletion. These facts complicate the process of lemmatization, or returning the base form given the inflected form.", "cite_spans": [ { "start": 385, "end": 400, "text": "(Crystal, 1980)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity of Lemmatization in Arabic", "sec_num": "1.1" }, { "text": "For English, one can reasonably assume that new words appear very often in their base forms, or the lexical look-up forms. Lind\u00e9n (2008) indicates that about 86 % of the new words in English appear in their base form. However, in Arabic, which is highly inflectional in nature, only 45 % of new token types in our test set appear in their base form. Moreover, 36 % of the unknown types do not appear in their base form at all in the entire corpus. Sinclair (1987) introduced the term \"Floating Dictionary\", a self-updating dictionary that is able to automatically monitor language change. \"It would, so to speak, float on top of a corpus, rather like a jelly-fish, its tendrils constantly sensing the state of the language.\" We think that an electronic 'floating dictionary' should be able to perform at least three major tasks. It should be able to tell which words are not in use anymore, which words have newly appeared in a language, and which word usages or senses have changed based on contemporary data. In this paper we explain our methodology for automatically detecting new words in Arabic, lemmatizing such new words in order to relate multiple surface forms to their base underlying representations, deciding on the word POS tag, collecting statistics on the frequency of use, and modelling human decisions on whether to include the new words in a lexicon or not.", "cite_spans": [ { "start": 123, "end": 136, "text": "Lind\u00e9n (2008)", "ref_id": "BIBREF15" }, { "start": 448, "end": 463, "text": "Sinclair (1987)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity of Lemmatization in Arabic", "sec_num": "1.1" }, { "text": "New words are constantly finding their way into any living human language. These new words are either coined or borrowed, or they can be transliterations of proper nouns from other languages. The inclusion of new words in a lexicon is a non-trivial task as it needs to address two important problems. First, there is the problem of detection, or how do we know that a new word has appeared? 
Second, there is the problem of reaching a decision on the new word, or how do we judge whether the new word is worth adding to the lexicon or not? This is usually done by looking at whether the word is frequent enough, whether it appears in various forms and inflections, and whether it is well-distributed in a corpus. This enables us to determine whether the word constitutes a core lexical item or whether its usage is merely accidental or idiosyncratic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Deal with Unknown Words?", "sec_num": "1.2" }, { "text": "We address this issue by developing an automatic technique to recognize unknown words, reduce them to their lemmas, predict their POS, and rank them in order of importance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Deal with Unknown Words?", "sec_num": "1.2" }, { "text": "Lemmatization of unknown words has been addressed for Slovene in (Erjavec and D\u017eerosk, 2004), for Hebrew in (Adler et al., 2008) and for English, Finnish, Swedish and Swahili in (Lind\u00e9n, 2008). Apart from the language involved, our work is different in that we incorporate a finite state guesser in the process. Lemmatization of Arabic words has been addressed in (Roth et al., 2008; Dichy, 2001). The idea of finding and stemming unknown Arabic words has been utilized by Diab et al. (2004). While Diab et al. do not mention unknown words specifically, the fact that they use a character-based classification model and tokenization indicates that they can handle unknown words and perform stemming on them. However, they do not present any evaluation on unknown words specifically. Mohamed and K\u00fcbler (2010) handle unknown words explicitly and provide results for known and unknown words in both word segmentation (stemming) and part of speech tagging. They reach a stemming accuracy of 81.39 % on unknown words and over 99 % on known words.", "cite_spans": [ { "start": 65, "end": 92, "text": "(Erjavec and D\u017eerosk, 2004)", "ref_id": "BIBREF11" }, { "start": 109, "end": 129, "text": "(Adler et al., 2008)", "ref_id": "BIBREF0" }, { "start": 179, "end": 193, "text": "(Lind\u00e9n, 2008)", "ref_id": "BIBREF15" }, { "start": 366, "end": 385, "text": "(Roth et al., 2008;", "ref_id": "BIBREF19" }, { "start": 386, "end": 397, "text": "Dichy, 2001", "ref_id": "BIBREF9" }, { "start": 476, "end": 494, "text": "Diab et al. (2004)", "ref_id": "BIBREF8" }, { "start": 787, "end": 812, "text": "Mohamed and K\u00fcbler (2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.3" }, { "text": "Mohamed and K\u00fcbler's work focuses on stemming rather than lemmatization, which are quite distinct albeit frequently confused. The difference between stemming and lemmatization is that stemming strips off prefixes and suffixes and leaves the bare stem, while lemmatization returns the canonical base form. To illustrate this with an example, take the Arabic verb form \u202b\u064a\u0642\u0648\u0644\u0648\u0646\u202c 'yqwlwn' \"they say\". Stemming will remove the present prefix 'y' and the plural suffix 'wn' and leave \u202b\u0642\u0648\u0644\u202c 'qwl', which is a non-word in Arabic. 
By contrast, full lemmatization will reveal that the word has gone through an alteration process and return the canonical \u202b\u0642\u0627\u0644\u202c 'qAl' \"to say\" as the base form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diab et al.'s and", "sec_num": null }, { "text": "Lemmatization reduces surface forms to their canonical base representations (or dictionary look-up forms), i.e., words before undergoing any inflection. In Arabic, this means verbs in their perfective, indicative, 3rd person, masculine, singular forms, such as \u064e \u202b\u064e\u0631\u202c \u202b\u064e\u0643\u202c \u202b\u0634\u202c $akara \"to thank\"; nominals (the term used for both nouns and adjectives) in their nominative, singular, masculine forms, such as \u202b\u0650\u0628\u202c \u202b\u0637\u0627\u0644\u202c TAlib \"student\"; and the nominative plural for pluralia tantum nouns (nouns that appear only in the plural form and are not derived from a singular form), such as \u202b\u0646\u0627\u0633\u202c nAs \"people\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diab et al.'s and", "sec_num": null }, { "text": "In our work we use a large-scale corpus of 1,089,111,204 words, consisting of the Arabic Gigaword Fourth Edition (Parker et al., 2009) with 925,461,707 words, in addition to 163,649,497 words from news articles crawled from the Al-Jazeera web site. In this corpus, unknown words appear at a rate between 2 % of word tokens (when we ignore possible spelling variants) and 9 % of word tokens (when possible spelling variants are included). In this context spelling variants refer to alternative (sub-standard) spellings recognized by SAMA, which are mostly related to the possible overlap between orthographically similar letters, such as the various shapes of hamzahs \u202b\u0622(\u202c \u202b\u0627\u202c \u202b\u0625\u202c \u202b,)\u0623\u202c taa' marboutah and haa' \u202b\u0629(\u202c \u202b,)\u0647\u202c and yaa' and alif maqsoura \u202b\u0649(\u202c \u202b.)\u064a\u202c", "cite_spans": [ { "start": 113, "end": 134, "text": "(Parker et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Data Used", "sec_num": "1.4" }, { "text": "To deal with unknown (or out-of-vocabulary) words, we use a pipeline approach which predicts part-of-speech tags and morpho-syntactic features before lemmatization. In the first stage of the pipeline, we use MADA (Roth et al., 2008), an SVM-based tool that relies on the word context to assign POS tags and morpho-syntactic features. MADA internally uses the SAMA morphological analyser (Maamouri et al., 2010), an updated version of Buckwalter morphology (Buckwalter, 2004). Second, we develop a finite-state morphological guesser that can provide all the possible interpretations of a given word. The morphological guesser first takes an Arabic surface form as a whole and then strips all possible affixes and clitics off one by one until all possible analyses are exhausted. The morphological guesser is highly non-deterministic as it outputs a large number of solutions. To counteract this non-determinism, all the solutions are matched against the POS and morpho-syntactic tag output for the full surface token by MADA, and the analysis with the closest resemblance (i.e. 
the analysis with the largest number of matching morphological features) is selected.", "cite_spans": [ { "start": 213, "end": 232, "text": "(Roth et al., 2008)", "ref_id": "BIBREF19" }, { "start": 388, "end": 411, "text": "(Maamouri et al., 2010)", "ref_id": "BIBREF16" }, { "start": 457, "end": 475, "text": "(Buckwalter, 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Besides the complexity of lemmatization described in Section 1.1, the problem is further compounded when dealing with unknown words that cannot be matched by existing lexicons. This requires the development of a finite-state guesser to list all the possible interpretations of an unknown string of letters (explained in detail in Section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "To identify, extract and lemmatize unknown Arabic words we use the following sequence of processing steps (Figure 1): • A corpus of 1,089,111,204 tokens (7,348,173 types) is analysed with MADA.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 115, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "• The number of types for which MADA could not find an analysis in the Buckwalter morphological analyser is 2,116,180 (about 29 % of the types).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "• These unknown types were spell checked by the Microsoft Arabic spell checker using MS Office 2010. Of these 2,116,180 unknown types, 208,188 were accepted as correct. The advantage of using spell checking at this stage is that it provides significant filtration of the forms (almost 90 % reduction) and retains a more compact, more manageable, and better quality list of entries to deal with in further processing. The disadvantage is that there is no guarantee that all word forms not accepted by the MS speller are actually spelling mistakes (or that all the ones accepted are correct).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "• From the types accepted by the MS spell checker, we select those with a frequency of 10 or more. This results in a total of 40,277 types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "• We use the full POS tags and morpho-syntactic features produced by MADA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "• We use the finite-state morphological guesser to produce all possible morphological interpretations and relevant lemmatizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "• We compare the POS tags and morphosyntactic features in MADA output with the output of the morphological guesser and choose the one with the highest matching score (see the sketch below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "For testing and evaluation we gold annotate 1,310 words randomly selected from the 40,277 types, providing the gold lemma, the gold POS and lexicographic preference for inclusion in a dictionary. 
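To make the comparison step above concrete, the following is a minimal Python sketch of how a guesser analysis can be selected against MADA's output. The feature names follow the MADA output shown in Section 3 (pos, num, gen, case), but the data structures and the function itself are illustrative assumptions, not MADA's actual API.

```python
# Minimal sketch of the disambiguation step: among the guesser's candidate
# analyses, pick the one whose morphological features overlap most with the
# features MADA assigned to the surface token. All names and data layouts
# here are illustrative, not part of MADA or the actual guesser.

def select_analysis(mada_features, guesser_analyses):
    """Return the guesser analysis sharing the most feature values with MADA."""
    def score(analysis):
        return sum(1 for key, value in mada_features.items()
                   if analysis.get(key) == value)
    return max(guesser_analyses, key=score)

# Toy example for wa-Al-musaw~iquwna "and-the-marketers":
mada = {"pos": "noun", "num": "p", "gen": "m", "case": "n"}
candidates = [
    {"lemma": "mswqwn", "pos": "noun", "num": "s"},
    {"lemma": "mswq", "pos": "adj", "num": "p", "gen": "m", "case": "n"},
    {"lemma": "mswq", "pos": "noun", "num": "p", "gen": "m", "case": "n"},
]
print(select_analysis(mada, candidates))  # -> the noun reading with lemma 'mswq'
```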
It is to be noted that working with the 2,116,180 types before filtering out possible spelling errors would require annotating a much larger gold standard. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FIGURE 1 -Lemmatization process", "sec_num": null }, { "text": "Arabic morphotactics allows words to be concatenated with a comparatively large number of clitics (Attia, 2006). Clitics themselves can be concatenated one after the other. Furthermore, clitics undergo assimilation with word stems and with each other, which makes them even harder to handle using surface features only. A verb can comprise up to four tokens (a conjunction, complementizer, verb stem and object pronoun) as illustrated in Table 1. Moreover, the verb stem can be prefixed and suffixed with bound morphemes that mark the morpho-syntactic features of tense, number, gender, person, voice and mood. The lemma resides as a nucleus inside layers of proclitics, prefixes, suffixes and enclitics. A verb lemma like \u202b\u0634\u0643\u0631\u202c '$akara' \"to thank\" can generate up to 9,552 different valid forms. Similarly, a noun stem can be attached to up to three clitics as shown in Table 2. Although Table 2 shows four clitics, we note that the definite article and the genitive (or possessive) pronoun are mutually exclusive. Nominal stems can also be suffixed with bound morphemes that mark the morpho-syntactic features of number, gender and case. A typical noun like \u202b\u0645\u0639\u0644\u0645\u202c 'muEal~im' \"teacher\" generates 519 valid forms.", "cite_spans": [ { "start": 98, "end": 111, "text": "(Attia, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 1", "ref_id": null }, { "start": 871, "end": 897, "text": "Table 2. Although Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Morphological Guesser", "sec_num": "3" }, { "text": "We develop a finite state (Beesley and Karttunen, 2003; Hulden, 2009) morphological guesser for Arabic that can analyse unknown words with all possible clitics, morphosyntactic affixes and all relevant alteration operations, which include insertion, assimilation, and deletion. Beesley and Karttunen (2003) give some advice on how to create a basic guesser. The core idea of a guesser is to assume that a stem is composed of any arbitrary sequence of non-numeric characters, and that this stem can be prefixed and/or suffixed with a predefined set of prefixes, suffixes or clitics. The guesser marks clitic boundaries and tries to return the stem to its default unmarked form, the lemma. Due to the nondeterministic nature of the guesser, there will be a multitude of possible lemmas for each form. The Arabic FST guesser consists of three parts: a lexc file, alteration rules and an XFST compilation file. First, there is the lexc file (Figure 2) with lexicons and continuation classes for the Arabic guesser. The lexc file specifies that there is an optional conjunction, followed by an optional preposition, followed by an optional definite article before the Arabic noun. [FIGURE 2 - Snapshot of the Arabic lexc file] Second, there are the alteration rules which handle the morphological processes of assimilation and deletion. In our system there are about 130 replace rules to handle alterations that affect verbs, nouns, adjectives and function words when they undergo inflections or are attached to affixes and clitics. 
They take the form of XFST replace rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "A -> w || \"+pres\" Alphabet _ Alphabet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "The example rule indicates that 'A' changes to 'w' when the left context is '+pres' followed by a single alphabetic character and the right context is another alphabetic character. Following this rule, the verb \u202b\u0642\u0627\u0644\u202c qAl \"to say\" will change to \u202b\u064a\u0642\u0648\u0644\u202c yaqwl in the present tense form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "Third, there are the XFST compilation rules which bind the components together. They replace the multivariable words 'GUESSNOUNSTEM' and 'GUESSVERBSTEM' with the relevant alphabet using the 'substitute defined' command. The XFST commands in our guesser define a possible noun stem as any sequence of Arabic non-numeric characters between 2 and 24 characters long; a possible verb stem is between 2 and 6 characters. This word stem is surrounded by prefixes, suffixes, proclitics and enclitics. Clitics are considered as independent tokens and are separated by the '@' sign, while prefixes and suffixes are considered as morpho-syntactic features and are interpreted with tags preceded by the '+' sign. 
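As an illustration of this design, here is a minimal Python sketch of the guesser's core idea for nouns (an optional conjunction, preposition and definite article before an arbitrary stem), using Buckwalter transliteration. It is a toy re-implementation under stated assumptions, not the XFST grammar itself: it omits suffixes, enclitics and the roughly 130 alteration rules.

```python
# Toy re-implementation of the guesser's core idea for nouns: each proclitic
# slot is optional and consumed in a fixed order, mirroring the continuation
# classes of the lexc file. Clitic inventories are illustrative and partial.

CONJUNCTIONS = ("w", "f")
PREPOSITIONS = ("b", "l", "k")
DEF_ARTICLE = ("Al",)

def guess_analyses(word, min_stem=2, max_stem=24):
    """Yield candidate analyses, '@'-separating clitics from the guessed stem."""
    candidates = [([], word)]
    for slot in (CONJUNCTIONS, PREPOSITIONS, DEF_ARTICLE):
        extended = []
        for clitics, rest in candidates:
            extended.append((clitics, rest))  # leave this slot empty
            for clitic in slot:
                if rest.startswith(clitic) and len(rest) > len(clitic):
                    extended.append((clitics + [clitic], rest[len(clitic):]))
        candidates = extended
    for clitics, stem in candidates:
        if min_stem <= len(stem) <= max_stem:
            yield "@".join(clitics + [stem + "+Guess"])

for analysis in guess_analyses("wAlmswqwn"):  # wa-Al-musaw~iquwna
    print(analysis)
# wAlmswqwn+Guess / w@Almswqwn+Guess / w@Al@mswqwn+Guess
```

Even this stripped-down version is non-deterministic: every legal segmentation of the surface form yields a distinct candidate stem, which is why the full guesser must be disambiguated against MADA's output.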
Below we present the analysis of the noun \u202b\u0650\u0651\u0642\u0648\u0646\u064e\u064e\u202c \u202b\u064e\u0648\u202c \u202b\u064f\u0633\u202c \u202b\u0648\u0627\u0644\u0645\u202c wa-Al-musaw~iquwna \"and-the-marketers\", and the verb \u202b\u0646\u0627\u202c \u064f \u202b\u064f\u0630\u202c \u202b\u062e\u202c \u0652 \u202b\u064e\u0623\u202c \u202b\u064e\u064a\u202c \u202b\u0633\u202c sa-ya'xu*unA \"will-take-us\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "MADA output for wa-Al-musaw~iquwna:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "form:wAlmswqwn num:p gen:m per:na case:n asp:na mod:na vox:na pos:noun prc0:Al_det prc1:0 prc2:wa_conj prc3:0 enc0:0 stt:d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "Finite-state guesser output for wa-Al-musaw~iquwna:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "\u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c +adj\u202b+\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u202cGuess+masc+pl+nom@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c +adj\u202b+\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202cGuess+sg@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c +noun\u202b+\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u202cGuess+masc+pl+nom@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c +noun\u202b+\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202cGuess+sg@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c \u202b+\u0648\u202cconj@\u202b+\u0627\u0644\u202cdefArt@+adj\u202b+\u0645\u0633\u0648\u0642\u202cGuess+masc+pl+nom@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c \u202b+\u0648\u202cconj@\u202b+\u0627\u0644\u202cdefArt@+adj\u202b+\u0645\u0633\u0648\u0642\u0648\u0646\u202cGuess+sg@ \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c \u202b+\u0648\u202cconj@\u202b+\u0627\u0644\u202cdefArt@+noun\u202b+\u0645\u0633\u0648\u0642\u202cGuess+masc+pl+nom@ [correct match] \u202b\u0648\u0627\u0644\u0645\u0633\u0648\u0642\u0648\u0646\u202c \u202b+\u0648\u202cconj@\u202b+\u0627\u0644\u202cdefArt@+noun\u202b+\u0645\u0633\u0648\u0642\u0648\u0646\u202cGuess+sg@ \u2026 MADA output for wa-sa-ya'xu*unA:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "form:sy>x*nA num:s gen:m per:na case:na asp:na mod:i vox:a pos:verb prc0:0 prc1:0 prc2:0 prc3:0 enc0:1p_poss stt:na", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "Finite-state guesser output for wa-sa-ya'xu*unA:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "\u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c +adj\u202b+\u0633\u064a\u0623\u062e\u0630\u0646\u202cGuess+dual+nom+compound@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "+adj\u202b+\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202cGuess+sg@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c +noun\u202b+\u0633\u064a\u0623\u062e\u0630\u202cGuess+sg@\u202b+\u0646\u0627\u202cgenpron+1pers+@ \u202b\u0633\u064a\u202c \u202b\u0623\u062e\u0630\u0646\u0627\u202c 
+noun\u202b+\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202cGuess+sg@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c +verb+imp\u202b+\u0633\u064a\u0623\u062e\u0630\u202cGuess+2pers+masc+sg@\u202b+\u0646\u0627\u202cobjpron+1pers+pl@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c +verb+imp\u202b+\u0633\u064a\u0623\u062e\u0630\u0646\u202cGuess+2pers+dual@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c \u202b+\u0633\u202cfut+art@+verb+pres+pass+3pers\u202b+\u0623\u062e\u0630\u0646\u0627\u202cGuess+masc+sg@ \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c \u202b+\u0633\u202cfut+art@+verb+pres+active+3pers\u202b\u0623\u062e\u0630\u202c +Guess+masc+sg@ \u202b\u0646\u202c \u202b\u0627\u202c +objpron+1pers+pl@ [correct match] \u202b\u0633\u064a\u0623\u062e\u0630\u0646\u0627\u202c \u202b+\u0633\u202cfut+art@+verb+pres+active+3pers\u202b+\u0623\u062e\u0630\u0646\u0627\u202cGuess+masc+sg@ \u2026 For a list of 40,277 unknown word types, the morphological guesser produces an average of 12.6 possible interpretations per word. This is highly non-deterministic when compared to the finite state morphological analyser (Attia et al., 2011), which has an average of 2.1 solutions per known word. We also note that 97 % of the gold lemmas in our test set are found among the finite-state guesser's choices, which indicates the high performance of the guesser.", "cite_spans": [ { "start": 697, "end": 717, "text": "(Attia et al., 2011)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Proclitics", "sec_num": null }, { "text": "To evaluate our methodology we create a manually annotated gold standard test suite of randomly selected surface form types as mentioned in Section 2. For these surface forms, the gold lemma and part of speech are manually provided. In addition, a human annotator indicates a preference on whether or not to include the entry in a dictionary, that is, whether a lemmatized form makes a valid dictionary entry or not. We noticed that most of the forms marked by the annotator as not fitting for inclusion in a dictionary were proper nouns, misspelled words, colloquial words, and words that form a part of a multiword expression. By contrast, nouns, verbs, adjectives, and proper nouns with significantly high frequency were marked for inclusion in the lexical database. It is to be mentioned that proper nouns in Arabic are not orthographically distinguished from other words, i.e., there is no capitalization in Arabic as is the case in European languages. This feature of lexicographic preference helps to evaluate our lemma weighting algorithm discussed in Section 4.2. The size of the test suite is 1,310 word form types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing and Evaluation", "sec_num": "4" }, { "text": "We observe that proper nouns are the most frequent category (45 %) among the unknown word types in the data, and they also cover about 61 % of the unknown token instances in the gold annotated dataset. The POS distribution of the unknown token types of our annotated data is shown in Table 3. As expected, most unknown words are open class words: proper names, nouns, adjectives, and, to a lesser degree, verbs. ", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Testing and Evaluation", "sec_num": "4" }, { "text": "In the evaluation experiment we measure accuracy calculated as the number of correct tags divided by the count of all tags. 
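As a minimal illustration of this measure, the sketch below computes accuracy over gold/system pairs; the pair layout is a hypothetical rendering of the gold standard, not its actual file format.

```python
# Sketch of the evaluation measure: accuracy is the number of correct tags
# divided by the count of all tags. The gold/system pairs below are invented
# for illustration only.

def accuracy(pairs):
    correct = sum(1 for gold, system in pairs if gold == system)
    return correct / len(pairs)

lemma_pairs = [
    ("qAl", "qAl"),       # lemma correctly recovered
    ("HujuwzAt", "Hjz"),  # plurale tantum wrongly reduced to a singular
    ("nAs", "nAs"),
]
print(f"lemma accuracy: {accuracy(lemma_pairs):.0%}")  # 67%
```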
The baseline is given by the assumption that new words appear in their base form, i.e., we do not need to lemmatize them. The baseline accuracy is 45 %. The POS tagging baseline proposes the most frequent tag (proper name) for all unknown words. In our test data, this baseline also stands at 45 % accuracy. We notice that MADA POS tagging accuracy for unknown words is unexpectedly low (60 %) as shown in Table 4. We use Voted POS Tagging, that is, we choose the POS tag most frequently assigned in the data to a given lemma. This method improves the tagging results significantly (Table 4).", "cite_spans": [], "ref_spans": [ { "start": 515, "end": 522, "text": "Table 4", "ref_id": null }, { "start": 689, "end": 698, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Lemmatization", "sec_num": "4.1" }, { "text": "As for the lemmatization process, our first experiment in the pipeline-based lemmatization approach obtains a higher score (54 %) than the baseline (45 %) as shown in Table 5. Examining the data further, we notice that when a proper noun is prefixed with the definite article \"Al\", the definite article is not stripped off in the gold annotation and is considered as part of the lemma, such as \u202b\u0627\u0644\u0642\u0634\u064a\u0631\u064a\u202c 'Al-qu$ayriy'. In MADA morpho-syntactic tagging, the definite article is considered as a clitic and not part of the lemma. When this difference is ignored in the second experiment, the lemmatization accuracy increases from 54 % to 63 %. A more detailed error analysis will help devise better heuristics to increase the accuracy of the pipeline-based lemmatization. For example, in the gold annotation some regular feminine and masculine plural forms are considered as pluralia tantum, while in the automatic lemmatization they are reduced to their singular forms, such as \u202b\u062d\u062c\u0648\u0632\u0627\u062a\u202c HujuwzAt \"bookings\". The test results indicate significant improvements over the baseline. However, we expect that substantial further improvements can be obtained through more extensive error analysis and refined heuristics.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Lemmatization", "sec_num": "4.1" }, { "text": "We create a weighting algorithm for ranking and prioritizing unknown words in Arabic so that important words that are valid for inclusion in a lexicon are pushed up the list and less interesting words (from a lexicographic point of view) are pushed down. This is meant to facilitate the effort of manual revision by making sure that the top part of the stack contains the words with highest priority.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Lemma Weighting", "sec_num": "4.2" }, { "text": "In our case we have 40,277 unknown token types. After lemmatization they are reduced to 18,399 types (that is, a 54 % reduction of the surface forms). This number is still too big for manual validation. In order to address this issue we devise a weighting algorithm for ranking so that the top n words include the most lexicographically relevant words. We call surface forms that share the same lemma 'sister forms', and we call the lemma that they share the 'mother lemma'. The weighting algorithm is based on three criteria: the number of sister forms, the cumulative frequency of the sister forms, and a POS factor, as sketched in the code below. 
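The following Python sketch makes the ranking concrete; it implements the POS factor and the Word Weight formula spelled out just below. The example entries, their POS labels and their sister-form frequencies are invented for illustration and are not taken from the extracted lexicon.

```python
# Sketch of the lemma-weighting computation described in this section. The
# constants encode the POS factor and the Word Weight formula given below;
# the entries (mother lemma, POS, sister-form frequencies) are invented.

POS_FACTOR = {"verb": 50, "noun": 30, "adj": 30, "proper_noun": 0}

def word_weight(pos, sister_freqs):
    """((number of sister forms * 800) + cumulative frequency) / 2 + POS factor."""
    return (len(sister_freqs) * 800 + sum(sister_freqs)) / 2 + POS_FACTOR[pos]

entries = [
    ("AwbAmA", "proper_noun", [8000]),            # one very frequent form
    ("tHAwr", "verb", [1800, 2400, 1500, 1300]),  # four less frequent sister forms
]
ranked = sorted(entries, key=lambda e: word_weight(e[1], e[2]), reverse=True)
for lemma, pos, freqs in ranked:
    print(lemma, word_weight(pos, freqs))
# tHAwr 5150.0 -- the verb outranks the more frequent proper noun (4400.0)
```

As the toy output shows, a lemma attested in several sister forms can outrank a raw-frequency-heavy proper noun, which is the intended effect of the criteria.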
The POS factor gives 50 extra points to verbs, 30 to nouns and adjectives, and nothing to proper nouns. This is meant to penalize proper nouns due to their high frequency, which is disproportionate to that of other categories. The parameters of the weighting algorithm have been tuned through several rounds of experimentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Lemma Weighting", "sec_num": "4.2" }, { "text": "Word Weight = ((number of sister forms * 800) + cumulative sum of frequencies of sister forms) / 2 + POS factor", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Lemma Weighting", "sec_num": "4.2" }, { "text": "We use the gold annotated data for the evaluation of the lemma weighting criteria, as shown in Table 6. We notice that the combined criteria give the best balance between increasing the number of lexicographically-relevant words in the top 100 words and reducing the number of lexicographically-relevant words in the bottom 100 words. As the corpus is composed mainly of news articles, we assume that the distribution of proper nouns is artificial and arbitrary, as it depends, to a large extent, on the specific date and time of an event or series of events that occupies the news for a certain (short-term or long-term) duration. For example, as Table 7 shows, Obama and Sarkozy ranked at the top of the list of unknown words, but now that Sarkozy is no longer the French president and the fate of Obama will be determined in the next presidential election in America, it is questionable whether these names will maintain the same level of frequency. This is why verbs, adjectives and nouns constitute the core of the language lexicon, while proper nouns are, to some extent, temporary and transient, and the frequency of their use tends to shift from time to time.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 102, "text": "Table 6", "ref_id": null }, { "start": 649, "end": 656, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Lemma Weighting", "sec_num": "4.2" }, { "text": "We have developed a methodology for automatically updating an Arabic dictionary by extracting unknown words from data and lemmatizing them in order to relate multiple surface forms to their canonical underlying representation, using a finite-state guesser and a machine-learning tool for disambiguation. We have developed a weighting mechanism for simulating a human decision on whether or not to include new words in a general-domain lexical database. We have shown the feasibility of using a highly non-deterministic finite-state guesser in a practical application. Out of a word list of 40,277 unknown words we created a lexicon of 18,399 lemmatized, POS-tagged and weighted entries. We have made our unknown word lexicon available as a free open source resource (http://arabic-unknowns.sourceforge.net/).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null } ], "back_matter": [ { "text": "This research is funded by the Irish Research Council for Science, Engineering and Technology (IRCSET), the UAE National Research Foundation (NRF) (Grant No. 0514/2011), and Science Foundation Ireland (Grant No. 
07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised Lexicon-Based Resolution of Unknown Words for Full Morphological Analysis", "authors": [ { "first": "M", "middle": [], "last": "Adler", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "D", "middle": [], "last": "Gabay", "suffix": "" }, { "first": "M", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adler, M., Goldberg, Y., Gabay, D. and Elhadad, M. (2008). Unsupervised Lexicon-Based Resolution of Unknown Words for Full Morphological Analysis. In: Proceedings of Association for Computational Linguistics (ACL), Columbus, Ohio.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An Ambiguity-Controlled Morphological Analyzer for Modern Standard Arabic Modelling Finite State Networks", "authors": [ { "first": "M", "middle": [], "last": "Attia", "suffix": "" } ], "year": 2006, "venue": "Challenges of Arabic for NLP/MT Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Attia, M. (2006). An Ambiguity-Controlled Morphological Analyzer for Modern Standard Arabic Modelling Finite State Networks. In: Challenges of Arabic for NLP/MT Conference, The British Computer Society, London, UK.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Handling Unknown Words in Statistical Latent-Variable Parsing Models for Arabic", "authors": [ { "first": "Mohammed", "middle": [], "last": "Attia", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Deirdre", "middle": [], "last": "Hogan", "suffix": "" }, { "first": "Joseph", "middle": [ "Le" ], "last": "Roux", "suffix": "" }, { "first": "Lamia", "middle": [], "last": "Tounsi", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2010, "venue": "First Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2010), NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Attia, Mohammed, Jennifer Foster, Deirdre Hogan, Joseph Le Roux, Lamia Tounsi and Josef van Genabith. (2010). 'Handling Unknown Words in Statistical Latent-Variable Parsing Models for Arabic, English and French'. First Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2010), NAACL HLT. Los Angeles, CA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An Open-Source Finite State Morphological Transducer for Modern Standard Arabic. International Workshop on Finite State Methods and Natural Language Processing (FSMNLP)", "authors": [ { "first": "Mohammed", "middle": [], "last": "Attia", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" }, { "first": "Lamia", "middle": [], "last": "Tounsi", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2011, "venue": "Josef van Genabith", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Attia, Mohammed, Pavel Pecina, Lamia Tounsi, Antonio Toral, Josef van Genabith. (2011). An Open-Source Finite State Morphological Transducer for Modern Standard Arabic. International Workshop on Finite State Methods and Natural Language Processing (FSMNLP). Blois, France.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Finite-State Morphological Analysis and Generation of Arabic at Xerox Research: Status and Plans in 2001", "authors": [ { "first": "K", "middle": [ "R" ], "last": "Beesley", "suffix": "" } ], "year": 2001, "venue": "The ACL 2001 Workshop on Arabic Language Processing: Status and Prospects", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beesley, K. R. (2001). Finite-State Morphological Analysis and Generation of Arabic at Xerox Research: Status and Plans in 2001. In: The ACL 2001 Workshop on Arabic Language Processing: Status and Prospects, Toulouse, France.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Finite State Morphology: CSLI studies in computational linguistics", "authors": [ { "first": "K", "middle": [ "R" ], "last": "Beesley", "suffix": "" }, { "first": "L", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beesley, K. R., and Karttunen, L. (2003). Finite State Morphology: CSLI studies in computational linguistics. Stanford, Calif.: CSLI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Buckwalter Arabic Morphological Analyzer (BAMA) Version 2.0. Linguistic Data Consortium (LDC) catalogue number LDC2004L02", "authors": [ { "first": "T", "middle": [], "last": "Buckwalter", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buckwalter, T. (2004). Buckwalter Arabic Morphological Analyzer (BAMA) Version 2.0. Linguistic Data Consortium (LDC) catalogue number LDC2004L02, ISBN 1-58563-324-0.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A First Dictionary of Linguistics and Phonetics", "authors": [ { "first": "D", "middle": [], "last": "Crystal", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crystal, D. (1980). A First Dictionary of Linguistics and Phonetics. London: Deutsch.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks", "authors": [ { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Kadri", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Human Language Technology-North American Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diab, Mona, Kadri Hacioglu and Daniel Jurafsky. (2004). Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. Proceedings of Human Language Technology-North American Association for Computational Linguistics (HLT-NAACL).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On lemmatization in Arabic, A formal definition of the Arabic entries of multilingual lexical databases", "authors": [ { "first": "J", "middle": [], "last": "Dichy", "suffix": "" } ], "year": 2001, "venue": "Workshop on Arabic Language Processing: Status and Prospects", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dichy, J. (2001). On lemmatization in Arabic, A formal definition of the Arabic entries of multilingual lexical databases. ACL/EACL 2001 Workshop on Arabic Language Processing: Status and Prospects. Toulouse, France.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Roots & Patterns vs. Stems plus Grammar-Lexis Specifications: on what basis should a multilingual lexical database centred on Arabic be built? In: The MT-Summit IX workshop on Machine Translation for Semitic Languages", "authors": [ { "first": "J", "middle": [], "last": "Dichy", "suffix": "" }, { "first": "A", "middle": [], "last": "Farghaly", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dichy, J., and Farghaly, A. (2003). Roots & Patterns vs. Stems plus Grammar-Lexis Specifications: on what basis should a multilingual lexical database centred on Arabic be built? In: The MT-Summit IX workshop on Machine Translation for Semitic Languages, New Orleans.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Machine Learning of Morphosyntactic Structure: Lemmatizing Unknown Slovene Words", "authors": [ { "first": "T", "middle": [], "last": "Erjavec", "suffix": "" }, { "first": "S", "middle": [], "last": "D\u017eerosk", "suffix": "" } ], "year": 2004, "venue": "", "volume": "18", "issue": "", "pages": "17--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erjavec, T., and D\u017eerosk, S. (2004). Machine Learning of Morphosyntactic Structure: Lemmatizing Unknown Slovene Words. Applied Artificial Intelligence, 18:17-41.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using Sublexical Translations to Handle the OOV Problem in MT", "authors": [ { "first": "Chung-Chi", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Ho-Ching Yen", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of The Ninth Conference of the Association for Machine Translation in the Americas (AMTA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Chung-chi, Ho-ching Yen and Jason S. Chang. (2010). Using Sublexical Translations to Handle the OOV Problem in MT. In Proceedings of The Ninth Conference of the Association for Machine Translation in the Americas (AMTA).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Foma: a finite-state compiler and library", "authors": [ { "first": "M", "middle": [], "last": "Hulden", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL '09)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hulden, M. (2009). Foma: a finite-state compiler and library. In: Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL '09). Stroudsburg, PA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Computational Nonlinear Morphology: With Emphasis on Semitic Languages", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Kiraz", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiraz, G. A. (2001). Computational Nonlinear Morphology: With Emphasis on Semitic Languages. Cambridge University Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Probabilistic Model for Guessing Base Forms of New Words by Analogy", "authors": [ { "first": "K", "middle": [], "last": "Lind\u00e9n", "suffix": "" } ], "year": 2008, "venue": "CICling-2008, 9th International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "106--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lind\u00e9n, K. (2008). A Probabilistic Model for Guessing Base Forms of New Words by Analogy. In CICling-2008, 9th International Conference on Intelligent Text Processing and Computational Linguistics, Haifa, Israel, pp. 106-116.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "LDC Standard Arabic Morphological Analyzer (SAMA) v. 3.1. LDC Catalog No. LDC2010L01", "authors": [ { "first": "M", "middle": [], "last": "Maamouri", "suffix": "" }, { "first": "D", "middle": [], "last": "Graff", "suffix": "" }, { "first": "B", "middle": [], "last": "Bouziri", "suffix": "" }, { "first": "S", "middle": [], "last": "Krouna", "suffix": "" }, { "first": "S", "middle": [], "last": "Kulick", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "1--58563", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maamouri, M., Graff, D., Bouziri, B., Krouna, S., and Kulick, S. (2010). LDC Standard Arabic Morphological Analyzer (SAMA) v. 3.1. LDC Catalog No. LDC2010L01. ISBN: 1-58563-555-3.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Arabic Part of Speech Tagging", "authors": [ { "first": "Emad; Sandra", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "", "middle": [], "last": "K\u00fcbler", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed, Emad; Sandra K\u00fcbler (2010). Arabic Part of Speech Tagging. Proceedings of LREC 2010, Valletta, Malta.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Arabic Gigaword Fourth Edition. LDC Catalog No. LDC2009T30", "authors": [ { "first": "R", "middle": [], "last": "Parker", "suffix": "" }, { "first": "D", "middle": [], "last": "Graff", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Kong", "suffix": "" }, { "first": "K", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "1--58563", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parker, R., Graff, D., Chen, K., Kong, J., and Maeda, K. (2009). Arabic Gigaword Fourth Edition. LDC Catalog No. LDC2009T30. ISBN: 1-58563-532-4.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking", "authors": [ { "first": "R", "middle": [], "last": "Roth", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "C", "middle": [], "last": "Rudin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roth, R., Rambow, O., Habash, N., Diab, M., and Rudin, C. (2008). Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking. In: Proceedings of Association for Computational Linguistics (ACL), Columbus, Ohio.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Looking Up: An Account of the COBUILD Project in Lexical Computing", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Sinclair", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinclair, J. M. (ed.). (1987). Looking Up: An Account of the COBUILD Project in Lexical Computing. London: Collins.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "html": null, "content": "
ABSTRACT IN ARABIC: [the Arabic-language version of the abstract; the extracted text is garbled beyond recovery and is omitted here]", "num": null, "text": "KEYWORDS: Arabic, unknown words, out of vocabulary words, floating dictionary, lexical enrichment, lexical extension. KEYWORDS IN ARABIC: [garbled in extraction; omitted]" }, "TABREF3": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Proclitics, enclitics, prefixes and suffixes with Arabic nouns" }, "TABREF8": { "type_str": "table", "html": null, "content": "
Lexicographically-relevant words | In top 100 | In bottom 100
relying on Frequency alone (baseline) | 63 | 50
relying on number of sister forms * 800 | 87 | 28
relying on POS factor | 58 | 30
using combined criteria | 78 | 15
", "num": null, "text": "Evaluation of lemma weighting and ranking. Table 7 shows a sample of the entries in the unknown words lexicon. The list includes a spectrum of the different word categories such as proper nouns, adjectives, nouns, broken plural and feminine plural forms, as well as verbs. Sample entries selected from the unknown words lexicon." } } } }