{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:00.796672Z" }, "title": "Employing Wikipedia as a Resource for Named Entity Recognition in Morphologically Complex Under-Resourced Languages", "authors": [ { "first": "Aravind", "middle": [], "last": "Krishnan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Stefan", "middle": [], "last": "Ziehe", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of G\u00f6ttingen", "location": { "country": "Germany" } }, "email": "stefan.ziehe@cs.uni-goettingen.de" }, { "first": "Franziska", "middle": [], "last": "Pannach", "suffix": "", "affiliation": { "laboratory": "", "institution": "G\u00f6ttingen Centre for Digital Humanities", "location": { "country": "Germany" } }, "email": "franziska.pannach@uni-goettingen.de" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of G\u00f6ttingen", "location": { "country": "Germany" } }, "email": "caroline.sporleder@cs.uni-goettingen.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a novel approach for rapid prototyping of named entity recognisers through the development of semi-automatically annotated data sets. We demonstrate the proposed pipeline on two under-resourced agglutinating languages: the Dravidian language Malayalam and the Bantu language isiZulu. Our approach is weakly supervised and bootstraps training data from Wikipedia and Google Knowledge Graph. Moreover, our approach is relatively language independent and can consequently be ported quickly (and hence cost-effectively) from one language to another, requiring only minor language-specific tailoring.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We propose a novel approach for rapid prototyping of named entity recognisers through the development of semi-automatically annotated data sets. 
We demonstrate the proposed pipeline on two under-resourced agglutinating languages: the Dravidian language Malayalam and the Bantu language isiZulu. Our approach is weakly supervised and bootstraps training data from Wikipedia and Google Knowledge Graph. Moreover, our approach is relatively language independent and can consequently be ported quickly (and hence cost-effectively) from one language to another, requiring only minor language-specific tailoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Named entity recognition (NER) is the task of identifying proper names and assigning them to one of several named entity (NE) classes, such as PERSON (PER), LOCATION (LOC) or ORGANISATION (ORG) , which is a crucial processing step for many NLP tasks, but also for many applications in the digital humanities where information about the entities involved (e.g. names of emperors or archaeological sites) is often particularly important. While state-of-the-art systems obtain good results for standard NE inventories and general purpose English (Chiu and Nichols, 2016) , annotated data sets for the development of named entity taggers are not readily available for most of the world's languages. 1 In this paper, we focus on semi-automatically generating annotated data and bootstrapping NE recognisers for under-resourced languages (cf. Krauwer (2003) ), i.e., languages for which manually annotated data as well as pre-processing tools, such as part-of-speech taggers, are typically hard to come by. To this end, we propose a weakly supervised approach that bootstraps the training set from Wikipedia (in the target language) and Google Knowledge Graph (in English), requiring no manual annotation and no pre-processing apart from the language-specific tweaking of our matching heuristics. This approach is therefore in principle suitable for any language for which Wikipedia articles exist. 
2 Because the manual effort is limited, systems can be quickly ported to new languages, while still obtaining reasonable results.", "cite_spans": [ { "start": 157, "end": 165, "text": "LOCATION", "ref_id": null }, { "start": 166, "end": 171, "text": "(LOC)", "ref_id": null }, { "start": 175, "end": 187, "text": "ORGANISATION", "ref_id": null }, { "start": 188, "end": 193, "text": "(ORG)", "ref_id": null }, { "start": 541, "end": 565, "text": "(Chiu and Nichols, 2016)", "ref_id": "BIBREF8" }, { "start": 693, "end": 694, "text": "1", "ref_id": null }, { "start": 835, "end": 849, "text": "Krauwer (2003)", "ref_id": "BIBREF16" }, { "start": 1391, "end": 1392, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We demonstrate this by developing the system for Malayalam and then porting it to isiZulu. These two languages were chosen because they are agglutinating and morphologically complex, making the task considerably more challenging than for many Indo-European languages where NEs are only minimally inflected. While our target languages are both agglutinating, they are also structurally quite different in other respects and exhibit different degrees of \"under-resourcing\", with noticeably fewer resources being available for isiZulu (see Sect. 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Wikipedia has been employed for NER in three main ways: In a monolingual setting, early studies used it to extract Gazetteer lists which were then used as features in (typically supervised) NER systems. One of the first studies taking this approach was by Toral and Mu\u00f1oz (2006) , who extract Gazetteers by matching the first sentence of a Wikipedia article heuristically against the WordNet (Fellbaum, 1998) noun hierarchy to identify the category of the entity described. 
This was followed by a number of similar approaches (Kazama and Torisawa, 2007; Ratinov and Roth, 2009; Radford et al., 2015) .", "cite_spans": [ { "start": 256, "end": 278, "text": "Toral and Mu\u00f1oz (2006)", "ref_id": "BIBREF35" }, { "start": 392, "end": 408, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 526, "end": 553, "text": "(Kazama and Torisawa, 2007;", "ref_id": "BIBREF15" }, { "start": 554, "end": 577, "text": "Ratinov and Roth, 2009;", "ref_id": "BIBREF28" }, { "start": 578, "end": 599, "text": "Radford et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Going one step further, some researchers used Wikipedia not only for extracting Gazetteers but also for bootstrapping annotated training data. For example, Nothman et al. (2008) exploit hyperlinks to annotate the sentences containing them with category information, which is extracted from the article the hyperlink links to. As not all mentions of an entity in an article are hyperlinked, they extend the data set by finding verbatim repetitions of the hyperlink's anchor text in the article. Finally, they use the data to train an NE tagger. The system requires hand-labelling of seed data that maps information extracted from articles to NE classes.", "cite_spans": [ { "start": 156, "end": 177, "text": "Nothman et al. (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Wikipedia has also been used in a multilingual setting to obtain NE taggers for languages other than English, e.g. by exploiting cross-lingual links between articles (Richman and Schone, 2008; Bhagavatula et al., 2012; Pan et al., 2017) . This approach has also been applied to under-resourced languages (Littell et al., 2016) . Ni and Florian (2016) go one step further and construct entity type mappings for the English Wikipedia before projecting across Wikipedia language links. 
Bouamor et al. (2013) propose employing Wikipedia as a resource for creating domain-specific lexicons for machine translation. They demonstrate their approach for English-French and English-Romanian translation tasks. Mayhew et al. (2017) combine lexicon-based translation of training data from a source to a target language with features generated from Wikipedia and show that this approach can be applied to under-resourced languages.", "cite_spans": [ { "start": 166, "end": 192, "text": "(Richman and Schone, 2008;", "ref_id": "BIBREF29" }, { "start": 193, "end": 218, "text": "Bhagavatula et al., 2012;", "ref_id": "BIBREF2" }, { "start": 219, "end": 236, "text": "Pan et al., 2017)", "ref_id": "BIBREF26" }, { "start": 304, "end": 326, "text": "(Littell et al., 2016)", "ref_id": "BIBREF18" }, { "start": 329, "end": 350, "text": "Ni and Florian (2016)", "ref_id": "BIBREF22" }, { "start": 483, "end": 504, "text": "Bouamor et al. (2013)", "ref_id": "BIBREF5" }, { "start": 701, "end": 721, "text": "Mayhew et al. (2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Studies that address NER for our target languages are very limited. To our knowledge the first NER system for Malayalam was proposed by Bindu and Idicula (2011) , who use supervised machine learning utilising a variety of features complemented with a finite-state automaton to deal with complex words. Jayan et al. (2013) propose a hybrid approach that combines rules with supervised machine learning. Devi et al. (2016) tackle named entity extraction from social media and combine supervised machine learning (SVMs) with skipgram features. Shruthi and Pranav (2016) propose another supervised approach based on the TnT tagger (Brants, 2002) and maximum entropy models. 
A neural network approach is proposed by Ajees and Idicula (2018), who use word embeddings of context words and morphs of the target word as features. A similar system but with a different neural architecture (RNN-LSTM) has also been proposed (Sreeja and Pillai, 2020) . To our knowledge, the only NER system for isiZulu was proposed by Eiselen (2016) , who used linear-chain Conditional Random Fields (CRFs) for the classification of named entities. The features included gazetteer lists and graphemic information (capitalization, punctuation, numerals).", "cite_spans": [ { "start": 136, "end": 160, "text": "Bindu and Idicula (2011)", "ref_id": "BIBREF4" }, { "start": 402, "end": 420, "text": "Devi et al. (2016)", "ref_id": "BIBREF11" }, { "start": 541, "end": 566, "text": "Shruthi and Pranav (2016)", "ref_id": "BIBREF30" }, { "start": 627, "end": 641, "text": "(Brants, 2002)", "ref_id": "BIBREF7" }, { "start": 711, "end": 735, "text": "Ajees and Idicula (2018)", "ref_id": "BIBREF0" }, { "start": 912, "end": 937, "text": "(Sreeja and Pillai, 2020)", "ref_id": "BIBREF32" }, { "start": 1006, "end": 1020, "text": "Eiselen (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We test our system on two agglutinating languages: Malayalam and isiZulu. We hypothesise that inflection and agglutination will make the task particularly challenging, as one token can correspond to several linguistic words (see Sec. 3.1 and 3.2). However, Malayalam and isiZulu also differ in several aspects: They use different writing systems (Brahmic vs. Latin) and while the former tends to make extensive use of suffixes, the latter tends to favour prefixes to encode grammatical information. From a practical perspective, while both languages are under-resourced, isiZulu is so to a greater extent, in particular, its Wikipedia version is more than an order of magnitude smaller (see Sec. 4). 
We thus believe that these two languages pose sufficiently heterogeneous use cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Target Languages: Malayalam and isiZulu", "sec_num": "3" }, { "text": "Malayalam is the official language of the Indian state of Kerala. A noun in Malayalam can be suffixed in at least 7 different ways according to the case and grammatical category employed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Malayalam", "sec_num": "3.1" }, { "text": "For example, \"Kochi\" (tIn\u00bbo) is a place in Kerala. tIn\u00bbobo\u00b2 means \"inside/in Kochi\". tIn\u00bbobo\u00b2\\o\u00c1qw means from Kochi and tIn\u00bbobqtS means of Kochi. The word can be inflected in various other ways as well. An example of suffixing within a sentence is depicted in (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Malayalam", "sec_num": "3.1" }, { "text": "( 1) Agglutination is optional in Malayalam. Therefore, a word has the option of merging with another consecutive word, producing a new word in the process. For example, tIn\u00bbobo\u00b2 Bbocq\u00c1q (tIn\u00bbobo\u00b2: in Kochi, Bbocq\u00c1q: was) translates to was in Kochi. The two words can be optionally combined into a new token: tIn\u00bbobodnbocq\u00c1q (was in Kochi). Grammatically speaking, the split version and the agglutinated version can be used interchangeably in a sentence. This increases the complexity of token matching and dictionary generation significantly. Furthermore, unlike languages written in Latin script, Malayalam does not distinguish between upper and lower case in its writing system, hence casing cannot be used as a cue for named entity recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Malayalam", "sec_num": "3.1" }, { "text": "Although it is an under-resourced language, the presence of Malayalam in the form of articles and data repositories on the internet has been growing steadily over the years. 
It has featured in a limited number of NLP tasks, including morphological analysis (Bhavukam et al., 2018) , POS tagging (Akhil et al., 2020) and NER (Ajees and Idicula, 2018) . However, many studies use small locally generated data sets (Nambiar et al., 2019) or domain specific data sets (Kumar et al., 2019) , (Devi et al., 2016) , which usually are not freely available.", "cite_spans": [ { "start": 257, "end": 280, "text": "(Bhavukam et al., 2018)", "ref_id": "BIBREF3" }, { "start": 295, "end": 315, "text": "(Akhil et al., 2020)", "ref_id": "BIBREF1" }, { "start": 324, "end": 349, "text": "(Ajees and Idicula, 2018)", "ref_id": "BIBREF0" }, { "start": 412, "end": 434, "text": "(Nambiar et al., 2019)", "ref_id": "BIBREF21" }, { "start": 464, "end": 484, "text": "(Kumar et al., 2019)", "ref_id": "BIBREF17" }, { "start": 487, "end": 506, "text": "(Devi et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Malayalam", "sec_num": "3.1" }, { "text": "IsiZulu is the language of the Zulu people in Southern Africa. It is spoken by approximately 10.6 million people (Taljard and Bosch, 2006) , mainly in the eastern part of South Africa and Mozambique. IsiZulu is an agglutinating, conjunctively-written language and belongs to the Bantu languages (Nguni sub-branch) (Taljard and Bosch, 2006) . As is characteristic for Bantu languages, isiZulu uses noun classes, e.g. dedicated classes for nouns describing humans in singular or plural.", "cite_spans": [ { "start": 113, "end": 138, "text": "(Taljard and Bosch, 2006)", "ref_id": "BIBREF34" }, { "start": 313, "end": 338, "text": "(Taljard and Bosch, 2006)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "IsiZulu", "sec_num": "3.2" }, { "text": "Certain natural language processing tasks can be very challenging or almost infeasible to solve for languages such as isiZulu. 
For instance, due to the nature of isiZulu concords, prefixes and infixes 3 , sentences might consist of ambiguous words, as in Example (2). Another characteristic that isiZulu shares with other conjunctive languages is the use of capitalization inside a word, which can be an indicator of a named entity, e.g. eGoli -in/from Johannesburg, as in (3). Cultural naming conventions are another challenge for NER (Eiselen, 2016) . For example, Nkosi means king, lord or chief and can be either a first or a last name, as for the South African rugby players S'busiso Nkosi and Nkosi Nofuma. While isiZulu is not an endangered language 4 , there is a lack of large digital textual resources, such as newspaper archives, and consequently also of NLP tools. The South African Centre for Digital Language Resources (SADiLaR) is one of the main drivers of language development in South Africa. Besides their teaching and knowledge sharing efforts, SADiLaR also collects resources for the South African languages and makes them available through their website 5 . The SADiLaR repository currently lists 49 language resources, tools and corpora for isiZulu.", "cite_spans": [ { "start": 536, "end": 551, "text": "(Eiselen, 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "IsiZulu", "sec_num": "3.2" }, { "text": "For bootstrapping training data, we utilise the Malayalam and isiZulu Wikipedias. The former is significantly larger (65,000 vs. 2,701 articles as of June 2020) and therefore also gives rise to a larger training set. In order to find appropriate named entity tags for Wikipedia articles (see Sect. 5.2) we employ the Google Knowledge Graph (GKG) (Singhal, 2012) . 
We test our system on two external data sets for Malayalam (ARNEKT and CUSAT) and one for isiZulu (NCHLT II) as well as part of our bootstrapped data:", "cite_spans": [ { "start": 346, "end": 361, "text": "(Singhal, 2012)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Resources", "sec_num": "4" }, { "text": "ARNEKT IECSIL FIRE 2018 NER Dataset This corpus was compiled from the abstracts and info-box properties from DBpedia for the (IECSIL) shared task (Hullathy Balakrishnan et al., 2018). CUSAT NER Dataset This is a manually annotated NER data set developed by CUSAT. 6 It is based on the CUSAT POS tagged data set for Malayalam (Ajees and Idicula, 2018) . About 200,000 words from \"internet texts\" were manually annotated. The POS tags were ignored and the data was cleaned to remove special characters. The data set consists of 190,265 tokens overall, with 1,864 PER, 1,035 LOC, and 496 ORG entities. It is thus considerably smaller than the ARNEKT data set.", "cite_spans": [ { "start": 156, "end": 181, "text": "Balakrishnan et al., 2018", "ref_id": "BIBREF14" }, { "start": 262, "end": 263, "text": "6", "ref_id": null }, { "start": 323, "end": 348, "text": "(Ajees and Idicula, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Resources", "sec_num": "4" }, { "text": "NCHLT II Dataset This isiZulu data set consists of South African governmental texts, which are manually annotated with named entities (Eiselen, 2016) , containing 5,024 PER, 3,872 LOC, and 5,039 ORG, 1,8224 MISC (i.e. other entity classes), and 169,393 OUT (non-entities) tokens. 
For evaluation, we merge the latter two classes into OTHER.", "cite_spans": [ { "start": 134, "end": 149, "text": "(Eiselen, 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Resources", "sec_num": "4" }, { "text": "WikiML and WikiZu Apart from ARNEKT, the above data sets come from different domains than our training data (Wikipedia). Hence, testing on them can be seen as an out-of-domain lower bound evaluation of our system. For comparison, we therefore also test on a 10% portion of our Wikipedia data sets (see Sect. 5). This constitutes an upper bound as these data sets are from the same domain as the training data but are labelled automatically in a fashion identical to labelling the training data, which might lead to overly optimistic results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Resources", "sec_num": "4" }, { "text": "As our focus is on under-resourced languages, we do not assume that a manually labelled training set is available. Instead we bootstrap from Wikipedia and GKG. Utilising Wikipedia has a number of advantages: First, as it is community-driven, many under-resourced languages have a version of Wikipedia. Second, Wikipedia articles cover a wide range of subjects and often refer to named entities. Third, Wikipedia has a number of features that help with bootstrapping entity labels (see Sect. 2). Finally, it has been shown that additional training data bootstrapped from Wikipedia can also improve the performance of taggers trained on other sources, especially if they are applied out-of-domain (Nothman et al., 2009) .", "cite_spans": [ { "start": 695, "end": 717, "text": "(Nothman et al., 2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Bootstrapping the Training Data", "sec_num": "5" }, { "text": "We employ a 4-step pipeline to bootstrap NE labelled data (Fig. 
1): First, we extract a list of titles from Wikipedia dumps in the target language. Second, we use the Wikipedia language links to look up their English counterparts. Third, we employ the GKG to extract candidates for named entity tags. Finally, we use the title list to annotate Wikipedia articles. The distribution of the different NE tags for both data sets is shown in ", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "(Fig. 1)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Bootstrapping the Training Data", "sec_num": "5" }, { "text": "We compile a list of all article titles from the Wikipedia dump of the target language 7 and preprocess it by removing all entries that do not contain at least one character in the target language. This removes titles entirely composed of numbers, special characters and characters from other languages. Duplicate titles are also removed. The title list includes titles which share the primary token, but contain descriptors in brackets to distinguish them, for example DBobn\u00b1\u00bb: Unniyarcha and DBobn\u00b1\u00bb (Nd\u00bbo}Xw): Unniyarcha (Film), where the descriptor helps distinguish the person from the movie. Descriptors are preserved, because they are vital when annotating the title with an NE tag. In order to assign titles their respective NE tags, we query each title in the title list through the GKG, which associates the search result with a tag similar to entity tags used in NER systems. For the purpose of this study, the tags have been limited to PER, LOC and ORG, since they are the most widely employed entity types. Entities that do not fall into these categories are labelled OTHER. As the GKG accepts only English queries, we need to translate (and transliterate for Malayalam) titles from the target languages. 
We exploit the multilingualism in Wikipedia to map titles from the target language to their respective counterparts in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creation of the Title Lists", "sec_num": "5.1" }, { "text": "The GKG makes use of different sources when producing tags and will generate a ranked list of (possibly different) tags for each query. We consider only the top three of these. If one of the three named entity tags appears in this list, it is assigned to the respective title, with priority being given to the higher-ranking source. If the tags generated by the GKG do not contain any of our named entities, the title is annotated with the tag OTHER (i.e. no named entity or a named entity belonging to a different category such as DATE). We then automatically annotate the text of each Wikipedia article, assigning each token one of three NE tags (PER, LOC, ORG) or the tag OTHER. As illustrated in Figure 2 , we perform two \"sweeps\":", "cite_spans": [], "ref_spans": [ { "start": 700, "end": 708, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Creation of the Title Lists", "sec_num": "5.1" }, { "text": "The first stage of the first sweep exploits hyperlinks to annotate tokens within an article. Even if a title present in the body of the article is ambiguous, a hyperlink will direct to the correct source and tag. For example, tokens that have different NE tags but the same primary token, e.g. Unniyarcha and Unniyarcha (Film), can be disambiguated by extracting the corresponding named entity tag for each hyperlink from the title list created earlier. Then, the descriptions within brackets are removed in the case of ambiguous titles. All appearances of hyperlinks are annotated with their respective tags. 
Tokens that do not match any hyperlink are labelled OTHER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creation of the Title Lists", "sec_num": "5.1" }, { "text": "In the second stage of the first sweep, all occurrences of titles that are not hyperlinked in the article body are annotated. For each article, the tokens labelled OTHER after the first stage are compared with the named entity titles in the title list. All token matches are annotated with the tag of the respective title.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creation of the Title Lists", "sec_num": "5.1" }, { "text": "In the second sweep, we annotate tokens that match sub-words of named entity titles in our title list, i.e., we annotate inflected forms and complex words. This is necessary because, in agglutinating languages, proper nouns seldom exist in their base form. This makes the matching of words that refer to the same concept harder than for languages such as English, which only has minimal pre- and suffixing, because a simple search for string equality with a title will not suffice. Therefore, we developed language-specific token-title matching algorithms discussed in the next sections. Since the second sweep is executed only after the ambiguous tokens are dealt with, the annotation procedure tackles both reliability and quantity of annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creation of the Title Lists", "sec_num": "5.1" }, { "text": "Suffix matching A major problem for NER in Malayalam is that, due to inflection and agglutination, nouns rarely occur in their base form but are typically adorned by suffixes. To solve this problem, a suffix stripping algorithm is employed, which initially compares each title in the list with the tokens in the body of the article and extracts all tokens that pass a basic distance match. A threshold of 70% match was empirically found to work well. 
To counteract overgeneration and ensure the presence of suffixation, the results are further filtered by checking if the token begins with the root word. That is, the first (n \u2212 1) characters of the token must match the first (n \u2212 1) characters of the title, for a title of length n. This separates suffixed versions from accidental matches. For example, the title ]\u00d0jw (Panthalam-Place) matches both ]\u00d0bw (Panthayam-competition) and ]\u00d0jeqw (Panthala+vum-Panthalam as well). Only the second token is an inflected version. The suffix match with the first (n \u2212 1) characters (\"]\", \"\u00d0\", \"j\") extracts the inflected token and discards arbitrary matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristics for Malayalam", "sec_num": "5.3.1" }, { "text": "Attachment of the place of origin to a person's name It is a common practice in Kerala to attach the place of a person's origin to their name. For example, consider the name Pinarayi Vijayan (]oWlnbo eoPb\u00b0). The individual's name is \"Vijayan\" (eoPb\u00b0), while \"Pinarayi\" (]oWlnbo) is the place where he is from. The title list would consist of both \"]oWlnbo-Place\" and \"]oWlnbo eoPb\u00b0-Person\". When a bigram check is employed first, all instances of \"]oWlnbo eoPb\u00b0\" are annotated with the tag \"Person\". The tokens in the article body are annotated \"]oWlnbo-Person, eoPb\u00b0-Person\". If this is followed by the annotation of \"]oWlnbo\", the token \"]oWlnbo eoPb\u00b0\" is modified to \"]oWlnbo-Place, eoPb\u00b0-Person\". To avoid this behaviour for Malayalam, uni-grams are always annotated first and then followed by higher-order n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristics for Malayalam", "sec_num": "5.3.1" }, { "text": "Punctuation in names Another common practice is the usage of acronyms within the name. For example, Madath Thekkepaattu Vasudevan Nair usually goes by M.T. Vasudevan Nair (Fw.So. 
enhquZe\u00b0\\nb\u00b1). The name is sometimes tokenized as (\"Fw.\", \"So.\", \"enhquZe\u00b0\", \"\\nb\u00b1\") or as (\"Fw\", \".\", \"So.\", \"enhquZe\u00b0\", \"\\nb\u00b1\"). In some other cases, the article omits the punctuation, and prints the name as (\"Fw\", \"So\", \"enhquZe\u00b0\", \"\\nb\u00b1\"). Since the number of tokens within the title changes, the n-gram search consequently varies. Since this issue is specific to full stops (\".\"), all full stops are removed from both the article and the titles during the search phase. After all appearances of the individual tokens sans punctuation are annotated within the article, the full stops are reinserted. If the tokens to either side of the full stop have the same NE tag, the full stop is given the same tag as the tokens that wrap around it. All end-of-sentence full stops are annotated with the tag OTHER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristics for Malayalam", "sec_num": "5.3.1" }, { "text": "As isiZulu focuses on prefixes rather than suffixes, we perform prefix stripping for isiZulu. To this end, we make use of the capitalization described in Section 3.2. Where we could not find a full match between a title and a word in the list, we matched titles and occurrences in the text from the first capital after the initial letter. Thus, we were able to match iGoli and eGoli, i.e. Johannesburg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language-Specific Adaptation for isiZulu", "sec_num": "5.4" }, { "text": "For comparison, we use two baseline systems: a rule-based annotation system and a neural network baseline. The rule-based baseline directly annotates the data sets with the title list generated in Section 5.1. A bi-gram search is used to annotate titles that have two words. This procedure does not account for inflections and annotates perfect matches in the corpus. The rule-based baseline is therefore language independent. 
This system is used to evaluate the importance of accommodating inflection and agglutination when compiling an NER data set for morphologically complex languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "The deep learning baseline for NER is implemented using Keras (Chollet, 2015) . It is a recurrent LSTM network with the following layers:", "cite_spans": [ { "start": 62, "end": 77, "text": "(Chollet, 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "1. Trainable linear embeddings of size 200 2. Bidirectional LSTM with 45 units for each direction; recurrent dropout probability of 0.1 3. Linear layer with 50 units and ReLU activation, applied to each time step 4. CRF layer with four units (one per NE class)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "The model is trained using the RMSprop optimizer with a learning rate of 0.001 for 10 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "We use XLM-RoBERTa (Conneau et al., 2019) to build the NER system. It is a pre-trained multilingual transformer model which has successfully been applied to low resourced languages such as Swahili and Urdu. The model is trained in the xlm-roberta-base configuration using decoupled weight decay (Loshchilov and Hutter, 2019) and layer-wise decaying learning rates (Sun et al., 2019) . The embedding layer is frozen to avoid overfitting. 
We train the model on a TPU in Google Colab 8 using bfloat16 mixed precision training and the following hyperparameters: \u2022 Weight decay factor: 0.99", "cite_spans": [ { "start": 19, "end": 41, "text": "(Conneau et al., 2019)", "ref_id": "BIBREF10" }, { "start": 295, "end": 324, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF19" }, { "start": 364, "end": 382, "text": "(Sun et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "\u2022 Learning rate decay factor: 0.95", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "The data sets are split into training, testing and validation sets by a 80:10:10 ratio.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Machine Learning Setup", "sec_num": "6" }, { "text": "Before testing the model with a target data set, the model is fine tuned for adaptation. A small subset of each test set is used to tune the weights and the remaining data is used to test the model. Fine tuning is carried out for two reasons: (i) to accommodate for changes in writing style and format and (ii) to expose the model to previously unseen tokens. Since agglutination and heavy inflection exists in both languages, it is practically infeasible to construct dictionaries that account for all words in them. During the training phase, a dictionary is created using the developed data set which is then used to feed tokens into an embedding layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine Tuning", "sec_num": "6.1" }, { "text": "In the case of an external data set, the model encounters many words foreign to its dictionary. 
Fine-tuning helps the model learn the patterns in which such unknown tokens appear in the target articles.
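The fine-tuning protocol amounts to holding out a small fraction of the external test set for weight adaptation and evaluating on the remainder. A minimal sketch, under our own assumptions about the data representation (the function name and seed handling are not from the paper):

```python
import random


def split_for_fine_tuning(test_sentences, tune_fraction, seed=0):
    # Shuffle the external test set reproducibly, then carve off a small
    # fine-tuning portion; the rest is kept untouched for evaluation.
    sents = list(test_sentences)
    random.Random(seed).shuffle(sents)
    cut = int(len(sents) * tune_fraction)
    return sents[:cut], sents[cut:]  # (fine-tuning portion, evaluation portion)
```

With `tune_fraction=0.1` this reproduces the 10% setting used for ARNEKT below; the smaller CUSAT set needs `tune_fraction=0.2` to supply enough tokens.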
This may be because organisation names are distributed differently in this domain.
It should be noted that for ARNEKT, the errors do not necessarily originate from the model. To our knowledge, ARNEKT was not manually annotated but created with rule-based annotation procedures and word lists. Consequently, annotation errors can be observed within the data set. In some cases, the WikiML model predicts the correct named entity tags for tokens that are wrongly annotated in ARNEKT. Two examples are presented in (4) and (5); wrong annotations are highlighted in red. Fine-tuning clearly reduces the impact of errors involving the \"Other\" category. Murali is one of the best players in the history of Sri Lankan cricket. For CUSAT, the presence of out-of-domain/unseen words is clearly the cause of most errors in the vanilla model. Once the model is fine-tuned with a portion of the data set, this is reduced significantly.
Subsequently, instances of such words are labeled as \"Place\" by our vanilla model trained on the WikiML dataset. However, the external datasets sometimes label them as \"Organization\" entities, which leads to errors during evaluation.
Furthermore, Wikipedia is constantly growing both in terms of content for a given language and in terms of the languages it covers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://en.wiktionary.org/wiki/ Category:Zulu_prefixes 4 https://glottolog.org/resource/ languoid/id/zulu12485 https://www.sadilar.org/index.php/en/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For Malayalam, we used a Wikipedia dump from July 2020, for isiZulu from January 2021.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://colab.research.google.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A named entity recognition system for Malayalam using neural networks", "authors": [ { "first": "A", "middle": [], "last": "Ajees", "suffix": "" }, { "first": "Sumam Mary", "middle": [], "last": "Idicula", "suffix": "" } ], "year": 2018, "venue": "8th International Conference on Advances in Computing & Communications (ICACC-2018)", "volume": "143", "issue": "", "pages": "962--969", "other_ids": { "DOI": [ "10.1016/j.procs.2018.10.338" ] }, "num": null, "urls": [], "raw_text": "A.P Ajees and Sumam Mary Idicula. 2018. A named entity recognition system for Malayalam using neu- ral networks. Procedia Computer Science, 143:962 -969. 
8th International Conference on Advances in Computing & Communications (ICACC-2018).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Partsof-speech tagging for Malayalam using deep learning techniques", "authors": [ { "first": "K", "middle": [ "K" ], "last": "Akhil", "suffix": "" }, { "first": "R", "middle": [], "last": "Rajimol", "suffix": "" }, { "first": "V", "middle": [ "S" ], "last": "Anoop", "suffix": "" } ], "year": 2020, "venue": "International Journal of Information Technology", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s41870-020-00491-z" ] }, "num": null, "urls": [], "raw_text": "K. K. Akhil, R. Rajimol, and V. S. Anoop. 2020. Parts- of-speech tagging for Malayalam using deep learn- ing techniques. International Journal of Informa- tion Technology.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Language independent named entity identification using Wikipedia", "authors": [ { "first": "Mahathi", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Gsk", "middle": [], "last": "Santosh", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Workshop on Multilingual Modeling", "volume": "", "issue": "", "pages": "11--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahathi Bhagavatula, Santosh GSK, and Vasudeva Varma. 2012. Language independent named entity identification using Wikipedia. In Proceedings of the First Workshop on Multilingual Modeling, pages 11-17, Jeju, Republic of Korea. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A deep learning approach for Malayalam morphological analysis at character level", "authors": [ { "first": "Premjith", "middle": [], "last": "Bhavukam", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" }, { "first": "", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2018, "venue": "International Conference on Computational Intelligence and Data Science", "volume": "132", "issue": "", "pages": "47--54", "other_ids": { "DOI": [ "10.1016/j.procs.2018.05.058" ] }, "num": null, "urls": [], "raw_text": "Premjith Bhavukam, Soman K.P., and M Anand Ku- mar. 2018. A deep learning approach for Malayalam morphological analysis at character level. Procedia Computer Science, 132:47 -54. International Con- ference on Computational Intelligence and Data Sci- ence.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Named entity identifier for Malayalam using linguistic principles employing statistical methods", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Bindu", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Sumam", "suffix": "" }, { "first": "", "middle": [], "last": "Idicula", "suffix": "" } ], "year": 2011, "venue": "International Journal of Computer Science Issues(IJCSI)", "volume": "8", "issue": "5", "pages": "185--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "MS Bindu and Sumam Mary Idicula. 2011. Named entity identifier for Malayalam using linguistic prin- ciples employing statistical methods. 
International Journal of Computer Science Issues (IJCSI), 8(5):185-191.
European Language Resources Association.
Transactions of the Association for Computational Linguistics, 4:357-370.
arXiv preprint arXiv:1911.02116.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Entity extraction for Malayalam social media text using structured skip-gram based embedding features from unlabeled data", "authors": [ { "first": "G", "middle": [], "last": "Devi", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Veena", "suffix": "" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 6th International Conference on Advances in Computing and Communications", "volume": "93", "issue": "", "pages": "547--553", "other_ids": { "DOI": [ "10.1016/j.procs.2016.07.276" ] }, "num": null, "urls": [], "raw_text": "G. Remmiya Devi, P.V. Veena, M. Anand Kumar, and K.P. Soman. 2016. Entity extraction for Malayalam social media text using structured skip-gram based embedding features from unlabeled data. Procedia Computer Science, 93:547 -553. Proceedings of the 6th International Conference on Advances in Com- puting and Communications.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Government domain named entity recognition for South African languages", "authors": [ { "first": "Roald", "middle": [], "last": "Eiselen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "3344--3348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roald Eiselen. 2016. Government domain named en- tity recognition for South African languages. 
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3344-3348.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Information extraction for conversational systems in Indian languages -Arnekt IECSIL", "authors": [ { "first": "Barathi Ganesh Hullathy", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" }, { "first": "U", "middle": [], "last": "Reshma", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Prachi", "middle": [], "last": "Mankame", "suffix": "" }, { "first": "Gouri", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Anitha", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 10th Annual Meeting of the Forum for Information Retrieval Evaluation, FIRE'18", "volume": "", "issue": "", "pages": "18--20", "other_ids": { "DOI": [ "10.1145/3293339.3293344" ] }, "num": null, "urls": [], "raw_text": "Barathi Ganesh Hullathy Balakrishnan, Soman KP, Reshma U, Mandar Kale, Prachi Mankame, Gouri Kulkarni, Anitha Kale, and Anand Kumar M. 2018. Information extraction for conversational systems in Indian languages -Arnekt IECSIL. In Proceedings of the 10th Annual Meeting of the Forum for Infor- mation Retrieval Evaluation, FIRE'18, page 18-20, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploiting Wikipedia as external knowledge for named entity recognition", "authors": [ { "first": "Kentaro", "middle": [], "last": "Jun'ichi Kazama", "suffix": "" }, { "first": "", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "698--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun'ichi Kazama and Kentaro Torisawa. 2007. Exploit- ing Wikipedia as external knowledge for named en- tity recognition. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL), pages 698-707, Prague, Czech Republic. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Basic Language Resource Kit (BLARK) as the first milestone for the language resources roadmap", "authors": [ { "first": "Steven", "middle": [], "last": "Krauwer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 International Workshop Speech and Computer (SPECOM 2003)", "volume": "", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Krauwer. 2003. The Basic Language Resource Kit (BLARK) as the first milestone for the lan- guage resources roadmap. 
In Proceedings of the 2003 International Workshop Speech and Computer (SPECOM 2003), pages 8-15.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep learning based part-of-speech tagging for Malayalam Twitter data (special issue: Deep learning techniques for natural language processing)", "authors": [ { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2019, "venue": "Journal of Intelligent Systems", "volume": "28", "issue": "3", "pages": "423--435", "other_ids": { "DOI": [ "10.1515/jisys-2017-0520" ] }, "num": null, "urls": [], "raw_text": "S. Kumar, M. Anand Kumar, and K.P. Soman. 2019. Deep learning based part-of-speech tagging for Malayalam Twitter data (special issue: Deep learn- ing techniques for natural language processing). Journal of Intelligent Systems, 28(3):423 -435.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Named entity recognition for linguistic rapid response in low-resource languages: Sorani Kurdish and Tajik", "authors": [ { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Alexa", "middle": [], "last": "Little", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "998--1006", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Littell, Kartik Goyal, David R. Mortensen, Alexa Little, Chris Dyer, and Lori Levin. 2016. 
Named entity recognition for linguistic rapid response in low-resource languages: Sorani Kurdish and Tajik. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 998-1006, Osaka, Japan. The COLING 2016 Organizing Committee.
Association for Computational Linguistics.
Association for Computational Linguistics.
Association for Computational Linguistics.
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Named entity recognition with documentspecific KB tag gazetteers", "authors": [ { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "James", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "512--517", "other_ids": { "DOI": [ "10.18653/v1/D15-1058" ] }, "num": null, "urls": [], "raw_text": "Will Radford, Xavier Carreras, and James Henderson. 2015. Named entity recognition with document- specific KB tag gazetteers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 512-517, Lisbon, Por- tugal. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Design challenges and misconceptions in named entity recognition", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)", "volume": "", "issue": "", "pages": "147--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recog- nition. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. 
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Mining Wiki resources for multilingual named entity recognition", "authors": [ { "first": "Alexander", "middle": [ "E" ], "last": "Richman", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Schone", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander E. Richman and Patrick Schone. 2008. Mining Wiki resources for multilingual named entity recognition. In Proceedings of ACL-08: HLT, pages 1-9, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A study on named entity recognition for Malayalam language using TnT tagger & maximum entropy Markov model", "authors": [ { "first": "S", "middle": [], "last": "Shruthi", "suffix": "" }, { "first": "", "middle": [], "last": "Jiljo", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Pranav", "suffix": "" } ], "year": 2016, "venue": "International Journal of Applied Engineering Research", "volume": "11", "issue": "", "pages": "5425--5429", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Shruthi, Jiljo, and P.V. Pranav. 2016. A study on named entity recognition for Malayalam language using TnT tagger & maximum entropy Markov model. International Journal of Applied Engineering Research, 11:5425-5429.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Introducing the Knowledge Graph: things, not strings", "authors": [ { "first": "Amit", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Singhal. 2012. Introducing the Knowledge Graph: things, not strings.
https://www.blog.google/products/search/introducing-knowledge-graph-things-not/.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards an efficient Malayalam named entity recognizer analysis on the challenges", "authors": [ { "first": "P", "middle": [ "S" ], "last": "Sreeja", "suffix": "" }, { "first": "Anitha", "middle": [ "S" ], "last": "Pillai", "suffix": "" } ], "year": 2020, "venue": "Third International Conference on Computing and Network Communications (Co-CoNet'19)", "volume": "171", "issue": "", "pages": "2541--2546", "other_ids": { "DOI": [ "10.1016/j.procs.2020.04.275" ] }, "num": null, "urls": [], "raw_text": "P S Sreeja and Anitha S Pillai. 2020. Towards an efficient Malayalam named entity recognizer analysis on the challenges. Procedia Computer Science, 171:2541-2546. Third International Conference on Computing and Network Communications (Co-CoNet'19).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "How to fine-tune BERT for text classification?", "authors": [ { "first": "Chi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Chinese Computational Linguistics", "volume": "", "issue": "", "pages": "194--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In Chinese Computational Linguistics, pages 194-206, Cham. Springer International Publishing.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A comparison of approaches to word class tagging: Disjunctively vs.
conjunctively written Bantu languages", "authors": [ { "first": "Elsab\u00e9", "middle": [], "last": "Taljard", "suffix": "" }, { "first": "Sonja", "middle": [ "E" ], "last": "Bosch", "suffix": "" } ], "year": 2006, "venue": "Nordic Journal of African Studies", "volume": "15", "issue": "4", "pages": "428--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elsab\u00e9 Taljard and Sonja E. Bosch. 2006. A comparison of approaches to word class tagging: Disjunctively vs. conjunctively written Bantu languages. Nordic Journal of African Studies, 15(4):428-442.
In Proceedings of the Workshop on NEW TEXT Wikis and blogs and other dynamic text sources.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "to Lanka to see Seetha'", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Schematic overview of the data set generation process5.2 Labeling Titles with NE Tags", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Overview of the three stages of data set annotation", "uris": null }, "FIGREF4": { "num": null, "type_str": "figure", "text": "Fine tuning analysis \u2192 from/in Johannesburg.", "uris": null }, "FIGREF5": { "num": null, "type_str": "figure", "text": "Base learning rate: 2 \u2022 10 \u22125", "uris": null }, "FIGREF6": { "num": null, "type_str": "figure", "text": "Confusion Matrix for CUSAT before (above) and after (below) fine tuning external data sets. Figures 4 and 5 display the confusion matrices of the results obtained", "uris": null }, "FIGREF7": { "num": null, "type_str": "figure", "text": "Figure 5: Confusion Matrix for ARNEKT before (above) and after (below) fine tuning", "uris": null }, "TABREF1": { "type_str": "table", "html": null, "text": "). The info-box features are used to annotate long abstracts. Meta tags are translated into English using Google translator. The data set consists of 838,333 tokens overall: 59,422 PER, 29,371 LOC, and 4,841 ORG. All other tokens are labelled OTHER.", "num": null, "content": "" }, "TABREF2": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
NE Tag Malayalam isiZulu
Other 21012137 138986
Place 723259 5916
Person 444260 2748
Organization 179022 700
Total 22358678 148350
" }, "TABREF3": { "type_str": "table", "html": null, "text": "", "num": null, "content": "" }, "TABREF4": { "type_str": "table", "html": null, "text": "XLM-RoBERTa results for Malayalam", "num": null, "content": "
WikiMl CUSAT ARNEKT
Class Precision Recall F1 Score Precision Recall F1 Score Precision Recall F1 Score
Person 0.94 0.80 0.87 0.65 0.48 0.56 0.74 0.65 0.69
Place 0.93 0.83 0.87 0.55 0.57 0.56 0.76 0.78 0.77
Organization 0.79 0.82 0.81 0.48 0.28 0.35 0.75 0.62 0.68
Other 0.99 1.00 0.99 0.99 0.99 0.99 0.96 0.97 0.97
Macro Average 0.94 0.80 0.87 0.66 0.58 0.61 0.80 0.75 0.78
" }, "TABREF5": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
XLM-RoBERTa results for isiZulu
WikiZu NCHLT II
Class Precision Recall F1 Score Precision Recall F1 Score
Place 0.94 0.87 0.90 0.46 0.41 0.43
Person 0.78 0.90 0.84 0.31 0.20 0.24
Organization 0.78 0.75 0.77 0.25 0.15 0.19
Other 0.99 0.99 0.99 0.95 0.97 0.96
Macro Average 0.90 0.88 0.89 0.69 0.56 0.63
" } } } }