{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:27:01.870027Z" }, "title": "Analyzing the Morphological Structures in Seediq Words", "authors": [ { "first": "Chuan-Jie", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "settlement": "Keelung", "country": "Taiwan" } }, "email": "cjlin@email.ntou.edu.tw" }, { "first": "Li-May", "middle": [], "last": "Sung", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Jing-Sheng", "middle": [], "last": "You", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "settlement": "Keelung", "country": "Taiwan" } }, "email": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "settlement": "Keelung", "country": "Taiwan" } }, "email": "" }, { "first": "Cheng-Hsun", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "settlement": "Keelung", "country": "Taiwan" } }, "email": "" }, { "first": "Zih-Cyuan", "middle": [], "last": "Liao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "settlement": "Keelung", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "NLP techniques are efficient to build large datasets for low-resource languages. It is helpful for preservation and revitalization of the indigenous languages. This paper proposes approaches to analyze morphological structures in Seediq words automatically as the first step to develop NLP applications such as machine translation. Word inflections in Seediq are plentiful. Sets of morphological rules have been created according to the linguisitic features provided in the Seediq syntax book (Sung, 2018) and based on regular morpho-phonological processing in Seediq, a new idea of \"deep root\" is also suggested. The rule-based system proposed in this paper can successfully detect the existence of infixes and suffixes in Seediq with a precision of 98.88% and a recall of 89.59%. The structure of a prefix string is predicted by probabilistic models. We conclude that the best system is bigram model with back-off approach and Lidstone smoothing with an accuracy of 82.86%.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "NLP techniques are efficient to build large datasets for low-resource languages. It is helpful for preservation and revitalization of the indigenous languages. This paper proposes approaches to analyze morphological structures in Seediq words automatically as the first step to develop NLP applications such as machine translation. Word inflections in Seediq are plentiful. Sets of morphological rules have been created according to the linguisitic features provided in the Seediq syntax book (Sung, 2018) and based on regular morpho-phonological processing in Seediq, a new idea of \"deep root\" is also suggested. The rule-based system proposed in this paper can successfully detect the existence of infixes and suffixes in Seediq with a precision of 98.88% and a recall of 89.59%. 
The structure of a prefix string is predicted by probabilistic models. We conclude that the best system is a bigram model with a back-off approach and Lidstone smoothing, achieving an accuracy of 82.86%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine learning and deep learning have been the most popular techniques in recent years. Systems built with machine learning or deep learning often achieve good performance, but they generally require large training sets. Compared to English, the amount of resources in Mandarin is far smaller, not to mention the resources in Southern Min, Hakka, or the indigenous languages of Taiwan. The United Nations declared 2019 the International Year of Indigenous Languages 1 in order to highlight the preservation issues of these endangered languages and gain more attention from the world. Following the same spirit, in January 2019 Taiwan also promulgated the National Languages Development Act 2 (\u570b\u5bb6\u8a9e\u8a00\u767c\u5c55\u6cd5) to speed up the preservation and revitalization of the indigenous languages in Taiwan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1.1" }, { "text": "The indigenous languages in Taiwan, well known as the Formosan languages 3 (\u53f0\u7063\u5357\u5cf6\u8a9e\u8a00) in the Austronesian (\u5357\u5cf6\u8a9e\u7cfb) language family, comprise 16 languages with 42 dialects in total. All are endangered to some degree according to the investigation by UNESCO in 2009. So far we have found little research on natural language processing of the Formosan languages. Collaborating with a linguist and expert in Seediq, one of the authors, this paper aims to provide an innovative first step on Seediq. In addition, we expect that the research results can be applied to the linguistically related languages Atayal (\u6cf0\u96c5\u8a9e) and Truku (\u592a\u9b6f\u95a3\u8a9e) (Li, 1981) without much effort, or even to Amis (\u963f\u7f8e\u8a9e), which has the largest population of speakers and a writing system similar to Seediq.", "cite_spans": [ { "start": 627, "end": 637, "text": "(Li, 1981)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1.1" }, { "text": "The morphology of Seediq is quite complicated, including many word inflections that represent verbal focus, aspect, causation, etc. For example, the morphological structure of the word \"psetuq\" (break, \u65b7) is \"p-setuq\", and the structure of the word \"qnyutan\" (bite, \u54ac) is \"q<n>yuc-an\", where \"p-\" (CAUS, causative, \u4f7f\u52d5), \"<n>\" (PFV, perfective aspect, \u5b8c\u6210\u8c8c), and \"-an\" (LV, locative voice, \u8655\u6240\u7126\u9ede) are a prefix, an infix, and a suffix, respectively. As we will discuss later, it is not easy to decompose the affixes and the stem in a Seediq word, but they carry important information for NLP tasks such as machine translation. This paper proposes automatic approaches to analyze the morphological structures in Seediq words as a first step toward machine translation and other NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1.1" }, { "text": "There is no large corpus in Seediq available so far.
The experimental data in this paper came from the book \"\u8cfd\u5fb7\u514b\u8a9e\u8a9e\u6cd5\u6982\u8ad6\" (A Sketch Grammar of Seediq) (Sung, 2018) (referred to as the Seediq syntax book hereafter). This book provides many sentences with morphological information as illustrations. We used these data to construct the training set. Dr. Li-May Sung, the author of the Seediq syntax book and one of the authors of this paper, provided another batch of sentences tagged with morphological information as well. We used them to construct the test set. There are 394 and 322 affixed Seediq words in these two datasets, respectively, far fewer than the amount necessary to train a classifier by machine learning or deep learning. One additional Seediq resource is an online Seediq dictionary \"\u8cfd\u5fb7\u514b\u8a9e\u5fb7\u56fa\u9054\u96c5\u65b9\u8a00\" (Tgdaya Seediq, referred to as the CIP Seediq dictionary hereafter) (Sung, 2011), compiled by Dr. Sung for the Council of Indigenous Peoples (\u539f\u4f4f\u6c11\u65cf\u59d4\u54e1\u6703). There are about 5,600 words in this dictionary, but with no morphological analysis. In the future we will apply the techniques developed in this paper to analyze these dictionary words in order to build up a larger dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1.1" }, { "text": "To the best of our knowledge, there is not much research on natural language processing of the Formosan languages. The most closely related studies are the ones done by Dr. Meng-Chien Yang on Tao (\u9054\u609f\u8a9e, aka Yami \u96c5\u7f8e\u8a9e), including the construction of a wordnet and a lexicon in Yami (Rau et al., 2015) and machine translation between Yami and Mandarin with a small bilingual corpus.", "cite_spans": [ { "start": 268, "end": 285, "text": "Rau et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.2" }, { "text": "There are many NLP studies for other local languages in Taiwan though, including machine translation for Taiwanese (Lin & Chen, 1999) , speech recognition and synthesis in Taiwanese (Iunn et al., 2007; Yu & Lin, 2012) , and prosodic models in Hakka (Gu et al., 2007; Chiang, 2018 ). However, Taiwanese Southern Min, Hakka, and Mandarin belong to the Sinitic languages (\u6f22\u8a9e\u7fa4) and do not share a similar language structure with the Formosan languages. Thus their research results cannot be applied directly to the Formosan languages.", "cite_spans": [ { "start": 115, "end": 133, "text": "(Lin & Chen, 1999)", "ref_id": "BIBREF9" }, { "start": 182, "end": 201, "text": "(Iunn et al., 2007;", "ref_id": "BIBREF5" }, { "start": 202, "end": 217, "text": "Yu & Lin, 2012)", "ref_id": "BIBREF19" }, { "start": 249, "end": 266, "text": "(Gu et al., 2007;", "ref_id": "BIBREF2" }, { "start": 267, "end": 279, "text": "Chiang, 2018", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.2" }, { "text": "In addition, there are limited electronic resources in Seediq available on the Internet. The CIP Seediq dictionary contains 5,595 words and 6,019 sentences with Mandarin translations. It is the largest dataset we can find so far.
There are also textbooks for elementary, junior-high and high schools available in \"\u539f\u4f4f\u6c11\u65cf\u96fb\u5b50\u66f8\u57ce 4 \" (Taiwanese Indigenous ebooks) and \"\u65cf\u8a9e E \u6a02\u5712 5 \" (Formosan Languages E-Land), but their amounts are still comparatively small, and they come with no morphological analysis. Only the sentences in the Seediq syntax book are tagged with morphological information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.2" }, { "text": "A Seediq ontology was built by Dr. Shu-Kai Hsieh and Dr. Chu-Ren Huang (Hsieh et al., 2007). It contains 270 Seediq words mapped to the senses of the English WordNet in order to study the hyponymy relationships between Seediq words. As the ontology only covers a small set of Seediq words and provides mainly semantic information, we will not use it in this paper.", "cite_spans": [ { "start": 71, "end": 91, "text": "(Hsieh et al., 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.2" }, { "text": "The development of a machine translation system usually requires a large bilingual corpus in order to train a good-quality MT system by machine learning or deep learning (Bahdanau et al., 2015; Luong et al., 2015) . It is important to create such a corpus efficiently with the help of NLP techniques, and this is our main goal for Seediq in this paper.", "cite_spans": [ { "start": 175, "end": 198, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF0" }, { "start": 199, "end": 218, "text": "Luong et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.2" }, { "text": "Seediq is one of the Formosan languages; the Seediq people mainly live in Nantou County and Hualien County. Linguistically belonging to the Atayalic subgroup (Li, 1981) , Seediq is closely related to Atayal (\u6cf0\u96c5\u8a9e) and Truku (\u592a\u9b6f\u95a3\u8a9e).", "cite_spans": [ { "start": 158, "end": 168, "text": "(Li, 1981)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Seediq Writing System", "sec_num": "2.1" }, { "text": "It has three dialects, namely Tgdaya (\u5fb7\u56fa\u9054\u96c5), Toda (\u90fd\u9054), and Truku (\u5fb7\u8def\u56fa). Our experimental data came from the Seediq syntax book \"\u8cfd\u5fb7\u514b\u8a9e\u8a9e\u6cd5\u6982\u8ad6\" (Sung, 2018), which focuses on the Tgdaya dialect. Most of the morphological information about Seediq provided in this section also came from this syntax book.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seediq Writing System", "sec_num": "2.1" }, { "text": "The Seediq syntax book (Sung, 2018) provides detailed morphological information for each exemplar sentence to help the reader understand Seediq more efficiently. Words in a sentence, especially the verbs, are affixed to indicate actor voice (AV), patient voice (PV), locative voice (LV), beneficiary/instrumental/referential voices (BV/IV/RV), etc., and aspects such as the perfective aspect (PFV). Affixation is overwhelmingly prevalent in Seediq. Such information is very useful in our study. One example of the morphological information is as follows. In this example, the root of the word \"qnyutan\" (bitten by, \u88ab..\u54ac) is \"qiyuc\".
This word is affixed with a suffix \"-an\" (LV, locative voice, \u8655\u6240\u7126\u9ede) and an infix \"<n>\" (PFV, perfective aspect, \u5b8c\u6210\u8c8c), and becomes \"q<n>iyuc-an\". Similarly, the root of the word \"psetuq\" (broken, \u4f7f\u65b7) is \"setuq\". This word is affixed with a prefix \"p-\" (CAUS, causative, \u4f7f\u52d5) and becomes \"p-setuq\". (GEN means genitive case. The symbol '=' represents the attachment of pronouns and other cases. It will not be discussed in this paper.) Several examples of word inflections are provided in Table 1. Notes: \"m-\": AV, actor voice \u4e3b\u4e8b\u7126\u9ede; \"p-\": FUT, future \u672a\u4f86 or CAUS, causative \u4f7f\u52d5; \"k-\": STAT, stative \u975c\u614b; \"<m>\": AV, actor voice \u4e3b\u4e8b\u7126\u9ede; \"<n>\": PFV, perfective aspect \u5b8c\u6210\u8c8c; \"-an\": LV, locative voice \u8655\u6240\u7126\u9ede; \"-un\": PV, patient voice \u53d7\u4e8b\u7126\u9ede. Another type of prefix is reduplication (RED, \u91cd\u758a), which repeats some part of the word. It is used for plurality, intensification, etc. For example, the word \"sseediq\" (\"s-seediq\") (RED-person, \u91cd\u758a-\u4eba) means \"many people\", and the word \"mkrkere\" (\"m-kr-kere\") (AV-RED-strong, \u4e3b\u4e8b\u7126\u9ede-\u91cd\u758a-\u5f37\u58ef) means that something is very strong. Even prefixes can be repeated, as in the word \"pposa\" (\"p-p-osa\") (RED-CAUS-go, \u91cd\u758a-\u4f7f\u52d5-\u53bb), which means \"forced to go somewhere\". Reduplication usually does not change the meaning of a word but its amount or intensity, which could also be an issue in machine translation.", "cite_spans": [], "ref_spans": [ { "start": 1117, "end": 1124, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "When a Seediq word is affixed, the final writing form can be different from its original combination, as we can see in the examples in Table 1. This is the reason why the morphological structure of a Seediq word cannot be generated directly from its surface form. We discuss only three variation cases here (Sung, 2018; Yang, 1976; Li, 1977; Li, 1991) .", "cite_spans": [ { "start": 308, "end": 320, "text": "(Sung, 2018;", "ref_id": null }, { "start": 321, "end": 332, "text": "Yang, 1976;", "ref_id": "BIBREF14" }, { "start": 333, "end": 342, "text": "Li, 1977;", "ref_id": "BIBREF6" }, { "start": 343, "end": 352, "text": "Li, 1991)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "The first case is related to vowel neutralization (\u5143\u97f3\u4e2d\u6027\u5316) and vowel reduction (\u5143\u97f3\u812b\u843d). In Seediq, vowels outside the last two syllables are weakened (neutralized) and omitted in writing. It usually happens in the suffixation process. Take examples from Table 1. In the word \"qyaanun\" (\"qeya-an-un\"), the first vowel \"e\" of its root \"qeya\" is omitted when affixed. And in the word \"pndsanan\" (\"p<n>adis-an-an\"), both vowels of its root \"adis\" are omitted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "Consider another example. The word \"dngei\" (\"dengu-i\") consists of a root \"dengu\" (sun-dry; \u66ec\u4e7e) and a suffix \"-i\" (IMP, imperative, \u7948\u4f7f).
We suggest that the root word \"dengu\" may originally be \"denge\": that is, the second vowel 'e' is neutralized as 'u' when it appears at the end of a word. When \"denge\" is suffixed with \"-i\", the accent falls on the second vowel 'e' (hence not neutralized any more), so it remains \"e\"; meanwhile, the first vowel \"e\" of \"denge\" is neutralized and omitted, resulting in \"dngei\". That is, the word \"dngei\" comes from the original structure \"denge-i\". We refer to such an original form of a root as its \"deep root\" and will discuss it in detail in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "The second case is about vowel harmony (\u5143\u97f3\u548c\u8ae7\u8b8a\u5316). When a root word starts with a vowel, the preceding prefix usually ends with the same vowel. For example, if the prefix \"s-\" (RV, referential voice, \u53c3\u8003\u7126\u9ede) attaches to the root \"osa\" (go, \u53bb), the prefix becomes \"so-\" and the final writing form is \"soosa\" (\"so-osa\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "The third case is about word-final consonant mutation (\u8a5e\u5c3e\u8f14\u97f3\u8b8a\u5316). Some word-final consonants are changed if there is no suffix attached. When such a word is suffixed, its final consonant changes back to the original one. Take the word \"qnyutan\" (bite, \u54ac) as an example. Its root \"qiyuc\" is in fact the result of word-final consonant mutation from its original form (deep root) \"qiyut\". When \"qiyuc\" is attached with a suffix \"-an\" (LV, locative voice, \u8655\u6240\u7126\u9ede), the final consonant 'c' changes back to 't', the affixed word is in fact \"q<n>iyut-an\", and the final writing form is \"qnyutan\" (note that the first vowel 'i' of the root is omitted).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "Word inflections in Seediq are overwhelmingly plentiful. In the CIP Seediq dictionary, for example, there are 39 words related to the same root \"adis\" (bring, \u5e36\u8d70): desan, dese, desi, deso, desun, dnsanan, dsanan, dsane, dsani, dsanun, dsdesan, dsdesi, dsdesun, knddesi, maadis, madis, mdaadis, mkdesun, mkmadis, mnadis, nadis, paadis, pdaadis, pdesan, pdese, pdesi, pdeso, pdesun, pdsanan, pdsane, pdsani, pdsanun, pnaadis, pnadis, pndesan, pndsanan, ppaadis, saadis, and spaadis.", "cite_spans": [ { "start": 178, "end": 481, "text": "desi, deso, desun, dnsanan, dsanan, dsane, dsani, dsanun, dsdesan, dsdesi, dsdesun, knddesi, maadis, madis, mdaadis, mkdesun, mkmadis, mnadis, nadis, paadis, pdaadis, pdesan, pdese, pdesi, pdeso, pdesun, pdsanan, pdsane, pdsani, pdsanun, pnaadis, pnadis, pndesan, pndsanan, ppaadis, saadis, and spaadis.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Morphology in Seediq", "sec_num": "2.2" }, { "text": "The main issue addressed in this paper is: given a Seediq word and its root word, we want to know its morphological structure, i.e., the combination of prefixes, infixes, and suffixes in that word. In the CIP Seediq dictionary, words and their roots are available. With the techniques developed in this paper, we can generate those words' morphological structures automatically and efficiently.
Figure 1 demonstrates our proposed flowchart to analyze Seediq morphological structures automatically. Take the word \"pnsltudan\" (whose root word is \"lutuc\") as an example to explain the flowchart. First, a list of deep root candidates {\"lutud\", \"lutuc\"\u2026} of the root word \"lutuc\" is prepared by the method introduced in Section 3.1. (The definition of deep root is also given in Section 3.1.) Each deep root candidate is combined with a set (or none) of known infixes and suffixes to form a partial morphological structure (cf. Section 3.2). For example, by selecting the deep root \"lutud\", no infix, and the suffix \"-an\", we obtain the partial morphological structure \"lutud-an\". Transformation rules (described in Section 3.3) are then applied to the partial structure, and its writing form \"ltudan\" is generated (note that the first vowel 'u' and all the structural symbols are omitted). Since \"ltudan\" exactly matches the trailing substring of the given word \"pnsltudan\", the leading substring \"pns\" is extracted as the prefix part, and its structure \"p<n>s\" is decided by prefix structure analysis methods (discussed in Section 3.4). Finally, the overall predicted morphological structure of the word \"pnsltudan\" is \"p<n>s-lutuc-an\". Note that we still use root words in the morphological structures, not the deep roots.", "cite_spans": [], "ref_spans": [ { "start": 397, "end": 405, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Seediq Morphological Structure Analysis", "sec_num": "3." }, { "text": "As discussed briefly above, some root words (\u539f\u5f62\u8a5e), when suffixed in Seediq, will change back to their original forms before vowel neutralization or word-final consonant mutation (cf. Section 2.2). We refer to such an original form of a root word as its deep root (\u6df1\u5c64\u539f\u5f62). For example, when generating the writing form of the word \"p-adis-o\", the root word \"adis\" should be replaced with its deep root \"ades\", so that, by omitting neutralized vowels, \"p-ades-o\" becomes the correct writing form \"pdeso\" (bring, \u5e36). Table 2 provides more examples of deep roots. All four root words in Table 2 have the same trailing substring \"uk\". However, when they are attached with the suffix \"-i\", the resulting words (in the third column) do not all end with \"uki\" but change into different trailing substrings. That is because the deep root of \"aduk\" is \"adup\", the deep root of \"ciyuk\" is the same as the root word, the deep root of \"dehuk\" is \"dehek\", and the deep root of \"eluk\" is \"eleb\".", "cite_spans": [], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 2", "ref_id": null }, { "start": 573, "end": 580, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Deep Root Prediction", "sec_num": "3.1" }, { "text": "Root Word | Suffixed Structure | Word | Structure with Deep Root", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Root Word", "sec_num": null }, { "text": "aduk (repel, \u8d95\u8d70) | aduk-i | dupi | adup-i; ciyuk (reply, \u56de\u8986) | ciyuk-i | ciyuki | ciyuk-i; dehuk (arrive, \u5230\u9054) | dehuk-i | dheki | dehek-i; eluk (close door or window, \u95dc\u9580\u7a97) | eluk-i | lebi | eleb-i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Root Word", "sec_num": null }, { "text": "Predicting deep roots is not an easy task. Neither dictionaries nor syntax books provide information about deep roots. Vowel neutralization and word-final consonant mutation can also be many-to-one mappings.
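Before discussing the individual steps in detail, the overall matching loop of Figure 1 can be sketched in code. This is a minimal illustration of the flow only, not the authors' implementation: deep_root_candidates() and writing_forms() are hypothetical helpers standing in for the methods of Sections 3.1 and 3.3, and vowel-initial roots (whose infix is left with the prefix part, cf. Section 3.2) are ignored here.

```python
INFIXES = ["", "<m>", "<n>", "<mn>"]                     # "" = no infix
SUFFIXES = ["", "-an-an", "-an-un", "-ane", "-ani", "-ano",
            "-an", "-un", "-e", "-i", "-o"]              # "" = no suffix

def analyze(word, root, deep_root_candidates, writing_forms):
    """Find a (prefix string, deep root, infix, suffix) combination whose
    writing form matches a trailing substring of the given word."""
    for dr in deep_root_candidates(root):        # Section 3.1
        for ifx in INFIXES:
            # an infix is inserted right after the first consonant
            stem = dr[0] + ifx + dr[1:] if ifx else dr
            for sfx in SUFFIXES:                 # Section 3.2
                for tail in writing_forms(stem + sfx):   # Section 3.3
                    if tail and word.endswith(tail):
                        # the unmatched leading substring is the prefix
                        # part, segmented later by Section 3.4
                        return word[:-len(tail)], dr, ifx, sfx
    return None
```

For \"pnsltudan\" with root \"lutuc\", the loop would hit the candidate \"lutud\" with no infix and the suffix \"-an\", whose writing form \"ltudan\" matches the word's tail, leaving \"pns\" as the prefix part.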
In the following, we discuss how we obtain a list of deep roots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Root Word", "sec_num": null }, { "text": "Our first proposed method is inductive. From the CIP Seediq dictionary, we can collect the set of suffixed words referring to the same root word. The most frequent common trailing substring among the suffixed words is extracted as the deep root. Note that it may be identical to the root word, but we are only interested in the transformed deep roots. Details of the steps to predict deep roots, as well as some examples, are given as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "Step 1. Collect words referring to the same root word. In the CIP Seediq dictionary, \"\u53c3\u8003\u689d\u76ee\" (cross reference) often provides the root word information. For example, as shown in Section 1.2, 39 words including desan, dese, desi, etc., all refer to the same root word \"adis\". Words referring to the same root word should have the same deep root.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "Step 2. Select words with suffixes. Only suffixed words reveal their deep root, so we need to decide whether a word is suffixed or not. Note that the CIP Seediq dictionary does not provide detailed morphological structural information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "If a word ends with its root word, it is not suffixed. For example, both \"ppaadis\" and \"maadis\" end with their root word \"adis\", so there is no suffix in these two words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "If a word ends with its root word after removing possible infixes, it is not suffixed. For example, the word \"cmnebu\" does not end with its root word \"cebu\". But by removing the infixes \"<mn>\" after the first consonant 'c', this word appears exactly the same as its root word and hence is not suffixed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "Words not covered by the two cases above and ending with known suffixes are considered suffixed words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "Step 3. Predict deep roots by induction. When there is only one vowel in a suffix, the last vowel of the suffixed deep root will not be omitted. We can check whether these words end with the same trailing substring and decide the deep root. For example, the structures of the words \"desan\", \"dese\", and \"desi\" are \"ades-an\", \"ades-e\", and \"ades-i\", respectively. After removing the suffix parts, they all end with \"es\". Moreover, the preceding consonant 'd' appears in the root word \"adis\". By replacing the trailing substring of the root word with the most common substring induced from these suffixed words, we can obtain the deep root \"ades\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. 
Inductive Deep Root Prediction", "sec_num": null }, { "text": "Unfortunately, some root words do not have enough related words to induce their deep roots. Moreover, in some rare cases, we found two different deep roots related to the same root word. In order to increase the coverage of deep root prediction and morphological analysis, below we further propose a mapping table for deep root prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 1. Inductive Deep Root Prediction", "sec_num": null }, { "text": "The deep root mapping table lists the mappings of trailing substrings between root words and their deep roots. This table is constructed from the pairs collected by Method 1. For example, the pairs in Table 2 of Section 3.1 tell us that a root word ending with \"uk\" may have a deep root ending with \"up\", \"ek\", or \"eb\". These mappings are saved in the deep root mapping table. The real data show that \"uk\" maps to \"ek\" four times, to \"up\" twice, and to \"eb\" once.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "Since deep roots are closely related to the processes of vowel neutralization and word-final consonant mutation, we only need to consider trailing substrings consisting of the last vowel and the word-final consonant. For example, the extracted trailing substring of the word \"aduk\" is \"uk\" and the trailing substring of the word \"beebu\" is \"u\". Figure 2 illustrates the steps of building the deep root mapping table. First, by applying Method 1 to the words in the CIP Seediq dictionary, a set of predicted pairs is collected. Words in the pairs are then replaced with their trailing substrings. Finally, by counting the mapping pairs, a mapping table is constructed where the mappings are sorted by their frequencies.", "cite_spans": [], "ref_spans": [ { "start": 338, "end": 346, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "When we do not know the deep root of a root word, we can still propose deep root candidates by replacing the trailing substrings according to the deep root mapping table. For example, we know that the root word of \"hligan\" is \"haluy\", but we do not know its deep root. According to the mapping table, the trailing substring \"uy\" often maps to \"ig\". By replacing the trailing substring, we guess that its deep root is \"halig\". The resulting writing form of \"halig-an\" indeed matches the target word \"hligan\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "To recap, deep root candidates in Figure 1 are generated in the following order:", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 42, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "(1) The deep root induced from the CIP Seediq dictionary by Method 1 (if any)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "(2) The original root word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2. 
Deep Root Mapping Table", "sec_num": null }, { "text": "(3) Trailing-substring replacement results according to the deep root mapping table built by Method 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2. Deep Root Mapping Table", "sec_num": null }, { "text": "With the list of deep root candidates of the root words, we then move to the next step. Each deep root candidate will be combined with known infixes and suffixes for string matching in the next steps. Infixes and suffixes are considered first because their sets are rather fixed; we only see 3 kinds of infixes {\"<m>\", \"<n>\", \"<mn>\"} and 10 kinds of suffixes {\"-an-an\", \"-an-un\", \"-ane\", \"-ani\", \"-ano\", \"-an\", \"-un\", \"-e\", \"-i\", \"-o\"} in the training set. Note that an infix appears after the first consonant. For example, when the word \"quyux\" is infixed with \"<m>\", it becomes \"qmuyux\" (\"q<m>uyux\") (raining, \u4e0b\u96e8). But if a root word starts with a vowel, the infix appears at the beginning of the word. For example, when the word \"apa\" is infixed with \"<n>\", it becomes \"napa\" (\"<n>apa\") (carry, \u63f9). For convenience, we leave the infixes in such cases together with the prefix part (extracted in Section 3.3) to be processed in Section 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation with Infixes and Suffixes", "sec_num": "3.2" }, { "text": "For the case when a root word is affixed with only prefixes and infixes, its writing form can be derived directly from the combination by removing the structural symbols. For instance, \"m-p-k-beyax\" becomes \"mpkbeyax\" (work hard, \u52aa\u529b) and \"h<m>aduc\" becomes \"hmaduc\" (send, \u9001).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "But when a root word is suffixed, two cases should be considered. The first case is vowel reduction (\u5143\u97f3\u812b\u843d), where vowels other than the last two are neutralized and omitted. For example, \"hetur-ani\" (block out, \u64cb) becomes \"htrani\", where the first two vowels 'e' and 'u' are omitted. The rule of vowel reduction can easily be applied by a program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "One exception to vowel reduction happens when only one of two adjacent identical vowels is about to be omitted. In such a case, neither vowel is omitted. Take \"osa-an-un\" (go, \u53bb) as an example. According to the general rule of vowel reduction, both vowels 'o' and 'a' in the root word \"osa\" should be omitted. However, the second vowel 'a' of the root word \"osa\" is followed by the suffix \"-an\", which starts with the same vowel. Therefore, the second vowel 'a' is not omitted, and the final writing form becomes \"saanun\", where only the first vowel 'o' is omitted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "The second case is the addition of 'y' or 'w'. We find cases where a 'y' or 'w' is added between the root word and the suffix. For example, the final writing form of \"chungi-an\" is \"chngiyan\" (forget, \u5fd8\u8a18) and the final writing form of \"cebu-an\" is \"cbuwan\" (to be shot, \u88ab\u64ca\u4e2d). We have not figured out the rules for such cases.
Currently we simply insert a 'y' or 'w' to see if the transformation result matches the final writing form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "The complete transformation rules to generate the final writing form are defined as follows. Given a morphological structure represented as pfx-rootstr-sfx (with any infix written inside rootstr in angle brackets), rootstr is the root part (root word or deep root), pfx is the prefix part, ifx is the infix part, and sfx is the suffix part. Any affix part may be empty. The writing form of the morphological structure is generated by the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "Step 1. When the suffix part is not empty, the last two vowels in the structure remain unchanged, and all vowels before the third-to-last one are omitted. As for the third-to-last vowel: a) if it is the same as the second-to-last vowel and they are adjacent to each other, the third-to-last vowel remains unchanged; b) otherwise the third-to-last vowel is omitted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "Step 2. If the suffix sfx starts with a vowel that is different from the word-final vowel of rootstr, a 'y' or 'w' may be inserted between them to generate a correct writing form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "Step 3. Remove all morphological structural symbols (including -, <, and >).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "An interim summary of Sections 3.1 to 3.3: Given a Seediq word and its root word, a list of deep root candidates is generated by the methods proposed in Section 3.1. Each deep root candidate is then combined with every infix and suffix (including empty strings) as described in Section 3.2. Each combination is then transformed into its writing form by the rules explained in Section 3.3. If this writing form matches the trailing substring of the target Seediq word, this combination of deep root, infix and suffix is proposed as the predicted morphological structure, and the unmatched part is extracted as the prefix part for further analysis by the methods proposed in Section 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation Rules to Generate Writing Form", "sec_num": "3.3" }, { "text": "The prefix structure analysis also encounters an ambiguity problem. One prefix string can be segmented into several different prefix combinations. For example, the prefix string \"kn-\" can be either \"kn-\" (NMLZ, nominalization, \u540d\u7269\u5316) or \"k<n>-\" (STAT<PFV>, \u975c\u614b<\u5b8c\u6210\u8c8c>), and the prefix string \"sk-\" can be either \"sk-\" (deceased, \u5df2\u6545) or \"s-k-\" (existential-STAT, \u6709-\u975c\u614b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis", "sec_num": "3.4" }, { "text": "Despite our best efforts, we have so far found little information about prefix combinations. To solve the prefix problem in Seediq, we here propose several approaches similar to the classical solutions for Chinese word segmentation, including probability models and machine learning, which will be discussed in detail below.
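Before moving on to the prefix part, the three steps above can be made concrete. The following is a minimal sketch under the stated rules, assuming a prefixless structure string that uses '-' and '<>' as above; it returns all candidate forms because the 'y'/'w' insertion of Step 2 is optional.

```python
import re

VOWELS = set("aeiou")

def reduce_vowels(s, suffixed):
    """Step 1: with a non-empty suffix, keep the last two vowels; keep the
    third-to-last vowel only if it is adjacent and identical to the
    second-to-last one; omit every other vowel."""
    if not suffixed:
        return s
    pos = [i for i, ch in enumerate(s) if ch in VOWELS]
    keep = set(pos[-2:])
    if len(pos) >= 3 and pos[-3] == pos[-2] - 1 and s[pos[-3]] == s[pos[-2]]:
        keep.add(pos[-3])
    return "".join(ch for i, ch in enumerate(s)
                   if ch not in VOWELS or i in keep)

def writing_forms(structure):
    """Candidate writing forms of a structure such as 'lutud-an',
    'denge-i' or 'q<n>iyut-an'."""
    stem, dash, sfx = structure.partition("-")
    stem = re.sub(r"[<>]", "", stem)   # Step 3 for the infix symbols
    sfx = sfx.replace("-", "")         # Step 3: 'an-an' -> 'anan'
    variants = [stem + sfx]
    if dash and stem[-1] in VOWELS and sfx[:1] in VOWELS and sfx[0] != stem[-1]:
        # Step 2: 'y' or 'w' may separate a vowel-final root part
        # from a vowel-initial suffix
        variants += [stem + "y" + sfx, stem + "w" + sfx]
    return [reduce_vowels(v, suffixed=bool(dash)) for v in variants]

assert "htrani" in writing_forms("hetur-ani")   # plain vowel reduction
assert "saanun" in writing_forms("osa-an-un")   # adjacent identical vowels
assert "dngei" in writing_forms("denge-i")      # deep root of "dengu"
assert "cbuwan" in writing_forms("cebu-an")     # 'w' insertion
```

The asserts reproduce the examples discussed above; in the matching step, a wrong variant (e.g. \"cbuyan\") is simply discarded because it does not match the target word.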
Our goal is to find the best system for predicting the morphological structures of words in the CIP Seediq dictionary with high accuracy, in order to reduce the human checking effort in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis", "sec_num": null }, { "text": "First of all, we need to prepare a list of atomic prefixes. There are 29 atomic prefixes found in the Seediq syntax book, including {\"k-\", \"n-\", \"kn-\", \"m-\"\u2026}. We further found 10 different atomic prefixes in the test data, including {\"de-\", \"gn-\", \"km-\"\u2026}. The following experiments are based on these atomic prefixes. We do not know whether the CIP Seediq dictionary contains further new atomic prefixes or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis", "sec_num": null }, { "text": "Reduplication (introduced in Section 2.2) also appears in the prefix part. It is used to emphasize the amount of something or the intensity of an action. It can be attached to a root word or an atomic prefix. It repeats either the first consonant (e.g. \"s-\" in \"s-seediq\" and the first \"p-\" in \"p-p-heyu\"), or the first consonant with the word-initial vowel (e.g. \"le-\" in \"k-le-eluw\"), or the first two consonants (e.g. \"kr-\" in \"m-kr-kere\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis", "sec_num": null }, { "text": "During training, all reduplication prefixes are replaced with a special symbol and treated as one type of atomic prefix. Therefore, there are in total 40 types of atomic prefixes in the experiments in Section 4. When segmenting a prefix string, a segment matching any of the 3 reduplication cases shown in the previous paragraph is considered to be a reduplication prefix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis", "sec_num": null }, { "text": "One common approach to Chinese word segmentation is to build probability models. In a similar way, we propose unigram and bigram models for prefix structure analysis in Seediq. Given a prefix string pfx and one of its segmentations x_1x_2\u2026x_m, where x_i is an atomic prefix, the probability of this segmentation is defined as follows, where $ denotes the beginning of the prefix string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "Unigram model: P(x_1x_2\\ldots x_m) = \\prod_{i=1}^{m} P(x_i) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "Bigram model: P(x_1x_2\\ldots x_m) = P(x_1|$) \\prod_{i=2}^{m} P(x_i|x_{i-1}) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "Because the amount of training data is not large, we need to apply smoothing methods to avoid zero probabilities. Well-known smoothing methods such as Witten-Bell or Good-Turing are suited to large training data, so we did not choose them in this paper. Instead, we use Lidstone smoothing to build our unigram model. That is, a value \u03bb is added to the frequency of each atomic prefix (seen or unseen) before building the probability model. Let N be the original sum of the frequencies of all atomic prefixes and B be the number of types of atomic prefixes.
Lidstone smoothing will assign a probability of \u03bb / (N+B\u03bb) to each unseen atomic prefix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "We use a back-off approach to deal with zero probabilities in the bigram model. That is, we consider the unigram probability (weighted by an \u03b1 value) of the second prefix in an unseen bigram. When P(x|y)=0, we use P(x|y)=\u03b1P(x) instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "The unigram model provides the probabilities of 40 atomic prefixes. The bigram model provides the probabilities of bigrams over these 40 prefixes and the starting sign $ (thus 41\u00d740 types of bigrams). An unknown prefix x_i, or a bigram containing such an unknown prefix, has no probability. In our work, smoothing is designed for known but unseen atomic prefixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "The steps of prefix structure analysis are as follows. Given a prefix string pfx, all segmentations x_1x_2\u2026x_m are enumerated by inserting one or zero '-' between any two adjacent letters. For example, the prefix string \"mss-\" can be segmented into {\"mss-\", \"m-ss-\", \"ms-s-\", \"m-s-s-\"}. The segmentation having the best probability is selected as the final answer. Note that the strings \"mss-\" and \"ss-\" do not appear in the list of atomic prefixes and thus have no probability; so the probability of \"mss-\" and \"m-ss-\" is also 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Models", "sec_num": null }, { "text": "We also tried machine learning methods to predict the prefix structure. However, we have too few features so far; the only features we know of are contextual information and the list of atomic prefixes. More useful features need to be discovered in the future. The following example illustrates the features of each letter in the prefix string \"psq-\", whose correct structure is \"ps-q-\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning and Deep Learning Methods", "sec_num": null }, { "text": "For more details, the third row of Table 3 shows that 79 of the 394 words in the training set contain infixes, where 77 of them can be detected by the system (recall 77 / 79 = 97.47%) and all of them are correct (precision 77 / 77 = 100%). All precision scores of prefix, infix, and suffix detection are around 98% to 100%. Recall scores are a little lower, because 37 of the 394 affixed words are exceptions to the morphological rules.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Infix and Suffix Detection Experiments", "sec_num": "4.2" }, { "text": "In the training set, only 221 words are prefixed as shown in Table 3. 116 of them are prefixed by one single-letter prefix, and hence no further analysis is needed. Therefore, the training set of prefix structure analysis contains only 105 words whose prefix parts are longer than one letter. When evaluating on the training set, we adopt the leave-one-out cross-validation method due to the small amount of data. Each word is predicted by the classifier trained with the other 104 words. As for the test set, there are 194 prefixed words, and 103 of them have prefixes longer than one letter. The metric of evaluation is accuracy.
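To make the scoring procedure concrete before turning to the results, the following sketch enumerates the segmentations of a prefix string and scores them with the bigram model, backing off to the Lidstone-smoothed unigram model. It is an illustration under the definitions above, not the authors' code: bigram_p is assumed to hold relative frequencies estimated from the training set, only segmentations made of known atomic prefixes are generated (the others have zero probability anyway), and reduplication handling is omitted.

```python
def lidstone_unigram(counts, lam, vocab):
    """Lidstone-smoothed unigram model: P(x) = (c(x)+lam) / (N+B*lam),
    where N is the total frequency and B the number of known prefixes."""
    N, B = sum(counts.values()), len(vocab)
    return {x: (counts.get(x, 0) + lam) / (N + B * lam) for x in vocab}

def segmentations(pfx, atomic):
    """All splits of e.g. 'mss' into known atomic prefixes, such as
    ['m', 's', 's'] and, if 'ms' were atomic, ['ms', 's']."""
    if pfx == "":
        return [[]]
    return [[pfx[:i]] + rest
            for i in range(1, len(pfx) + 1) if pfx[:i] in atomic
            for rest in segmentations(pfx[i:], atomic)]

def bigram_score(seg, bigram_p, unigram_p, alpha):
    """P(x1..xm) = P(x1|$) * product of P(xi|xi-1); when P(x|y) = 0 or the
    bigram is unseen, back off to the weighted unigram alpha * P(x)."""
    p, prev = 1.0, "$"
    for x in seg:
        p *= bigram_p.get((prev, x)) or alpha * unigram_p[x]
        prev = x
    return p

def best_segmentation(pfx, atomic, bigram_p, unigram_p, alpha):
    return max(segmentations(pfx, atomic), default=None,
               key=lambda s: bigram_score(s, bigram_p, unigram_p, alpha))
```

For the \"mss-\" example above, only splits into known atomic prefixes survive, mirroring the zero probabilities noted earlier.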
The prefix structure prediction has to be exactly the same as the gold standard to be counted as \"correct\".", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Prefix Structure Analysis Experiments", "sec_num": "4.3" }, { "text": "The experimental results of unigram models with different \u03bb values are shown in Table 4. The \u03bb value does not affect the performance on the training set much. This means that most unseen prefixes only appear in the test set. Interestingly, when the \u03bb value is set to 3 or larger, the performance on the test set improves greatly. This seems to hint that we need a training set where each atomic prefix appears at least 3 times.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Prefix Structure Analysis Experiments", "sec_num": "4.3" }, { "text": "The experimental results of bigram models with different \u03bb values are listed in Table 5. Again, the \u03bb value does not affect the performance on the training set much, but improves the performance on the test set a lot when it is set to 2 or larger. The experimental results of bigram models with different \u03b1 values are shown in Table 6. Comparing the first system (where \u03b1 = 0) with the others, we can see that the back-off method does improve the performance. However, the \u03b1 value does not affect the performance much.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 327, "end": 334, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Prefix Structure Analysis Experiments", "sec_num": "4.3" }, { "text": "The parameters of the best system are \u03bb = 3 and \u03b1 = 0.7, achieving an accuracy of 82.86% on the training set and 85.44% on the test set. The machine learning and deep learning methods described in Section 3.4 were also tested in this paper. Many well-known classifiers, including Na\u00efve Bayes, SVM, and decision trees, were tried, and an encoder-decoder system based on LSTM was also constructed. Unfortunately, the best accuracy is only 52.06%. The training set is too small for machine learning and deep learning at this stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Structure Analysis Experiments", "sec_num": "4.3" }, { "text": "In general, our infix and suffix detection system can successfully predict structures for nearly 90% of words. This will greatly reduce the human effort needed to construct a larger dataset from the CIP Seediq dictionary. In our preliminary observation, only 335 of the 5,600 words in the CIP Seediq dictionary cannot be predicted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks", "sec_num": "4.4" }, { "text": "Error analysis indicates that some words are inflected in an exceptional way. For example, the writing form of \"kesa-un\" is \"kesun\" (do this way, \u9019\u6a23\u505a), but our system incorrectly predicts it as \"ksaun\"; and the writing form of \"p-uqi-un\" is \"puqun\" (eat, \u5403), but our system incorrectly predicts \"puqiun\" or \"puqiyun\". A list of exceptional words should be constructed in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks", "sec_num": "4.4" }, { "text": "As for our prefix analysis system, it can successfully analyze structures for around 83% of prefixed words.
Again, this will greatly reduce the human effort in the future. However, it is not easy to improve the performance of the prefix structure analysis system. To solve the ambiguity problem (such as \"kn-\" vs. \"k<n>-\"), we might need the semantic information of the prefixed word or even information about its function in the sentence. This will also be explored in the near future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks", "sec_num": "4.4" }, { "text": "This paper proposes approaches to analyze the morphological structures of Seediq words automatically. The experimental datasets contain 716 affixed Seediq words with their morphological structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Morphological analysis starts with infix and suffix detection. Deep root candidates generated by our proposed methods are combined with known infixes and suffixes. The writing form of each combination is then generated by the transformation rules. If the writing form matches the trailing substring of the target word, the combination is selected as the result of infix and suffix detection. This approach achieves a precision of 98.88% and a recall of 89.59%. Prefix structure analysis is treated similarly to the word segmentation problem and predicted by probabilistic models. The zero-probability problem in the bigram model is solved by the back-off approach, i.e., using the unigram probability weighted by \u03b1 instead. The zero-probability problem in the unigram model is solved by Lidstone smoothing, i.e., adding \u03bb to the unigram frequencies. We conclude that the best system is based on the bigram model with \u03bb = 3 and \u03b1 = 0.7, with an accuracy of 82.86%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "In the future, we would like to apply the techniques developed in this paper to analyze the 5,595 words in the CIP Seediq dictionary to create a larger dataset and build a more reliable probabilistic model. Moreover, if the morphological structures of all words appearing in the 6,019 exemplar sentences in the CIP Seediq dictionary are available, it will then be possible to build a large bilingual corpus for machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "4 https://alilin.apc.gov.tw/tw/ebooks 5 http://web.klokah.tw/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was funded by the Ministry of Science and Technology in Taiwan (Grant: MOST 109-2221-E-019-053).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": "The feature c_k denotes a letter in the context. The Boolean features [B E S] denote the positions where this letter can appear in atomic prefixes. For example, the [B E S] values of the letter 'p' are [1 0 1] because it appears at the beginning (B) of some atomic prefixes {\"pn-\", \"ps-\"\u2026} and it can be a single-letter prefix \"p-\" by itself (S). E means appearing at the end of an atomic prefix. Note that no atomic prefix is longer than 2 letters in our datasets. The final classification is also one of the BES labels. In addition, deep learning methods such as the encoder-decoder model are explored. The input is the prefix string, where letters and the symbol '-' are denoted by one-hot encoding.
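To make the feature scheme concrete, the sketch below computes the contextual and BES features for one letter. It is an interpretation of the description, not the authors' code: B and E are read as the beginnings and ends of two-letter atomic prefixes (so that 'p' receives [1 0 1] as in the example), and the one-letter context window and the toy prefix list are assumptions.

```python
def bes_features(letter, atomic):
    """[B, E, S]: can this letter begin (B) or end (E) a two-letter atomic
    prefix, or form a single-letter prefix by itself (S)?
    Atomic prefixes are written here without the trailing '-'."""
    b = any(len(a) > 1 and a[0] == letter for a in atomic)
    e = any(len(a) > 1 and a[-1] == letter for a in atomic)
    s = letter in atomic
    return [int(b), int(e), int(s)]

def letter_features(pfx, i, atomic):
    """Features of the i-th letter of a prefix string such as 'psq':
    the surrounding letters plus the BES booleans ('#' pads the edges)."""
    ctx = [pfx[i + k] if 0 <= i + k < len(pfx) else "#" for k in (-1, 0, 1)]
    return ctx + bes_features(pfx[i], atomic)

atomic = {"p", "s", "q", "k", "m", "n", "pn", "ps", "kn"}  # toy subset
assert bes_features("p", atomic) == [1, 0, 1]              # as in the text
```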
The output is the prediction of the morphological structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "The first dataset comes from the Seediq syntax book \"\u8cfd\u5fb7\u514b\u8a9e\u8a9e\u6cd5\u6982\u8ad6\". There are 509 sentences provided as illustrations in this book. The morphological structures of words in the sentences are also provided. There are 817 distinct Seediq words appearing in the sentences, and 394 of them contain affixes. We took these 394 affixed words as the training data. The second dataset comes from 515 new sentences provided by Dr. Li-May Sung, the author of the Seediq syntax book and one of the authors of this paper. These sentences are also tagged with morphological structures. 322 new Seediq words with affixes are extracted from these sentences as the test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Datasets", "sec_num": "4.1" }, { "text": "Sections 3.1 ~ 3.3 propose approaches to detect the deep root, prefix, infix, and suffix parts in a given Seediq word (where the structure inside the prefix part is not yet predicted). Table 3 lists the performance of these approaches, where precision is the percentage of system-detected units (words or affixes) that are correct, and recall is the percentage of gold-standard units that are detected by the system.", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 194, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Infix and Suffix Detection Experiments", "sec_num": "4.2" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahdanau, D., Cho, K.H., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Cross-Dialect Adaptation Framework for Constructing Prosodic Models for Chinese Dialect Text-to-Speech Systems", "authors": [ { "first": "C.-Y", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2018, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "volume": "26", "issue": "1", "pages": "108--121", "other_ids": { "DOI": [ "10.1109/TASLP.2017.2762432" ] }, "num": null, "urls": [], "raw_text": "Chiang, C.-Y. (2018). Cross-Dialect Adaptation Framework for Constructing Prosodic Models for Chinese Dialect Text-to-Speech Systems. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(1), 108-121.
doi: 10.1109/TASLP.2017.2762432", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A System Framework for Integrated Synthesis of Mandarin, Min-Nan, and Hakka Speech", "authors": [ { "first": "H.-Y", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Y.-Z", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "H.-L", "middle": [], "last": "Liau", "suffix": "" } ], "year": 2007, "venue": "International Journal of Computational Linguistics & Chinese Language Processing", "volume": "12", "issue": "4", "pages": "371--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, H.-Y., Zhou, Y.-Z., & Liau, H.-L. (2007). A System Framework for Integrated Synthesis of Mandarin, Min-Nan, and Hakka Speech. International Journal of Computational Linguistics & Chinese Language Processing, 12(4), 371-390.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Basic Lexicon and Shared Ontology for Multilingual Resources: A SUMO+MILO Hybrid Approach", "authors": [ { "first": "S.-K", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "I-L", "middle": [], "last": "Su", "suffix": "" }, { "first": "P.-Y", "middle": [], "last": "Hsiao", "suffix": "" }, { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "T.-Y", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "L", "middle": [], "last": "Pr\u00e9vot", "suffix": "" } ], "year": 2007, "venue": "Proceedings of OntoLex Workshop in the 6th International Semantic Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsieh, S.-K., Su, I-L., Hsiao, P.-Y., Huang, C.-R., Kuo, T.-Y., & Pr\u00e9vot, L. (2007). Basic Lexicon and Shared Ontology for Multilingual Resources: A SUMO+MILO Hybrid Approach. In Proceedings of OntoLex Workshop in the 6th International Semantic Web Conference.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Modeling Taiwanese Southern-Min Tone Sandhi Using Rule-Based Methods", "authors": [ { "first": "U.-G", "middle": [], "last": "Iunn", "suffix": "" }, { "first": "K.-G", "middle": [], "last": "Lau", "suffix": "" }, { "first": "H.-G", "middle": [], "last": "Tan-Tenn", "suffix": "" }, { "first": "S.-A", "middle": [], "last": "Lee", "suffix": "" }, { "first": "C.-Y", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2007, "venue": "International Journal of Computational Linguistics & Chinese Language Processing", "volume": "12", "issue": "4", "pages": "349--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iunn, U.-G., Lau, K.-G., Tan-Tenn, H.-G., Lee, S.-A., & Kao, C.-Y. (2007). Modeling Taiwanese Southern-Min Tone Sandhi Using Rule-Based Methods. 
International Journal of Computational Linguistics & Chinese Language Processing, 12(4), 349-370.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Morphophonemic Alternations in Formosan Languages", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Li", "suffix": "" }, { "first": ".-K", "middle": [], "last": "", "suffix": "" } ], "year": 1977, "venue": "Bulletin of the Institute of History and Philology (\u4e2d\u592e\u7814\u7a76\u9662\u6b77\u53f2\u8a9e\u8a00\u7814\u7a76\u6240\u96c6\u520a)", "volume": "48", "issue": "3", "pages": "375--413", "other_ids": { "DOI": [ "10.6355/BIHPAS.197709.0375" ] }, "num": null, "urls": [], "raw_text": "Li, P. J.-K. (1977). Morphophonemic Alternations in Formosan Languages. Bulletin of the Institute of History and Philology (\u4e2d\u592e\u7814\u7a76\u9662\u6b77\u53f2\u8a9e\u8a00\u7814\u7a76\u6240\u96c6\u520a), 48(3), 375-413. doi: 10.6355/BIHPAS.197709.0375", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Reconstruction of Proto-Atayalic Phonology", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Li", "suffix": "" }, { "first": ".-K", "middle": [], "last": "", "suffix": "" } ], "year": 1981, "venue": "Bulletin of the Institute of History and Philology (\u4e2d\u592e\u7814\u7a76\u9662\u6b77\u53f2\u8a9e\u8a00\u7814\u7a76\u6240\u96c6\u520a)", "volume": "52", "issue": "2", "pages": "235--301", "other_ids": { "DOI": [ "10.6355/BIHPAS.198106.0235" ] }, "num": null, "urls": [], "raw_text": "Li, P. J.-K. (1981). Reconstruction of Proto-Atayalic Phonology. Bulletin of the Institute of History and Philology (\u4e2d\u592e\u7814\u7a76\u9662\u6b77\u53f2\u8a9e\u8a00\u7814\u7a76\u6240\u96c6\u520a), 52(2), 235-301. doi: 10.6355/BIHPAS.198106.0235", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Vowel Deletion and Vowel Assimilation in Sediq", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Li", "suffix": "" }, { "first": ".-K", "middle": [], "last": "", "suffix": "" } ], "year": 1991, "venue": "Papers on Austronesian languages and ethnolinguistics in honour of George W. Grace, Pacific Linguistics C-117", "volume": "", "issue": "", "pages": "163--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, P. J.-K. (1991). Vowel Deletion and Vowel Assimilation in Sediq. In Papers on Austronesian languages and ethnolinguistics in honour of George W. Grace, Pacific Linguistics C-117, 163-169.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan", "authors": [ { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1999, "venue": "International Journal of Computational Linguistics & Chinese Language Processing", "volume": "4", "issue": "1", "pages": "59--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C.-J. & Chen, H.-H. (1999). A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan. 
International Journal of Computational Linguistics & Chinese Language Processing, 4(1), 59-84.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Effective Approaches to Attention-based Neural Machine Translation", "authors": [ { "first": "M.-T", "middle": [], "last": "Luong", "suffix": "" }, { "first": "H", "middle": [], "last": "Pham", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1412-1421. doi: 10.18653/v1/D15-1166", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Corpus-Based Approach to the Classification of Yami Emotion", "authors": [ { "first": "D", "middle": [ "V" ], "last": "Rau", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2015, "venue": "New Advances in Formosan Linguistics", "volume": "", "issue": "", "pages": "533--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rau, D. V., Wu, Y.-H., & Yang, M.-C. (2015). A Corpus-Based Approach to the Classification of Yami Emotion. New Advances in Formosan Linguistics, Asia-Pacific Linguistics, 533-554.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Revitalization of Formosan Languages: Compilation of Seediq Dictionary", "authors": [ { "first": "", "middle": [], "last": "\u5b8b\u9e97\u6885", "suffix": "" } ], "year": 2011, "venue": "New Taipei City", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u5b8b\u9e97\u6885(2011)\u3002\u300c\u539f\u4f4f\u6c11\u65cf\u8a9e\u8a00\u5b57\u8a5e\u5178\u7de8\u7e82\u56db\u5e74\u8a08\u756b\u2500\u7b2c 3 \u968e\u6bb5\u8a08\u756b\u300d(\u8cfd\u5fb7\u514b\u8a9e)\u3002\u65b0\u5317\u5e02:\u539f\u4f4f\u6c11\u65cf\u59d4\u54e1\u6703\u3002[Sung, L.-M. (2011). Revitalization of Formosan Languages: Compilation of Seediq Dictionary. New Taipei City, Taiwan: Council of Indigenous Peoples.] 2009/8/3-2011/8/2.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Sketch Grammar of Seediq", "authors": [ { "first": "", "middle": [], "last": "\u5b8b\u9e97\u6885", "suffix": "" } ], "year": 2018, "venue": "\u81fa\u7063\u5357\u5cf6\u8a9e\u8a00\u53e2\u66f8 5: \u8cfd\u5fb7\u514b\u8a9e\u8a9e\u6cd5\u6982\u8ad6(2 \u7248)", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u5b8b\u9e97\u6885(2018)\u3002\u81fa\u7063\u5357\u5cf6\u8a9e\u8a00\u53e2\u66f8 5: \u8cfd\u5fb7\u514b\u8a9e\u8a9e\u6cd5\u6982\u8ad6(2 \u7248)\u3002\u65b0\u5317\u5e02:\u539f\u4f4f\u6c11\u65cf\u59d4\u54e1\u6703\u3002 [Sung, L.-M. (2018). A Sketch Grammar of Seediq, Formosan Series #5, 2018 (2nd Edition). 
New Taipei City, Taiwan: Council of Indigenous Peoples.]", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Phonological Structure of the Paran Dialect of Sedeq", "authors": [ { "first": "H.-F", "middle": [], "last": "Yang", "suffix": "" } ], "year": 1976, "venue": "Bulletin of the Institute of History and Philology", "volume": "47", "issue": "4", "pages": "611--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u694a\u79c0\u82b3(1976)\u3002\u8cfd\u5fb7\u8a9e\u9727\u793e\u65b9\u8a00\u7684\u97f3\u97fb\u7d50\u69cb\u3002\u4e2d\u592e\u7814\u7a76\u9662\u6b77\u53f2\u8a9e\u8a00\u7814\u7a76\u6240\u96c6\u520a\uff0c47(4)\uff0c611-706\u3002[Yang, H.-F. (1976). The Phonological Structure of the Paran Dialect of Sedeq. Bulletin of the Institute of History and Philology, 47(4), 611-706.]", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Constructing a Yami Language Lexicon Database from Yami Archives", "authors": [ { "first": "M.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "D", "middle": [ "V" ], "last": "Rau", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 TELDAP (Taiwan e-Learning and Digital Archives Program) International Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, M.-C. & Rau, D. V. (2011). Constructing a Yami Language Lexicon Database from Yami Archives. In Proceedings of the 2011 TELDAP (Taiwan e-Learning and Digital Archives Program) International Conference.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Proposed Model for Constructing a Yami Wordnet", "authors": [ { "first": "M.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "D", "middle": [ "V" ], "last": "Rau", "suffix": "" }, { "first": "A", "middle": [ "H" ], "last": "Chang", "suffix": "" }, { "first": ".-H", "middle": [], "last": "", "suffix": "" } ], "year": 2011, "venue": "International Journal of Asian Language Processing", "volume": "21", "issue": "1", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, M.-C., Rau, D. V., & Chang, A. H.-H. (2011). A Proposed Model for Constructing a Yami Wordnet. International Journal of Asian Language Processing, 21(1), 1-14.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Exploring the NLP Techniques for Formosa Indigenous Languages", "authors": [ { "first": "M.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "D", "middle": [ "V" ], "last": "Rau", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u694a\u5b5f\u84a8\u3001\u4f55\u5fb7\u83ef(2015)\u3002\u5efa\u69cb\u53f0\u7063\u539f\u4f4f\u6c11\u8a9e\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u6280\u8853\u63a2\u8a0e\u8207\u7814\u7a76\u3002\u79d1\u6280\u90e8\u8a08\u756b\u671f\u672b\u5831\u544a(\u7de8\u865f: MOST 103-2221-E-126-008-) [Yang, M.-C. & Rau, D.V. (2015). Exploring the NLP Techniques for Formosa Indigenous Languages. 
(MOST 103-2221-E-126-008-), 2014/8~2015/7.]", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Polysemy Problem, an Important Issue in a Chinese to Taiwanese TTS System", "authors": [ { "first": "M.-S", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Y.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2012, "venue": "International Journal of Computational Linguistics & Chinese Language Processing", "volume": "17", "issue": "1", "pages": "43--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, M.-S. & Lin, Y.-J. (2012). The Polysemy Problem, an Important Issue in a Chinese to Taiwanese TTS System. International Journal of Computational Linguistics & Chinese Language Processing, 17(1), 43-64.", "links": null } }, "ref_entries": { "TABREF2": { "num": null, "html": null, "type_str": "table", "content": "
Seediq   | Root  | Morphological Structure | Meaning (Seediq / Root)
mpkbeyax | beyax | m-p-k-beyax    | hard-working, \u52aa\u529b / do with force, \u7528\u529b
cmnebu   | cebu  | c<m><n>ebu     | shot successfully, \u6253\u4e2d\u4e86 / shoot, \u64ca\u5c04
qyaanun  | qeya  | qeya-an-un     | hang, \u639b / hang, \u639b
pndsanan | adis  | p<n>adis-an-an | bring back, \u5e36\u56de / bring, \u5e36
", "text": "" }, "TABREF4": { "num": null, "html": null, "type_str": "table", "content": "
       |              Training Data              |                Test Data
Unit   | Gold | System | Correct | P (%) | R (%) | Gold | System | Correct | P (%) | R (%)
Word   | 394  | 357    | 353     | 98.88 | 89.59 | 322  | 286    | 278     | 97.20 | 86.34
Infix  |  79  |  77    |  77     | 100.0 | 97.47 |  55  |  47    |  47     | 100.0 | 85.45
Suffix | 169  | 135    | 135     | 100.0 | 79.88 | 127  |  98    |  98     | 100.0 | 77.17
Prefix | 221  | 207    | 203     | 98.07 | 91.86 | 194  | 186    | 180     | 96.77 | 92.78
", "text": "" }, "TABREF5": { "num": null, "html": null, "type_str": "table", "content": "
      |     Training Data       |       Test Data
\u03bb | Word | Correct | A (%)  | Word | Correct | A (%)
0     | 105  | 86      | 81.905 | 103  | 61      | 59.223
0.1   | 105  | 86      | 81.905 | 103  | 64      | 62.136
0.3   | 105  | 85      | 80.952 | 103  | 65      | 63.107
0.5   | 105  | 86      | 81.905 | 103  | 69      | 66.990
1     | 105  | 85      | 80.952 | 103  | 71      | 68.932
2     | 105  | 81      | 77.143 | 103  | 83      | 80.583
3     | 105  | 79      | 75.238 | 103  | 86      | 83.495
4     | 105  | 81      | 77.143 | 103  | 86      | 83.495
5     | 105  | 79      | 75.238 | 103  | 87      | 84.466
", "text": "" }, "TABREF6": { "num": null, "html": null, "type_str": "table", "content": "
      |        |     Training Data       |       Test Data
\u03bb | \u03b1 | Word | Correct | A (%)  | Word | Correct | A (%)
0     | 0.7    | 105  | 82      | 78.095 | 103  | 64      | 62.136
0.01  | 0.7    | 105  | 81      | 77.143 | 103  | 67      | 65.049
0.1   | 0.7    | 105  | 81      | 77.143 | 103  | 67      | 65.049
0.2   | 0.7    | 105  | 81      | 77.143 | 103  | 68      | 66.019
0.3   | 0.7    | 105  | 82      | 78.095 | 103  | 68      | 66.019
0.4   | 0.7    | 105  | 82      | 78.095 | 103  | 68      | 66.019
0.5   | 0.7    | 105  | 83      | 79.048 | 103  | 68      | 66.019
0.6   | 0.7    | 105  | 83      | 79.048 | 103  | 68      | 66.019
1     | 0.7    | 105  | 82      | 78.095 | 103  | 70      | 67.961
2     | 0.7    | 105  | 84      | 80.000 | 103  | 87      | 84.466
3     | 0.7    | 105  | 87      | 82.857 | 103  | 88      | 85.437
4     | 0.7    | 105  | 82      | 78.095 | 103  | 89      | 86.408
5     | 0.7    | 105  | 80      | 76.191 | 103  | 87      | 84.466
6     | 0.7    | 105  | 81      | 77.143 | 103  | 87      | 84.466
7     | 0.7    | 105  | 82      | 78.095 | 103  | 87      | 84.466
8     | 0.7    | 105  | 80      | 76.191 | 103  | 87      | 84.466
", "text": "" }, "TABREF7": { "num": null, "html": null, "type_str": "table", "content": "
      |        |     Training Data       |       Test Data
\u03bb | \u03b1 | Word | Correct | A (%)  | Word | Correct | A (%)
3     | 0      | 105  | 80      | 76.191 | 103  | 83      | 80.583
3     | 0.1    | 105  | 86      | 81.905 | 103  | 87      | 84.466
3     | 0.4    | 105  | 86      | 81.905 | 103  | 88      | 85.437
3     | 0.7    | 105  | 87      | 82.857 | 103  | 88      | 85.437
3     | 1      | 105  | 86      | 81.905 | 103  | 88      | 85.437
", "text": "" } } } }