{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:18.748162Z" }, "title": "Character Set Construction for Chinese Language Learning", "authors": [ { "first": "Yan", "middle": [], "last": "Chak", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hong Kong Hong Kong SAR", "location": { "country": "China" } }, "email": "" }, { "first": "John", "middle": [], "last": "Yeung", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hong Kong Hong Kong SAR", "location": { "country": "China" } }, "email": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hong Kong Hong Kong SAR", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To promote efficient learning of Chinese characters, pedagogical materials may present not only a single character, but a set of characters that are related in meaning and in written form. This paper investigates automatic construction of these character sets. The proposed model represents a character as averaged word vectors of common words containing the character. It then identifies sets of characters with high semantic similarity through clustering. Human evaluation shows that this representation outperforms direct use of character embeddings, and that the resulting character sets capture distinct semantic ranges.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "To promote efficient learning of Chinese characters, pedagogical materials may present not only a single character, but a set of characters that are related in meaning and in written form. This paper investigates automatic construction of these character sets. The proposed model represents a character as averaged word vectors of common words containing the character. 
It then identifies sets of characters with high semantic similarity through clustering. Human evaluation shows that this representation outperforms direct use of character embeddings, and that the resulting character sets capture distinct semantic ranges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "To promote efficient vocabulary acquisition, pedagogical materials may present the learner with a set of related words, rather than a single word. The set often consists of words belonging to the same \"family\"; for English, family members may share the same root, such as the words special, specialize, specialty, especially, etc. These families can be constructed in a straightforward manner from morphological databases and analyzers. 1 An analogous strategy for teaching Chinese is the \"character family\", i.e. a set of characters that are similar in meaning and written form. A natural criterion for family membership is the semantic component, or semantic radical, of the character. The family based on the component 'sun', for example, includes the characters for 'sunny', 'sunshine' and 'dawn' as members (Table 1) .", "cite_spans": [ { "start": 437, "end": 438, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 812, "end": 821, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In comparison to word families in English, character families tend to exhibit less semantic regularity. Some family members may have unrelated meanings, or semantic relations that have become obscured in modern Chinese. For example, the character cu\u00f2 'wrong' belongs to the 'metal' family, and the character zu\u00f3 'past' belongs to the 'sun' family (Table 1). 
When preparing character sets for use in computer-assisted language learning (CALL) applications, manual selection is often necessary to ensure that the sets illustrate semantic regularity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper investigates automatic construction of character sets. The proposed method can be expected to expedite the generation of these sets for more components and for a larger variety of semantic categories, with the goal of enhancing the coverage and effectiveness of CALL applications for learning Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For each semantic component, we define its \"character family\" to consist of all characters that contain the component. As shown in Table 1 , not all family members have sufficiently related meaning to serve as good examples in pedagogical materials. Given a family, the character set construction task is to identify a subset of its characters that are semantically close. In designing an algorithm for this task, we address two research topics:", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Research Questions", "sec_num": "2" }, { "text": "Character representation (Q1) The character representation should reflect the \"overall\" meaning of the character in a variety of contexts. We compare the use of character and word embeddings in constructing character sets (Section 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Questions", "sec_num": "2" }, { "text": "Subfamilies (Q2) All family members are traditionally viewed as capturing the general meaning of their semantic component. 
We investigate whether some families can be clustered into subfamilies to produce character sets with more tightly related meaning (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Questions", "sec_num": "2" }, { "text": "Example character sets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Character family component", "sec_num": null }, { "text": "\u65e5 r\u00ec 'sun': \u6674 q\u00edng 'sunny', \u6689 hu\u012b 'sunshine', \u66c9 xi\u01ceo 'dawn'; other: \u6628 zu\u00f3 'past'. \u91d1 j\u012bn 'metal': \u9285 t\u00f3ng 'copper', \u9435 ti\u011b 'iron', \u9280 y\u00edn 'silver'; other: \u932f cu\u00f2 'wrong'. \u9801 y\u00e8 'page': \u982d t\u00f3u 'head', \u984d \u00e9 'forehead', \u9838 j\u01d0ng 'neck'; other: \u985e l\u00e8i 'type'. \u5973 n\u01d4 'female': Subfamily #1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Character family component", "sec_num": null }, { "text": "\u5ac1 ji\u00e0 'marry', \u5a36 q\u01d4 'marry', \u5a5a h\u016bn 'marriage'; Subfamily #2: \u59e8 y\u00ed 'aunt', \u59d0 ji\u011b 'older sister', \u59b9 m\u00e8i 'younger sister'; other: \u59cb sh\u01d0 'begin'. Table 1 : Each character family is associated with a semantic component and its members consist of all characters that contain the component. Semantic similarity can be strong for some family members (\"Character sets\" column) but less apparent for others (\"Other characters\" column).", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Semantic Character family component", "sec_num": null }, { "text": "Chinese words are composed of characters. According to Li and Kang (1993) , 81% of the characters are \"semantic-phonetic compounds\", which can be decomposed into two components. The phonetic component gives pronunciation clues. 
The semantic component, often used for organizing characters into families (Table 1) , indicates the semantic range of the character. The rest of this section summarizes research on Chinese subword structures in CALL (Section 3.1) and in natural language processing (Section 3.2).", "cite_spans": [ { "start": 55, "end": 73, "text": "Li and Kang (1993)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 303, "end": 312, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "There are considerable pedagogical benefits in highlighting the semantic regularity in character families (Tse et al., 2007; Leong et al., 2011) . Many CALL applications for Chinese have therefore featured these families, including web-based tutorials (Chen et al., 2011) and Scrabble-like character formation games (Lam et al., 2001; Lee and Yeung, 2020) .", "cite_spans": [ { "start": 106, "end": 124, "text": "(Tse et al., 2007;", "ref_id": "BIBREF15" }, { "start": 125, "end": 144, "text": "Leong et al., 2011)", "ref_id": "BIBREF10" }, { "start": 252, "end": 271, "text": "(Chen et al., 2011)", "ref_id": "BIBREF2" }, { "start": 316, "end": 334, "text": "(Lam et al., 2001;", "ref_id": "BIBREF8" }, { "start": 335, "end": 355, "text": "Lee and Yeung, 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "CALL for Chinese characters", "sec_num": "3.1" }, { "text": "It is however well known that not all members in a character family have related meaning in modern Chinese. A character is called transparent if its meaning is similar or directly related to that of its semantic component. For example, in the 'metal' family, the characters t\u00f3ng 'copper' and ti\u011b 'iron' are transparent, while the character cu\u00f2 'wrong' is not (Table 1) . 
According to an analysis of primary school material (Chung and Leung, 2008) , only 64% to 82% of the characters have meaning that is related or somewhat related to their semantic components. A direct consequence is that semantic components are not uniformly useful in aiding comprehension (Liow et al., 1999) . Character sets therefore often require manual curation, which constrains their use in interactive CALL applications.", "cite_spans": [ { "start": 423, "end": 446, "text": "(Chung and Leung, 2008)", "ref_id": "BIBREF3" }, { "start": 657, "end": 676, "text": "(Liow et al., 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 359, "end": 368, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "CALL for Chinese characters", "sec_num": "3.1" }, { "text": "Various algorithms have been proposed to train embeddings for Chinese at the subword level. Contextual embeddings such as BERT (Devlin et al., 2019) are designed to derive character embeddings in a specific sentential context. In contrast, identifying the general or overall meaning of a character is the main objective in the character set construction task. Our evaluation will focus on the use of context-free embeddings.", "cite_spans": [ { "start": 127, "end": 148, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Subword representation in Chinese", "sec_num": "3.2" }, { "text": "Context-free embeddings at the word-, character- and component levels (Lu et al., 2016; Yu et al., 2017; Cao et al., 2018; Devlin et al., 2019) have been applied to many downstream tasks in Chinese NLP, but there has not been any quantitative evaluation on their use in creating character sets for CALL. Besides direct use of character embeddings, a possible approach is to measure similarity between character and component embeddings, as suggested by a qualitative study on the 'illness' component (Yu et al., 2017 ). 
An alternative is to exploit embeddings of words formed by the character, although the character's semantic contribution to different words may vary (Xu et al., 2016) . Our study evaluates character sets produced by a number of these approaches.", "cite_spans": [ { "start": 68, "end": 85, "text": "(Lu et al., 2016;", "ref_id": "BIBREF13" }, { "start": 86, "end": 102, "text": "Yu et al., 2017;", "ref_id": "BIBREF17" }, { "start": 103, "end": 120, "text": "Cao et al., 2018;", "ref_id": "BIBREF1" }, { "start": 121, "end": 141, "text": "Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 498, "end": 514, "text": "(Yu et al., 2017", "ref_id": "BIBREF17" }, { "start": 667, "end": 684, "text": "(Xu et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Subword representation in Chinese", "sec_num": "3.2" }, { "text": "We address Q1 by evaluating two character representations for the character set construction task: given a family F , identify a subset of N characters, say S = {c_1, ..., c_N} \u2286 F , that have the most similar or related meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character representation", "sec_num": "4" }, { "text": "A simple approach would be to retrieve characters with the closest meaning to the semantic component associated with F . This method can be problematic, however, since the dominant meaning of a component may differ from those of the family members. For example, members of the family associated with the component y\u00e8 are semantically related to \"head\", but as a standalone character y\u00e8 means 'page' in modern Chinese (Table 1) .", "cite_spans": [], "ref_spans": [ { "start": 417, "end": 426, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "We instead measure semantic similarity between characters. 
Using a set of 7.6 million sentences from Chinese Wikipedia, we trained context-free embeddings c and w for each character c and word w with the joint learning model proposed by Yu et al. (2017) . We compare two methods for generating the representation v(c) for a character c:", "cite_spans": [ { "start": 237, "end": 253, "text": "Yu et al. (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "Character vector The baseline directly uses the character embeddings, i.e., v(c) = c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "Averaged word vectors The meaning of some characters may be more clearly expressed within words. From the Wikipedia dataset, we retrieve the k most frequent words w_1, ..., w_k that contain the character c. We then average the word vectors of these k words, i.e. defining", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "v(c) = (1/k) \u03a3_{i=1,...,k} w_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "We assign a score to each candidate character set S by summing the cosine similarity for all its character pairs c_i, c_j:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(S) = \u03a3_{i,j \u2264 N, i \u2260 j} cos(v(c_i), v(c_j))", "eq_num": "(1)" } ], "section": "Approach", "sec_num": "4.1" }, { "text": "We then choose the character set that maximizes this score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.1" }, { "text": "We extracted all characters that can be decomposed into two components from the open-source dataset HanziJS. 
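The representation and scoring in Section 4.1 can be sketched as follows. This is a minimal illustration, not the paper's actual code: `word_vec` (word to embedding) and `words_by_freq` (character to the words containing it, most frequent first) are hypothetical stand-ins for the embeddings and frequency counts trained on the Wikipedia corpus, and the brute-force subset search is an assumption, since the paper does not specify its search procedure.

```python
from itertools import combinations
import numpy as np

def char_repr(c, word_vec, words_by_freq, k=5):
    """Averaged word vectors: mean embedding of the k most frequent words containing c."""
    return np.mean([word_vec[w] for w in words_by_freq[c][:k]], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_character_set(family, v, n=3):
    """Return the size-n subset S of `family` maximizing score(S), the sum of
    cosine similarities over all character pairs in S (Equation 1)."""
    return max(combinations(family, n),
               key=lambda s: sum(cosine(v[a], v[b]) for a, b in combinations(s, 2)))
```

Exhaustive search over size-n subsets is feasible here because individual character families are small.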
2 Each character was then assigned to the two character families associated with its two components. 3 We included only characters listed in the Hanyu Shuiping Kaoshi (HSK) (Hanban, 2014) , the most popular scheme for learning Chinese as a foreign language.", "cite_spans": [ { "start": 210, "end": 211, "text": "3", "ref_id": null }, { "start": 282, "end": 296, "text": "(Hanban, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Set-up", "sec_num": "4.2" }, { "text": "There were 2,285 characters distributed in 600 character families. We randomly selected ten character families for evaluation. For each family, we constructed two character sets of size N = 3 using the two methods for generating v(c) (Section 4.1), with the settings k = {5, 10, 15}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Set-up", "sec_num": "4.2" }, { "text": "We presented two human judges with all character pairs within the 20 character sets. Both native speakers of Chinese, the judges rated each pair according to the annotation scheme of the SemEval-2012 shared task on Chinese word similarity (Jin and Wu, 2012) . The similarity score ranged from 0 (not at all related) to 5 (identical).", "cite_spans": [ { "start": 239, "end": 257, "text": "(Jin and Wu, 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "We computed the correlation between the averaged human scores and the cosine similarity cos(v(c_i), v(c_j)) as generated by the two methods described in Section 4.1. The \"Character vector\" method attained a Pearson correlation coefficient of only 0.09. 
4 The \"Averaged word vectors\" method achieved a coefficient of 0.80 5 , outperforming the \"Character vector\" method. This coefficient was obtained at k = 5, i.e., averaging the 5 most frequent words. Performance degraded at higher values of k, likely because of increased sensitivity to the corpus domain.", "cite_spans": [ { "start": 255, "end": 256, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "To visualize the correlation, we compared the cosine similarity cos(v(c_i), v(c_j)) of the similar character pairs (defined as those with human ratings of 0.5 and above) and the dissimilar pairs. As shown in Figure 1 , the \"Averaged word vectors\" method produces substantially higher similarity scores for the similar pairs than the dissimilar pairs, while the \"Character vector\" method does not clearly distinguish the similar and dissimilar pairs.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "These results suggest that averaged word vectors are more effective for character set construction than direct use of character embeddings. The most frequent words likely play significant roles in shaping the \"general\" meaning of a character as perceived by native speakers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "We next address Q2 by dividing a family into subfamilies, and evaluating the quality of character sets generated from the subfamilies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subfamilies", "sec_num": "5" }, { "text": "We used K-means clustering to produce subfamilies F_i from a character family F . 
The number of clusters for each family was determined by the silhouette value, which compares the distance from each point to the other points in its own cluster with its distance to the points in the nearest neighboring cluster. 6 We extracted a character set of size N = 3 from each subfamily using the \"Averaged word vectors\" method at k = 5, which obtained the best results (Section 4.3). We identified the two subfamilies F_1 and F_2 with the highest-scoring character sets in terms of score(S), as defined in Section 4.1. For evaluation, we compare the following sets:", "cite_spans": [ { "start": 264, "end": 265, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "5.1" }, { "text": "Subfamily #1 The character set produced by F_1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "5.1" }, { "text": "The character set produced by F_2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subfamily #2", "sec_num": null }, { "text": "Mixed Subfamilies The character set produced by randomly swapping one character between Subfamily #1 and Subfamily #2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subfamily #2", "sec_num": null }, { "text": "Random The character set produced by random selection among characters in F .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subfamily #2", "sec_num": null }, { "text": "Among the 600 character families (Section 4.2), K-means clustering discovered two clusters in 14 character families, and three clusters in 5 character families. Table 2 : Human scores on character sets constructed from two subfamilies and two baselines (Section 5.1): Subfamily #1, 1.67; Subfamily #2, 1.32; Mixed Subfamilies, 0.75; Random, 0.53. Table 1 shows two clusters, or subfamilies, identified in the character family of the component n\u01d4 'female', semantically associated with matrimony and relatives, respectively. 
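The subfamily clustering in Section 5.1 can be sketched with scikit-learn, which the paper itself uses. This is an illustrative sketch under stated assumptions: `char_vecs`, a mapping from each family member to its averaged-word-vector representation, is a hypothetical stand-in, and the loop bounds follow the stated limits of at most 10 clusters per family and at least 5 characters per cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_family(char_vecs, max_k=10, min_size=5):
    """Split one character family into subfamilies via K-means,
    choosing the number of clusters k by the best silhouette value."""
    chars = list(char_vecs)
    X = np.stack([char_vecs[c] for c in chars])
    best = None  # (silhouette, labels, k)
    for k in range(2, min(max_k, len(chars) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        sil = silhouette_score(X, labels)
        if best is None or sil > best[0]:
            best = (sil, labels, k)
    _, labels, k = best
    # Reject subfamilies with fewer than `min_size` characters.
    subfamilies = [[c for c, l in zip(chars, labels) if l == j] for j in range(k)]
    return [s for s in subfamilies if len(s) >= min_size]
```

Each surviving subfamily would then be scored with score(S) from Section 4.1 to pick the two highest-scoring character sets.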
We randomly selected eight of these families for evaluation.", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 257, "text": "Table 2", "ref_id": null }, { "start": 366, "end": 373, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Set-up", "sec_num": "5.2" }, { "text": "Similar to the previous experiment, the two human judges rated the similarity of all character pairs in the generated character sets. As shown in Table 2 , the character sets Subfamily #1 (1.67) and Subfamily #2 (1.32) achieved the highest average similarity scores. Both outperformed 7 the Random set, which attained an average of 0.53 only. This result indicates that our proposed method is able to identify characters within a family that are more semantically related than other family members.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "Further, Subfamily #1 (1.67) outperformed 8 Mixed Subfamilies (0.75), suggesting that the judges perceived semantic differences between the two subfamilies. Subfamily #2 (1.32) also scored higher than Mixed Subfamilies, although the difference was not significant 9 , likely due to the lower degree of similarity between its members compared to their Subfamily #1 counterparts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "We have presented the first quantitative study on automatic construction of Chinese character sets to facilitate language learning. We have evaluated a number of methods for character representation and family clustering. Experimental results showed that averaged word vectors achieved statistically significant improvement over direct use of character vectors. Further, K-means clustering produced subfamilies that yielded character sets with distinctive meaning. 
It is hoped that these methods will help expand the variety and coverage of character", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Examples include CELEX (Baayen et al., 1995) and Morfessor (Creutz and Lagus, 2006).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/nieldlr/hanzi 3 Manual inspection would be needed to distinguish between phonetic and semantic components. For a fully automatic algorithm, we relied on the model to learn the distinction rather than manually filtering out phonetic components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Correlation with human scores was not significant, at p > 0.05. 5 Correlation with human scores was significant, at p < 0.006", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the implementation in scikit-learn (Pedregosa et al., 2011). 
We allowed a maximum of 10 clusters per family, and rejected clusters with fewer than 5 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Statistically significant at p < 0.025 by t-test. 8 Statistically significant at p < 0.002 by t-test. 9 At p = 0.27", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully acknowledge support from an Applied Research Grant (project #9667175) at City University of Hong Kong, and a grant from the Hong Kong Institute for Data Science (project #9360163) at City University of Hong Kong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "sets for use in CALL applications for Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The CELEX lexical database (CD-ROM)", "authors": [ { "first": "Harald", "middle": [], "last": "Baayen", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Piepenbrock", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Gulikers", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harald Baayen, Richard Piepenbrock, and L\u00e9on Gulikers. 1995. The CELEX lexical database (CD-ROM).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information", "authors": [ { "first": "Shaosheng", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proc. 
32nd AAAI Conference on Artificial Intelligence", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaosheng Cao, Wei Lu, Jun Zhou, and Xiaolong Li. 2018. cw2vec: Learning Chinese Word Embed- dings with Stroke n-gram Information. In Proc. 32nd AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Chinese Orthography Database and Its Application in Teaching Chinese Characters (in Chinese)", "authors": [ { "first": "Hsueh-Chih", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Li-Yun", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kuo-En", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yu-Shiou", "middle": [], "last": "Chiou", "suffix": "" }, { "first": "Yao-Ting", "middle": [], "last": "Sung", "suffix": "" } ], "year": 2011, "venue": "Bulletin of Educational Psychology (Special Issue on Reading)", "volume": "43", "issue": "", "pages": "269--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsueh-Chih Chen, Li-Yun Chang, Kuo-En Chang, Yu- Shiou Chiou, and Yao-Ting Sung. 2011. Chinese Orthography Database and Its Application in Teach- ing Chinese Characters (in Chinese). Bulletin of Educational Psychology (Special Issue on Reading), 43:269-290.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Data analysis of Chinese characters in primary school corpora of Hong Kong and mainland China: preliminary theoretical interpretations", "authors": [ { "first": "Flora", "middle": [], "last": "Hoi", "suffix": "" }, { "first": "Ki", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Man", "middle": [ "Tak" ], "last": "Leung", "suffix": "" } ], "year": 2008, "venue": "Clinical Linguistics and Phonetics", "volume": "22", "issue": "4-5", "pages": "379--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flora Hoi Ki Chung and Man Tak Leung. 2008. 
Data analysis of Chinese characters in primary school cor- pora of Hong Kong and mainland China: prelimi- nary theoretical interpretations. Clinical Linguistics and Phonetics, 22(4-5):379-389.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Morfessor in the Morpho Challenge", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2006, "venue": "Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2006. Morfessor in the Morpho Challenge. In Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proc. NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pretraining of Deep Bidirectional Transformers for Language Un- derstanding. In Proc. NAACL-HLT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "International Curriculum for Chinese Language and Education", "authors": [ { "first": "", "middle": [], "last": "Hanban", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanban. 2014. International Curriculum for Chinese Language and Education. 
Beijing Language and Culture University Press, Beijing, China.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "SemEval-2012 Task 4: Evaluating Chinese Word Similarity", "authors": [ { "first": "Peng", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Yunfang", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2012, "venue": "Proc. First Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "374--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Jin and Yunfang Wu. 2012. SemEval-2012 Task 4: Evaluating Chinese Word Similarity. In Proc. First Joint Conference on Lexical and Computa- tional Semantics (*SEM), pages 374-377.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Designing CALL for Learning Chinese Characters", "authors": [ { "first": "H", "middle": [ "C" ], "last": "Lam", "suffix": "" }, { "first": "W", "middle": [ "W" ], "last": "Ki", "suffix": "" }, { "first": "N", "middle": [], "last": "Law", "suffix": "" }, { "first": "A", "middle": [ "L S" ], "last": "Chung", "suffix": "" }, { "first": "P", "middle": [ "Y" ], "last": "Ko", "suffix": "" }, { "first": "A", "middle": [ "H S" ], "last": "Ho", "suffix": "" }, { "first": "S", "middle": [ "W" ], "last": "Pun", "suffix": "" } ], "year": 2001, "venue": "Journal of Computer Assisted Learning", "volume": "17", "issue": "", "pages": "115--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. C. Lam, W. W. Ki, N. Law, A. L. S. Chung, P. Y. Ko, A. H. S. Ho, and S. W. Pun. 2001. Designing CALL for Learning Chinese Characters. 
Journal of Computer Assisted Learning, 17:115-128.

John Lee and Chak Yan Yeung. 2020. Computer-Assisted Learning for Chinese Based on Character Families. In Information Management and Big Data, pages 299-305, Cham. Springer International Publishing.

Che Kan Leong, Shek Kam Tse, Ka Yee Loh, and Wing Wah Ki. 2011. Orthographic Knowledge Important in Comprehending Elementary Chinese Text by Users of Alphasyllabaries. Reading Psychology, 32(3):237-271.

Y. Li and J. S. Kang. 1993. Analysis of Phonetics of the Ideophonetic Characters in Modern Chinese. In Information Analysis of Usage of Characters in Modern Chinese (in Chinese), pages 84-98, Shanghai. Shanghai Education Publisher.

Susan J. Rickard Liow, Siok Keng Tng, and Cher Leng Lee. 1999. Chinese Characters: Semantic and Phonetic Regularity Norms for China, Singapore, and Taiwan. Behavior Research Methods, Instruments, and Computers, 31(1):155-177.

Yanan Lu, Yue Zhang, and Donghong Ji. 2016. Multi-prototype Chinese Character Embedding. In Proc. 10th International Conference on Language Resources and Evaluation (LREC), pages 855-859.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Shek Kam Tse, Ference Marton, Wing Wah Ki, and Elizabeth Ka Yee Loh. 2007. An Integrative Perceptual Approach to Teaching Chinese Characters. Instructional Science, 35:375-406.

Jian Xu, Jiawei Liu, Liangang Zhang, Zhengyu Li, and Huanhuan Chen. 2016. Improve Chinese Word Embeddings by Exploiting Internal Structure. In Proc. NAACL-HLT, pages 1041-1050.

Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint Embeddings of Chinese Words, Characters, and Fine-grained Subcharacter Components. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 286-291.

Figure 1: The "Averaged word vectors" method (top-k word vectors) assigns higher scores to similar character pairs (rated 0.5 or over) than dissimilar pairs (rated below 0.5); in contrast, the "Character vector" method does not clearly distinguish the similar and dissimilar pairs.
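The "Averaged word vectors" representation compared in Figure 1 can be sketched in a few lines: a character is represented as the mean of the embeddings of the k most frequent words containing it, and character pairs are scored by cosine similarity. This is a minimal illustration, not the paper's exact pipeline; the helper names, the toy vectors, and the frequency dictionary are hypothetical, and a real setup would use pretrained word embeddings.

```python
import numpy as np

def character_vector(char, word_vectors, word_freq, k=10):
    """Mean of the embeddings of the k most frequent words
    containing `char` (hypothetical helper for illustration)."""
    words = [w for w in word_vectors if char in w]
    top_k = sorted(words, key=lambda w: word_freq.get(w, 0), reverse=True)[:k]
    if not top_k:
        return None  # character never appears in the vocabulary
    return np.mean([word_vectors[w] for w in top_k], axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

With such character vectors in hand, sets of semantically related characters could then be obtained by clustering them, e.g. with scikit-learn's KMeans, as the paper does with its own representation.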