{ "paper_id": "D17-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:16:13.816450Z" }, "title": "Joint Embeddings of Chinese Words, Characters, and Fine-grained Subcharacter Components", "authors": [ { "first": "Jinxing", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": {} }, "email": "" }, { "first": "Xun", "middle": [], "last": "Jian", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": {} }, "email": "xjian@cse.ust.hk" }, { "first": "Hao", "middle": [], "last": "Xin", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": {} }, "email": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": {} }, "email": "yqsong@cse.ust.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word embeddings have attracted much attention recently. Different from alphabetic writing systems, Chinese characters are often composed of subcharacter components which are also semantically informative. In this work, we propose an approach to jointly embed Chinese words as well as their characters and fine-grained subcharacter components. We use three likelihoods to evaluate whether the context words, characters, and components can predict the current target word, and collected 13,253 subcharacter components to demonstrate the existing approaches of decomposing Chinese characters are not enough. Evaluation on both word similarity and word analogy tasks demonstrates the superior performance of our model.", "pdf_parse": { "paper_id": "D17-1027", "_pdf_hash": "", "abstract": [ { "text": "Word embeddings have attracted much attention recently. Different from alphabetic writing systems, Chinese characters are often composed of subcharacter components which are also semantically informative. In this work, we propose an approach to jointly embed Chinese words as well as their characters and fine-grained subcharacter components. We use three likelihoods to evaluate whether the context words, characters, and components can predict the current target word, and collected 13,253 subcharacter components to demonstrate the existing approaches of decomposing Chinese characters are not enough. Evaluation on both word similarity and word analogy tasks demonstrates the superior performance of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Distributed word representation represents a word as a vector in a continuous vector space and can better uncover both the semantic and syntactic information over traditional one-hot representations. It has been successfully applied to many downstream natural language processing (NLP) tasks as input features, such as named entity recognition (Collobert et al., 2011 ), text classification (Joulin et al., 2016) , sentiment analysis (Tang et al., 2014) , and question answering (Zhou et al., 2015) . 
Among many embedding methods (Bengio et al., 2003; Mnih and Hinton, 2009) , CBOW and Skip-Gram models are very popular due to their simplicity and efficiency, making it feasible to learn good embeddings of words from large scale training corpora (Mikolov et al., 2013b,a) .", "cite_spans": [ { "start": 344, "end": 367, "text": "(Collobert et al., 2011", "ref_id": "BIBREF3" }, { "start": 391, "end": 412, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF4" }, { "start": 434, "end": 453, "text": "(Tang et al., 2014)", "ref_id": "BIBREF16" }, { "start": 479, "end": 498, "text": "(Zhou et al., 2015)", "ref_id": "BIBREF20" }, { "start": 530, "end": 551, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF0" }, { "start": 552, "end": 574, "text": "Mnih and Hinton, 2009)", "ref_id": "BIBREF9" }, { "start": 747, "end": 772, "text": "(Mikolov et al., 2013b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite the success and popularity of word embeddings, most of the existing methods treat each word as the minimum unit, which ignores the morphological information of words. Rare words cannot be well represented when optimizing a cost function related to a rare word and its contexts. To address this issue, some recent studies (Luong et al., 2013; Qiu et al., 2014; Sun et al., 2016a; Wieting et al., 2016) have investigated how to exploit morphemes or character n-grams to learn better embeddings of English words.", "cite_spans": [ { "start": 329, "end": 349, "text": "(Luong et al., 2013;", "ref_id": "BIBREF6" }, { "start": 350, "end": 367, "text": "Qiu et al., 2014;", "ref_id": "BIBREF11" }, { "start": 368, "end": 386, "text": "Sun et al., 2016a;", "ref_id": "BIBREF13" }, { "start": 387, "end": 408, "text": "Wieting et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from other alphabetic writing systems such as English, written Chinese is logosyllabic, i.e., a Chinese character can be a word on its own or part of a polysyllabic word 1 . The characters themselves are often composed of subcharacter components which are also semantically informative. The subword items of Chinese words, including characters and subcharacter components, contain rich semantic information. The characters composing a word can indicate the semantic meaning of the word and the subcharacter components, such as radicals and components themselves being a character, composing a character can indicate the semantic meaning of the character. The components of characters can be roughly divided into two types: semantic component and phonetic component. The semantic component indicates the meaning of a character while the phonetic component indicates the sound of a character. For example, (water) is the semantic component of characters (lake) and (sea), (horse) is the phonetic component of characters (mother) and (scold) where both and are pronounced similar to .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Leveraging the subword information such as characters and subcharacter components can enhance Chinese word embeddings with internal morphological semantics. Some methods have been proposed to incorporate the subword infor-mation for Chinese word embeddings. Sun et al. (2014) and Li et al. 
(2015) proposed methods to enhance Chinese character embeddings with radicals based on C&W model (Collobert and Weston, 2008 ) and word2vec models (Mikolov et al., 2013a,b) respectively. Chen et al. (2015) used Chinese characters to improve Chinese word embeddings and proposed the CWE model to jointly learn Chinese word and character embeddings. Xu et al. (2016) extended the CWE model by exploiting the internal semantic similarity between a word and its characters in a cross-lingual manner. To combine both the radical-character and character-word compositions, Yin et al. (2016) proposed a multi-granularity embedding (MGE) model based on the CWE model, which represents the context as a combination of surrounding words, surrounding characters, and the radicals of the target word. Particularly, they developed a dictionary of 20,847 characters and 296 radicals.", "cite_spans": [ { "start": 258, "end": 275, "text": "Sun et al. (2014)", "ref_id": "BIBREF15" }, { "start": 280, "end": 296, "text": "Li et al. (2015)", "ref_id": "BIBREF5" }, { "start": 387, "end": 414, "text": "(Collobert and Weston, 2008", "ref_id": "BIBREF2" }, { "start": 437, "end": 462, "text": "(Mikolov et al., 2013a,b)", "ref_id": null }, { "start": 477, "end": 495, "text": "Chen et al. (2015)", "ref_id": "BIBREF1" }, { "start": 638, "end": 654, "text": "Xu et al. (2016)", "ref_id": "BIBREF18" }, { "start": 857, "end": 874, "text": "Yin et al. (2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, all the above approaches still missed a lot of fine-grained components in Chinese characters. Formally and historically, radicals are character components used to index Chinese characters in dictionaries. Although many of the radicals are also semantic components, a character has only one radical, which cannot fully uncover the semantics and structure of the character. Besides over 200 radicals, there are more than 10,000 components which are also semantically meaningful or phonetically useful. For example, Chinese character (illuminate, reflect, mirror, picture) has one radical (the corresponding traditional Chinese radical is , meaning fire) and three other components, i.e., (sun), (knife), and (mouth). Shi et al. (2015) proposed using WUBI input method to decompose the Chinese characters into components. However, WUBI input method uses rules to group Chinese characters into meaningless clusters which can fit the alphabet based keyboard. The semantics of the components are not straightforwardly meaningful.", "cite_spans": [ { "start": 724, "end": 741, "text": "Shi et al. (2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present a model to jointly learn the embeddings of Chinese words, characters, and subcharacter components. The learned Chinese word embeddings can leverage the external context co-occurrence information and incorporate rich internal subword semantic information. Experiments on both word similarity and word analogy tasks demonstrate the effectiveness of our model over previous works. The code and data are available at https://github.com/ HKUST-KnowComp/JWE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we introduce our joint learning word embedding model (JWE), which combines words, characters, and subcharacter components information. 
Our model is based on CBOW model (Mikolov et al., 2013a) . JWE uses the average of context word vectors, the average of context character vectors, and the average of context subcharacter vectors to predict the target word, and uses the sum of these three prediction losses as the objective function. We denote D as the training corpus,", "cite_spans": [ { "start": 185, "end": 208, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "INPUT PROJECTION OUTPUT w i+1 w i 1 c i 1 c i+1 s i 1 s i+1 s i w i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "W = (w 1 , w 2 , \u2022 \u2022 \u2022 , w N ) as the vocabulary of words, C = (c 1 , c 2 , \u2022 \u2022 \u2022 , c M ) as the vocabulary of char- acters, S = (s 1 , s 2 , \u2022 \u2022 \u2022 , s K )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "as the vocabulary of subcharacters, and T as the context window size respectively. As illustrated in Figure 1 , JWE aims to maximize the sum of log-likelihoods of three predictive conditional probabilities for a target word w i :", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(w i ) = 3 \u2211 k=1 log P (w i |h i k ),", "eq_num": "(1)" } ], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "where h i 1 , h i 2 , h i 3 are the composition of context words, context characters, context subcharacters respectively. Let v w i , v c i , v s i be the \"input\" vectors of word w i , character c i , and subcharacter s i respectively,v w i be the \"output\" vectors of word w i . The conditional probability is defined by the softmax function as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w i |h i k ) = exp(h T i kv w i ) \u2211 N j=1 exp(h T i kv w j ) , k = 1, 2, 3,", "eq_num": "(2)" } ], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "where h i 1 is the average of the \"input\" vectors of words in the context, i.e.:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i 1 = 1 2T \u2211 \u2212T \u2264j\u2264T,j\u0338 =0 v w i+j .", "eq_num": "(3)" } ], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "Similarly, h i 2 is the average of characters' \"input\" vectors in the context, h i 3 is the average of subcharacters' \"input\" vectors in the context or in the target word or all of them. 
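As a concrete illustration of this composition, a minimal sketch (in Python with NumPy; the function name and lookup tables below are hypothetical, and negative sampling is omitted, so this is not the released implementation) is:

```python
import numpy as np

def jwe_scores(context_words, context_chars, subchars, target_word,
               v_word, v_char, v_sub, v_out):
    """Compose h1 (context words), h2 (context characters), and h3
    (subcharacter components, taken from the context words, the target
    word, or both, depending on the +p setting), then score the target
    word with each.

    v_word, v_char, v_sub map symbols to their "input" vectors; v_out maps
    words to their "output" vectors. Negative sampling is omitted here.
    """
    h1 = np.mean([v_word[w] for w in context_words], axis=0)
    h2 = np.mean([v_char[c] for c in context_chars], axis=0)
    h3 = np.mean([v_sub[s] for s in subchars], axis=0)
    # Three separate dot-product scores: each h_k predicts the target word
    # on its own, so the three loss terms (and their gradients) stay decoupled.
    return [float(h @ v_out[target_word]) for h in (h1, h2, h3)]
```

Summing the negative log-likelihoods of these three predictions, with negative sampling in practice, gives the per-word objective in Equation (1).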
Given a corpus D, JWE maximizes the overall log likelihood:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(D) = \u2211 w i \u2208D L(w i ),", "eq_num": "(4)" } ], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "where the optimization follows the implementation of negative sampling used in CBOW model (Mikolov et al., 2013a) . This objective function is different from that of MGE (Yin et al., 2016) . For a target word w i , the objective function of MGE is almost equivalent to maximizing P (w", "cite_spans": [ { "start": 90, "end": 113, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF7" }, { "start": 170, "end": 188, "text": "(Yin et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "i |h i 1 + h i 2 + h i 3 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "During the backpropagation, the gradients of h i 1 , h i 2 , h i 3 can be different in our model while they are always same in MGE, so the gradients of the embeddings of words, characters, subcharacter components can be different in our model while they are same in MGE. Thus, the representations of words, characters, and subcharacter components are decoupled and can be better trained in our model. A similar decoupled objective function is used in (Sun et al., 2016a) to learn English word embeddings and phrase embeddings. Our model differs from theirs in that we combine the subwords of both the context words and target word to predict the target word while they use the morphemes of the target English word to predict it.", "cite_spans": [ { "start": 451, "end": 470, "text": "(Sun et al., 2016a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Learning Word Embedding", "sec_num": "2" }, { "text": "We quantitatively evaluate the quality of word embeddings learned by our model on word similarity evaluation and word analogy tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Training Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "We adopt the Chinese Wikipedia Dump 2 as our training corpus. In pre- Table 1 : Results on word similarity evaluation. For our JWE model, +c represents the components feature and +r represents the radicals feature; +p indicates which subcharacters are used to predict the target word; +p1 indicates using the surrounding words' subcharacter features; +p2 indicates using the target word's subcharacter features; +p3 indicates using the subcharacter features of both the surrounding words and the target word; -n indicates only using characters without either components or radicals.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "processing, pure digits and non Chinese characters are removed. We use THULAC 3 (Sun et al., 2016b) for Chinese word segmentation and POS tagging. 
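A rough sketch of this preprocessing step (the exact pipeline is not spelled out here; the THULAC Python bindings and the helper name are assumptions) is:

```python
import re
import thulac  # assumed: the THULAC Python bindings

segmenter = thulac.thulac(seg_only=True)

def preprocess(line: str) -> str:
    # Keep only CJK characters, dropping pure digits, Latin text, and punctuation.
    cjk_only = "".join(re.findall(r"[\u4e00-\u9fff]+", line))
    # Return a space-separated word segmentation of the remaining text.
    return segmenter.cut(cjk_only, text=True)
```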
We identify all entity names for CWE (Chen et al., 2015) and MGE (Yin et al., 2016) as they do not use the characters information for non-compositional words. Our model (JWE) does not use such a non-compositional word list. We obtained a 1GB training corpus with 153,071,899 tokens and 3,158,225 unique words. Subcharacter Components. We crawled the components and radicals information of Chinese characters from HTTPCN 4 . We obtained 20,879 characters, 13,253 components and 218 radicals, of which 7,744 characters have more than one components, and 214 characters are equal to their radicals.", "cite_spans": [ { "start": 80, "end": 99, "text": "(Sun et al., 2016b)", "ref_id": "BIBREF14" }, { "start": 184, "end": 203, "text": "(Chen et al., 2015)", "ref_id": "BIBREF1" }, { "start": 212, "end": 230, "text": "(Yin et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "Parameter Settings. We compare our method with CBOW (Mikolov et al., 2013b) 5 , CWE (Chen et al., 2015) 6 , and MGE (Yin et al., 2016) 7 .", "cite_spans": [ { "start": 52, "end": 75, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF8" }, { "start": 84, "end": 105, "text": "(Chen et al., 2015) 6", "ref_id": null }, { "start": 116, "end": 134, "text": "(Yin et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "3 http://thulac.thunlp.org/ 4 http://tool.httpcn.com/zi/ 5 https://code.google.com/p/word2vec/ 6 https://github.com/Leonard-Xu/CWE 7 We used the source code provided by the author. Our experimental results of baselines are different from that in MGE paper because we used a 1GB corpus while they used a 500MB corpus and we fixed the training iteration while they tried the training iteration in range [5, 200] and chose the best.", "cite_spans": [ { "start": 401, "end": 404, "text": "[5,", "ref_id": null }, { "start": 405, "end": 409, "text": "200]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "For all models, we used the same parameter settings. We fixed the word vector dimension to be 200, the window size to be 5, the training iteration to be 100, the initial learning rate to be 0.025, and the subsampling parameter to be 10 \u22124 . Words with frequency less than 5 were ignored during training. We used 10-word negative sampling for optimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "This task evaluates the embedding's ability of uncovering the semantic relatedness of word pairs. We select two different Chinese word similarity datasets, wordsim-240 and wordsim-296 provided by (Chen et al., 2015) for evaluation. There are 240 pairs of Chinese words in wordsim-240 and 296 pairs of Chinese words in wordsim-296. Both datasets contain human-labeled similarity scores for each word pair. There is a word in wordsim-296 that did not appear in the training corpus, so we removed this from the gold-standard to produce wordsim-295. All words in wordsim-240 appeared in the training corpus. The similarity score for a word pair is computed as the cosine similarity of their embeddings generated by the learning model. We compute the Spearman correlation (Myers et al., 2010) between the human-labeled scores and similarity scores computed by embeddings. 
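A compact sketch of this protocol (hypothetical helper names; SciPy is assumed for the rank correlation) is:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_spearman(word_pairs, human_scores, emb):
    """word_pairs: list of (w1, w2); emb: dict mapping a word to its vector."""
    model_scores = [cosine(emb[w1], emb[w2]) for w1, w2 in word_pairs]
    return spearmanr(human_scores, model_scores).correlation
```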
The evaluation results of our model and baseline methods on wordsim-240 and wordsim-295 are shown in Table 1 .", "cite_spans": [ { "start": 196, "end": 215, "text": "(Chen et al., 2015)", "ref_id": "BIBREF1" }, { "start": 767, "end": 787, "text": "(Myers et al., 2010)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 968, "end": 975, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "3.2" }, { "text": "From the results, we can see that JWE substantially outperforms CBOW, CWE, and MGE on the two word similarity datasets. JWE can better leverage the rich morphological information in Chinese words than CWE and MGE. It shows the benefits of decoupling the representation of words, characters, and subcharacter components as opposed to employing concatenation, sum, or average on all of them as the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "3.2" }, { "text": "We also observe that JWE with only characters can get competitive results on the word similarity task compared to JWE with characters and subcharacters. The reason may be that characters are enough to provide additional semantic information for computing the similarities of many word pairs in the two datasets. For example, the similarity of (law, statute) and (lawyer) in wordsim-295 can be directly inferred from the shared character (law, rule).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "3.2" }, { "text": "This task examines the quality of word embedding by its capacity of discovering linguistic regularities between pairs of words. For example, for a tuple like \" We use accuracy as the evaluation metric. In this beddings, it would be interesting to see the relationships of the embeddings of words, characters, and subcharacter components as they are embedded into a same continuous vector space. We evaluate the embeddings' abilities of uncovering the semantic relatedness of words, characters, and subcharacter components through case studies. The similarities between them are computed by the cosine similarities of their embeddings. Take two Chinese character (photograph) and (river) as examples, we list their closest words in Table 3 . We can see that most of the closest words are semantically related to the corresponding character.", "cite_spans": [], "ref_spans": [ { "start": 731, "end": 738, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "3.3" }, { "text": "We further take the component (illness) as an example and list its closest characters and words in Table 4 . All of the closest characters and words are semantically related to the component (illness). Most of them have the component (illness).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "3.3" }, { "text": "(suffer), (swelling), and (patients) do not have the component (illness), but they are also semantically related to (illness). It shows that JWE does not overuse the component information but leverages both the external context co-occurrence information and internal subword morphological information well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "3.3" }, { "text": "In this paper, we propose a model to jointly learn the embeddings of Chinese words, characters, and subcharacter components. 
Our approach makes full use of subword information to enhance Chinese word embeddings. Experiments show that our model substantially outperforms the baseline methods on Chinese word similarity computation and Chinese word analogy reasoning, and demonstrate the benefits of incorporating fine-grained components compared to just using characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" }, { "text": "There could be several directions to be explored for future work. First, we use the average operation to integrate the subcharacter components as the context to predict the target word. The structure of Chinese characters and the positions of components in the character may be considered to fully leverage the component information of Chinese characters. Second, for any target word, we simply use word context, character context, and subcharacter context to predict it and do not distinguish compositional words and non-compositional words. To solve this problem, attention models may be used to adaptively assign weights to word context, character context, and subcharacter context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" }, { "text": "https://en.wikipedia.org/wiki/Written_ Chinese", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This paper was supported by HKUST initiation grant IGN16EG01, the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. 26206717), China 973 Fundamental R&D Program (No. 2014CB340304), and the LORELEI Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. We also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Total Table 2 : Results on word analogy reasoning. The configurations are the same of the ones used in Table 1. task, we use the Chinese word analogy dataset introduced by (Chen et al., 2015) , which consists of 1,124 tuples of words and each tuple contains 4 words, coming from three different categories: \"Capital\" (677 tuples), \"State\" (175 tuples), and \"Family\" (272 tuples). Our training corpus covers all the testing words. The results in Table 2 show that JWE outperforms the baselines on all categories' word analogy tasks. Different from the results on the word similarity task, JWE with components consistently performs better than JWE with radicals and JWE without either radicals or components. 
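Concretely, each analogy question a : b :: c : ? is answered with the word x, excluding a, b, and c, that maximizes cos(b - a + c, x); a minimal sketch of this evaluation (hypothetical names, brute-force search over the vocabulary) is:

```python
import numpy as np

def answer_analogy(a, b, c, emb):
    """Return the word x != a, b, c maximizing cos(b - a + c, x)."""
    query = emb[b] - emb[a] + emb[c]
    query = query / np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = float(query @ vec / np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

def analogy_accuracy(tuples, emb):
    """tuples: list of (a, b, c, d); a tuple counts as correct if the
    predicted x equals d."""
    hits = sum(answer_analogy(a, b, c, emb) == d for a, b, c, d in tuples)
    return hits / len(tuples)
```

Accuracy is then the fraction of the 1,124 tuples whose predicted x matches the gold fourth word.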
It demonstrates the necessary of delving deeper into finegrained components for complex semantic reasoning tasks.", "cite_spans": [ { "start": 172, "end": 191, "text": "(Chen et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 2", "ref_id": null }, { "start": 103, "end": 111, "text": "Table 1.", "ref_id": null }, { "start": 445, "end": 452, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In addition to evaluating the benefits of incorporating subword information for Chinese word em-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Studies", "sec_num": "3.4" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, 3(Feb):1137-1155.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Joint learning of character and word embeddings", "authors": [ { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "1236--1242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint learning of charac- ter and word embeddings. In Proceedings of IJCAI, pages 1236-1242.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In Pro- ceedings of ICML, pages 160-167.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Component-enhanced chinese character embeddings", "authors": [ { "first": "Yanran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "829--834", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanran Li, Wenjie Li, Fei Sun, and Sujian Li. 2015. Component-enhanced chinese character em- beddings. In Proceedings of EMNLP, pages 829- 834.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Richard Socher, and Christopher D Man- ning. 2013. Better word representations with recur- sive neural networks for morphology. 
In Proceed- ings of CoNLL, pages 104-113.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Proceedings of NIPS, pages 3111-3119.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A scalable hierarchical distributed language model", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "1081--1088", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Pro- ceedings of NIPS, pages 1081-1088.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Research design and statistical analysis", "authors": [ { "first": "L", "middle": [], "last": "Jerome", "suffix": "" }, { "first": "Arnold", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Robert", "middle": [ "Frederick" ], "last": "Well", "suffix": "" }, { "first": "", "middle": [], "last": "Lorch", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome L Myers, Arnold Well, and Robert Frederick Lorch. 2010. Research design and statistical analy- sis. Routledge.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Co-learning of word representations and morpheme representations", "authors": [ { "first": "Siyu", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COL-ING", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of word representations and morpheme representations. 
In Proceedings of COL- ING, pages 141-150.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Radical embedding: Delving deeper to chinese radicals", "authors": [ { "first": "Xinlei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Xudong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zehua", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "594--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinlei Shi, Junjie Zhai, Xudong Yang, Zehua Xie, and Chao Liu. 2015. Radical embedding: Delving deeper to chinese radicals. In Proceedings of ACL, pages 594-598.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Inside out: Two jointly predictive models for word representations and phrase representations", "authors": [ { "first": "Fei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "2821--2827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2016a. Inside out: Two jointly predictive models for word representations and phrase repre- sentations. In Proceedings of AAAI, pages 2821- 2827.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Thulac: An efficient lexical analyzer for chinese", "authors": [ { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kaixu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhipeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016b. Thulac: An efficient lexical analyzer for chinese. Technical Report.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Radical-enhanced chinese character embedding", "authors": [ { "first": "Yaming", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhenzhou", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2014, "venue": "International Conference on Neural Information Processing", "volume": "", "issue": "", "pages": "279--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced chinese character embedding. In International Conference on Neural Information Processing, pages 279-286. 
Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1555--1565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of ACL, pages 1555-1565.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Charagram: Embedding words and sentences via character n-grams", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1504--1515", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of EMNLP, pages 1504-1515.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improve chinese word embeddings by exploiting internal structure", "authors": [ { "first": "Jian", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liangang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Huanhuan", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1041--1050", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Xu, Jiawei Liu, Liangang Zhang, Zhengyu Li, and Huanhuan Chen. 2016. Improve chinese word em- beddings by exploiting internal structure. In Pro- ceedings of NAACL-HLT, pages 1041-1050.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-granularity chinese word embedding", "authors": [ { "first": "Rongchao", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "981--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word em- bedding. 
In Proceedings of EMNLP, pages 981-986.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning continuous word embedding with metadata for question retrieval in community question answering", "authors": [ { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tingting", "middle": [], "last": "He", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Po", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "250--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community ques- tion answering. In Proceedings of ACL, pages 250- 259.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Illustration of JWE. w i is the target word. w i\u22121 and w i+1 are the left word and right word of w i respectively. c i\u22121 and c i+1 represent the characters in the context. s i\u22121 and s i+1 represent the subcharacters in the context, s i represents the subcharacters of the target word w i .", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "an analogy tuple \"a : b :: c : d,\" the model answers the analogy question \"a : b :: c :?\" by finding x in the vocabulary such that arg max x\u0338 =a,x\u0338 =b,x\u0338 =c cos( \u20d7 b \u2212 \u20d7 a + \u20d7 c, \u20d7 x).", "type_str": "figure", "uris": null }, "TABREF2": { "content": "
Closest words of characters (photograph) and (river).
Component: (illness)
Closest characters: (cure) (symptom) (pain) (sore) (suffer) (itch) (infantile malnutrition) (disease) (swelling)
Closest words: (cure) (symptom) (recurrence) (pain) (symptom) (abdominal pain) (patients) (epilepsy) (disease) (therapy)
", "num": null, "type_str": "table", "text": "", "html": null }, "TABREF3": { "content": "", "num": null, "type_str": "table", "text": "Closest characters and closest words of the component (illness).", "html": null } } } }