{ "paper_id": "D17-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:17:47.343266Z" }, "title": "Learning Chinese Word Representations From Glyphs Of Characters", "authors": [ { "first": "Tzu-Ray", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University No", "location": { "addrLine": "1, Sec. 4, Roosevelt Road", "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University No", "location": { "addrLine": "1, Sec. 4, Roosevelt Road", "settlement": "Taipei", "country": "Taiwan" } }, "email": "hungyilee@ntu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose new methods to learn Chinese word representations. Chinese characters are composed of graphical components, which carry rich semantics. It is common for a Chinese learner to comprehend the meaning of a word from these graphical components. As a result, we propose models that enhance word representations by character glyphs. The character glyph features are directly learned from the bitmaps of characters by convolutional auto-encoder(convAE), and the glyph features improve Chinese word representations which are already enhanced by character embeddings. Another contribution in this paper is that we created several evaluation datasets in traditional Chinese and made them public.", "pdf_parse": { "paper_id": "D17-1025", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose new methods to learn Chinese word representations. Chinese characters are composed of graphical components, which carry rich semantics. It is common for a Chinese learner to comprehend the meaning of a word from these graphical components. As a result, we propose models that enhance word representations by character glyphs. The character glyph features are directly learned from the bitmaps of characters by convolutional auto-encoder(convAE), and the glyph features improve Chinese word representations which are already enhanced by character embeddings. Another contribution in this paper is that we created several evaluation datasets in traditional Chinese and made them public.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "No matter which target language it is, high quality word representations (also known as word \"embeddings\") are keys to many natural language processing tasks, for example, sentence classification (Kim, 2014) , question answering (Zhou et al., 2015) , machine translation (Sutskever et al., 2014) , etc. Besides, word-level representations are building blocks in producing phrase-level (Cho et al., 2014) and sentence-level (Kiros et al., 2015) representations.", "cite_spans": [ { "start": 196, "end": 207, "text": "(Kim, 2014)", "ref_id": "BIBREF7" }, { "start": 229, "end": 248, "text": "(Zhou et al., 2015)", "ref_id": "BIBREF25" }, { "start": 271, "end": 295, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF18" }, { "start": 385, "end": 403, "text": "(Cho et al., 2014)", "ref_id": "BIBREF3" }, { "start": 423, "end": 443, "text": "(Kiros et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on learning Chinese word representations. 
A Chinese word is composed of characters which contain rich semantics. The meaning of a Chinese word is often related to the meaning of its compositional characters. Therefore, Chinese word embedding can be enhanced by its compositional character embeddings (Chen et al., 2015; Xu et al., 2016) . Further-more, a Chinese character is composed of several graphical components. Characters with the same component share similar semantic or pronunciation. When a Chinese user encounters a previously unseen character, it is instinctive to guess the meaning (and pronunciation) from its graphical components, so understanding the graphical components and associating them with semantics help people learning Chinese. Radicals 1 are the graphical components used to index Chinese characters in a dictionary. By identifying the radical of a character, one obtains a rough meaning of that character, so it is used in learning Chinese word embedding (Yin et al., 2016) and character embedding (Sun et al., 2014; Li et al., 2015) . However, other components in addition to radicals may contain potentially useful information in word representation learning.", "cite_spans": [ { "start": 324, "end": 343, "text": "(Chen et al., 2015;", "ref_id": "BIBREF2" }, { "start": 344, "end": 360, "text": "Xu et al., 2016)", "ref_id": "BIBREF20" }, { "start": 1007, "end": 1025, "text": "(Yin et al., 2016)", "ref_id": "BIBREF21" }, { "start": 1050, "end": 1068, "text": "(Sun et al., 2014;", "ref_id": "BIBREF17" }, { "start": 1069, "end": 1085, "text": "Li et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our research begins with a question: Can machines learn Chinese word representations from glyphs of characters? By exploiting the glyphs of characters as images in word representation learning, all the graphical components in a character are considered, not limited to radicals. In our proposed methods, we render character glyphs to fixed-size grayscale images which are referred to as \"character bitmaps\", as illustrated in Fig.1 . A similar idea was also used in (Liu et al., 2017) to help classifying wikipedia article titles into 12 categories. We use a convAE to extract character features from the bitmap to represent the glyphs. It is also possible to represent the glyph of a character by the graphical components in it. We do not choose this way because there is no unique way to decompose a character, and directly learning representation from bitmaps is more straightforward. Then we use the models parallel to Skipgram (Mikolov et al., 2013a) or GloVe (Penning-ton et al., 2014) to learn word representations from the character glyph features. 
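As a concrete illustration of the rendering step, the sketch below rasterizes a single character into a fixed-size 60x60 8-bit grayscale bitmap with Pillow. The font file, glyph placement, and the example character are illustrative assumptions, not the exact procedure used in this paper; any CJK-capable TrueType font can be substituted.

```python
# Minimal sketch: render one character glyph to a 60x60 grayscale bitmap.
# The font path is an assumption; substitute any CJK-capable TrueType font.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_glyph(char, font_path="BiauKai.ttf", size=60):
    font = ImageFont.truetype(font_path, size)
    img = Image.new("L", (size, size), color=255)   # white background, 8-bit grayscale
    draw = ImageDraw.Draw(img)
    draw.text((0, 0), char, fill=0, font=font)      # black glyph (centering omitted)
    return np.asarray(img, dtype=np.uint8)          # (60, 60) uint8 bitmap

bitmap = render_glyph("貓")
print(bitmap.shape)  # (60, 60)
```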
Although we only consider traditional Chinese characters in this paper, and the examples given below are based on the traditional characters, the same ideas and methods can be applied on the simplified characters.", "cite_spans": [ { "start": 466, "end": 484, "text": "(Liu et al., 2017)", "ref_id": "BIBREF11" }, { "start": 932, "end": 955, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF14" }, { "start": 965, "end": 991, "text": "(Penning-ton et al., 2014)", "ref_id": null } ], "ref_spans": [ { "start": 426, "end": 431, "text": "Fig.1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Characters Glyphs (As printed in PDF file) 60 pixels Figure 1 : A Chinese character is represented as a fixed-size gray-scale image which is referred to as \"character bitmap\" in this paper.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 61, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Rendered bitmaps 60 pixels", "sec_num": null }, { "text": "To give a clear illustration of our own work, we briefly introduce the representative methods of word representation learning in Section 2.1. In Section 2.2, we will introduce some of the linguistic properties of Chinese, and then introduce the methods that utilize these properties to improve word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background Knowledge and Related Works", "sec_num": "2" }, { "text": "Mainstream research of word representation is built upon the distributional hypothesis, that is, words with similar contexts share similar meanings. Usually a large-scale corpus is used, and word representations are produced from the cooccurrence information of a word and its context. Existing methods of producing word representations could be separated into two families (Levy et al., 2015) : count-based family (Turney and Pantel, 2010; Bullinaria and Levy, 2007) , and prediction-based family. Word representations can be obtained by training a neural-networkbased models (Bengio et al., 2003; Collobert et al., 2011) . The representative methods are briefly introduced below.", "cite_spans": [ { "start": 374, "end": 393, "text": "(Levy et al., 2015)", "ref_id": "BIBREF9" }, { "start": 456, "end": 467, "text": "Levy, 2007)", "ref_id": "BIBREF1" }, { "start": 577, "end": 598, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF0" }, { "start": 599, "end": 622, "text": "Collobert et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Word Representation Learning", "sec_num": "2.1" }, { "text": "Both continuous bag-of-words (CBOW) model and Skipgram model train with words and contexts in a sliding local context window (Mikolov et al., 2013a) . Both of them assign each word w i with an embedding w i . CBOW predicts the word given its context embeddings, while Skipgram predicts contexts given the word embedding. Predicting the occurrence of word/context in CBOW and Skipgram models could be viewed as learning a multi-class classification neural network (the number of classes is the size of vocabulary). In (Mikolov et al., 2013b) , the authors introduced several techniques to improve the performance. 
Negative sampling is introduced to speed up learning, and subsampling frequent words is introduced to randomly discard training examples with frequent words (such as \"the\", \"a\", \"of\"), and has an effect similar to the removal of stop words.", "cite_spans": [ { "start": 125, "end": 148, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF14" }, { "start": 517, "end": 540, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "CBOW and Skipgram", "sec_num": "2.1.1" }, { "text": "Instead of using local context windows, (Pennington et al., 2014) proposed GloVe model. Training GloVe word representations begins with creating a co-occurrence matrix X from a corpus, where each matrix entry X ij represents the counts that word w j appears in the context of word w i .", "cite_spans": [ { "start": 40, "end": 65, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "In (Pennington et al., 2014) , the authors used a harmonic weighting function for co-occurrence count, that is, word-context pairs with distance d contributes 1 d to the global co-occurrence count. Let w i be the word representation of word w i , and w j be the word representation of word w j as context, GloVe model minimizes the loss:", "cite_spans": [ { "start": 3, "end": 28, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "i,j\u2208 non\u2212zero entries of X f (X ij )( w T i w j +b i +b j \u2212log(X ij )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "where b i is the bias for word w i , andb j is the bias for context w j . A weighting function f (X ij ) is introduced because the authors consider rare cooccurrence word-context pairs carry less information than frequent ones, and their contributions to the total loss should be decreased. The weighting function f (X ij ) is defined as below. It depends on the co-occurrence count, and the authors set parameters x max = 100, \u03b1 = 0.75.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "f (X ij ) = ( X ij xmax ) \u03b1 if X ij < x max 1 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "In the GloVe model, each word has 2 representations w and w. The authors suggest using w + w as the word representation, and reported improvements over using w only. A Chinese word is composed of a sequence of characters. The meanings of some Chinese words are related to the composition of the meanings of their characters. For example, \"\u6230\u8266\" (battleship), is composed of two characters, \"\u6230\" (war) and \"\u8266\" (ship). More examples are given in Fig. 2 . To improve Chinese word representations with sub-word information, character-enhanced word embedding (CWE) (Chen et al., 2015) ", "cite_spans": [ { "start": 557, "end": 576, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 441, "end": 447, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "(C-1) (C-2) (C-3) (C-4) (C-5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "Figure 3: Some examples of radicals and the characters containing them. 
In rows (C-1) to (C-4), the radicals are at the left hand side of the character, while in row (C-5), the radicals are at the bottom, and may have different of shapes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "A Chinese character is composed of several graphical components. Characters with the same component share similar semantic or phonetic properties. In a Chinese dictionary characters with similar coarse semantics are grouped into categories for the ease of searching. The common graphical component which relates to the common semantic is chosen to index the category, known as a radical. Examples are given in Fig. 3 . There are three radicals in row (A), and their semantic meanings are in row (B). In each column, there are five characters containing each radical. It is easy to find that the characters having the same radical have meanings related to the radical in some aspect. A radical can be put in different positions in a character. For example, in rows (C-1) to (C-4), the radicals are at the left hand side of a character, but in row (C-5), the radicals are at the bottom. The shape of a radical can be different in different positions. For example, the third radical which represents \"water\" or \"liquid\" has different forms when it is at the left hand side or the bottom of a character. Because radicals serve as a strong semantic indicator of a character, multigranularity embedding (MGE) (Yin et al., 2016) in Section 2.2.3 incorporates radical embeddings in learning word representation. Usually the components other than radicals determine the pronunciation of the characters, but in some cases they also influence the meaning of a character. Two examples are given in Fig. 4 2 . Both characters in Fig. 4 have the same radical \"\u4ebb\" (means humans) at the left hand side, but the graphical components at the right hand side also have semantic meanings related to the characters. Considering the left character \"\u4f10\" (means attack). Its right component \"\u6208\" means \"weapon\", and the meaning of the character \"\u4f10\" is the composition of the meaning of its two components (a human with a weapon). None of the previous word embedding approach considers all the components of Chinese characters in our best knowledge.", "cite_spans": [ { "start": 1203, "end": 1221, "text": "(Yin et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 410, "end": 416, "text": "Fig. 3", "ref_id": null }, { "start": 1486, "end": 1492, "text": "Fig. 4", "ref_id": "FIGREF1" }, { "start": 1516, "end": 1522, "text": "Fig. 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "GloVe", "sec_num": "2.1.2" }, { "text": "The main idea of CWE is that word embedding is enhanced by its compositional character embeddings. CWE predicts the word from both word and character embeddings of contexts, as illustrated in Fig. 5 (a) . For word w i , the CWE word embedding w cwe i has the following form:", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 202, "text": "Fig. 5 (a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Character-enhanced Word Embedding (CWE)", "sec_num": "2.2.2" }, { "text": "w cwe i = w i + 1 |C(i)| c j \u2208C(i) c j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-enhanced Word Embedding (CWE)", "sec_num": "2.2.2" }, { "text": "where w i is the word embedding, c j is the embedding of the j-th character in w i , and C(i) is the set of compositional characters of word w i . 
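To make the composition concrete, the sketch below builds a CWE word vector as the word vector plus the mean of its character vectors, following the formula above. The toy dimensionality and the randomly initialized lookup tables are hypothetical placeholders.

```python
# Illustrative sketch of CWE composition: w_cwe = w_i + mean of character embeddings.
import numpy as np

dim = 8
word_emb = {"戰艦": np.random.randn(dim)}             # word embedding w_i
char_emb = {c: np.random.randn(dim) for c in "戰艦"}   # character embeddings c_j

def cwe_embedding(word):
    chars = list(word)                                 # C(i): compositional characters
    char_mean = np.mean([char_emb[c] for c in chars], axis=0)
    return word_emb[word] + char_mean                  # w_i + (1/|C(i)|) * sum_j c_j

print(cwe_embedding("戰艦").shape)  # (8,)
```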
Mean value of CWE word embeddings of contexts are then used to predict the word w i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-enhanced Word Embedding (CWE)", "sec_num": "2.2.2" }, { "text": "Sometimes one character has several different meanings, this is known as the ambiguity problem. To deal with this, each character is assigned with a bag of embeddings. During training, one of the embeddings is picked to form the modified word embedding. The authors proposed three methods to decide which embedding is picked: positionbased, cluster-based, and non-parametric clusterbased character embeddings. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-enhanced Word Embedding (CWE)", "sec_num": "2.2.2" }, { "text": "Based on CBOW and CWE, (Yin et al., 2016) proposed MGE, which predicts target word with its radical embeddings and modified word embeddings of context in CWE, as shown in Fig.5 (b) . There is no ambiguity of radicals, so each radical is assigned with one embedding r. We denote r k as the radical embedding of character c k . MGE predicts the target word w i with the following hidden vector:", "cite_spans": [ { "start": 23, "end": 41, "text": "(Yin et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 171, "end": 180, "text": "Fig.5 (b)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Multi-granularity Embedding (MGE)", "sec_num": "2.2.3" }, { "text": "h i = 1 |C(i)| c k \u2208C(i) r k + 1 |W (i)| w j \u2208W (i) w cwe j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-granularity Embedding (MGE)", "sec_num": "2.2.3" }, { "text": ", where W(i) is the set of contexts words of w i , w cwe j is the CWE word embedding of w j . MGE picks character embeddings with the positionbased method in CWE, and picks radical embeddings according to a character-radical index built from a dictionary during training. When noncompositional word is encountered, only the word embedding is used to form h i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-granularity Embedding (MGE)", "sec_num": "2.2.3" }, { "text": "We first extract glyph features from bitmaps with the convAE in Section 3.1. The glyph features are used to enhance the existing word representation learning models in Section 3.2. In Section 3.3, we try to learn word representations directly from the glyph features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "A convAE (Masci et al., 2011) is used to reduce the dimensions of rendered character bitmaps and capture high-level features. The architecture of the convAE is shown in Fig. 6 . The convAE is composed of 5 convolutional layers in both encoder and decoder. The stride larger than one is used instead of pooling layers. Convolutional and deconvolutional layers on the same level share the same kernel. The input image is a 60\u00d760 8-bit grayscale bitmap, and the encoder extracts 512-dimensional feature. The feature of character c k from the encoder is refer to as character glyph feature g k in the paper. of word w i has the form:", "cite_spans": [ { "start": 9, "end": 29, "text": "(Masci et al., 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 169, "end": 175, "text": "Fig. 
6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Character Bitmap Feature Extraction", "sec_num": "3.1" }, { "text": "w ctxG i = w i + 1 |C(i)| c j \u2208C(i) ( c j + g j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Bitmap Feature Extraction", "sec_num": "3.1" }, { "text": "where C(i) is the compositional characters of w i and g j is the glyph feature of c j . The model predicts target word w i from ctxG word embeddings of contexts, as shown in Fig.7 . The parameters in the convAE are pre-trained, thus not jointly learned with embeddings w and c, so character glyph features g are fixed during training.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 179, "text": "Fig.7", "ref_id": null } ], "eq_spans": [], "section": "Character Bitmap Feature Extraction", "sec_num": "3.1" }, { "text": "w i w ctxG i-1 w ctxG i+1 w i+1 Mean(c ) c g Mean(g ) c g char bitmaps of w i glyph features of w i+1 char emb. of w i+1 word emb. of w i+1 = + \u2026 \u2026 + ctxG word emb. of w i-1 ctxG word emb. of w i+1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Bitmap Feature Extraction", "sec_num": "3.1" }, { "text": "! ! Figure 7 : Illustration of exploiting context word glyphs. Mean value of character glyph features in the context is added to the hidden vector that predicts target word.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Character Bitmap Feature Extraction", "sec_num": "3.1" }, { "text": "Here we propose another variant. In this model, the model structure is the same as in Fig.7 . The difference lies in the hidden vector used to predict the target word. Instead of adding mean value of character glyph features of the contexts, it adds mean value of glyph feature of the target word (tarG), as shown in Fig.8 . As in Section 3.2.1, con-vAE is not jointly learned. ", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 91, "text": "Fig.7", "ref_id": null }, { "start": 317, "end": 322, "text": "Fig.8", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Enhanced by Target Word Glyphs", "sec_num": "3.2.2" }, { "text": "We learn word representation w i directly from the sequence of character glyph features { g k , c k \u2208 C(i)} of word w i , with the objective of Skipgram. As in Fig.9 , a 2-layer Gated Recurrent Units (GRU) (Cho et al., 2014) network followed by 2 fully connected ELU (Clevert et al., 2015 ) layers produces word representation w i from input sequence { g k } of word w i . w i is then used to predict the contexts of w i . In the training we use negative sampling and subsampling on frequent words from (Mikolov et al., 2013b) . ", "cite_spans": [ { "start": 206, "end": 224, "text": "(Cho et al., 2014)", "ref_id": "BIBREF3" }, { "start": 267, "end": 288, "text": "(Clevert et al., 2015", "ref_id": "BIBREF4" }, { "start": 503, "end": 526, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 160, "end": 165, "text": "Fig.9", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "RNN-Skipgram", "sec_num": "3.3.1" }, { "text": "We modify GloVe model to directly learn from character glyph features as in Fig.10 . We feed character glyph feature sequence { g k , c k \u2208 C(i)}, { g k , c k \u2208 C(j)} of word w i and context w j to a shared GRU network. Outputs of GRU are then fed to two different fully connected ELU layers to produce word representations w i and w j . 
The inner product of w i and w j is the prediction of log co-occurrence log(X ij ). We apply the same loss function with weights in GloVe. We follow (Pennington et al., 2014) and use w i + w i for evaluations of word representation. Figure 10 : Model architecture of RNN-GloVe. A shared GRU network and 2 different sets of fully connected ELU layers produce w i and w j . Inner product of w i and w j is the prediction of log cooccurrence log(X ij ).", "cite_spans": [ { "start": 487, "end": 512, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 76, "end": 82, "text": "Fig.10", "ref_id": null }, { "start": 571, "end": 580, "text": "Figure 10", "ref_id": null } ], "eq_spans": [], "section": "RNN-GloVe", "sec_num": "3.3.2" }, { "text": "word, LDC2003T09). All foreign words, numerical words, and punctuations were removed. Word segmentation was performed using open source python package jieba 3 . In all 316,960,386 segmented words, we extracted 8780 unique characters, and used a true type font (BiauKai) to render each character glyph to a 60\u00d760 8-bit grayscale bitmap. Furthermore, We removed words whose frequency <= 25, leaving 158,565 unique words as the vocabulary set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-GloVe", "sec_num": "3.3.2" }, { "text": "Inspired by (Zeiler et al., 2011), layer-wise training was applied to our convAE. From lower level to higher, the kernel of each layer is trained individually, with other kernels frozen for 100 epochs. Loss function is the Euclidean distance between input and reconstructed bitmap, and we added l1 regularization to the activations of convolution layers. We chose Adagrad as the optimizing algorithm, and set batch size = 20 and learning rate = 0.001. The comparison between the input bitmaps and their reconstructions is shown in Fig 11. The input bitmaps are in the upper row, while the reconstructions are in the lower row. We further visualized the extracted character glyph features with t-SNE (Maaten and Hinton, 2008) . Part of the visualization result is shown in Fig. 12. From Fig. 12 , we found that the characters with the same components are clustered. The result shows that the features extracted by the convAE are capable of expressing the graphical information in the bitmaps.", "cite_spans": [ { "start": 699, "end": 724, "text": "(Maaten and Hinton, 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 531, "end": 538, "text": "Fig 11.", "ref_id": "FIGREF7" }, { "start": 772, "end": 793, "text": "Fig. 12. From Fig. 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Extracting Visual Features of Character Bitmap", "sec_num": "4.2" }, { "text": "We used CWE code 4 to implement both CBOW and Skipgram, along with the CWE. The number of multi-embedding was set to 3. We modified the CWE code to produce GWE representations. For CBOW, Skipgram, CWE, GWE and RNN-Skipgram, we used the following hyperparameters. Context window was set to 5 to both sides of a word. We used 10 negative samples, and threshold t of subsampling was set to 10 \u22125 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details of Word Representations", "sec_num": "4.3" }, { "text": "Since Yin at al. did not publish their code, we followed their paper and reproduced the MGE model. We created the mapping between characters and radicals from the Unihan database 5 . 
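A hedged sketch of how such a character-to-radical index can be built is given below. It assumes the tab-separated kRSUnicode field of the Unihan distribution (e.g., in Unihan_IRGSources.txt, with values of the form "radical.extra_strokes"); the exact file name and field layout should be verified against the Unihan release actually used, and this is not necessarily the procedure followed in the paper.

```python
# Hedged sketch: build a character -> KangXi radical index from the Unihan database,
# assuming the kRSUnicode field ("radical.extra_strokes") in Unihan_IRGSources.txt.
def load_char_to_radical(path="Unihan_IRGSources.txt"):
    char_to_radical = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#") or "\tkRSUnicode\t" not in line:
                continue
            codepoint, _, value = line.rstrip("\n").split("\t")
            # Take the first value, keep the radical number (1..214), drop any apostrophe.
            radical = int(value.split()[0].split(".")[0].rstrip("'"))
            char_to_radical[chr(int(codepoint[2:], 16))] = radical
    return char_to_radical

# radicals = load_char_to_radical()
# print(radicals.get("河"))  # expected: 85, the KangXi "water" radical
```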
Each character corresponds to one of the 214 radicals in this dataset, and the same hyperparameters were used in training as above. Note that we did not separate non-compositional words during training as the original CWE and MGE did.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details of Word Representations", "sec_num": "4.3" }, { "text": "We used the GloVe code 6 to train the baseline GloVe vectors. In construction of co-occurrence matrix for GloVe and RNN-GloVe, we followed the parameter settings of x max = 100 and \u03b1 = 0.75 in (Pennington et al., 2014) . Context window was 5 words to the both sides of a word, and harmonic weighting was used on co-occurrence counts. For the RNN-GloVe model, we removed entries whose value < 0.5 to speed up training.", "cite_spans": [ { "start": 193, "end": 218, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details of Word Representations", "sec_num": "4.3" }, { "text": "RNN-Skipgram and RNN-GloVe generated 200-dimensional word embeddings, while other models generated 512-dimensional word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details of Word Representations", "sec_num": "4.3" }, { "text": "To encourage further research, we published our convAE and embedding models on github 7 . Evaluation datasets were also uploaded, whose details will be explained in Section 5. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details of Word Representations", "sec_num": "4.3" }, { "text": "A word similarity test contains multiple word pairs and their human annotated similarity scores. Word representations are considered good if the calculated similarity and human annotated scores have a high rank correlation. We computed the Spearman's correlation between human annotated scores and cosine similarity of word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "Since there is little resource for traditional Chinese, we translated WordSim-240 and WordSim-296 datasets provided by (Chen et al., 2015) . Note that this translation is non-trivial. Some frequent words are considered out-of-vocabulary (OOV) due to the different usage between the simplified and traditional. For example, \"butter\" is translated to \"\u9ec3\u6cb9\" in simplified, but \"\u5976\u6cb9\" in traditional. Besides, we manually translated SimLex-999 (Hill et al., 2016) to traditional Chinese, and used it as the third testing dataset. We also made these datasets public along with our code.", "cite_spans": [ { "start": 119, "end": 138, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" }, { "start": 437, "end": 456, "text": "(Hill et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "When calculating similarities, word pairs containing OOVs were removed. In Table 1, Table 1 : Spearman's correlation between human annotated scores and cosine similarity of word representations on three datasets: WordSim-240, WordSim-296 and SimLex-999. 
The higher the values, the better the results.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 83, "text": "Table 1,", "ref_id": null }, { "start": 84, "end": 91, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "only show the results of position-based character embeddings here because the results of clusterbased character embeddings are worse in the experiments. We found that CWE only consistently improved the performance on SimLex-999 for both CBOW and Skipgram probably because SimLex-999 contains more words that could be understood from their compositional characters. On SimLex-999, we observed that CWE was better with CBOW than Skipgram. We think the reason is that CBOW+CWE predicts the target word with the mean value of all character embeddings in the context, thus has a less noisy feature; however Skipgram+CWE uses character embeddings of an individual word. This noisy feature could cause negative effects on predicting the target word. The GWEs were learned based on CWE in two ways. \"ctxG\" represents using glyph features of context words, while \"tarG\" represents using glyph features of target words. The glyph features improved CWE on WordSim-240 and SimLex-999, but not WordSim-296. As for MGE results, we were not able to reproduce the performance in (Yin et al., 2016) . We list possible reasons as below: we did not separate non-compositional word during training (character and radical embeddings are not used for these words), and the we created character-radical index from different data source. We conjecture that the first to be the most crucial factor in reproducing MGE.", "cite_spans": [ { "start": 1063, "end": 1081, "text": "(Yin et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "The results of RNN-Skipgram and RNN-GloVe are also in Table 1 . Their results are not comparable with CBOW and Skipgram. From the results, we conclude that it is not easy to produce word representations directly from glyphs. We think the reason is that RNN representations are dependent on each other. Updating model parameters for word w i would also change the word representation of word w j . As a result it is much more difficult to train such models.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "We further inspect the impact of glyph features by doing significance test 8 between proposed methods and existing ones. The p-values of the tests are given in Table 2 . We found only \"tarG\" method has a p-value less than 0.05 over CWE. ", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 2", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "An analogy problem has the following form: \"king\":\"queen\" = \"man\":\"?\", and \"woman\" is answer to \"?\". By answering the question correctly, the model is considered capable of expressing semantic relationships. Furthermore, the analogy relation could be expressed by vector arithmetic of word representations as shown in (Mikolov et al., 2013b) . For the above problem, we find word w i such that w i = arg max w cos( w, w queen \u2212 w king + w man ). 
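The sketch below answers such an analogy question with the vector arithmetic above, ranking vocabulary words by cosine similarity to w_queen - w_king + w_man while excluding the query words themselves. The embedding matrix and vocabulary here are toy placeholders.

```python
# Sketch: solve "a":"b" = "c":"?" by argmax_w cos(w, emb[b] - emb[a] + emb[c]).
import numpy as np

def solve_analogy(emb, vocab, a, b, c):
    target = emb[vocab[b]] - emb[vocab[a]] + emb[vocab[c]]
    target /= np.linalg.norm(target)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = normed @ target                 # cosine similarity to every word
    for w in (a, b, c):                      # never return a query word
        scores[vocab[w]] = -np.inf
    best = int(np.argmax(scores))
    return [w for w, i in vocab.items() if i == best][0]

# Toy usage with a random embedding matrix:
vocab = {w: i for i, w in enumerate(["king", "queen", "man", "woman"])}
emb = np.random.randn(len(vocab), 50)
print(solve_analogy(emb, vocab, "king", "queen", "man"))
```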
Table 3 : Accuracy of analogy problems for capitals of countries, (China) states/provinces of cities, family relations, and our proposed job&place (J&P) dataset. The higher the values, the better the results.", "cite_spans": [ { "start": 318, "end": 341, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 446, "end": 453, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "As in the previous subsection, we translated the word analogy dataset in (Chen et al., 2015) to traditional. The dataset contains 3 groups of analogy problems: capitals of countries, (China) states/provinces of cities, and family relations. Considering that most capital and city names do not relate to the meaning of their compositional characters, and that we did not separate noncompositional word in our experiments, we proposed a new analogy dataset composed of jobs and places (job&place). Nonetheless, there might be multiple corresponding places for a single job. For instance, A \"doctor\" could be in a \"hospital\" or \"clinic\". In this job&place dataset, we provide a set of places for each job. The model is considered to answer correctly as long as the predicted word is in this set.", "cite_spans": [ { "start": 73, "end": 92, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "We take the mean of all word representations of places (mean( w places 1 )) for the first job (job 1 ), and find the place for another job (job 2 ) by calculating w i such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "w i = arg max w cos( w, mean( w places 1 )\u2212 w job 1 + w job 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "The results are shown in Table 3 . we observed CWE only improved accuracy only for the family group. The results are not surprising. The words of family relations are compositional in Chinese, however capital and city names are usually not. We observed that GWE further improved CWE for words in the family group. From Table 3 , we found that glyph features are helpful when the characters can enhance word representations. This is very reasonable because glyph features are fruitful representations of characters. If character information does not play a role in learning word representations, character glyphs may not be useful. The same phenomenon is observed in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 3", "ref_id": null }, { "start": 319, "end": 326, "text": "Table 3", "ref_id": null }, { "start": 666, "end": 673, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "In our job&place, we still observed that GWE improving CWE, however both CWE and GWE were slightly worse than CBOW. We also observed that Skipgram-based methods became worse than CBOW-based methods, while in all previous evaluation Skipgram-based methods are consistently better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "The results of RNN-Skipgram and RNN-GloVe are still poor. We observe that the word representations learned from RNN can no longer be expressed by vector arithmetic. 
The reason is still under investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "5.2" }, { "text": "To further probe the effect of glyph features, we show the following word pairs in SimLex-999 whose calculated cosine similarities are higher based on GWE models than CWE. The pairs may not look alike, but their components share related semantics. For example, in \"\u4f36\u4fd0\" (clever), the component \"\u5229\"(sharp) is compositional to the meaning of \"\u4fd0\"(acute), describing someone with a sharp mind. Other examples show the ability to associate semantics with radicals. Table 4 : Case study on word pairs in SimLex-999.", "cite_spans": [], "ref_spans": [ { "start": 459, "end": 466, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.3" }, { "text": "We also provide several counter-examples. Below are some word pairs which are not similar, however GWE methods produces higher similarity than CBOW or CWE. Take \"\u5c71\u5cf0\" (mountain) and \"\u8702\u871c\" (honey) as example. Since they share no Table 5 : Counter examples to which GWE methods give higher similarity scores than CBOW or CWE.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.3" }, { "text": "common characters, the only thing in common is the component \"\u5906\", and we assume this to be the reason for the higher similarity. Also note that in the pair \"\u7121\u8da3\" (boring) and \"\u597d\u7b11\" (funny), the CWE similarity is also higher. We conclude that the character \"\u7121\" (none) is not strong enough, so the character \"\u8da3\" (fun) overrides the word \"\u7121 \u8da3\" (boring), thus a higher score was mistakenly assigned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "5.3" }, { "text": "This work is a pioneer in enhancing Chinese word representations with character glyphs. The character glyph features are directly learned from the bitmaps of characters by convAE. We then proposed 2 methods in learning Chinese word representations: the first is to use character glyph features as enhancement; the other is to directly learn word representation from sequences of glyph features. In experiments, we found the latter totally infeasible. Training word representations with RNN without word and character information is challenging. Nonetheless, the glyph features improved the character-enhanced Chinese word representations, especially on the word analogy task related to family. The results of exploiting character glyph features in word representation learning was ordinary. Perhaps the co-occurrence information in the corpus plays a bigger role than glyph features. Nonetheless, the idea to treat each Chinese character as image is innovative. 
As more character-level models (Zheng et al., 2013; Kim, 2014; Zhang et al., 2015) are proposed in the NLP field, we believe glyph features could serve as an enhancement, and we will further examine the effect of glyph features on other tasks, such as word segmentation, POS tagging, dependency parsing, or downstream tasks such as text classification, or document retrieval.", "cite_spans": [ { "start": 993, "end": 1013, "text": "(Zheng et al., 2013;", "ref_id": "BIBREF24" }, { "start": 1014, "end": 1024, "text": "Kim, 2014;", "ref_id": "BIBREF7" }, { "start": 1025, "end": 1044, "text": "Zhang et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "https://en.wikipedia.org/wiki/ Radical_(Chinese_characters)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The two example characters here have the same glyphs in the traditional and simplified Chinese characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/fxsjy/jieba", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/Leonard-Xu/CWE 5 http://unicode.org/charts/unihan.html 6 https://github.com/stanfordnlp/GloVe 7 https://github.com/ray1007/GWE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We followed the method described in https:// stats.stackexchange.com/questions/17696/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of machine learning research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137-1155.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior research methods", "authors": [ { "first": "A", "middle": [], "last": "John", "suffix": "" }, { "first": "Joseph P", "middle": [], "last": "Bullinaria", "suffix": "" }, { "first": "", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2007, "venue": "", "volume": "39", "issue": "", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. 
Behavior re- search methods, 39(3):510-526.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint learning of character and word embeddings", "authors": [ { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Huan-Bo", "middle": [], "last": "Luan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015", "volume": "", "issue": "", "pages": "1236--1242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of char- acter and word embeddings. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1236-1242.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Fast and accurate deep network learning by exponential linear units (elus)", "authors": [ { "first": "Djork-Arn\u00e9", "middle": [], "last": "Clevert", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Unterthiner", "suffix": "" }, { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.07289" ] }, "num": null, "urls": [], "raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). 
arXiv preprint arXiv:1511.07289.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Skip-thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "R", "middle": [], "last": "Ruslan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3294--3302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. 
In Advances in neural information processing systems, pages 3294-3302.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Component-enhanced chinese character embeddings", "authors": [ { "first": "Yanran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "829--834", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanran Li, Wenjie Li, Fei Sun, and Sujian Li. 2015. Component-enhanced chinese character em- beddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 829-834, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning character-level compositionality with visual features", "authors": [ { "first": "Frederick", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Han", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chieh", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederick Liu, Han Lu, Chieh Lo, and Graham Neu- big. 2017. Learning character-level compositional- ity with visual features. CoRR, abs/1704.04859.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Visualizing data using t-SNE", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. 
Journal of Machine Learning Research, 9(Nov):2579-2605.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stacked convolutional autoencoders for hierarchical feature extraction", "authors": [ { "first": "Jonathan", "middle": [], "last": "Masci", "suffix": "" }, { "first": "Ueli", "middle": [], "last": "Meier", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Cire\u015fan", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2011, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "52--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Masci, Ueli Meier, Dan Cire\u015fan, and J\u00fcrgen Schmidhuber. 2011. Stacked convolutional auto- encoders for hierarchical feature extraction. In International Conference on Artificial Neural Net- works, pages 52-59. Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word repre- sentations in vector space. In Proceedings of the International Conference on Learning Representa- tions (ICLR).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. 
In EMNLP, volume 14, pages 1532- 1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Radical-enhanced chinese character embedding", "authors": [ { "first": "Yaming", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhenzhou", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2014, "venue": "International Conference on Neural Information Processing", "volume": "", "issue": "", "pages": "279--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced chinese character embedding. In International Conference on Neural Information Processing, pages 279-286. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "From frequency to meaning: Vector space models of semantics", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of artificial intelligence research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37:141-188.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improve chinese word embeddings by exploiting internal structure", "authors": [ { "first": "Jian", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liangang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Huanhuan", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1041--1050", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Xu, Jiawei Liu, Liangang Zhang, Zhengyu Li, and Huanhuan Chen. 2016. Improve chinese word em- beddings by exploiting internal structure. 
In Pro- ceedings of NAACL-HLT, pages 1041-1050.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multi-granularity chinese word embedding", "authors": [ { "first": "Rongchao", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "981--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word em- bedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 981-986.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adaptive deconvolutional networks for mid and high level feature learning", "authors": [ { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Zeiler", "suffix": "" }, { "first": "W", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2011, "venue": "Computer Vision (ICCV), 2011 IEEE International Conference on", "volume": "", "issue": "", "pages": "2018--2025", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew D Zeiler, Graham W Taylor, and Rob Fer- gus. 2011. Adaptive deconvolutional networks for mid and high level feature learning. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2018-2025. IEEE.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep learning for chinese word segmentation and pos tagging", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Hanyang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "647--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. 
In EMNLP, pages 647-657.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning continuous word embedding with metadata for question retrieval in community question answering", "authors": [ { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tingting", "middle": [], "last": "He", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Po", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2015, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "250--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community ques- tion answering. In ACL (1), pages 250-259.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Examples of compositional Chinese words. Still, the reader should keep in mind that NOT all Chinese words are compositional (related to the meanings of their compositional characters).", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Both characters in the figure have the same radical \"\u4ebb\" (meaning \"human\") on the left-hand side, but their meanings are the composition of the graphical components on the right-hand side and their radical.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Model comparison of Character-enhanced Word Embedding (CWE) and Multi-granularity Embedding (MGE).", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "The architecture of the convAE.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "Illustration of exploiting target word glyphs. The mean of the character glyph features of the target word helps predict the target word itself.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF6": { "text": "Model architecture of the RNN-Skipgram model. The produced word representation w_i is used to predict the context of word w_i.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF7": { "text": "The input bitmaps of the convAE and their reconstructions. The input bitmaps are in the upper row, while the reconstructions are in the lower row.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF8": { "text": "Parts of the t-SNE visualization of character glyph features. Most of the characters in the ovals share the same components.", "uris": null, "num": null, "type_str": "figure" }, "TABREF3": { "text": "Enhanced by Context Word Glyphs. We modify the CWE model based on CBOW in Section 2.2.2 to incorporate context character glyph features (ctxG). This modified word embedding w_i^{ctxG}", "num": null, "content": "
3.2 Glyph-Enhanced Word Embedding (GWE)
3.2.1
", "html": null, "type_str": "table" }, "TABREF7": { "text": "", "num": null, "content": "
p-values of significance tests between proposed methods and existing ones.
", "html": null, "type_str": "table" } } } }