{ "paper_id": "Y15-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:41:34.686694Z" }, "title": "Japanese Sentiment Classification with Stacked Denoising Auto-Encoder using Distributed Word Representation", "authors": [ { "first": "Peinan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": {} }, "email": "zhang-peinan@ed.tmu.ac.jp" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": {} }, "email": "komachi@tmu.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Traditional sentiment classification methods often require polarity dictionaries or crafted features to utilize machine learning. However, those approaches incur high costs in the making of dictionaries and/or features, which hinder generalization of tasks. Examples of these approaches include an approach that uses a polarity dictionary that cannot handle unknown or newly invented words and another approach that uses a complex model with 13 types of feature templates. We propose a novel high performance sentiment classification method with stacked denoising auto-encoders that uses distributed word representation instead of building dictionaries or utilizing engineering features. The results of experiments conducted indicate that our model achieves state-of-the-art performance in Japanese sentiment classification tasks.", "pdf_parse": { "paper_id": "Y15-1018", "_pdf_hash": "", "abstract": [ { "text": "Traditional sentiment classification methods often require polarity dictionaries or crafted features to utilize machine learning. However, those approaches incur high costs in the making of dictionaries and/or features, which hinder generalization of tasks. Examples of these approaches include an approach that uses a polarity dictionary that cannot handle unknown or newly invented words and another approach that uses a complex model with 13 types of feature templates. We propose a novel high performance sentiment classification method with stacked denoising auto-encoders that uses distributed word representation instead of building dictionaries or utilizing engineering features. The results of experiments conducted indicate that our model achieves state-of-the-art performance in Japanese sentiment classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As the popularity of social media continues to rise, serious attention is being given to review information nowadays. Reviews with positive/negative ratings, in particular, help (potential) customers with product comparisons and to make purchasing decisions. Consequently, automatic classification of the polarities (such as positive and negative) of reviews is extremely important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional approaches to sentiment analysis utilize polarity dictionaries or classification rules. Although these approaches are fairly accurate, they depend on languages that may require significant amounts of manual labor. 
Further, dictionary-based methods have difficulty dealing with new or unknown words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Machine learning-based methods are widely adopted in sentiment classification in order to mitigate the problems associated with the making of dictionaries and/or rules. One of the most basic features used in machine learning-based sentiment classification is the bag-of-words feature (Wang and Manning, 2012; Pang et al., 2002) . In machine learningbased frameworks, the weights of words are automatically learned from a training corpus instead of being manually assigned.", "cite_spans": [ { "start": 284, "end": 308, "text": "(Wang and Manning, 2012;", "ref_id": "BIBREF25" }, { "start": 309, "end": 327, "text": "Pang et al., 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the bag-of-words feature cannot take syntactic structures into account. This leads to mistakes such as \"a great design but inconvenient\" and \"inconvenient but a great design\" being deemed to have the same meaning, even though their nuances are different; the former is somewhat negative whereas the latter is slightly positive. To solve this syntactic problem, Nakagawa et al. (2010) proposed a sentiment analysis model that used dependency trees with polarities assigned to their subtrees. However, their proposed model requires specialized knowledge to design complicated feature templates.", "cite_spans": [ { "start": 370, "end": 392, "text": "Nakagawa et al. (2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we propose an approach that uses distributed word representation to overcome the first problem and deep neural networks to alleviate the second problem. The former is an unsupervised method capable of representing a word s meaning without using hand-tagged resources such as a polarity dictionary. In addition, it is robust to the data sparseness problem. The latter is a highly expressive model that does not utilize complex engineering features or models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our research makes the following two main contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that distributed word representation learned from a large-scale corpus and multiple layers (more than three layers) contributes significantly to classification accuracy in sentiment classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We achieve state-of-the-art performance in Japanese sentiment classification tasks without designing complex features and models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we discuss related works from two areas: sentiment classification and deep learning (distributed word representation and multi-layer neural networks).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Sentiment classification has been researched extensively in the past decade. 
Most of the previous approaches in this area rely on either timeconsuming hand-tagged dictionaries or knowledgeintensive complex models. Ikeda et al. (2008) proposed a method that classifies polarities by learning them within a window around a word. Their proposed method works well with words registered in a dictionary. However, building a polarity dictionary is expensive and their approach is not able to cope with unknown words. In contrast, our proposed approach does not use a polarity dictionary and works robustly even when there are infrequent words in the test data.", "cite_spans": [ { "start": 214, "end": 233, "text": "Ikeda et al. (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment classification", "sec_num": "2.1" }, { "text": "In a similar manner, Choi et al. (2008) proposed a method in which rules are manually built up and polarities are classified considering dependency structures. However, the rules are based on English, which cannot be applied directly to other languages. This is unlike our method, which does not employ any language-specific rules. Nakagawa et al. (2010) proposed a supervised model that uses a dependency tree with polarity assigned to each subtree as hidden variables. The proposed approach further classifies sentiment polarities in English and Japanese sentences with Conditional Random Field (CRF), considering the interactions between the hidden variables. The dependency information enables them to take syntactic structures into account in order to model polarity flip. However, their proposed method is so complex that it has to create multiple feature templates. In contrast, our model is quite simple and does not require the engineering of such features.", "cite_spans": [ { "start": 21, "end": 39, "text": "Choi et al. (2008)", "ref_id": "BIBREF2" }, { "start": 332, "end": 354, "text": "Nakagawa et al. (2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment classification", "sec_num": "2.1" }, { "text": "One of the great advantages of deep learning is that it reduces the need to hand-design features. Instead, it automatically extracts hierarchical features and enhances the end-to-end classification performance learned through backpropagation. As a consequence, it avoids the engineering of task-specific ad-hoc features using copious amounts of prior knowledge. Further, it sometimes surpasses humanlevel performance (He et al., 2015) . Two of the most actively studied areas in deep learning for NLP applications are representation learning and deep neural networks.", "cite_spans": [ { "start": 417, "end": 434, "text": "(He et al., 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Deep learning", "sec_num": "2.2" }, { "text": "Representation learning Several studies have attempted to model natural language texts using deep architectures. Distributed word representations, or word embeddings, represent words as vectors. Distributed representations of word vectors are not sparse but dense vectors that can express the meaning of words. Sentiment classification tasks are significantly influenced by the data sparseness problem. 
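A toy sketch makes the contrast concrete (the dense vectors below are illustrative hand-made values, not taken from any trained model): two different words are always orthogonal under 1-of-K coding, whereas learned dense vectors can assign them a graded similarity.

import numpy as np

# 1-of-K (one-hot) vectors for "dog" and "cat" in a 5-word vocabulary
dog_onehot = np.array([1, 0, 0, 0, 0])
cat_onehot = np.array([0, 1, 0, 0, 0])

# toy dense (distributed) vectors; real ones would come from a model such as word2vec
dog_dense = np.array([0.8, 0.1, 0.6])
cat_dense = np.array([0.7, 0.2, 0.5])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(dog_onehot, cat_onehot))  # 0.0  -- one-hot vectors of different words share nothing
print(cosine(dog_dense, cat_dense))    # ~0.99 -- dense vectors can encode relatedness
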
As a result, distributed word representation is more suitable than traditional 1-of-K representation, which only treats words as symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep learning", "sec_num": "2.2" }, { "text": "In our proposed method, to learn the word embeddings, we employ a state-of-the-art word embedding technique called word2vec (Mikolov et al., 2013b; Mikolov et al., 2013a ), which we discuss in Section 3.1. Although several word embedding techniques currently exist (Collobert and Weston, 2008; Pennington et al., 2014) , word2vec is one of the most computationally efficient and is considered to be state-of-the-art. Collobert et al. (2008) presented a model that learns word embedding by jointly performing multi-task learning using a deep convolutional architecture. Their method is considered to be state-of-the-art as well, but it is not readily applicable to Japanese.", "cite_spans": [ { "start": 124, "end": 147, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF14" }, { "start": 148, "end": 169, "text": "Mikolov et al., 2013a", "ref_id": "BIBREF13" }, { "start": 265, "end": 293, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF3" }, { "start": 294, "end": 318, "text": "Pennington et al., 2014)", "ref_id": "BIBREF17" }, { "start": 417, "end": 440, "text": "Collobert et al. (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Deep learning", "sec_num": "2.2" }, { "text": "Multi-layer neural networks A stacked denoising auto-encoder (SdA) is a deep neural network that extends a stacked auto-encoder (Bengio et al., 2007) with denoising auto-encoders (dA). Stacking multiple layers and introducing noise to the input layer adds high generalization ability to auto-encoders. This method is used in speech recognition (Dahl et al., 2011) , image processing (Xie et al., 2012) and domain adaptation ; further, it exhibits high representation ability. Glorot et al. (2011) used SdAs to perform domain adaptation in sentiment analysis. After learning sentiment classification in four domains of the reviews of products on Amazon, they tested each model with different domains. Although the task and method are similar to those of our proposed approach, they only use the most frequent verbs as input.", "cite_spans": [ { "start": 128, "end": 149, "text": "(Bengio et al., 2007)", "ref_id": "BIBREF0" }, { "start": 344, "end": 363, "text": "(Dahl et al., 2011)", "ref_id": "BIBREF4" }, { "start": 383, "end": 401, "text": "(Xie et al., 2012)", "ref_id": "BIBREF26" }, { "start": 476, "end": 496, "text": "Glorot et al. (2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Deep learning", "sec_num": "2.2" }, { "text": "Dos Santos et al. 2014and Tang et al. (2014) researched sentiment classification of microblogs such as Twitter using the distributed representation learned by the methods of Collobert et al. (2008) and Mikolov et al. (2013b; 2013a) . Those two tasks are the same task as ours, but the former generats sentence vectors using string-based convolution networks while the latter utilizes a model that treats the distributed word representation itself as polarities. Our proposed approach makes sentence vectors by simply averaging the distributed word representation, yet achieves state-of-the-art performance in Japanese sentiment classification tasks. Kim (2014) classified the polarities of sentences using convolutional neural networks. 
He built a simple CNN with one layer of convolution, whereas our model uses multiple hidden layers. Socher et al. (2011; 2013) placed common autoencoders recursively (recursive neural networks) and concatenated input vectors to take syntactic information such as the order of words into account. In addition, they arranged auto-encoders (AEs) to syntactic trees to represent the polarities of each phrase. Recursive neural networks construct sentence vectors differently from our approach. Compared to their model, our distributed sentence representation is quite simple yet effective for Japanese sentiment classification.", "cite_spans": [ { "start": 26, "end": 44, "text": "Tang et al. (2014)", "ref_id": "BIBREF22" }, { "start": 174, "end": 197, "text": "Collobert et al. (2008)", "ref_id": "BIBREF3" }, { "start": 202, "end": 224, "text": "Mikolov et al. (2013b;", "ref_id": "BIBREF14" }, { "start": 225, "end": 231, "text": "2013a)", "ref_id": "BIBREF13" }, { "start": 650, "end": 660, "text": "Kim (2014)", "ref_id": "BIBREF12" }, { "start": 837, "end": 857, "text": "Socher et al. (2011;", "ref_id": "BIBREF19" }, { "start": 858, "end": 863, "text": "2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Deep learning", "sec_num": "2.2" }, { "text": "Denoising Auto-Encoder using Distributed Word Representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classification with Stacked", "sec_num": "3" }, { "text": "In this study, we treated the task of classifying the polarity of a sentence as a binary classification. Our proposed approach makes a sentence vector from the input sentence, and then inputs the sen-tence vector to a classifier. The sentence vector is computed from the average of word vectors in the sentence, based on distributed word representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classification with Stacked", "sec_num": "3" }, { "text": "In Section 3.1 we introduce distributed representation of words and sentences, and in Section 3.2 we explain multi-layer neural networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classification with Stacked", "sec_num": "3" }, { "text": "1-of-K representation is a traditional word vector representation for making bag-of-words. The dimension of a word vector in 1-of-K is the same as the size of the vocabulary, and the elements of a dimension correspond to words. 1-of-K treats different words as discrete symbols. However, 1-of-K representation fails to model the shared meanings of words. For example, the word vectors \"dog\" and \"cat\" should share \"animal\" or \"pet\" meanings to a certain degree, but 1-of-K representation is not able to capture this similarity. Consequently, we propose distributed word representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed representation", "sec_num": "3.1" }, { "text": "The task of learning distributed representation is called representation learning and has been of significant interest in the NLP literature in the last few years. 
Distributed word representation learns a low-dimensional dense vector for a word from a large-scale text corpus to capture the word's features from its context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed representation", "sec_num": "3.1" }, { "text": "Let the vocabulary size be |V|, the dimension of a vector representing words be d, the 1-of-K vector be b \u2208 R^{|V|}, and the matrix of all word vectors be L \u2208 R^{d\u00d7|V|}. The kth target word vector w_k is consequently represented as in Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed word representation", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_k = L b_k", "eq_num": "(1)" } ], "section": "Distributed word representation", "sec_num": "3.1.1" }, { "text": "Continuous Bag-of-Words (CBOW) and Skip-gram models in word2vec (Mikolov et al., 2013b; Mikolov et al., 2013a) have attracted tremendous attention as a result of their effectiveness and efficiency. The former is a model that predicts the target word using contexts around the word, whereas the latter is a model that predicts the surrounding context from the target word. According to Mikolov's work, skip-gram shows higher accuracy than CBOW 1 . Therefore, we used skip-gram in our experiments. ", "cite_spans": [ { "start": 63, "end": 86, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF14" }, { "start": 87, "end": 109, "text": "Mikolov et al., 2013a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Distributed word representation", "sec_num": "3.1.1" }, { "text": "In our approach, we construct a sentence matrix S \u2208 R^{|M|\u00d7d} from the corpus containing |M| sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "First, we describe how to create a sentence vector from word vectors. The ith (1 \u2264 i \u2264 |M|) input sentence, composed of N^{(i)} words, is used to make a sentence vector S^{(i)} \u2208 R^d from the word vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "The jth (1 \u2264 j \u2264 d) element of sentence vector S^{(i)} is calculated by averaging the corresponding elements of the word vectors in the sentence, as expressed in Equation 2 (Figure 1) .", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 179, "text": "(Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "S^{(i)}_j = \\frac{1}{N^{(i)}} \\sum_{n=1}^{N^{(i)}} w^{(i)}_{n,j} \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "Finally, the sentence matrix S is defined by Equation 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S = [S^{(1)} S^{(2)} \\cdots S^{(M)}]^T", "eq_num": "(3)" } ], "section": "Distributed sentence representation", "sec_num": "3.1.2" }, { "text": "An auto-encoder is an unsupervised learning method devised by Hinton and Salakhutdinov (2006) that uses neural networks. It learns shared features of the input at the hidden layer. By restricting the dimension of the hidden layer to be smaller than that of the input layer, it reduces the dimension of the input. The encode function that calculates the hidden layer from an input is shown in Equation 4, and the decode function that calculates the output layer from the hidden layer is shown in Equation 5 below.", "cite_spans": [ { "start": 62, "end": 93, "text": "Hinton and Salakhutdinov (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Auto-Encoder", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = s(Wx + b) (4), z = s(W'y + b')", "eq_num": "(5)" } ], "section": "Auto-Encoder", "sec_num": "3.2" }, { "text": "s(\u00b7) represents a nonlinear function such as tanh or the sigmoid; W and W' are weight matrices and b and b' are bias terms, respectively. The parameters of auto-encoders are learned by minimizing the following loss function, which measures the difference between input vector x and output vector z using the cross entropy (Equation 6). We use Stochastic Gradient Descent (SGD) to minimize the loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-Encoder", "sec_num": "3.2" }, { "text": "L_H(x, z) = -\\sum_{k=1}^{d} [x_k \\log z_k + (1 - x_k) \\log(1 - z_k)] \\quad (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-Encoder", "sec_num": "3.2" }, { "text": "Regularization is usually used in the loss function in traditional multi-layer perceptrons. Denoising techniques play the same role as regularization in auto-encoders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Denoising Auto-Encoder", "sec_num": "3.2.1" }, { "text": "A denoising auto-encoder is a stochastic extension of a regular auto-encoder that adds noise randomly to the input during training to obtain higher generalization ability. Because the loss function of denoising auto-encoders evaluates the reconstruction against the input without added noise, denoising auto-encoders can be expected to extract better representations than auto-encoders (Vincent et al., 2008) . DropOut (Hinton et al., 2012) achieves similar regularization objectives by ignoring hidden nodes, not inputs, with a uniform probability.", "cite_spans": [ { "start": 359, "end": 381, "text": "(Vincent et al., 2008)", "ref_id": "BIBREF23" }, { "start": 392, "end": 413, "text": "(Hinton et al., 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Denoising Auto-Encoder", "sec_num": "3.2.1" }, { "text": "A stacked denoising auto-encoder piles dAs into multiple layers and improves representation ability. The deeper the layers go, the more abstract features will be extracted (Vincent et al., 2010) . 
The training procedure used for SdAs comprises two steps. Initially, dAs are used to pre-train each layer via unsupervised learning, after which the entire neural network is fine-tuned via supervised learning. In the pre-training phase, feature extraction is carried out by the dAs from input A i , and the extracted hidden representation is treated as the input to the next hidden layer. After the final pre-training process, the last hidden layer is classified with softmax and the resulting vector is passed to the output layer. The fine-tuning phase backpropagates supervision to each layer to update weight matrices ( Figure 2) .", "cite_spans": [ { "start": 172, "end": 194, "text": "(Vincent et al., 2010)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 820, "end": 829, "text": "Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Stacked Denoising Auto-Encoder", "sec_num": "3.2.2" }, { "text": "In Figure 2 , the input vector is obtained from Equation 2 and dA1 is applied with the weight matrix of the first layer W 1 to calculate the first hidden layer. Note that the numbers of hidden layers and hidden nodes are hyperparameters. We define n i to be the number of hidden nodes of the ith layer. Therefore, using Equation 4 the dimension of weight matrix W 1 will be n 1 \u00d7 d. Similarly, the weight matrices up to the l \u2212 1th layer will be W i \u2208 R n i \u00d7n i\u22121 (i > 2). At the final lth layer, we need to convert the dimension of the hidden layer into d label , the dimension of the label, so the dimension of weight matrix W l should become d label \u00d7 n l\u22121 .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Stacked Denoising Auto-Encoder", "sec_num": "3.2.2" }, { "text": "To demonstrate the effectiveness of a nonlinear SdA, we compared it with a linear classifier (logistic regression, LogRes-w2v). 2 In addition, to investigate the usefulness of distributed word representation, we compared methods using bag-of-features (LogRes-BoF, SdA-BoF). We constructed sentence vectors S \u2208 R |V | with 1-of-K representation in the same manner as Equation 2, and performed dimension 2 Both SdA and logistic regression were implemented using Theano version 0.6.0. reduction to d = 200 using Principal Component Analysis (PCA). 3 We introduce a weak baseline (most frequent sense) and a strong baseline (state-of-the-art). The latter is a method by Nakagawa et al. (2010) , which uses the same corpus.", "cite_spans": [ { "start": 545, "end": 546, "text": "3", "ref_id": null }, { "start": 666, "end": 688, "text": "Nakagawa et al. (2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 Methods", "sec_num": "4" }, { "text": "The most frequent sense baseline. It always selects the most frequent choice (in this case, negative).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "Tree-CRF. The state-of-the-art baseline with hidden variables learned by tree-structured CRF (Nakagawa et al., 2010) .", "cite_spans": [ { "start": 93, "end": 116, "text": "(Nakagawa et al., 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "LogRes-BoF. Performs sentiment classification using bag-of-features with a linear classifier (logistic regression).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "SdA-BoF. 
Classifies polarity with the same input vectors as LogRes-BoF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "LogRes-w2v. Classifies polarity with a linear classifier (logistic regression) using the sentence vector computed by distributed word representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "SdA-w2v. Our proposed method that classifies polarity with a SdA using the same input as LogRes-w2v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "SdA-w2v-neg. Similar to Nakagawa et al. (2010), we pre-processed negation before creating distributed word representation as in SdA-w2v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "We adjusted the noise rate, the numbers of hidden layers and hidden nodes, as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "To demonstrate the denoising efficiency, we varied the noise rate (0%, 10%, 20%, 30%, 40% and 50%) for SdAs. We then performed denoising by zeroing a vector with binomial distribution at a specified rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "To show the effect of stacking, we increased the number of hidden layers (from 1 to 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "To examine the representation ability of the network, we varied the number of hidden nodes (100, 300, 500, and 700). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MFS.", "sec_num": null }, { "text": "We obtained distributed word representations using word2vec 4 with Skip-gram (Mikolov et al., 2013b; Mikolov et al., 2013a) . We used Japanese Wikipedia's dump data (2014.11) to learn the 200 dimension distributed representation with word2vec after word-segmentation with MeCab 5 . The vocabulary of the models contains 426,782 words (without processing negation) and 431,782 words (with processing negation).", "cite_spans": [ { "start": 77, "end": 100, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF14" }, { "start": 101, "end": 123, "text": "Mikolov et al., 2013a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus and tools", "sec_num": "4.2" }, { "text": "The corpus used in the experiment was the Japanese section of NTCIR-6 OPINION (Seki et al., 2007) . The data used in our research were the sentences from The Mainichi Newspaper and The Japan News articles with polarities annotated by three annotators. For each sentence, we took the union of the annotations of the three annotators. When the annotations were split to both positive and negative, we always used the annotation of the specific annotator. The resulting corpus contained 2,599 sentences. The positive instances comprised 765 sentences whereas the negative instances comprised 1,830 sentences. 
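To make the preprocessing of this subsection concrete (MeCab word segmentation, 200-dimension skip-gram vectors, and the sentence averaging of Equation 2), the following is a minimal sketch. It assumes the gensim and mecab-python3 packages rather than the original word2vec and MeCab command-line tools, the parameter names follow recent gensim versions, and the corpus file name is a placeholder.

import MeCab
import numpy as np
from gensim.models import Word2Vec

tagger = MeCab.Tagger("-Owakati")  # output a whitespace-separated segmentation

def segment(text):
    return tagger.parse(text).split()

# train 200-dimension skip-gram (sg=1) vectors on segmented Wikipedia text, one sentence per line;
# "jawiki_sentences.txt" is a hypothetical name for the extracted dump
corpus = [segment(line) for line in open("jawiki_sentences.txt", encoding="utf-8")]
model = Word2Vec(corpus, vector_size=200, sg=1, min_count=5, workers=4)

def sentence_vector(sentence):
    # Equation 2: average the vectors of the in-vocabulary words of the sentence
    vectors = [model.wv[w] for w in segment(sentence) if w in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

The resulting 200-dimension sentence vectors are what the classifiers of Section 4.1 take as input.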
Although a neutral polarity existed, we ignored it because our task is binary classification.", "cite_spans": [ { "start": 78, "end": 97, "text": "(Seki et al., 2007)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus and tools", "sec_num": "4.2" }, { "text": "We performed 10-fold cross validation with 10 threads of parallel processing and evaluated the performance of binary classification with accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus and tools", "sec_num": "4.2" }, { "text": "First, Figure 3 shows the accuracy and standard errors of each method for the NTCIR-6 corpus.", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 15, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "It can be clearly seen that our method is superior Note that the parameters of the SdAs above are the best combination of noise rate, number of hidden layers, and number of hidden nodes (noise rate: 10%, four layers, and 500 dimensions). 6 Table 1 contrasts the various hyperparameters. We changed one parameter at a time, while leaving all other parameters fixed. The upper row compares the accuracy of the system with changing noise rate. The best result was obtained when the noise rate was set to 50%. Compared with the standard stacked auto-encoder (noise rate: 0%, accuracy: 81.1%), an SdA with a noise rate of 50% exhibits better accuracy (81.6%). In the middle of the table, we changed the number of hidden layers. It turned out that, the classifier worked best with four layers. As can be seen, the stacked auto-encoder is superior to the unstacked one by 1.0 accuracy point. At the bottom of the table, we changed the dimension of hidden nodes. We changed hidden nodes in intervals of 200 dimensions, but the accuracy only fluctuated by \u00b10.1 point. The accuracy was highest when the dimension was 500. ", "cite_spans": [ { "start": 238, "end": 239, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In this section, we discuss the results of the models (Figure 3 ), parameter tuning (Table 1) , and examples (Table 2) .", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 63, "text": "(Figure 3", "ref_id": "FIGREF2" }, { "start": 84, "end": 93, "text": "(Table 1)", "ref_id": "TABREF0" }, { "start": 109, "end": 118, "text": "(Table 2)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "BoF vs. Distributed word representation. When the model was fixed to a linear classifier (logistic regression), the accuracies with Bagof-Features and distributed word representation were 70.8% and 79.5%, respectively. In contrast, using an SdA, the result for Bagof-Features was 76.9% and that of distributed word representation was 81.7%. Considering these outcomes, it can be seen that a 4.8 to 8.7 point increase in accuracy occurred when distributed word representation was used. Hence, the contribution of distributed word representation is the largest among the different experimental settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "5.1" }, { "text": "Figures 4 and 5 show the total training time obtained with 10 parallel processes by changing the numbers of hidden layers and hidden nodes. Figure 4 shows that the training time grew gradually as the number of hidden layers increased. 
In contrast, Figure 5 shows that the training time doubled when the number of hidden nodes was increased by 200. These results originate from the structure of SdAs. The nodes of the two adjacent hidden layers are fully connected. Hence, if the network has l layers and n dimensional nodes, the number of connections will be l \u00d7 n \u00d7 n = ln 2 . That indicates the relationship between the number of layers and connections is linear, but the number of connections grows exponentially with the number of nodes. Consequently, a small increase in the number of nodes results in a long training time. In contrast, as can be seen from Table 1 , the number of nodes has little or no effect on accuracy, whereas changing the number of layers helps to improve the performance.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 248, "end": 256, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 862, "end": 869, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Methods", "sec_num": "5.1" }, { "text": "Several examples are presented in Table 2 . The values P and N represent the prediction of positive and negative, respectively.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "Looking at the top of the correct answer, it can be seen that our model classified polarity robustly In the discourse of Ministry of Education, he criticized \"History textbooks should reflect the truth of history, and only that can make the younger to have the correct view of history so that it can prevent to playing the tragedy again\". P N N P N P I would like him not to yield to the pressure and to keep his declaration to the end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "against the data sparseness problem, such as with the coined word \" (Fujimorism)\" with which the BoF model is weak. Further, linear classifiers and the unstacked AE fail to handle double negative sentences such as at the bottom. Regardless of the difficulties, our model copes well with the situation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "Moving on to the wrong answers, it can be seen that our proposed model made human-like mistakes. For example, it mistook the top one containing the word \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "(thinking over, reflection, regret),\" but it is an ambiguous sentence that might be labeled as positive. Similarly, it failed to classify the middle sentence containing the phrase \" (prevent to replay the tragedy),\" which ends with \" (criticize).\" The annotations of the above two examples were divided into both positive and negative 7 . 
At the bottom, the proposed method did not successfully identify the polarity flipping with the phrase \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "(not yield to the pressure).\" Because the model with negation handling answered it correctly, there remains much room for improvement on how to deal with interactions between syntax and semantics (Tai et al., 2015; Socher et al., 2013) .", "cite_spans": [ { "start": 196, "end": 214, "text": "(Tai et al., 2015;", "ref_id": "BIBREF21" }, { "start": 215, "end": 235, "text": "Socher et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "5.3" }, { "text": "In this study, we presented a high performance Japanese sentiment classification method that uses distributed word representation learned from a largescale corpus with word2vec and a stacked denoising auto-encoder. The proposed method requires no dictionaries, complex models, or the engineering of numerous features. Consequently, it can easily be adapted to other tasks and domains without the need for advanced knowledge from experts. In addition, due to the nature of learning with vectors, our system does not depend on languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As our future works, we will try to create the distributed sentence representation using the Recurrent Neural Networks (Irsoy and Cardie, 2014) and Recursive Neural Networks (Socher et al., 2011; Socher et al., 2013) to capture global information.", "cite_spans": [ { "start": 119, "end": 143, "text": "(Irsoy and Cardie, 2014)", "ref_id": "BIBREF11" }, { "start": 174, "end": 195, "text": "(Socher et al., 2011;", "ref_id": "BIBREF19" }, { "start": 196, "end": 216, "text": "Socher et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We carried out a preliminary experiment using CBOW representation and found that skip-gram considerably outper-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used scikit-learn version 0.10.PACLIC 29", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://code.google.com/p/word2vec/ 5 MeCab version-0.996 IPADic version-2.7.0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We carried out 10-fold cross validation without using the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As explained in Section 4.2, we arbitrarily determined the polarity of a sentence when the annotations were split.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Greedy layer-wise training of deep networks", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Lamblin", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Popovici", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2007, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "153--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. 
Greedy layer-wise training of deep networks. In Advances in Neural Information Process- ing Systems, pages 153-160.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Marginalized denoising autoencoders for domain adaptation", "authors": [ { "first": "Minmin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhixiang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" } ], "year": 2012, "venue": "Proceedings of The 29th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "767--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of The 29th In- ternational Conference on Machine Learning, pages 767-774.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "793--801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with com- positional semantics as structural inference for sub- sentential sentiment analysis. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 793-801.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. In Proceed- ings of the 25th International Conference on Machine Learning, pages 160-167.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition", "authors": [ { "first": "E", "middle": [], "last": "George", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Dahl", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Deng", "suffix": "" }, { "first": "", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2011, "venue": "Audio, Speech, and Language Processing", "volume": "20", "issue": "", "pages": "30--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "George E Dahl, Dong Yu, Li Deng, and Alex Acero. 2011. Context-dependent pre-trained deep neural net- works for large-vocabulary speech recognition. 
In Au- dio, Speech, and Language Processing, IEEE Trans- actions, volume 20, pages 30-42.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep convolutional neural networks for sentiment analysis of short texts", "authors": [ { "first": "C\u0131cero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Ma\u0131ra", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Gatti", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 25th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "69--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "C\u0131cero Nogueira dos Santos and Ma\u0131ra Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of the 25th International Conference on Computational Linguistics, pages 69- 78.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "513--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceed- ings of the 28th International Conference on Machine Learning, pages 513-520.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reducing the dimensionality of data with neural networks", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Ruslan", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" } ], "year": 2006, "venue": "Science", "volume": "313", "issue": "5786", "pages": "504--507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural net- works. 
Science, 313(5786):504-507.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving neural networks by preventing coadaptation of feature detectors", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan R", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1207.0580" ] }, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. arXiv preprint arXiv:1207.0580.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to shift the polarity of words for sentiment classification", "authors": [ { "first": "Daisuke", "middle": [], "last": "Ikeda", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 3rd International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "296--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Ikeda, Hiroya Takamura, Lev-Arie Ratinov, and Manabu Okumura. 2008. Learning to shift the po- larity of words for sentiment classification. In Pro- ceedings of the 3rd International Joint Conference on Natural Language Processing, pages 296-303.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Opinion mining with deep recurrent neural networks", "authors": [ { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "720--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, pages 720-728.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. 
In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "International Conference on Learning Representations Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. In International Conference on Learning Representations Workshop.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Advances in Neural Information Processing Systems 26, pages 3111-3119.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dependency tree-based sentiment classification using crfs with hidden variables", "authors": [ { "first": "Tetsuji", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "786--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classifica- tion using crfs with hidden variables. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 786- 794.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Thumbs up?: sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lee", "middle": [], "last": "Lillian", "suffix": "" }, { "first": "Vaithyanathan", "middle": [], "last": "Shivakumar", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lee Lillian, and Vaithyanathan Shivakumar. 2002. Thumbs up?: sentiment classification using ma- chine learning techniques. 
In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 79-86.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empiricial Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of the Empiricial Methods in Natural Language Processing, pages 1532-1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Overview of opinion analysis pilot task at NTCIR-6", "authors": [ { "first": "Yohei", "middle": [], "last": "Seki", "suffix": "" }, { "first": "David", "middle": [ "Kirk" ], "last": "Evans", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NTCIR-6 Workshop Meeting", "volume": "", "issue": "", "pages": "265--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohei Seki, David Kirk Evans, Lun-Wei Ku, Hsin-Hsi Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at NTCIR-6. In Proceedings of NTCIR-6 Workshop Meeting, pages 265-278.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "151--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Jeffrey Pennington, Eric H. Huang, An- drew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, pages 151-161.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christo- pher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1631- 1642.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Tai", "suffix": "" }, { "first": "D", "middle": [ "Christopher" ], "last": "Socher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1556--1566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheng Kai Tai, Richard Socher, and D. Christopher Man- ning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing, pages 1556-1566.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1555--1565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1555-1565.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Extracting and composing robust features with denoising autoencoders", "authors": [ { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1096--1103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and com- posing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096-1103.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "authors": [ { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Lajoie", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2010, "venue": "The Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "3371--3408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representa- tions in a deep network with a local denoising cri- terion. The Journal of Machine Learning Research, 11:3371-3408.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Baselines and bigrams: Simple, good sentiment and topic classification", "authors": [ { "first": "Sida", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "90--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sida Wang and Christopher D. Manning. 2012. Base- lines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics, pages 90-94.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Image denoising and inpainting with deep neural networks", "authors": [ { "first": "Junyuan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Linli", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2012, "venue": "Advances in Neural Information Processing Systems 25", "volume": "", "issue": "", "pages": "341--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyuan Xie, Linli Xu, and Enhong Chen. 2012. Image denoising and inpainting with deep neural networks. 
In Advances in Neural Information Processing Sys- tems 25, pages 341-349.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "The sentence vector construction method.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "The learning process of a four layer stacked denoising auto-encoder.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Accuracy of each method with standard error.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Learning time with varying numbers of hidden layers.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "Learning time with varying dimensions of hidden nodes.", "type_str": "figure", "num": null }, "TABREF0": { "html": null, "content": "
Parameters | Accuracy
Noise rate 0% | 81.1%
Noise rate 10% | 81.5%
Noise rate 20% | 81.4%
Noise rate 30% | 80.9%
Noise rate 40% | 81.1%
Noise rate 50% | 81.6%
Number of hidden layers 1 | 80.6%
Number of hidden layers 2 | 80.4%
Number of hidden layers 3 | 81.1%
Number of hidden layers 4 | 81.6%
Number of hidden layers 5 | 81.4%
Number of hidden layers 6 | 81.1%
Number of hidden nodes 100 | 81.1%
Number of hidden nodes 300 | 81.2%
Number of hidden nodes 500 | 81.3%
Number of hidden nodes 700 | 81.2%
to all baselines, including the state-of-the-art Nakagawa et al. (2010)'s method, by up to 11.3 points. This result shows that the distributed word representation is sufficiently effective on the Japanese sentiment classification task, even though only a simple word embedding model, not a complex tuned representation learning model such as dos Santos et al. (2014)'s, is used.
", "text": "Accuracies of SdA models with different hyperparameters.", "num": null, "type_str": "table" }, "TABREF1": { "html": null, "content": "
Correct examples
", "text": "Correct and incorrect examples. BoF, LR, AE, Neg, SdA and Gold represent Bag-of-Features, LogRes, Auto-Encoder (one layer SdA without stacking), Negation Processed, Proposal and the Gold answer, respectively.", "num": null, "type_str": "table" } } } }