{ "paper_id": "C12-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:25:23.480129Z" }, "title": "A System For Multilingual Sentiment Learning On Large Data Sets", "authors": [ { "first": "Alex", "middle": [], "last": "C H En G 1 Oles Z H U Ly N", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": { "country": "Canada" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Classifying documents according to the sentiment they convey (whether positive or negative) is an important problem in computational linguistics. There has not been much work done in this area on general techniques that can be applied effectively to multiple languages, nor have very large data sets been used in empirical studies of sentiment classifiers. We present an empirical study of the effectiveness of several sentiment classification algorithms when applied to nine languages (including Germanic, Romance, and East Asian languages). The algorithms are implemented as part of a system that can be applied to multilingual data. We trained and tested the system on a data set that is substantially larger than that typically encountered in the literature. We also consider a generalization of the n-gram model and a variant that reduces memory consumption, and evaluate their effectiveness.", "pdf_parse": { "paper_id": "C12-1036", "_pdf_hash": "", "abstract": [ { "text": "Classifying documents according to the sentiment they convey (whether positive or negative) is an important problem in computational linguistics. There has not been much work done in this area on general techniques that can be applied effectively to multiple languages, nor have very large data sets been used in empirical studies of sentiment classifiers. We present an empirical study of the effectiveness of several sentiment classification algorithms when applied to nine languages (including Germanic, Romance, and East Asian languages). The algorithms are implemented as part of a system that can be applied to multilingual data. We trained and tested the system on a data set that is substantially larger than that typically encountered in the literature. We also consider a generalization of the n-gram model and a variant that reduces memory consumption, and evaluate their effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Classifying text documents according to the sentiment they convey is an important problem in computational linguistics. Sentiment reflects the emotional content in the document or the attitude of the speaker to the subject matter in the document, and can be positive or negative. For example, \"Thank you for the pleasant time we spent together\" conveys a positive sentiment, while \"I was devastated when you left\" conveys a negative sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sentiment classifiers that can process massive amounts of data quickly and accurately have applications in many segments of society. Marketing and brand management firms that are interested in how consumers generally feel about particular companies and their products can apply sentiment classifiers to social media documents containing relevant keywords. 
Government agencies that monitor electronic communications in order to identify and locate dissidents can use sentiment classifiers to find subversive messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, there has not been much work done in this area on general techniques that can be applied effectively to multiple languages, nor have very large data sets been used in empirical studies of sentiment classifiers. In this paper, we present an empirical study of two sentiment classification algorithms applied to nine languages (including Germanic, Romance, and East Asian languages). One of these algorithms is a naive Bayes classifier, and the other is an algorithm that boosts a naive Bayes classifier with a logistic regression classifier, using majority vote. These algorithms are implemented as part of a system that can be applied to multilingual data. Our implementation is fast, allowing a large number of documents to be classified in a short amount of time, with high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic sentiment classification of text documents requires that the documents be modeled in a way that is amenable to the algorithm being used. The typical approach is to model the documents using n-grams. In this paper, we consider a generalization of the n-gram model that is more suitable for languages with a flexible word order, and a variant of this generalized n-gram model that helps reduce memory consumption. These models are built into our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the empirical study, we trained and tested our system on a data set that is substantially larger than that typically encountered in the literature. To generate this data set, we wrote custom crawlers, and mined various web sites for reviews of products and services. The reviews were annotated by their authors with star ratings, which we used to automatically label the reviews as conveying either a positive or a negative sentiment. For each experiment in the study, we sampled disjoint training and testing sets uniformly at random from this large data set. Unlike the usual approach in the literature, the testing sets were much larger than the training sets (at least four times larger), and the experiments were repeated many times. We did this to ensure that our results were statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. In Section 2, we provide a brief overview of related work done in this area. In Section 3, we describe our large data set and how we acquired it. In Section 4, we discuss the generalization of the n-gram model and its variant. In Section 5, we describe the sentiment classification algorithms that we considered. In Section 6, we describe our experimental setup, and present the results. We then conclude and suggest future directions for this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pang and Lee (Pang and Lee, 2008) have written an excellent survey on the work done in the area of sentiment classification. Pang et al. (Pang et al., 2002) undertook an empirical study that resembles our own. 
They evaluated the effectiveness of several machine learning methods (naive Bayes (Domingos and Pazzani, 1997; Lewis, 1998) , maximum entropy (Csisz\u00e1r, 1996; Nigam et al., 1999) , and support vector machines (Cortes and Vapnik, 1995; Joachims, 1998) ) for sentiment classification of English-language documents. They generated their data set by mining movie reviews from the Internet Movie Database (IMDb) 1 and classifying them as positive or negative based on the author ratings expressed with stars or numerical values. They modeled the movie reviews as n-grams. Bespalov et al. (Bespalov et al., 2011) presented a method for classifying the sentiment of English-language documents modeled as high-order n-grams that are projected into a low-dimensional latent semantic space using a multi-layered \"deep\" neural network (Bengio et al., 2003; Collobert and Weston, 2008) . They evaluated the effectiveness of this method by comparing it to ones based on perceptrons (Rosenblatt, 1957) and support vector machines. Their data set was derived from reviews on Amazon 2 and TripAdvisor 3 , which were labeled as positive or negative based on their star ratings.", "cite_spans": [ { "start": 13, "end": 33, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF21" }, { "start": 125, "end": 156, "text": "Pang et al. (Pang et al., 2002)", "ref_id": "BIBREF22" }, { "start": 292, "end": 320, "text": "(Domingos and Pazzani, 1997;", "ref_id": "BIBREF9" }, { "start": 321, "end": 333, "text": "Lewis, 1998)", "ref_id": "BIBREF16" }, { "start": 352, "end": 367, "text": "(Csisz\u00e1r, 1996;", "ref_id": "BIBREF8" }, { "start": 368, "end": 387, "text": "Nigam et al., 1999)", "ref_id": "BIBREF19" }, { "start": 418, "end": 443, "text": "(Cortes and Vapnik, 1995;", "ref_id": "BIBREF7" }, { "start": 444, "end": 459, "text": "Joachims, 1998)", "ref_id": "BIBREF14" }, { "start": 776, "end": 815, "text": "Bespalov et al. (Bespalov et al., 2011)", "ref_id": "BIBREF3" }, { "start": 1033, "end": 1054, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF2" }, { "start": 1055, "end": 1082, "text": "Collobert and Weston, 2008)", "ref_id": "BIBREF5" }, { "start": 1178, "end": 1196, "text": "(Rosenblatt, 1957)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our large data set consists of reviews of products and services mined from various web sites. We wrote custom crawlers for each of these web sites. The domain for the reviews is quite diverse, including such things as books, hotels, restaurants, electronic equipment, and baby care products. We only looked at web sites where the reviews were accompanied by star ratings (which we normalized to a scale between 1-and 5-stars). This enabled us to automatically assign a sentiment to each review.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Large Data Set", "sec_num": "3" }, { "text": "We considered reviews accompanied by a rating of 1-or 2-stars as having a negative sentiment, and those accompanied by 5-stars as having a positive sentiment. For some of the web sites (e.g. Ciao! 4 ), along with the star ratings, the reviews were also accompanied by a binary (recommended or not-recommended) rating. 
In this case, we assigned a negative sentiment to reviews accompanied by a rating of 1-or 2-stars, and a not-recommended rating, and a positive sentiment to reviews accompanied by a rating of 5-stars, and a recommended rating.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Large Data Set", "sec_num": "3" }, { "text": "The approach of automatically assigning sentiment to reviews based on accompanying author ratings has precedents in the literature (Pang et al., 2002; Bespalov et al., 2011) . Although it is likely that there is some noise in the data with this kind of approach, an automated approach is nevertheless essential for generating a large data set. , and TripAdvisor. The Japanese data was mined from Amazon, Rakuten 10 , and Kakaku.com 11 . Across these web sites, these languages are not equally well-represented. As a consequence, for some of the languages (e.g. Japanese) we were able to mine substantially more data than for others (e.g. Portuguese) ( Table 1) .", "cite_spans": [ { "start": 131, "end": 150, "text": "(Pang et al., 2002;", "ref_id": "BIBREF22" }, { "start": 151, "end": 173, "text": "Bespalov et al., 2011)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 652, "end": 660, "text": "Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Large Data Set", "sec_num": "3" }, { "text": "A text document is a sequence of tokens. Tokens can simply be single characters within the text document. However, in sentiment classification, the tokens of interest are typically n-grams, which are n-length sequences of contiguous whitespace-separated words. For example, if a document is the sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "4" }, { "text": "( 1 , 2 , . . . , N \u22121 , N ), where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "4" }, { "text": "In Chinese and Japanese, words are not delimited by whitespace in writing. For the results we present in this paper, we used third-party libraries (Taketa, 2012; Lin, 2012) to segment Chinese and Japanese documents into words. These libraries are based on machine learning methods, and do not require large dictionary files. Nie et al. (Nie et al., 2000) considered tokenizing Chinese documents as n-grams. We also experimented with this approach for both Chinese and Japanese documents (i.e. we treated single characters as tokens). Although we do not present them here, the results we achieved in these experiments were comparable to (though not quite as good as) the results we achieved with the third-party libraries. ", "cite_spans": [ { "start": 147, "end": 161, "text": "(Taketa, 2012;", "ref_id": "BIBREF24" }, { "start": 162, "end": 172, "text": "Lin, 2012)", "ref_id": "BIBREF17" }, { "start": 336, "end": 354, "text": "(Nie et al., 2000)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "4" }, { "text": "We can generalize the n-gram model by introducing a window size k \u2265 n. To iterate over all the tokens in a sequence, we first consider every window in the sequence (that is, every contiguous subsequence of length k). The tokens are all the (not necessarily contiguous) subsequences of length n within each window. When k = n, this is just the standard n-gram model. Guthrie et al. (Guthrie et al., 2006) refer to this as the skip-gram model. This model is suitable for languages with a flexible word order (e.g. German). 
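As a concrete illustration of this token iteration, the following minimal sketch (our own illustrative Java, with hypothetical names, not the authors' implementation) emits the generalized 2-gram tokens of a word sequence for a chosen window size k; with k = 2 it reduces to the standard 2-gram model.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the authors' code): enumerate generalized
 * 2-gram tokens by sliding a window of size k over the word sequence
 * and emitting every (not necessarily contiguous) pair of words inside
 * each window. With k = 2 this is the standard 2-gram model.
 */
public class GeneralizedBigrams {

    public static List<String> tokens(String[] words, int k) {
        List<String> out = new ArrayList<>();
        for (int start = 0; start + k <= words.length; start++) {
            // All length-2 subsequences within the current window.
            for (int i = start; i < start + k; i++) {
                for (int j = i + 1; j < start + k; j++) {
                    out.add(words[i] + " " + words[j]);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String[] words = { "the", "hotel", "was", "surprisingly", "good" };
        System.out.println(tokens(words, 3)); // generalized 2-grams, window size 3
    }
}
```

Under this literal window-by-window iteration, a pair of nearby words is emitted once for every window that contains it, so more tokens are processed per document than under the standard 2-gram model.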
With a flexible word order, the co-occurrence of several specific words in proximity may be indicative of a particular sentiment irrespective of any intermediary words. In the standard n-gram model, the relevant words can only be captured in a token along with the intermediary words. Due to the potential variety in the intermediary words, a single document may contain many tokens that are different, but that all correspond to the co-occurrence of these relevant words. In contrast, the generalized n-gram model enables these relevant words to be captured in a single token. This helps to mitigate against noise. However, for a given document, the generalized n-gram model requires that more tokens are processed than does the standard n-gram model.", "cite_spans": [ { "start": 381, "end": 403, "text": "(Guthrie et al., 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Generalized N-gram", "sec_num": "4.1" }, { "text": "The hitting n-gram model is a variation on the generalized n-gram model. In the hitting ngram model, only the windows that are centered around (i.e. \"hit\") words from a predefined lexicon are considered. We can specify where inside a window we would like the hit to occur by giving the window size in terms of the number of words preceding a word from the lexicon and the number of words following that word from the lexicon. In contrast to the generalized n-gram model, the hitting n-gram model can drastically reduce the number of tokens that need to be processed, depending on the lexicon that is chosen. For this project, we processed our large data set using Pearson's chi-squared test to find the words that are most indicative of positive and negative sentiment to build a lexicon for each language. We discuss this in more detail in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hitting N-gram", "sec_num": "4.2" }, { "text": "For our experiments, we modeled documents using the 2-gram model, the generalized 2gram model with window size 3, the generalized 2-gram model with window size 5, and the hitting 2-gram model with (preceding) window size 1. For each of these models, we trained a naive Bayes classifier and a logistic regression classifier. During testing, we considered the results from the naive Bayes classifier, and the naive Bayes classifier boosted with the logistic regression classifier using majority vote. We repeated this for each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "5" }, { "text": "Yang and Pedersen (Yang and Pedersen, 1997) Table 2, Table 3, and Table 4 ). We used these words as the lexicon for the hitting 2-gram model. 
Following Yang and Pedersen, we computed, for each word w and each sentiment s, the goodness of fit measure:", "cite_spans": [ { "start": 18, "end": 43, "text": "(Yang and Pedersen, 1997)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 44, "end": 73, "text": "Table 2, Table 3, and Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Hitting 2-gram Model", "sec_num": "5.1" }, { "text": "\u03c7 2 (w, s) = N \u00d7 (AD \u2212 C B) 2 (A + C) \u00d7 (B + D) \u00d7 (A + B) \u00d7 (C + D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hitting 2-gram Model", "sec_num": "5.1" }, { "text": "where A is the number of documents with sentiment s in which w occurs, B is the number of documents without sentiment s in which w occurs, C is the number of documents with sentiment s in which w does not occur, D is the number of documents without sentiment s in which w does not occur, and N is the total number of documents. We did this once over our entire data set, and took the words that scored highest according to this measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hitting 2-gram Model", "sec_num": "5.1" }, { "text": "In our experiments, we set the window size to be 1 preceding word. We also tried other window sizes, but they did not produce substantially better results. We do not report these other results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hitting 2-gram Model", "sec_num": "5.1" }, { "text": "The technique we used to build the lexicon can be applied to other kinds of tokens. For example, Figure 1 is a word cloud of the English 2-grams most indicative of positive sentiment in our data set. We generated the word cloud using Wordle (Feinberg, 2012) .", "cite_spans": [ { "start": 241, "end": 257, "text": "(Feinberg, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 97, "end": 105, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hitting 2-gram Model", "sec_num": "5.1" }, { "text": "For the 2-gram model, we used the training data to compute for each 2-gram, ( , ), the probability that it belongs to a document with a positive sentiment, pos ( , ), and the probability that it belongs to a document with a negative sentiment, ne g ( , ). Given a document ( 1 , 2 , . . . , N \u22121 , N ) 12 to classify, we apply a decision rule based on the ratio", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "\u220f i pos ( i , i\u22121 ) ne g ( i , i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "computed using the probabilities determined from our training data. If this ratio is greater than 1, then we classify the document as positive. 
Otherwise, we classify the document as negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "The following derivation show what this ratio means.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u220f i pos ( i , i\u22121 ) ne g ( i , i\u22121 ) = \u220f i pos ( i | i\u22121 ) ne g ( i | i\u22121 ) \u00d7 pos ( i\u22121 ) ne g ( i\u22121 ) (1) = \u220f i pos ( i | i\u22121 ) ne g ( i | i\u22121 ) \u00d7 \u220f i pos ( i\u22121 ) ne g ( i\u22121 ) (2) = \u220f i pos ( i | i\u22121 ) ne g ( i | i\u22121 ) \u00d7 \u220f i pos ( i ) ne g ( i )", "eq_num": "(3)" } ], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "Figure 2: Sum over the range to get the count for W y .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "Line (1) follows from the definition of conditional probability. Line (2) follows from commutativity and associativity of multiplication. Line (3) follows from the fact that the missing term", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "pos ( N ) neg ( N ) = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "since the occurrence of N , the special symbol , in a document with a positive sentiment is equally likely to its occurrence in a document with a negative sentiment. The numerator in the expression in line (3) is the probability that the given document has a positive sentiment according to both the 2-gram model and the 1-gram model. The denominator is the probability that the document has a negative sentiment according to both models. Our decision rule classifies the document according to which of these two probabilities is the greater. Notice that our confidence that the sentiment of the document was classified correctly can be increased using a threshold parameter. For example, if the ratio between the numerator and the denominator is very high, then we have high confidence that the document has a positive sentiment. At the cost of leaving some documents unclassified, the threshold parameter can be used to achieve arbitrarily high classification accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "Our implementation allows these values to be computed quickly. We represent each distinct word that we encounter in the training data with a nonnegative 32-bit integer, and use a hash map to store this representation. We represent each 2-gram that we encounter in the training data by packing the two integers corresponding to the two words in the 2gram in a 64-bit integer. After processing the training data, we sort all the 64-bit integers representing the 2-grams, and store the sorted list in an array. We use the index of each 2-gram in this array as an index into two other arrays: one representing the number of occurrences of the 2-grams in positive documents, and the other representing the number of occurrences of the 2-grams in negative documents. This approach gives us a minimal perfect hash function from 2-grams to their counts in positive and negative documents. 
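A minimal sketch of this count store (our own illustrative Java with hypothetical names, not the authors' code) follows; the word-id assignment, 64-bit packing, and sorted-array indexing mirror the description above.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch (hypothetical names, not the authors' code) of the
 * count store described above: each word gets a nonnegative 32-bit id,
 * each 2-gram is packed into a 64-bit key, the keys are sorted into an
 * array, and a key's position indexes two parallel count arrays.
 */
public class BigramCountStore {
    private final Map<String, Integer> wordIds = new HashMap<>();
    private long[] sortedKeys;  // sorted packed 2-grams (assumed distinct)
    private int[] posCounts;    // occurrences in positive training documents
    private int[] negCounts;    // occurrences in negative training documents

    private int wordId(String w) {
        // Assign 0, 1, 2, ... in order of first appearance.
        return wordIds.computeIfAbsent(w, x -> wordIds.size());
    }

    /** Pack the two 32-bit word ids of a 2-gram into one 64-bit key. */
    public long pack(String w1, String w2) {
        return ((long) wordId(w1) << 32) | (wordId(w2) & 0xFFFFFFFFL);
    }

    /** Called once after a first pass has collected all distinct 2-gram keys. */
    public void freeze(long[] distinctKeys) {
        sortedKeys = distinctKeys.clone();
        Arrays.sort(sortedKeys);
        posCounts = new int[sortedKeys.length];
        negCounts = new int[sortedKeys.length];
    }

    /** Binary search on the sorted keys; a negative result means an unseen 2-gram. */
    public int indexOf(String w1, String w2) {
        return Arrays.binarySearch(sortedKeys, pack(w1, w2));
    }

    /** Record one occurrence of the 2-gram in a positive or negative document. */
    public void count(String w1, String w2, boolean positive) {
        int i = indexOf(w1, w2);
        if (i >= 0) {
            if (positive) posCounts[i]++; else negCounts[i]++;
        }
    }
}
```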
Looking up a count for a given 2-gram is fast: binary search on the sorted array gives us the index to the counts for occurrences in positive and negative documents. Our minimal use of pointers also keeps memory consumption low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "One might be interested in computing the probabilities for a document under the 1-gram and 2-gram models. Our implementation allows this to be computed quickly. Given a word, one can perform binary search on the sorted list of 2-grams to find the first occurrence of a 2-gram whose first word is the given word. After this 2-gram is found, one needs only to sum up all the values in the list up to the last occurrence of a 2-gram whose first word is the given word ( Figure 2 ), and divide by the total sum of all the values in the list (which can be computed once, when the list is built). This is the probability for a 1-gram. The probability for a 2-gram can be evaluated directly from this using Bayes' rule.", "cite_spans": [], "ref_spans": [ { "start": 467, "end": 475, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "The approach we took for the generalized 2-gram models, and the hitting 2-gram model is the same. However, the derivation for the value in our decision rule does not work out exactly, and only gives us a rough approximation of the probabilities. The results of the experiments reflect this fact: although classification speed is very fast, the accuracies are somewhat less impressive than what one might expect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Naive Bayes Classifier", "sec_num": "5.2" }, { "text": "We used a logistic regression classifier provided by the LIBLINEAR software (Fan et al., 2008) . For logistic regression, it is necessary to represent documents as feature vectors. We tried three representations. In all three cases, we had a feature for each token encountered in the training data. For the first representation, the value we used for each feature was the frequency of occurrence of the corresponding token, in the document. We normalized each feature to fall in the range [0, 1] (details in the following paragraph). The second representation was like the first, except we normalized the whole vector to the unit vector, instead of normalizing per feature. For the third representation, the value we used for each feature was 1 or 0, depending on whether the corresponding token was present in the document or not. We normalized the whole vector to the unit vector. All three approaches produced similar results. We only report the results for the first representation.", "cite_spans": [ { "start": 76, "end": 94, "text": "(Fan et al., 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression Classifier", "sec_num": "5.3" }, { "text": "The normalization that we used for the first representation is the following. Suppose is the total set of training documents, and is the total set of tokens encountered across all documents in . For each document d \u2208 and each token t \u2208 , let f r eq d (t) be the frequency of occurrence of token t in document d (e.g. if d contains 10 tokens and t occurs 5 times in d, then f r eq d (t) = 5/10 = 0.5). 
The normalized value f r eq", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression Classifier", "sec_num": "5.3" }, { "text": "d (t) of f r eq d (t) is f r eq d (t) = f r eq d (t) \u2212 min d \u2208 ( f r eq d (t)) max d \u2208 ( f r eq d (t)) \u2212 min d \u2208 ( f r eq d (t)) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression Classifier", "sec_num": "5.3" }, { "text": "Notice that if d is a document from the testing set, then f r eq d (t) can fall outside the range [0, 1] . This is the intended behavior (Fan et al., 2008; Hsu et al., 2010) .", "cite_spans": [ { "start": 98, "end": 101, "text": "[0,", "ref_id": null }, { "start": 102, "end": 104, "text": "1]", "ref_id": null }, { "start": 137, "end": 155, "text": "(Fan et al., 2008;", "ref_id": "BIBREF10" }, { "start": 156, "end": 173, "text": "Hsu et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression Classifier", "sec_num": "5.3" }, { "text": "For our experiments, we boosted the naive Bayes classifier with the logistic regression classifier using majority vote. If both classifiers agreed, then we returned the value they agreed on. Otherwise, we returned no answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression Classifier", "sec_num": "5.3" }, { "text": "For the empirical study, we evaluated two algorithms: a naive Bayes classifier, and a naive Bayes classifier boosted with a logistic regression classifier, using majority vote. In evaluating each algorithm, we considered four ways of modeling text documents: the 2-gram model (2g), the generalized 2-gram model with window size 3 (2g-w3), the generalized 2-gram model with window size 5 (2g-w5), and the hitting 2-gram model with (preceding) window size 1 (2g-h). We repeated this for nine languages: French (fr), Spanish (es), Italian (it), Portuguese (pt), Traditional and Simplified Chinese (zh), Japanese (ja), German (de), English (en), and Dutch (nl). In total, this constitutes 72 different experiments. We ran each experiment 10 times to validate the results. For each of the ten runs and each language, we sampled disjoint training and testing sets uniformly at random from the large data set. We ensured that the testing set was always at least four times larger than the training set. For each way of modeling text documents, we trained each algorithm using the training set, and tested it using the testing set. In Table 8 , we report the mean and standard deviation, over ten runs, for the number of positive and negative documents in the training and testing sets for each language.", "cite_spans": [], "ref_spans": [ { "start": 1127, "end": 1135, "text": "Table 8", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "We performed our experiments using commodity hardware consisting of a quad-core Core 2 (Q9650) processor running at 3.0GHz, 16GB DDR2 memory running at 800MHz, and a 64-bit operating system with Linux kernel version 3.0. Our sentiment classification system was implemented using Java, and we ran it using Oracle Java SE Runtime Environment (build 1.6.0 30-b12). Our system makes use of several third-party libraries. 
The versions of these that we used are Java LIBLINEAR version 1.8 (Waldvogel, 2012) , Apache Lucene Core version 3.6.0 (The Apache Software Foundation, 2012), cMeCab-Java version 2.0.1 (Taketa, 2012) , and IK Analyzer 2012 upgrade 5 (Lin, 2012) .", "cite_spans": [ { "start": 483, "end": 500, "text": "(Waldvogel, 2012)", "ref_id": "BIBREF25" }, { "start": 602, "end": 616, "text": "(Taketa, 2012)", "ref_id": "BIBREF24" }, { "start": 650, "end": 661, "text": "(Lin, 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "Our multilingual sentiment classification system achieved very high accuracy (Table 6 and Table 7) , without resorting to ad hoc NLP techniques, like parts-of-speech tagging and regular expression matching. It was also very fast (Table 5) , because it did not rely on these techniques, which tend to be slow. The no answer rate for the naive Bayes classifier boosted with the logistic regression classifier is the rate at which documents were left unclassified because the two classifiers did not agree. Despite some documents being left unclassified, the two classifiers boosted together achieved a significantly higher accuracy than the naive Bayes classifier alone.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 99, "text": "(Table 6 and Table 7)", "ref_id": "TABREF9" }, { "start": 230, "end": 239, "text": "(Table 5)", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Recall from 5.2 that, in our implementation, the probability ratio in the decision rule of the naive Bayes classifier is only a rough approximation of the true value for the generalized 2-gram model and the hitting 2-gram model. The consequence of this is that we do not see a substantial improvement in classification accuracy for these models (Figure 3 ).", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 354, "text": "(Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "The less impressive performance overall for the Portuguese language is due to the quality of the data. For Portuguese, we had fewer documents to train on (Table 8) , and the testing documents were, on average, quite short in length ( Table 10) . Notice that while we also had fewer training documents for the Dutch language, the average testing document length for Dutch is substantially greater than that for Portuguese. This is why the classification accuracy for Dutch did not suffer as much as it did for Portuguese. On the other hand, while the average testing document length for Chinese and Japanese is very short, we trained the algorithms with far more documents for these languages, and so the classification accuracies did not suffer. Thus, we can see a tradeoff between the amount of training data and the average length of the documents being classified. 
In our experiments with data from Twitter 13 (which we do not report in this paper), we found the same tradeoff: more training data is needed to achieve higher classification accuracies with documents that are so short in length.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 163, "text": "(Table 8)", "ref_id": "TABREF12" }, { "start": 234, "end": 243, "text": "Table 10)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "As expected, more unique tokens need to be processed during training as the window size for the generalized 2-gram model is increased (Table 9 and Figure 5 ). These are the unique tokens that are used to compute the probabilities, and construct the data structure discussed in 5.2. When the number of unique tokens encountered during training is greater, the amount of memory that is consumed during classification is also greater. The classification speed also decreases as the number of unique tokens increases. The hitting 2-gram model drastically reduces the number of unique tokens, and, unsurprisingly, has a faster classification speed than the other models. The hitting 2-gram model also achieves greater accuracy than the other models when the amount of training data is less (i.e. for Portuguese and Dutch). In Figure 6 , we see that when we normalize for the number of unique tokens, the hitting 2-gram model achieves far greater accuracy than the other models. Thus, for faster classification speed, reduced memory consumption, and lower quality training data, the hitting 2-gram is the way to go. Our sentiment classification system was aggressively optimized for high speed and reduced memory consumption. The data for each language was aggregated in one flat file for ease of processing. Running the full set of ten runs of all experiments took less than an hour. Loading everything into memory consumed less than 3.5GB of the heap, which is unprecedented. When we ran the same set of experiments using LingPipe (Alias-i, 2012) for only the Spanish language and using only the 2-gram model, we found that more than 12GB of heap memory were required to even finish training.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 821, "end": 829, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Our results show that a simple and straightforward statistical approach with a large amount of training data rivals the many complex, ad hoc NLP approaches that are optimized for small amounts of training data. Important advantages of our approach are increased training and classification speeds, and reduced memory consumption. These are practical concerns that are not generally adequately addressed in the literature, particularly for the NLP approaches, which place a great emphasis on classification accuracy at the cost of speed and memory consumption. Our sentiment classification system achieves a good balance between these concerns. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "In this paper, we presented an empirical study of two sentiment classification algorithms applied to nine languages (including Germanic, Romance, and East Asian languages). One of these algorithms is a naive Bayes classifier, and the other is an algorithm that boosts a naive Bayes classifier with a logistic regression classifier, using majority vote. 
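The vote itself reduces to a few lines; the sketch below (illustrative only, with hypothetical names) returns the shared label when the two base classifiers agree and abstains otherwise, which is what gives rise to the no answer rate reported in Section 6.2.

```java
/**
 * Illustrative sketch of the two-classifier majority vote: agree -> return
 * the shared label, disagree -> no answer. Labels: +1 positive, -1 negative,
 * 0 no answer (document left unclassified).
 */
public final class MajorityVote {
    public static int combine(int naiveBayesLabel, int logisticRegressionLabel) {
        return (naiveBayesLabel == logisticRegressionLabel) ? naiveBayesLabel : 0;
    }
}
```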
We implemented these algorithms as part of a system for classifying the sentiment of multilingual text data. Our implementation is fast, and has high classification accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": null }, { "text": "We also considered a generalization of the n-gram model for representing text data, and a variant of this generalization that helps reduce memory consumption. Along with the standard n-gram model, these two models are built into our system. We evaluated all of these models in the empirical study that we presented in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": null }, { "text": "For the empirical study, we trained and tested our system on a data set that is substantially larger than that typically encountered in the literature. We generated this data set by crawling and mining various web sites for reviews of products and services. For each experiment in the study, we sampled disjoint training and testing sets uniformly at random from this large data set. Unlike the usual approach in the literature, the testing sets were much larger than the training sets (at least four times larger), and the experiments were repeated many times. We did this to ensure that our results were statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": null }, { "text": "As we have shown in this paper, statistical methods applied to large amounts of data are effective for the sentiment classification problem. It would be interesting to investigate the application of this approach to the problem of relevance (i.e. determining whether a document conveys any sentiment at all). Previous efforts have been overly complicated (Pang and Lee, 2004) . One approach that we are considering is to take a list of n-grams that are most indicative of sentiment (determined using Pearson's chi-squared test, as discussed in 5.1), and computing the mean and standard deviation for the frequency of occurrence of these words in the training documents. During testing, the frequency of occurrence for these words in the test documents can be compared to the mean we computed. If the frequency of occurrence is not less than one standard deviation below the mean, then a document can be deemed relevant.", "cite_spans": [ { "start": 355, "end": 375, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": null }, { "text": "We are also interested in commercializing our sentiment classification system by selling it to social media analytics firms, such as Sysomos 14 and BrandWatch 15 . The existing players in the sentiment classification field (e.g. Saplo 16 , Lexalytics 17 , OpenAmplify 18 , and SNTMNT 19 ) are not transparent about what they are doing, and it is not clear how robust their offerings are. 
If commercialization fails, then we intend to make our sentiment classification system freely available under the GPL 20 , since one of our great passions is educating the public on the power of machine learning methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": null }, { "text": "http://reviews.imdb.com/Reviews/ 2 http://www.amazon.com 3 http://www.tripadvisor.com 4 http://www.ciao.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": ", . . . , N \u22121 are whitespace-separated words, and 1 and N are the special symbol , signifying the beginning or the end of the document, then the 2-grams are( 1 , 2 ), ( 2 , 3 ), ( 3 , 4 ), . . . , ( N \u22122 , N \u22121 ), ( N \u22121 , N ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.walmart.com.br 6 http://www.opinaki.com.br 7 http://www.buscape.com.br 8 http://www.bol.com 9 http://www.dangdang.com 10 http://www.rakuten.co.jp 11 http://www.kakaku.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2 , . . . , N \u22121 are whitespace-separated words, and 1 and N are the special symbol , signifying the beginning or the end of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.twitter.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sysomos.com 15 http://www.brandwatch.com 16 http://saplo.com 17 http://www.lexalytics.com 18 http://www.openamplify.com 19 http://www.sntmnt.com 20 http://www.gnu.org/copyleft/gpl.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Apache Lucene Core version 3", "authors": [], "year": 2012, "venue": "The Apache Software Foundation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The Apache Software Foundation (2012). Apache Lucene Core version 3.6.0. http: //lucene.apache.org/core.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Janvin", "middle": [], "last": "", "suffix": "" }, { "first": "C", "middle": [], "last": "", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bengio, Y., Ducharme, R., Vincent, P., and Janvin, C. (2003). A neural probabilistic language model. 
The Journal of Machine Learning Research, 3(Feb):1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sentiment classification based on supervised latent n-gram analysis", "authors": [ { "first": "D", "middle": [], "last": "Bespalov", "suffix": "" }, { "first": "B", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Y", "middle": [], "last": "Qi", "suffix": "" }, { "first": "A", "middle": [], "last": "Shokoufandeh", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM '11)", "volume": "", "issue": "", "pages": "375--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bespalov, D., Bai, B., Qi, Y., and Shokoufandeh, A. (2011). Sentiment classification based on supervised latent n-gram analysis. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM '11), pages 375-382.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "LIBSVM: A library for support vector machines", "authors": [ { "first": "C.-C", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "2", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27:1-27:27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning (ICML '08)", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collobert, R. and Weston, J. (2008). A unified architecture for natural language process- ing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning (ICML '08), pages 160-167.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Introduction to Algorithms", "authors": [ { "first": "T", "middle": [ "H" ], "last": "Cormen", "suffix": "" }, { "first": "C", "middle": [ "E" ], "last": "Leiserson", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Rivest", "suffix": "" }, { "first": "C", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2001). Introduction to Algorithms. MIT Press and McGraw-Hill, 2nd edition.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Support-vector networks", "authors": [ { "first": "C", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine Learning", "volume": "20", "issue": "3", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cortes, C. and Vapnik, V. (1995). Support-vector networks. 
Machine Learning, 20(3): 273-297.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Maxent, mathematics, and information theory", "authors": [ { "first": "I", "middle": [], "last": "Csisz\u00e1r", "suffix": "" } ], "year": 1996, "venue": "Maximum Entropy and Bayesian Methods: Proceedings of the 15th International Workshop on Maximum Entropy and Bayesian Methods", "volume": "", "issue": "", "pages": "35--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Csisz\u00e1r, I. (1996). Maxent, mathematics, and information theory. In Hanson, K. M. and Silver, R. N., editors, Maximum Entropy and Bayesian Methods: Proceedings of the 15th International Workshop on Maximum Entropy and Bayesian Methods, pages 35-50.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the optimality of the simple bayesian classifier under zero-one loss", "authors": [ { "first": "P", "middle": [], "last": "Domingos", "suffix": "" }, { "first": "M", "middle": [], "last": "Pazzani", "suffix": "" } ], "year": 1997, "venue": "Machine Learning -Special issue on learning with probabilistic representations", "volume": "29", "issue": "2-3", "pages": "103--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Domingos, P. and Pazzani, M. (1997). On the optimality of the simple bayesian classi- fier under zero-one loss. Machine Learning -Special issue on learning with probabilistic representations, 29(2-3):103-130.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "LIBLIN-EAR: A library for large linear classification", "authors": [ { "first": "R.-E", "middle": [], "last": "Fan", "suffix": "" }, { "first": "K.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "X.-R", "middle": [], "last": "Wang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "The Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "1871--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., and Lin, C.-J. (2008). LIBLIN- EAR: A library for large linear classification. The Journal of Machine Learning Re- search, 9(Aug):1871-1874. Software available at http://www.csie.ntu.edu.tw/~cjlin/ liblinear.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A closer look at skip-gram modelling", "authors": [ { "first": "D", "middle": [], "last": "Guthrie", "suffix": "" }, { "first": "B", "middle": [], "last": "Allison", "suffix": "" }, { "first": "W", "middle": [], "last": "Liu", "suffix": "" }, { "first": "L", "middle": [], "last": "Guthrie", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC -2006)", "volume": "", "issue": "", "pages": "1222--1225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guthrie, D., Allison, B., Liu, W., Guthrie, L., and Wilks, Y. (2006). A closer look at skip-gram modelling. 
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC -2006), pages 1222-1225.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A practical guide to support vector classication", "authors": [ { "first": "C.-W", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "C.-C", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, C.-W., Chang, C.-C., and Lin, C.-J. (2010). A practical guide to support vector classication. http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text categorization with suport vector machines: Learning with many relevant features", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 10th European Conference on Machine Learning (ECML '98)", "volume": "", "issue": "", "pages": "137--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachims, T. (1998). Text categorization with suport vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning (ECML '98), pages 137-142.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Making large-scale support vector machine learning practical", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in kernel methods", "volume": "", "issue": "", "pages": "169--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachims, T. (1999). Making large-scale support vector machine learning practical. In Sch\u00f6lkopf, B. and Smola, A., editors, Advances in kernel methods, pages 169-184. MIT Press Cambridge, MA, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Naive (bayes) at forty: The independence assumption in information retrieval", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lewis", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 10th European Conference on Machine Learning (ECML '98)", "volume": "", "issue": "", "pages": "4--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis, D. D. (1998). Naive (bayes) at forty: The independence assumption in information retrieval. In Proceedings of the 10th European Conference on Machine Learning (ECML '98), pages 4-15.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "IK Analyzer 2012 upgrade 5", "authors": [ { "first": "L", "middle": [ "Y" ], "last": "Lin", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, L. Y. (2012). IK Analyzer 2012 upgrade 5. 
http://code.google.com/p/ ik-analyzer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "On the use of words and n-grams for chinese information retrieval", "authors": [ { "first": "J.-Y", "middle": [], "last": "Nie", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Fifth International Workshop on Information Retrieval with Asian Languages (IRAL '00)", "volume": "", "issue": "", "pages": "141--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nie, J.-Y., Gao, J., Zhang, J., and Zhou, M. (2000). On the use of words and n-grams for chinese information retrieval. In Proceedings of the Fifth International Workshop on Information Retrieval with Asian Languages (IRAL '00), pages 141-148.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using maximum entropy for text classification", "authors": [ { "first": "K", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 1999, "venue": "Workshop on Machine Learning for Information Filtering (IJCAI '99)", "volume": "", "issue": "", "pages": "61--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nigam, K., Lafferty, J., and McCallum, A. (1999). Using maximum entropy for text classification. In Workshop on Machine Learning for Information Filtering (IJCAI '99), pages 61-67.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL '04)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B. and Lee, L. (2004). A sentimental education: sentiment analysis using subjec- tivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL '04).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "", "volume": "2", "issue": "", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B. and Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Thumbs up?: sentiment classification using machine learning techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing (EMNLP '02)", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B., Lee, L., and Vaithyanathan, S. (2002). 
Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing (EMNLP '02), pages 79-86.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The perceptron-a perceiving and recognizing automaton", "authors": [ { "first": "F", "middle": [], "last": "Rosenblatt", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosenblatt, F. (1957). The perceptron-a perceiving and recognizing automaton. Technical Report 85-460-1, Cornell Aeronautical Laboratory.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "cMeCab-Java version 2", "authors": [ { "first": "K", "middle": [], "last": "Taketa", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taketa, K. (2012). cMeCab-Java version 2.0.1. http://code.google.com/p/ cmecab-java.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Java LIBLINEAR version 1", "authors": [ { "first": "B", "middle": [], "last": "Waldvogel", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waldvogel, B. (2012). Java LIBLINEAR version 1.8. http://www.bwaldvogel.de/ liblinear-java/.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A comparative study on feature selection in text categorization", "authors": [ { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [ "O" ], "last": "Pedersen", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fourteenth International Conference on Machine Learning (ICML '97)", "volume": "", "issue": "", "pages": "412--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Y. and Pedersen, J. O. (1997). A comparative study on feature selection in text categorization. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML '97), pages 412-420.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "English 2-grams most indicative of positive sentiment.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Mean accuracy (in percent, over ten runs) of naive Bayes classifier for each model and each language.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Mean classification speed (in documents per second per CPU core, over ten runs) of naive Bayes classifier boosted with logistic regression classifier for each model and each language.", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Mean number of unique tokens after training (over ten runs) for each model and each language.", "uris": null, "type_str": "figure" }, "FIGREF4": { "num": null, "text": "Mean accuracy per million unique tokens after training (in percent, over ten runs) of naive Bayes classifier for each model and each language.", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Number of negative and positive documents for each language in our data set", "content": "
and TripAdvisor. The Portuguese data was mined from Walmart, Opinaki, Buscap\u00e9,
and TripAdvisor. The Dutch data was mined from bol.com, Ciao!, and TripAdvisor. The
Chinese data was mined from Amazon, dangdang.com
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "\u4e0d\u9519, \u5f88\u597d, \u5f88\u559c\u6b22, \u503c\u5f97, \u559c\u6b22, \u5f88\u4e0d\u9519, \u5f88\u6709, \u975e\u5e38\u597d, \u5b9e\u7528, \u9002\u5408, \u5b69\u5b50, \u5b66\u4e60, \u6ee1\u610f, \u5f88, \u597d\u4e66, \u8d5e, \u975e\u5e38, \u633a\u597d, \u5f88\u5feb, \u5e2e\u52a9, \u5b9e\u60e0, \u8212\u670d, \u4e86\u89e3, \u5f88\u6ee1, \u5475\u5475, \u65b9\u4fbf, \u672c\u4e66, \u751f\u6d3b, \u54c8\u54c8, \u513f\u5b50, \u8fd9\u672c, \u8001\u5e08, \u7231, \u63a8\u8350, \u5f88\u6f02\u4eae, \u53d7\u76ca\u532a \u6d45, \u7cbe\u7f8e, \u5f88\u7cbe, \u53ef\u7231, \u5168\u9762, \u5212\u7b97, \u7ecf\u5178, \u8be6\u7ec6, \u611f\u52a8, \u8d85\u503c, \u5f88\u68d2, \u503c\u5f97\u4e00 \u770b, \u4e30\u5bcc, \u529b, \u6162\u6162, \u6f02\u4eae, \u4e0d\u8fc7, \u652f\u6301, \u5f88\u7ed9, \u4e00\u672c, \u4e16\u754c, \u6709\u8da3, \u5979, \u62e5\u6709, \u5408\u9002, \u77e5\u8bc6, \u9605\u8bfb, \u597d\u7528, \u633a, \u6536\u85cf, \u611f\u8c22, \u5e78\u798f, \u66f4\u597d, \u7231\u4e0d\u91ca\u624b, \u5c0f\u5de7, \u6700 \u559c\u6b22, \u6210\u957f, \u5f3a\u70c8\u63a8\u8350, \u901a\u4fd7\u6613\u61c2, \u6bcf\u5929, \u597d\u770b, \u63a8\u8350\u7ed9, \u5386\u53f2, \u5c31\u5230, \u633a\u4e0d\u9519, \u68d2, \u5f00\u5fc3, \u5e38\u503c, \u4e00\u53e3\u6c14, \u601d\u8003, \u5bf9\u4e8e, \u670b\u53cb, \u5feb\u4e50, \u7269\u8d85\u6240\u503c \u5931\u671b, \u6ca1\u6709, \u9000\u8d27, \u6839\u672c, \u4e0d\u597d, \u592a, \u5dee, \u4e0d\u662f, \u4e86, \u5f88\u4e0d, \u592a\u5dee, \u4e00\u822c, \u4e0d\u8981, \u5c31, \u5f88\u5dee, \u7ed3\u679c, \u4e0d\u77e5\u9053, \u4e0d, \u4e0d\u80fd, \u5353\u8d8a, \u4e0d\u5982, \u600e\u4e48, \u5ba2\u670d, \u6ca1, \u90fd\u6ca1\u6709, \u53ea \u80fd, \u53d1\u73b0, \u540e\u6094, \u5783\u573e, \u6000\u7591, \u90c1\u95f7, \u574f\u4e86, \u6362\u8d27, \u9875, \u662f\u4e0d\u662f, \u5c45\u7136, \u4e0d\u503c, \u9ebb \u70e6, \u5417, \u76d7\u7248, \u6253\u5f00, \u7c97\u7cd9, \u4e70, \u95ee\u9898, \u4e2a, \u800c\u4e14, \u6ca1\u4ec0\u4e48, \u4ec0\u4e48, \u4e3a\u4ec0\u4e48, \u6253\u7535 \u8bdd, \u7535\u8bdd, \u4e25\u91cd, \u53ef\u662f, \u624d, \u7adf\u7136, \u5757, \u65e0\u8bed, \u5efa\u8bae, \u4e0d\u884c, \u4f60\u4eec, \u6211, \u5c31\u4e0d, \u6d6a \u8d39, \u6362, \u5b9e\u5728, \u90fd\u4e0d, \u5b8c\u5168, \u7b97\u4e86, \u53ea\u6709, \u70c2, \u4e0d\u6ee1\u610f, \u4e5f\u6ca1\u6709, \u4e0d\u4e86, \u4e0d\u503c\u5f97, \u574f, \u6b21, \u8054\u7cfb, \u4e24, \u554a, \u6389\u4e86, \u672c\u6765, \u8bf4, \u4e0d\u8212\u670d, \u660e\u663e, \u94b1, \u51e0, \u4e0d\u559c\u6b22, \u65e0\u6cd5, \u672c\u5c31, \u4e0d\u5230, \u552e\u540e, \u5546\u54c1, \u6362\u4e86, \u4e00\u70b9, \u4e0d\u591f, \u70b9, \u4ea7\u54c1, \u4e0a, \u623f\u95f4", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "Top Chinese automatically segmented words most indicative of positive and negative sentiment.great, love, easy, highly, best, perfect, excellent, amazing, loves, wonderful, favorite, awesome, fantastic, recommend, book, beautiful, perfectly, pleased, sturdy, fits, works, recommended, fun, definitely, life, price, album, comfortable, superb, happy, helps, gives, family, beautifully, brilliant, incredible, loved, classic, makes, glad, fast, delicious, outstanding, allows, easily, little, always, cd, heart, durable, easier, enjoy, unique, provides, truly, beat, favorites, solid, simple, handy, songs, collection, powerful, ease, size, super, greatest, keeps, song, smooth, books, thank, bonus, nicely, brings, friends, amazed, pleasantly, holds,", 
"content": "
Positive (continued): masterpiece, satisfied, crisp, affordable, fascinating
Negative: poor, bad, waste, worst, money, customer, return, disappointed, service, but, refund, told,
terrible, returned, nothing, did, unfortunately, hotel, didn't, back, horrible, worse, problem, sent,
useless, ok, company, awful, disappointing, off, tried, why, stay, pay, asked, send, should, returning,
do, disappointment, poorly, don't, phone, boring, again, staff, said, call, trying, support, guess,
maybe, rude, unless, instead, get, seemed, supposed, contacted, paid, wouldn't, fix, went, stopped,
thought, avoid, beware, defective, customers, received, sorry, booked, <NUMBER>, broke, manager,
wrong, warranty, junk, mistake, wasted, rooms, contact, left, never, doesn't, me, broken, replacement,
failed, happened, crap, email, stupid, garbage, annoying, wasn't, least, star, cheap, reviews, months,
properly, apparently, weeks, response, checked, working, got, frustrating, stayed, slow, going, hoping,
waiting, error, ridiculous, completely, reason, try, either, credit, ended, please, half
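As the accompanying text notes, word lists like those above were obtained by ranking words with Pearson's chi-squared test. The sketch below illustrates one way such a ranking could be computed from document-level presence counts in a 2-by-2 contingency table; the toy corpus, the whitespace tokenization, and the rule for assigning a top-ranked word to the positive or negative list are illustrative assumptions, not a description of the system's actual implementation.

# Minimal sketch: rank words by Pearson's chi-squared statistic over a
# 2-by-2 contingency table (word present/absent vs. positive/negative document).
# The toy corpus and whitespace tokenization below are illustrative assumptions.
from collections import Counter

def chi_squared(n11, n10, n01, n00):
    # n11: positive documents containing the word, n10: negative documents containing it,
    # n01: positive documents without it,          n00: negative documents without it.
    n = n11 + n10 + n01 + n00
    numerator = n * (n11 * n00 - n10 * n01) ** 2
    denominator = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return numerator / denominator if denominator else 0.0

def top_indicative_words(pos_docs, neg_docs, k=200):
    pos_df = Counter(w for d in pos_docs for w in set(d.lower().split()))
    neg_df = Counter(w for d in neg_docs for w in set(d.lower().split()))
    n_pos, n_neg = len(pos_docs), len(neg_docs)
    scores = {w: chi_squared(pos_df[w], neg_df[w], n_pos - pos_df[w], n_neg - neg_df[w])
              for w in set(pos_df) | set(neg_df)}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Assign each high-scoring word to the class in which it is relatively more frequent.
    positive = [w for w in ranked if pos_df[w] / n_pos >= neg_df[w] / n_neg][:k]
    negative = [w for w in ranked if pos_df[w] / n_pos < neg_df[w] / n_neg][:k]
    return positive, negative

if __name__ == "__main__":
    pos = ["great book , highly recommend it", "works perfectly , love it"]
    neg = ["waste of money , very disappointed", "broken on arrival , poor service"]
    print(top_indicative_words(pos, neg, k=5))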
", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "Top English words most indicative of positive and negative sentiment.excellent, permet, plaisir, magnifique, livre, de\u0107ouvrir, bonheur, recommande, facile, parfait, tres, merveille, superbe, excellente, petit, parfaitement, gra\u0109e, agre\u00e1ble, the, re\u01f5al, of, indispensable, e\u01f5alement, grands, petits, facilement, douce, doux, j'adore, de\u013aicieux, chansons, conseille, rock, l'album, be\u1e3fol, ide\u00e1l, simple, vivement, pouvez, voix, cd, parfaite, meilleur, douceur, n'he\u015bitez, adore, de\u013aice, enfants, rapide, couleurs, bonne, magnifiques, grande, famille, toutes, ge\u0144ial, titres, de\u0107ouvert, pratique, to, pourrez, parfum, belle, adore, must, and, incontournable, aime, recommander, sublime, beaute, superbes, petite, guitare, ouvrage, diffe\u0155entes, me\u013aange, trouverez, bijou, lait, complet, sucre, remarquable, recette, univers, chanson, sel, mode\u0155ation, de\u01f5uster, super pas, ne, rien, service, client, me, re\u1e55onse, disant, mauvaise, j'ai, je, commande, pire, clients, mauvais, demande, aucune, de\u0107eption, payer, mois, remboursement, impossible, de\u0107onseille, te\u013ae\u1e55hone, n'est, dit, qu'ils, sav, mail, ete, suis, de\u0107ue, mal, n'a, bref, bout, arnaque, demande, n'ai, envoye, eux, de\u0107evant, eviter, eu, n'y, probleme, commande, semaines, recu, aucun, site, rembourser, paye, compte, personne, tard, contrat, chez, erreur, jours, n'etait, mails, nul, courrier, de\u0107u, euros, responsable, la, aurait, avons, avoir, commercial, mon, recois, mediocre, panne, de\u015bagre\u00e1ble, ma, sommes, vente, heureusement, chambre, ca, colis, du, j'avais, dommage, m'a, d'attente, j'appelle, semaine, retard, re\u1e55ond, n'ont, dossier, voulu, lendemain, pourtant, manque, etaient", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF5": { "text": "Top French words most indicative of positive and negative sentiment.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "proved to be the most effective. We used Pearson's chi-squared test to find, for each language, the top 200 words most indicative of positive sentiment and the top 200 words most indicative of negative sentiment, without filtering for stop words (e.g.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF8": { "text": "", "content": "
Accuracy
2g           2g-w3        2g-w5        2g-h
fr 83.6\u00b10.1 83.5\u00b10.1 82.6\u00b10.1 81.6\u00b10.1
es 83.3\u00b10.1 83.2\u00b10.1 82.6\u00b10.1 82.5\u00b10.1
it 84.0\u00b10.1 84.0\u00b10.1 83.3\u00b10.1 82.5\u00b10.2
pt 74.7\u00b10.5 72.4\u00b10.7 70.4\u00b10.7 77.3\u00b10.3
zh 85.3\u00b10.1 85.3\u00b10.1 84.5\u00b10.1 84.4\u00b10.1
ja 91.6\u00b10.1 92.3\u00b10.1 91.6\u00b10.1 91.2\u00b10.1
de 89.1\u00b10.1 88.9\u00b10.1 87.8\u00b10.1 87.1\u00b10.0
en 85.5\u00b10.0 85.2\u00b10.0 84.2\u00b10.0 84.3\u00b10.0
nl 86.2\u00b10.4 87.0\u00b10.3 86.7\u00b10.3 87.3\u00b10.4
Classification speed (mean and standard deviation, in documents per second, over ten runs) of naive Bayes classifier boosted with logistic regression classifier for each model and each language.
", "type_str": "table", "num": null, "html": null }, "TABREF9": { "text": "Accuracy (mean and standard deviation, in percent, over ten runs) of naive Bayes classifier for each model and each language.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF10": { "text": "3\u00b10.1 91.0\u00b10.1 90.6\u00b10.1 90.7\u00b10.1 14.2\u00b10.1 13.9\u00b10.1 14.1\u00b10.0 15.6\u00b10.1 es 90.4\u00b10.1 90.3\u00b10.1 90.1\u00b10.1 89.8\u00b10.1 14.5\u00b10.1 14.5\u00b10.2 14.7\u00b10.2 14.2\u00b10.2 it 91.7\u00b10.1 91.6\u00b10.1 91.3\u00b10.1 91.1\u00b10.1 14.9\u00b10.1 14.8\u00b10.2 15.1\u00b10.2 15.5\u00b10.1 pt 84.7\u00b10.2 84.1\u00b10.2 83.6\u00b10.2 85.2\u00b10.2 18.7\u00b10.8 20.5\u00b10.9 22.4\u00b11.1 16.9\u00b10.6 zh 91.0\u00b10.0 90.8\u00b10.1 90.3\u00b10.0 90.6\u00b10.1 11.7\u00b10.1 11.2\u00b10.1 11.0\u00b10.1 12.3\u00b10.1 ja 95.4\u00b10.0 95.5\u00b10.0 95.2\u00b10.0 95.1\u00b10.0 7.1\u00b10.0 93.8\u00b10.0 93.3\u00b10.0 93.5\u00b10.0 10.9\u00b10.1 10.6\u00b10.0 10.8\u00b10.1 12.1\u00b10.0 en 90.6\u00b10.0 90.2\u00b10.0 89.5\u00b10.0 90.7\u00b10.0 13.9\u00b10.0 13.4\u00b10.0 13.2\u00b10.0 18.9\u00b10.0 nl 92.0\u00b10.1 91.7\u00b10.1 91.0\u00b10.1 91.7\u00b10.2 17.1\u00b10.2 15.3\u00b10.1 14.3\u00b10.1 16.2\u00b10.2", "content": "
Accuracy                                      No answer rate
2g        2g-w3     2g-w5     2g-h            2g        2g-w3     2g-w5     2g-h
fr91.8\u00b10.17.2\u00b10.17.3\u00b10.17.7\u00b10.1
de 94.
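The no answer rate reported above implies that the boosted classifier abstains on some documents. A minimal sketch of one such combination is given below, under the assumption, not stated in the table itself, that a label is returned only when the naive Bayes and logistic regression components agree, with accuracy then computed over the answered documents; the system's actual voting scheme may differ.

# Minimal sketch, assuming the combined classifier answers only when its two
# component classifiers agree; documents on which they disagree contribute to
# the "no answer" rate. The components are stand-ins for any objects exposing
# a predict(doc) method that returns "pos" or "neg".
def combined_predict(nb, lr, doc):
    a, b = nb.predict(doc), lr.predict(doc)
    return a if a == b else None  # None signals "no answer"

def evaluate(nb, lr, labelled_docs):
    answered = correct = 0
    for doc, gold in labelled_docs:
        prediction = combined_predict(nb, lr, doc)
        if prediction is None:
            continue
        answered += 1
        correct += int(prediction == gold)
    accuracy = correct / answered if answered else 0.0
    no_answer_rate = 1.0 - answered / len(labelled_docs) if labelled_docs else 0.0
    return accuracy, no_answer_rate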
", "type_str": "table", "num": null, "html": null }, "TABREF11": { "text": "Accuracy and no answer rate (mean and standard deviation, in percent, over ten runs) of naive Bayes classifier boosted with logistic regression classifier for each model and each language.", "content": "
Positive documents                    Negative documents
# trained        # tested             # trained        # tested
fr 26455\u00b179    116704\u00b193    26556\u00b166    116704\u00b193
es 12234\u00b1158   55267\u00b192     12061\u00b1102   55267\u00b192
it 21175\u00b1140   92502\u00b170     20272\u00b189    92502\u00b170
pt 3593\u00b178     16349\u00b143     2931\u00b141     16349\u00b143
zh 30914\u00b1194   124232\u00b148    30989\u00b148    124232\u00b148
ja 218278\u00b1526  889453\u00b1391   219019\u00b1396  889453\u00b1391
de 54351\u00b1255   237839\u00b1206   54578\u00b1142   237839\u00b1206
en 87833\u00b1400   367812\u00b1224   85626\u00b1207   367812\u00b1224
nl 6907\u00b183     27691\u00b165     6765\u00b167     27691\u00b165
", "type_str": "table", "num": null, "html": null }, "TABREF12": { "text": "Number of positive and negative documents in the training and testing sets (mean and standard deviation, over ten runs) for each language.", "content": "
Number of unique tokens
2g                   2g-w3                2g-w5                 2g-h
fr 2914884\u00b19360    6172487\u00b120129   11914397\u00b139474   882036\u00b12352
es 2140713\u00b112048   4406273\u00b126340   8373245\u00b151332    664764\u00b13752
it 3425993\u00b17458    7223768\u00b116530   13926488\u00b133039   749045\u00b11400
pt 608836\u00b11041     1165038\u00b11953    2033085\u00b13804     251139\u00b1499
zh 1640449\u00b15753    3321802\u00b111719   6219173\u00b123299    667135\u00b11756
ja 4142506\u00b16055    10307368\u00b114345  20572997\u00b126398   2140987\u00b12610
de 5372769\u00b114153   10754868\u00b129605  20217297\u00b156647   1306724\u00b12862
en 3290897\u00b13804    6713768\u00b17239    12604478\u00b113434   1021480\u00b11497
nl 809873\u00b12663     1508605\u00b15762    2634343\u00b111110    359049\u00b11642
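For most languages the counts above grow from 2g to 2g-w3 to 2g-w5 by roughly factors of two and four, which is consistent with 2g-wk denoting word pairs formed within a window of k words, and the much smaller 2g-h counts are consistent with a hashed variant that maps features into a bounded space. Both readings are assumptions made only for the sketch below, which illustrates how such feature sets could be generated and why hashing bounds the number of unique tokens.

# Minimal sketch of three assumed feature extractors: adjacent bigrams (2g),
# word pairs formed within a window of k tokens (2g-wk), and a hashed variant
# (2g-h) that maps each bigram to one of a fixed number of buckets so that the
# number of distinct features is bounded. These definitions are assumptions.
import zlib

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def windowed_bigrams(tokens, window):
    # Pair each token with every token up to window - 1 positions ahead;
    # window=2 reduces to plain adjacent bigrams.
    return [(tokens[i], tokens[j])
            for i in range(len(tokens))
            for j in range(i + 1, min(i + window, len(tokens)))]

def hashed_bigrams(tokens, buckets=2 ** 20):
    # Distinct bigrams may collide in the same bucket, which is what keeps
    # the number of unique tokens (and hence memory use) small.
    return [zlib.crc32(" ".join(pair).encode("utf-8")) % buckets
            for pair in bigrams(tokens)]

if __name__ == "__main__":
    tokens = "the hotel staff was very friendly and helpful".split()
    print(len(set(bigrams(tokens))),              # 2g
          len(set(windowed_bigrams(tokens, 3))),  # 2g-w3: roughly twice as many
          len(set(windowed_bigrams(tokens, 5))),  # 2g-w5: roughly four times as many
          len(set(hashed_bigrams(tokens))))       # 2g-h: at most `buckets` distinct values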
", "type_str": "table", "num": null, "html": null }, "TABREF13": { "text": "Number of unique tokens after training (mean and standard deviation, over ten runs) for each model and each language.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF15": { "text": "Mean document length, with standard deviation, over ten runs, in test data.", "content": "
", "type_str": "table", "num": null, "html": null } } } }