{ "paper_id": "Y14-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:44:11.562776Z" }, "title": "Automatically Building a Corpus for Sentiment Analysis on Indonesian Tweets", "authors": [ { "first": "Alfan", "middle": [ "Farizki" ], "last": "Wicaksono", "suffix": "", "affiliation": { "laboratory": "Information Retrieval Lab", "institution": "University of Indonesia Depok", "location": { "country": "Republic of Indonesia" } }, "email": "" }, { "first": "Clara", "middle": [], "last": "Vania", "suffix": "", "affiliation": { "laboratory": "Information Retrieval Lab", "institution": "University of Indonesia Depok", "location": { "country": "Republic of Indonesia" } }, "email": "c.vania@cs.ui.ac.id" }, { "first": "Bayu", "middle": [], "last": "Distiawan", "suffix": "", "affiliation": { "laboratory": "Information Retrieval Lab", "institution": "University of Indonesia Depok", "location": { "country": "Republic of Indonesia" } }, "email": "b.distiawan@cs.ui.ac.id" }, { "first": "Mirna", "middle": [], "last": "Adriani", "suffix": "", "affiliation": { "laboratory": "Information Retrieval Lab", "institution": "University of Indonesia Depok", "location": { "country": "Republic of Indonesia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The popularity of the user generated content, such as Twitter, has made it a rich source for the sentiment analysis and opinion mining tasks. This paper presents our study in automatically building a training corpus for the sentiment analysis on Indonesian tweets. We start with a set of seed sentiment corpus and subsequently expand them using a classifier model whose parameters are estimated using the Expectation and Maximization (EM) framework. We apply our automatically built corpus to perform two tasks, namely opinion tweet extraction and tweet polarity classification using various machine learning approaches. 
Experimental results show that a classifier model trained on our data, which is automatically constructed using our proposed method, outperforms the baseline system in terms of opinion tweet extraction and tweet polarity classification.", "pdf_parse": { "paper_id": "Y14-1024", "_pdf_hash": "", "abstract": [ { "text": "The popularity of the user generated content, such as Twitter, has made it a rich source for the sentiment analysis and opinion mining tasks. This paper presents our study in automatically building a training corpus for the sentiment analysis on Indonesian tweets. We start with a set of seed sentiment corpus and subsequently expand them using a classifier model whose parameters are estimated using the Expectation and Maximization (EM) framework. We apply our automatically built corpus to perform two tasks, namely opinion tweet extraction and tweet polarity classification using various machine learning approaches. Experimental results show that a classifier model trained on our data, which is automatically constructed using our proposed method, outperforms the baseline system in terms of opinion tweet extraction and tweet polarity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There are millions of textual messages or posts generated by internet users every day on various user-generated content platforms, such as microblogs (e.g. Twitter 1 ), review websites, and internet forums. They post about their stories, experiences, and current events, as well as opinions about products. 
As a result, user-generated content has become a rich source for mining useful information about various topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Twitter (http://twitter.com), one of the most popular microblogging platforms, is currently getting a lot of attention from internet users because it allows them to easily and instantly post their thoughts on various topics. Twitter currently has over 200 million active users and produces 400 million posts each day 2 . The posts, known as tweets, often contain useful knowledge, so many researchers focus on Twitter for conducting NLP-related research. McMinn et al. (2014) harnessed millions of tweets to develop an application for detecting, tracking, and visualizing events in real-time. Previously, Sakaki et al. (2013) also used Twitter as a sensor for an earthquake reporting system. They claimed that the system can detect an earthquake with high probability merely by monitoring tweets, and that notifications can be delivered faster than Japan Meteorological Agency announcements. Moreover, Tumasjan et al. (2010) demonstrated that Twitter can also be used as a resource for political forecasting.", "cite_spans": [ { "start": 429, "end": 470, "text": "NLP-related research. McMinn et al. (2014)", "ref_id": null }, { "start": 600, "end": 620, "text": "Sakaki et al. (2013)", "ref_id": "BIBREF19" }, { "start": 888, "end": 910, "text": "Tumasjan et al. (2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Due to the nature of Twitter, tweets usually express people's personal thoughts and feelings. Therefore, tweets serve as good resources for sentiment analysis and opinion mining tasks. Many companies can benefit from tweets by gauging positive and negative responses toward their products, as well as the reasons why consumers like or dislike them. 
They can also leverage tweets to gain insight about their competitors. Consumers can likewise use information from tweets regarding the quality of a certain product: they commonly learn from the past experiences of people who have already used the product before deciding to purchase it. To realize the aforementioned ideas, many researchers have put a lot of effort into tackling one of the important tasks in Twitter sentiment analysis, that is, tweet polarity classification (Nakov et al., 2013; Hu et al., 2013; Kouloumpis et al., 2011; Agarwal et al., 2011; Pak and Paroubek, 2010) . They proposed various approaches to determine whether a given tweet expresses positive or negative sentiment.", "cite_spans": [ { "start": 865, "end": 885, "text": "(Nakov et al., 2013;", "ref_id": "BIBREF14" }, { "start": 886, "end": 902, "text": "Hu et al., 2013;", "ref_id": "BIBREF8" }, { "start": 903, "end": 927, "text": "Kouloumpis et al., 2011;", "ref_id": "BIBREF10" }, { "start": 928, "end": 949, "text": "Agarwal et al., 2011;", "ref_id": "BIBREF0" }, { "start": 950, "end": 973, "text": "Pak and Paroubek, 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we address the problem of sentiment analysis on Indonesian tweets. The Indonesian language currently has more than 240 million speakers, mostly spread across South-East Asia. In addition, Semiocast, a company that provides data intelligence and research on social media, revealed that Indonesia ranked 5th in terms of Twitter accounts in July 2012 and that users from Jakarta (the capital city of Indonesia) were the most active compared to users from other big cities, such as Tokyo, London, and New York 3 . 
Therefore, there is a great need for natural language processing research on Indonesian tweets, especially sentiment analysis, since they carry a lot of information worth obtaining for many purposes. Unfortunately, Indonesian is categorized as an under-resourced language: it still suffers from a lack of the basic resources (especially labeled datasets) needed for various language technologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two tasks addressed in this paper, namely opinion tweet extraction and tweet polarity classification. The former aims to select all tweets containing users' opinions toward something, and the latter determines the polarity of an opinionated tweet (i.e., positive or negative). To tackle these tasks, we employ machine learning approaches using training data and word features. However, a problem arises when we do not have annotated data to train our models. Asking people to manually annotate thousands, even millions, of tweets with high quality is not an option, since it is very expensive and time-consuming due to the massive scale and rapid growth of Twitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To overcome this problem, we propose a method that can automatically develop training data from a pool of millions of tweets. (3 http://semiocast.com/en/publications/2012_07_30_Twitter_reaches_half_a_billion_accounts_140m_in_the_US) First, we automatically construct a small labeled seed corpus (i.e. a small collection of positive and negative tweets) that will be used for expanding the training data in the next step. Next, we expand the training data using the previously constructed seed corpus. 
To do that, we use the rationale that sentiment can be propagated from the labeled seed tweets to other unlabeled tweets when they share similar word features, which means that the sentiment type of an unlabeled tweet can be revealed based on its closeness to the labeled tweets. Based on that idea, we employ a classifier model whose parameters are estimated using labeled and unlabeled tweets via the Expectation-Maximization (EM) framework. In this method, we incorporate two types of datasets: the first is a small set of labeled seed tweets, and the second is a huge set of unlabeled tweets that serves as a source for expanding the training data. Intuitively, this method allows us to propagate sentiment from labeled tweets to unlabeled tweets. Later, we show that the training data automatically constructed by our method can be used by classifiers to effectively tackle the problems of opinion tweet extraction and tweet polarity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, the main contributions of this paper are twofold: first, we present a method to automatically construct training instances for sentiment analysis on Indonesian tweets. Second, we present substantial work on sentiment analysis for Indonesian tweets, a problem that has rarely been addressed before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been extensive work on opinion mining and sentiment analysis, as described in (Pang and Lee, 2008) . They presented various approaches and general challenges in developing applications that can retrieve opinion-oriented information. Moreover, Liu (2007) clearly defines an opinionated sentence and describes the two sub-tasks required to perform sentence-level sentiment analysis, namely subjectivity classification and sentence-level sentiment classification. 
However, previous researchers primarily focused on performing sentiment analysis on review data. The trend has shifted recently as social networking platforms, such as Facebook and Twitter, have been growing rapidly. As a result, many researchers have now started to perform sentiment analysis on microblogging platforms such as Twitter (Hu et al., 2013; Nakov et al., 2013; Kouloumpis et al., 2011; Pak and Paroubek, 2010) . In our work, we perform two-level sentiment analysis, similar to that described in (Liu, 2007) . In addition, we perform sentiment analysis on tweets (i.e. Indonesian tweets) instead of general sentences.", "cite_spans": [ { "start": 89, "end": 109, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF17" }, { "start": 251, "end": 261, "text": "Liu (2007)", "ref_id": "BIBREF12" }, { "start": 831, "end": 848, "text": "(Hu et al., 2013;", "ref_id": "BIBREF8" }, { "start": 849, "end": 868, "text": "Nakov et al., 2013;", "ref_id": "BIBREF14" }, { "start": 869, "end": 893, "text": "Kouloumpis et al., 2011;", "ref_id": "BIBREF10" }, { "start": 894, "end": 917, "text": "Pak and Paroubek, 2010)", "ref_id": "BIBREF16" }, { "start": 1003, "end": 1014, "text": "(Liu, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Current sentiment analysis research mostly relies on manually annotated training data (Nakov et al., 2013; Agarwal et al., 2011; Jiang et al., 2011; Bermingham and Smeaton, 2010) . However, employing humans to manually annotate thousands, even millions, of tweets is labor-intensive, time-consuming, and very expensive due to the massive scale and rapid growth of Twitter. This becomes a significant obstacle for researchers who want to perform sentiment analysis on tweets posted in under-resourced languages, such as Indonesian. 
Limited work has been done previously on automatically collecting training data (Pak and Paroubek, 2010; Bifet and Frank, 2010; Davidov et al., 2010) . Some researchers harnessed happy and sad emoticons to automatically collect training data (Pak and Paroubek, 2010; Bifet and Frank, 2010) . They assumed that tweets containing happy emoticons (e.g. \":)\", \":-)\") have positive sentiment, and tweets containing sad emoticons (e.g. \":(\", \":-(\") have negative sentiment. Unfortunately, their method cannot achieve broad coverage of sentiment-bearing tweets, since not all sentiment-bearing tweets contain emoticons.", "cite_spans": [ { "start": 86, "end": 106, "text": "(Nakov et al., 2013;", "ref_id": "BIBREF14" }, { "start": 107, "end": 128, "text": "Agarwal et al., 2011;", "ref_id": "BIBREF0" }, { "start": 129, "end": 148, "text": "Jiang et al., 2011;", "ref_id": "BIBREF9" }, { "start": 149, "end": 178, "text": "Bermingham and Smeaton, 2010)", "ref_id": "BIBREF3" }, { "start": 631, "end": 655, "text": "(Pak and Paroubek, 2010;", "ref_id": "BIBREF16" }, { "start": 656, "end": 678, "text": "Bifet and Frank, 2010;", "ref_id": "BIBREF4" }, { "start": 679, "end": 700, "text": "Davidov et al., 2010)", "ref_id": "BIBREF6" }, { "start": 803, "end": 827, "text": "(Pak and Paroubek, 2010;", "ref_id": "BIBREF16" }, { "start": 828, "end": 850, "text": "Bifet and Frank, 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Limited attempts have been made to perform sentiment analysis on Indonesian tweets. Calvin and Setiawan (2014) performed tweet polarity classification limited to tweets about telephone provider companies in Indonesia. Their classification method relies on a small set of domain-dependent opinionated words. Before that, Aliandu (2014) conducted research on classifying an Indonesian tweet into three classes: positive, negative, and neutral. 
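The emoticon heuristic just described can be sketched as follows (a minimal illustration of the idea, not the actual implementation of Pak and Paroubek (2010)):

```python
# Sketch of the emoticon heuristic: a tweet with a happy emoticon is taken as
# positive, one with a sad emoticon as negative, and tweets with neither
# (or with conflicting emoticons) are discarded.
HAPPY = (':)', ':-)')
SAD = (':(', ':-(')

def emoticon_label(tweet):
    happy = any(e in tweet for e in HAPPY)
    sad = any(e in tweet for e in SAD)
    if happy and not sad:
        return 'pos'
    if sad and not happy:
        return 'neg'
    return None  # no emoticon, or conflicting emoticons
```

As the surrounding text notes, any tweet without an emoticon is simply skipped, which is exactly the coverage limitation of this approach.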
Aliandu (2014) used the method proposed by Pak and Paroubek (2010) to collect training data, that is, using emoticons to gather sentiment-bearing tweets. Even though these researchers performed work similar to ours, our work differs in two points.", "cite_spans": [ { "start": 496, "end": 519, "text": "Pak and Paroubek (2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "First, we use different techniques to automatically collect training data. Second, we perform two-level sentiment analysis, namely opinion tweet extraction and tweet polarity classification. Moreover, in the experiment section, we show that our method for collecting training data is better than the one proposed by Pak and Paroubek (2010) . Our method also produces a much larger dataset, since we do not rely solely on emoticon-containing tweets to collect training data.", "cite_spans": [ { "start": 313, "end": 336, "text": "Pak and Paroubek (2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "3 Automatically Building Training Data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Our corpus consists of 5.3 million tweets which were collected using the Twitter Streaming API between May 16th, 2013 and June 26th, 2013. As we wanted to build an Indonesian sentiment corpus, we used tweets' geo-locations to filter tweets posted in the area of Indonesia. We also applied language filtering because, based on our observation, Indonesian Twitter users also like to use English or local languages in their tweets. We then divided our corpus into four disjoint datasets. To collect DATASET3 (i.e. neutral or non-opinion tweets), we used the same approach as in (Pak and Paroubek, 2010) . First, we selected some popular Indonesian news portal accounts from the overall corpus and labeled their tweets as objective. 
Here, we assume that tweets from news portal accounts are neutral, as they usually come from news headlines. This method was originally proposed by (Pak and Paroubek, 2010) , but our own empirical observations confirmed that it performs quite well for collecting neutral tweets.", "cite_spans": [ { "start": 559, "end": 583, "text": "(Pak and Paroubek, 2010)", "ref_id": "BIBREF16" }, { "start": 853, "end": 877, "text": "(Pak and Paroubek, 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "The remaining corpus, which was not published by news portal accounts, is then used to build the seed corpus (DATASET2), the development corpus (DATASET1), and the gold-standard testing data (DATASET4). In this study, DATASET2 is used to construct the labeled seed corpus. The seed corpus contains initial data that is believed to carry opinion as well as sentiment. On the other hand, the development corpus (DATASET1) contains unlabeled tweets used to expand our seed corpus. Our testing data (DATASET4) consists of 637 tweets which were tagged manually by human annotators. These tweets were collected using topic words that tend to be discussed by many people. Two annotators were asked to independently classify each tweet into three classes: positive, negative, and neutral. The inter-annotator agreement reached a Kappa value of 0.95, which is considered satisfactory. The label of each tweet in DATASET4 is the label agreed upon by the two annotators; when they did not agree, we asked a third annotator to decide the label. It is also worth noting that our testing data comes from various domains, such as telephone operators, public transportation, famous people, technology, and films. 
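The agreement figure above is standard two-annotator Cohen's kappa, which can be sketched as follows (the label sequences in the usage example are illustrative, not the actual DATASET4 annotations):

```python
# Two-annotator Cohen's kappa over the three classes (pos/neg/neu):
# observed agreement corrected by the agreement expected by chance.
from collections import Counter

def cohens_kappa(a, b):
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)  # by chance
    return (p_o - p_e) / (1 - p_e)

cohens_kappa(['pos', 'neg'], ['pos', 'neg'])  # identical annotations -> 1.0
```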
Some examples of tweets found in DATASET4 are shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "\u2022 \"Telkomsel memang wokeeehhh (free internet) :)\" (Telkomsel is nice (free internet) :))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "\u2022 \"Kecewa sama trans Jakarta. Manajemen blm bagus. Masa hrs nunggu lbh dr 30 menit utk naek busway.\" (Really disappointed in TransJakarta. The management is not good. We had to wait for more than 30 minutes to get on the bus.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "\u2022 \"man of steel keren bangeeeettttt :D\" (Man of Steel is really cool :D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "\u2022 \"RT @detikcom: Lalin Macet, Pohon Tumbang di Perempatan Cilandak-TB Simatupang\" (RT @detikcom: Traffic jam, a tree fell down at the Cilandak-TB Simatupang intersection)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "As explained before, our seed corpus contains the initial data used for expanding the training corpus. We propose two automatic techniques to construct the seed corpus from DATASET2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Seed Training Instances", "sec_num": "3.2" }, { "text": "In the first technique, we use an Indonesian opinion lexicon (Vania et al., 2014) to construct our seed corpus. A tweet is classified as positive if it contains more positive words than negative words, and vice versa. If a tweet contains a word with a particular sentiment but the word is preceded by a negation, the word's polarity is shifted to its opposite. 
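A minimal sketch of this labeling rule follows. The lexicon entries and negation markers below are illustrative stand-ins, not actual entries of the lexicon of Vania et al. (2014):

```python
# Lexicon-based seed labeling with negation flipping, as described above.
POSITIVE = {'bagus', 'keren', 'senang'}   # hypothetical positive entries
NEGATIVE = {'kecewa', 'jelek', 'buruk'}   # hypothetical negative entries
NEGATIONS = {'tidak', 'bukan', 'nggak'}   # hypothetical negation markers

def label_seed(tweet):
    # Returns 'pos', 'neg', or None (no lexicon words, or a tie).
    tokens = tweet.lower().split()
    pos = neg = 0
    for i, tok in enumerate(tokens):
        polarity = 1 if tok in POSITIVE else -1 if tok in NEGATIVE else 0
        if polarity == 0:
            continue
        if i > 0 and tokens[i - 1] in NEGATIONS:
            polarity = -polarity              # a preceding negation flips it
        if polarity > 0:
            pos += 1
        else:
            neg += 1
    if pos == neg:
        return None
    return 'pos' if pos > neg else 'neg'
```

Returning None for tweets without any lexicon word corresponds to discarding them from the seed corpus, as stated next.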
Tweets that do not contain any words from the opinion lexicon were not considered. In total, we collected 135,490 positive seed tweets and 99,979 negative seed tweets.", "cite_spans": [ { "start": 58, "end": 78, "text": "(Vania et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Opinion Lexicon based Technique", "sec_num": "3.2.1" }, { "text": "The second technique is implemented using clustering (Li and Liu, 2012) . Its main advantage is that we do not need to provide language-specific resources such as a lexicon or dictionary. Each tweet from DATASET2 is put into one of three clusters: positive tweets, negative tweets, or neutral tweets. We use all terms and POS tags from the tweet as features, with each term weighted using TF-IDF. Using this approach, 194 tweets were grouped", "cite_spans": [ { "start": 57, "end": 75, "text": "(Li and Liu, 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering based Technique", "sec_num": "3.2.2" }, { "text": "into negative tweets, 325 tweets were grouped into positive tweets, and the rest were left out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering based Technique", "sec_num": "3.2.2" }, { "text": "After we automatically construct the labeled seed corpus from DATASET2, we are ready to obtain more training instances. We use DATASET1, which is much bigger than DATASET2, as a source for expanding the training data. The idea is that the sentiment scores of all unlabeled tweets in DATASET1 can be revealed via propagation from the labeled seed corpus. To realize this idea, we employ a classifier model whose parameters are estimated using labeled and unlabeled tweets via the Expectation-Maximization (EM) framework. 
The well-known research of Nigam et al. (2000) has shown that the Expectation-Maximization framework works well for expanding training data to tackle the document-level text classification problem. In our work, we show that this framework also works quite well for tweets.", "cite_spans": [ { "start": 541, "end": 561, "text": "(Nigam et al., 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "The EM algorithm is an iterative algorithm for finding maximum likelihood or maximum a posteriori estimates for models when the data is incomplete (Dempster et al., 1977) . Here, our data is incomplete since the sentiment scores of the unlabeled tweets are unknown. To reveal these sentiment scores using the EM algorithm, we perform several iterations. First, we train the classifier with just the labeled seed corpus. Second, we use the trained classifier to assign probabilistically weighted labels, or sentiment scores (i.e. the probability of being a positive or negative tweet), to each unlabeled tweet. Third, we retrain the model using all tweets (i.e. both the originally and newly labeled tweets). The last two steps are iterated until the parameters of the model no longer change. At each iteration, the sentiment scores of the unlabeled tweets improve, as the likelihood of the parameters is guaranteed to increase until convergence (Dempster et al., 1977) . 
In addition, only tweets whose sentiment scores surpass a certain threshold are considered as new training instances.", "cite_spans": [ { "start": 151, "end": 174, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF7" }, { "start": 984, "end": 1007, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "Formally, we have a set of tweets T divided into two disjoint partitions: a set of labeled seed tweets T_l and a set of unlabeled tweets T_u, such that T = T_l ∪ T_u. Here, T_l represents the seed tweets that are selected from DATASET2 and automatically labeled using the method described in the previous section, and T_u represents the set of all tweets in DATASET1. Each tweet t_i ∈ T, of length |t_i|, is defined as an ordered list of words (w_1, w_2, ..., w_{|t_i|}), where each word w_k is an element of the vocabulary set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "V = {w_1, w_2, ..., w_{|V|}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "For the classifier in each iteration, we employ a Naive Bayes model. 
In our case, given a tweet t_i and two class labels C_j, where j ∈ S and S = {pos, neg}, the probability that each of the two classes generated the tweet is determined using the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(C_j \\mid t_i) = \\frac{P(C_j) \\prod_{k=1}^{|t_i|} P(w_k \\mid C_j)}{\\sum_{j' \\in S} P(C_{j'}) \\prod_{k=1}^{|t_i|} P(w_k \\mid C_{j'})}", "eq_num": "(1)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "The above equation holds since we assume that the probability of a word occurring within a tweet is independent of its position. Here, the collection of model parameters, denoted θ, consists of the word probabilities P(w_k | C_j) and the class prior probabilities P(C_j). Given a set of tweet data, T = {t_1, t_2, ..., t_{|T|}}, Naive Bayes uses maximum a posteriori (MAP) estimation to determine the point estimate of θ, denoted θ̂. This is done by finding the θ that maximizes P(θ|T) ∝ P(T|θ)P(θ). 
This yields the following estimation formulas for each component of the parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "The word probabilities P(w_k | C_j) are estimated using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_k \\mid C_j) = \\frac{1 + \\sum_{i=1}^{|T|} N(w_k, t_i) \\, P(C_j \\mid t_i)}{|V| + \\sum_{n=1}^{|V|} \\sum_{i=1}^{|T|} N(w_n, t_i) \\, P(C_j \\mid t_i)}", "eq_num": "(2)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "where N(w_k, t_i) is the number of occurrences of word w_k in tweet t_i. Similarly, the class prior probabilities P(C_j) are estimated in the same fashion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(C_j) = \\frac{1 + \\sum_{i=1}^{|T|} P(C_j \\mid t_i)}{|S| + |T|}", "eq_num": "(3)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "In the above equations, P(C_j | t_i), j ∈ {pos, neg}, are the sentiment scores associated with each tweet t_i ∈ T, where Σ_j P(C_j | t_i) = 1. 
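Equations (1)-(3) and the iterative procedure can be sketched in code as follows. This is an illustrative implementation, not the authors' own: tweets are token lists, and each score dict holds P(C_j | t_i) for both classes.

```python
# Naive Bayes with soft (probabilistically weighted) counts, plus the EM loop.
from math import exp, log

CLASSES = ('pos', 'neg')

def estimate(tweets, scores, vocab):
    # Eqs. (2)-(3): Laplace-smoothed estimates from soft class memberships.
    priors, word_probs = {}, {}
    for j in CLASSES:
        weight = [s[j] for s in scores]
        priors[j] = (1.0 + sum(weight)) / (len(CLASSES) + len(tweets))
        counts = dict.fromkeys(vocab, 1.0)     # the '+1' smoothing in eq. (2)
        for toks, w in zip(tweets, weight):
            for tok in toks:
                counts[tok] += w               # N(w_k, t_i) * P(C_j | t_i)
        total = sum(counts.values())           # |V| + weighted token mass
        word_probs[j] = {v: c / total for v, c in counts.items()}
    return priors, word_probs

def posterior(toks, priors, word_probs):
    # Eq. (1), computed in log space for numerical stability.
    logp = {j: log(priors[j]) + sum(log(word_probs[j][t]) for t in toks)
            for j in CLASSES}
    m = max(logp.values())
    unnorm = {j: exp(v - m) for j, v in logp.items()}
    z = sum(unnorm.values())
    return {j: u / z for j, u in unnorm.items()}

def expand(labeled, unlabeled, iters=20):
    # EM: seed tweets keep hard 0/1 scores; unlabeled tweets get soft scores.
    tweets = [t for t, _ in labeled] + list(unlabeled)
    vocab = {tok for t in tweets for tok in t}
    seed = [{j: 1.0 if j == y else 0.0 for j in CLASSES} for _, y in labeled]
    scores = seed + [{j: 1.0 / len(CLASSES) for j in CLASSES} for _ in unlabeled]
    for _ in range(iters):                     # or: iterate until convergence
        priors, word_probs = estimate(tweets, scores, vocab)      # M-step
        scores = seed + [posterior(t, priors, word_probs)         # E-step
                         for t in unlabeled]
    return scores[len(seed):]                  # soft labels for unlabeled set
```

For simplicity this sketch runs a fixed number of iterations; the paper instead stops when the parameters (equivalently, the bound in equation 7 below) no longer change.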
For the labeled seed tweets, P(C_j | t_i) is rigidly assigned, since the label is already known in advance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "P(C_j | t_i) = 1 if t_i belongs to class C_j, and 0 otherwise. (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "Meanwhile, for the set of unlabeled tweets T_u, P(C_j | t_i) is probabilistically assigned in each iteration, so that 0 ≤ P(C_j | t_i) ≤ 1. Thus, the probability of all the tweet data given the parameters, P(T | θ), is determined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(T \\mid \\theta) = \\prod_{t_i \\in T} \\sum_{j \\in S} P(t_i \\mid C_j) P(C_j)", "eq_num": "(5)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "Finally, we can compute the log-likelihood of the parameters, log L(θ|T), using the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log L(\\theta \\mid T) \\equiv \\log P(T \\mid \\theta) = \\sum_{t_i \\in T} \\log \\sum_{j \\in S} P(t_i \\mid C_j) P(C_j)", "eq_num": "(6)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "The last equation contains a \"log of sums\", which is difficult to maximize. Nigam et al. (2000) show that a lower bound of this equation can be found using Jensen's inequality. 
As a result, we can express the complete log-likelihood of the parameters, log L_c(θ|T), as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log L(\\theta \\mid T) \\geq \\log L_c(\\theta \\mid T) \\equiv \\sum_{t_i \\in T} \\sum_{j \\in S} P(C_j \\mid t_i) \\log \\left( P(t_i \\mid C_j) P(C_j) \\right)", "eq_num": "(7)" } ], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "This last equation is used in each iteration to check whether or not the parameters have converged. When the EM procedure ends due to the convergence of the parameters, we select from the set of unlabeled tweets T_u those tweets which are eligible as new training instances. The criterion for selecting the new training instances, denoted T_n, is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "T_n = {t ∈ T_u : |P(C_pos | t) - P(C_neg | t)| ≥ ε} (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "where ε is an empirical value, 0 ≤ ε ≤ 1. In our experiments, we set ε to 0.98, since we want to obtain only strongly polarized tweets as new training instances. 
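The selection criterion of formula (8) amounts to a simple filter (a sketch; the score mapping is an assumed data layout, not the authors' data structure):

```python
# Formula (8) as code: keep only the unlabeled tweets whose positive and
# negative scores are strongly separated (the paper uses epsilon = 0.98).
def select_new_instances(scores, eps=0.98):
    # scores maps tweet -> (P(pos|t), P(neg|t)); returns the selected tweets.
    return {t for t, (p_pos, p_neg) in scores.items()
            if abs(p_pos - p_neg) >= eps}
```

With the two scores summing to one, the condition |p_pos - p_neg| ≥ 0.98 keeps only tweets whose winning class has probability at least 0.99.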
In summary, the EM algorithm for expanding training data is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "\u2022 Input: a set of labeled seed tweets T_l and a large set of unlabeled tweets T_u", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "\u2022 Train a Naive Bayes classifier using only the labeled seed tweets T_l. The estimated parameters, θ̂, are obtained using equations 2 and 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "\u2022 Repeat until log L_c(θ|T) does not change (i.e. the parameters do not change):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "- [E-step] Use the current classifier, θ̂, to probabilistically label all unlabeled tweets in T_u, i.e. use equation 1 to obtain P(C_j | t_i) for all t_i ∈ T_u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "- [M-step] Re-estimate the parameters of the classifier using all tweet data T_u ∪ T_l (i.e. both the originally and newly labeled tweets). 
Here, we once again use equations 2 and 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "\u2022 Select the additional training instances, T_n, using the criterion given in formula 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "\u2022 Output: The expanded training data T_n ∪ T_l 4 Experiments and Evaluations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding New Training Instances", "sec_num": "3.3" }, { "text": "After we applied our training data construction method, we collected around 2.8 million opinion tweets when we used the opinion-lexicon-based technique to automatically construct the labeled seed corpus. Meanwhile, when we used the clustering-based technique to construct the labeled seed corpus, we collected around 2.4 million opinion tweets. We refer to the training dataset yielded by the former as LEX-DATA and by the latter as CLS-DATA. We also automatically collected training data using the method proposed by Pak and Paroubek (2010). We used the well-known positive/negative emoticons in Indonesian tweets, such as \":)\", \":-)\", \":(\", \":-(\", to capture the opinion tweets from DATASET1 and DATASET2. We refer to this training dataset as EMOTDATA, and we used it for comparison with our proposed method. 
Table 6 shows the details of EMOTDATA.", "cite_spans": [ { "start": 503, "end": 526, "text": "Pak and Paroubek (2010)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 425, "end": 427, "text": "Ta", "ref_id": null }, { "start": 795, "end": 802, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Training Data Construction", "sec_num": "4.1" }, { "text": "#Tweets: 276,970 (Pos), 103,740 (Neg) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Type Pos Neg", "sec_num": null }, { "text": "To evaluate our automatic corpus construction method, we performed two tasks, namely opinion tweet extraction and tweet polarity classification, harnessing our constructed training data. In other words, we see whether or not a classifier model trained on our constructed training data is able to perform both of the aforementioned tasks with high performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "Task 1 - Opinion Tweet Extraction: Given a collection of tweets T, the task is to discover all opinion tweets in T. Liu (2011) defined an opinion as a positive or negative view, attitude, emotion, or appraisal about an entity or an aspect of the entity. Thus, we adapt the aforementioned definition for the opinion tweet.", "cite_spans": [ { "start": 115, "end": 125, "text": "Liu (2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "Task 2 - Tweet Polarity Classification: The task is to determine whether each opinion tweet extracted from the first task is positive or negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "To measure the performance of the classifiers, we tested them on our gold-standard set, i.e. DATASET4, which was manually annotated by two people. 
In addition, we also compared our method against the method proposed by Pak and Paroubek (2010). For the classifiers, we employ two well-known classification algorithms, namely the Naive Bayes classifier and the Maximum Entropy model (Berger et al., 1996). We use unigrams as our features, i.e. the presence of a word and its frequency in a tweet, since unigrams provide good coverage of the data and most likely do not suffer from the sparsity problem. Moreover, Pang et al. (2002) had previously shown that unigrams serve as good features for sentiment analysis.", "cite_spans": [ { "start": 228, "end": 251, "text": "Pak and Paroubek (2010)", "ref_id": "BIBREF16" }, { "start": 386, "end": 407, "text": "(Berger et al., 1996)", "ref_id": "BIBREF2" }, { "start": 622, "end": 640, "text": "Pang et al. (2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "Before we train our classifier models, we apply data preprocessing to all datasets. This is done because tweets usually contain many informal forms of text that can be difficult for our classifiers to recognize. We apply the following preprocessing steps to our training data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "\u2022 Filtering: we remove URL links, Twitter user accounts (started with '@'), retweet (RT) information, and punctuation marks. 
All tweets are normalized to lower case and repeated characters are replaced by a single character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "\u2022 Tokenization: we split each tweet on whitespace.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "\u2022 Normalization: we replace each abbreviation found in a tweet with its actual meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "\u2022 Handling negation: each negation term is attached to the word that follows it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4.2" }, { "text": "As we mentioned previously, we see the problem of opinion tweet extraction as a binary classification problem. Thus, we assume that a tweet can be classified into one of two categories: opinion tweet or non-opinion tweet. For the testing data, we use DATASET4, which consists of 303 neutral/non-opinion tweets and 334 opinion tweets (i.e. the combination of positive and negative tweets). For the training data, we only have 12,614 non-opinion tweets from DATASET3, but a much larger set of opinion tweets from LEX-DATA, CLS-DATA, or EMOTDATA, depending on the method we apply. To keep the training data balanced, we randomly selected 12,614 opinion tweets from LEX-DATA, CLS-DATA, or EMOTDATA. Moreover, we use precision, recall, and F1-score as our evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations on Opinion Tweet Extraction", "sec_num": "4.3" }, { "text": "First, we measured the performance of the classifiers trained on the data constructed by the method proposed by Pak and Paroubek (2010). We refer to this method as BASELINE. 
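The preprocessing steps listed above can be sketched as follows. This is our own illustration: the regular expressions and the small Indonesian negation list are assumptions rather than the authors' exact rules, and the abbreviation normalization step is omitted since it requires a lookup dictionary.

```python
import re

# Illustrative preprocessing sketch (our own assumptions, not the
# authors' exact rules). Returns the token list for one tweet.
NEGATIONS = {'tidak', 'bukan', 'jangan'}  # common Indonesian negation terms

def preprocess(tweet):
    t = tweet.lower()                        # normalize to lower case
    t = re.sub(r'http\S+', ' ', t)           # remove URL links
    t = re.sub(r'@\w+', ' ', t)              # remove user accounts
    t = re.sub(r'\brt\b', ' ', t)            # remove retweet (RT) markers
    t = re.sub(r'[^\w\s]', ' ', t)           # remove punctuation marks
    t = re.sub(r'(\w)\1{2,}', r'\1', t)      # collapse repeated characters
    tokens = t.split()                       # tokenize on whitespace
    out = []
    i = 0
    while i < len(tokens):
        # handling negation: attach a negation term to the following word
        if tokens[i] in NEGATIONS and i + 1 < len(tokens):
            out.append(tokens[i] + '_' + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out
```

Collapsing only runs of three or more identical characters is a conservative choice here, so that ordinary double letters survive.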
Furthermore, the non-opinion training data consists of all tweets in DATASET3 and the opinion training data consists of 12,614 tweets randomly selected from EMOTDATA. Second, we evaluated the classifiers trained on the data constructed using our proposed method. In this case, we ran experiments using the two different seed corpus construction techniques. We refer to the method that uses the clustering-based technique (for constructing the seed corpus) as CLS-METHOD and the method that uses the opinion lexicon as LEX-METHOD. The opinion training data was constructed in the same manner as before. This time, we used LEX-DATA and CLS-DATA to randomly select 12,614 opinion tweets for LEX-METHOD and CLS-METHOD, respectively.", "cite_spans": [ { "start": 112, "end": 135, "text": "Pak and Paroubek (2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluations on Opinion Tweet Extraction", "sec_num": "4.3" }, { "text": "Table 7 shows the results of the experiment. We can see that the classifiers trained on EMOTDATA, which was constructed using BASELINE, actually perform quite well. The Maximum Entropy model achieved 76.56% in terms of F1-score, which is far from the score of the Naive Bayes model. It is worth noting that the classifiers trained on LEX-DATA outperform those trained on EMOTDATA by over 3% and 4% for the Naive Bayes and Maximum Entropy models, respectively, which means that LEX-METHOD is better than BASELINE. However, the situation is different for CLS-METHOD. This is actually no surprise since LEX-METHOD uses good prior knowledge obtained from the opinion lexicon. 
This might also suggest that the seed corpus construction is an important aspect in our method.", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "After we extract the opinion tweets, we then classify the sentiment type of the opinion tweets into two classes: positive and negative. In the first scenario, we evaluated the classifiers trained on both positive and negative tweets from EMOTDATA since we aimed at comparing BASELINE against our proposed method. In the second scenario, we then measured the performance of the classifiers when they were trained on the data constructed by our method (i.e. LEX-METHOD and CLS-METHOD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations on Tweet Polarity Classification", "sec_num": "4.4" }, { "text": "For the testing data, both scenarios use DATASET4, which consists of 202 positive tweets and 132 negative tweets. We left out the neutral/non-opinion tweets. For the training data, the first scenario uses all tweets in EMOTDATA. However, we cannot directly use all tweets in LEX-DATA or CLS-DATA for the second scenario since LEX-DATA and CLS-DATA are much bigger than EMOTDATA. For a fair comparison, we therefore randomly selected 276,970 positive tweets and 103,740 negative tweets from LEX-DATA and CLS-DATA, respectively, and subsequently used them for the second scenario. Moreover, we use classification accuracy as our metric in this experiment. Table 8 shows the results. We can see that the classifiers trained on LEX-DATA significantly outperform those trained on EMOTDATA by over 7% and 13% for the Naive Bayes and Maximum Entropy models, respectively. Just like the previous experiment, CLS-METHOD is no better than LEX-METHOD and BASELINE. 
We also suggest that the Maximum Entropy model is a good choice for our sentiment analysis task since the results show that it is mostly superior to the Naive Bayes model.", "cite_spans": [], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 8", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Evaluations on Tweet Polarity Classification", "sec_num": "4.4" }, { "text": "We further investigated the effect of increasing the training data size on the accuracy of the classifiers. In this case, we only examined LEX-DATA since LEX-METHOD yielded the best result before. Figure 1 shows the results. Training data of size N means that we use N/2 positive tweets and N/2 negative tweets as the training instances. As we can see, learning from large training data plays an important role in the tweet polarity classification task. However, we also notice a strange case. When the size of the training data is increased at the last point, the performance of Naive Bayes significantly drops. This should not be the case for Naive Bayes. We admit that the quality of our training data is far from perfect since it is automatically constructed. As a result, our training data is still prone to noise, and we suspect that this is why the performance of Naive Bayes drops at the last point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We propose a method to automatically construct training instances for sentiment analysis and opinion mining on Indonesian tweets. First, we automatically build a labeled seed corpus using an opinion-lexicon-based technique and a clustering-based technique. Second, we harness the labeled seed corpus to obtain more training instances from a huge set of unlabeled tweets by employing a classifier model whose parameters are estimated using the EM framework. 
For the evaluation, we test our automatically built corpus on the opinion tweet extraction and tweet polarity classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Works", "sec_num": "5" }, { "text": "Our experiment shows that our proposed method outperforms the baseline system, which merely uses emoticons as the features for automatically building the sentiment corpus. When tested on the opinion tweet extraction and tweet polarity classification tasks, the classifier models trained on the training data constructed using our proposed method were able to extract opinionated tweets as well as classify tweet polarity with high performance. Moreover, we found that the seed corpus construction technique is an important aspect of our method, since the evaluation shows that prior knowledge from the opinion lexicon can help build better training instances than the clustering-based technique alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Works", "sec_num": "5" }, { "text": "In the future, this corpus can be used as one of the basic resources for the sentiment analysis task, especially for the Indonesian language. 
For the sentiment analysis task itself, it will be interesting to investigate various features besides unigrams that may be useful in detecting sentiment in Indonesian Twitter messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Works", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentiment analysis of twitter data", "authors": [ { "first": "Apoorv", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Boyi", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Vovsha", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Languages in Social Media, LSM '11", "volume": "", "issue": "", "pages": "30--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment analysis of twitter data. In Proceedings of the Workshop on Languages in Social Media, LSM '11, pages 30-38, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sentiment analysis on indonesian tweet", "authors": [ { "first": "Paulina", "middle": [], "last": "Aliandu", "suffix": "" } ], "year": 2014, "venue": "The Proceedings of The 7th ICTS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paulina Aliandu. 2014. Sentiment analysis on indonesian tweet. 
In The Proceedings of The 7th ICTS.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Comput. Linguist", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist., 22(1):39-71, March.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Classifying sentiment in microblogs: Is brevity an advantage?", "authors": [ { "first": "Adam", "middle": [], "last": "Bermingham", "suffix": "" }, { "first": "Alan", "middle": [ "F" ], "last": "Smeaton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM '10", "volume": "", "issue": "", "pages": "1833--1836", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Bermingham and Alan F. Smeaton. 2010. Clas- sifying sentiment in microblogs: Is brevity an advan- tage? In Proceedings of the 19th ACM International Conference on Information and Knowledge Manage- ment, CIKM '10, pages 1833-1836, New York, NY, USA. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sentiment knowledge discovery in twitter streaming data", "authors": [ { "first": "Albert", "middle": [], "last": "Bifet", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 13th International Conference on Discovery Science, DS'10", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Bifet and Eibe Frank. 
2010. Sentiment knowl- edge discovery in twitter streaming data. In Proceed- ings of the 13th International Conference on Discov- ery Science, DS'10, pages 1-15, Berlin, Heidelberg. Springer-Verlag.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using text mining to analyze mobile phone provider service quality (case study: Social media twitter)", "authors": [ { "first": "Calvin", "middle": [], "last": "", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Setiawan", "suffix": "" } ], "year": 2014, "venue": "International Journal of Machine Learning and Computing", "volume": "4", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Calvin and Johan Setiawan. 2014. Using text mining to analyze mobile phone provider service quality (case study: Social media twitter). International Journal of Machine Learning and Computing, 4(1), February.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Enhanced sentiment learning using twitter hashtags and smileys", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10", "volume": "", "issue": "", "pages": "241--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 241-249, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Maximum likelihood from incomplete data via the em algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B, 39(1):1-38.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Exploiting social relations for sentiment analysis in microblogging", "authors": [ { "first": "Xia", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM '13", "volume": "", "issue": "", "pages": "537--546", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Ex- ploiting social relations for sentiment analysis in mi- croblogging. In Proceedings of the Sixth ACM Inter- national Conference on Web Search and Data Mining, WSDM '13, pages 537-546, New York, NY, USA. 
ACM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Target-dependent twitter sentiment classification", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment clas- sification. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 151-160, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Twitter sentiment analysis: The good the bad and the omg! In", "authors": [ { "first": "Efthymios", "middle": [], "last": "Kouloumpis", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In Lada A. Adamic, Ricardo A. Baeza-Yates, and Scott Counts, editors, ICWSM. 
The AAAI Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Application of a clustering method on sentiment analysis", "authors": [ { "first": "Gang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "J. Inf. Sci", "volume": "38", "issue": "2", "pages": "127--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gang Li and Fei Liu. 2012. Application of a clustering method on sentiment analysis. J. Inf. Sci., 38(2):127- 139, April.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data. Data-Centric Systems and Applications", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu. 2007. Web Data Mining: Exploring Hyper- links, Contents, and Usage Data. Data-Centric Sys- tems and Applications. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An interactive interface for visualizing events on twitter", "authors": [ { "first": "Andrew", "middle": [ "J" ], "last": "Mcminn", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Tsvetan", "middle": [], "last": "Yordanov", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Patterson", "suffix": "" }, { "first": "Rrobi", "middle": [], "last": "Szk", "suffix": "" }, { "first": "Jesus", "middle": [ "A" ], "last": "Rodriguez Perez", "suffix": "" }, { "first": "Joemon", "middle": [ "M" ], "last": "Jose", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '14", "volume": "", "issue": "", "pages": "1271--1272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew J. 
McMinn, Daniel Tsvetkov, Tsvetan Yor- danov, Andrew Patterson, Rrobi Szk, Jesus A. Ro- driguez Perez, and Joemon M. Jose. 2014. An interac- tive interface for visualizing events on twitter. In Pro- ceedings of the 37th International ACM SIGIR Confer- ence on Research & Development in Information Retrieval, SIGIR '14, pages 1271-1272, New York, NY, USA. ACM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Semeval-2013 task 2: Sentiment analysis in twitter", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", "volume": "2", "issue": "", "pages": "312--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. Semeval-2013 task 2: Sentiment analysis in twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Se- mantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA, June. Association for Compu- tational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text classification from labeled and unlabeled documents using em", "authors": [ { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "Andrew", "middle": [ "Kachites" ], "last": "Mccallum", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Mach. 
Learn", "volume": "39", "issue": "2-3", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classifica- tion from labeled and unlabeled documents using em. Mach. Learn., 39(2-3):103-134, May.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Twitter as a corpus for sentiment analysis and opinion mining", "authors": [ { "first": "Alexander", "middle": [], "last": "Pak", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Paroubek", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twit- ter as a corpus for sentiment analysis and opinion mining. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Opinion mining and sentiment analysis", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "Found. Trends Inf. Retr", "volume": "2", "issue": "1-2", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. 
Retr., 2(1-2):1- 135, January.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Thumbs up?: Sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing", "volume": "10", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing -Volume 10, EMNLP '02, pages 79-86, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tweet analysis for real-time event detection and earthquake reporting system development. Knowledge and Data Engineering", "authors": [ { "first": "T", "middle": [], "last": "Sakaki", "suffix": "" }, { "first": "M", "middle": [], "last": "Okazaki", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on", "volume": "25", "issue": "4", "pages": "919--931", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Sakaki, M. Okazaki, and Y. Matsuo. 2013. Tweet analysis for real-time event detection and earthquake reporting system development. 
Knowledge and Data Engineering, IEEE Transactions on, 25(4):919-931, April.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Predicting elections with twitter: What 140 characters reveal about political sentiment", "authors": [ { "first": "A", "middle": [], "last": "Tumasjan", "suffix": "" }, { "first": "T", "middle": [ "O" ], "last": "Sprenger", "suffix": "" }, { "first": "P", "middle": [ "G" ], "last": "Sandner", "suffix": "" }, { "first": "I", "middle": [ "M" ], "last": "Welpe", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media", "volume": "", "issue": "", "pages": "178--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Tumasjan, T.O. Sprenger, P.G. Sandner, and I.M. Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In Proceedings of the Fourth International AAAI Confer- ence on Weblogs and Social Media, pages 178-185.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentiment lexicon generation for an underresourced language", "authors": [ { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Ibrahim", "suffix": "" }, { "first": "Mirna", "middle": [], "last": "Adriani", "suffix": "" } ], "year": 2014, "venue": "International Journal of Computational Linguistics and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clara Vania, Mohammad Ibrahim, and Mirna Adriani. 2014. Sentiment lexicon generation for an under- resourced language. International Journal of Compu- tational Linguistics and Applications (IJCLA) (To Ap- pear).", "links": null } }, "ref_entries": { "FIGREF1": { "text": "The effect of training data size.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "content": "
Dataset Label #Tweets
DATASET1 Unlabeled 4,291,063
DATASET2 Unlabeled 1,000,000
DATASET3 Neutral 12,614
DATASET4 Pos, Neg, Neutral 637
Total 5,304,314
", "html": null, "type_str": "table", "text": "shows the overall statistics of our Twitter corpus.", "num": null }, "TABREF1": { "content": "", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF2": { "content": "
Sentiment Type #Tweets
Positive 202
Negative 132
Neutral 303
Total 637
", "html": null, "type_str": "table", "text": "and 3 shows the details of DATASET4.", "num": null }, "TABREF3": { "content": "
Domain #Tweets
Telephone operators 94
Public transportations 53
Government companies 11
Figures/People 61
Technologies 12
Sports and Athletes 41
Actress 29
Films 67
Food and Restaurants 34
News 214
Others 21
Total 637
", "html": null, "type_str": "table", "text": "The statistics of DATASET4", "num": null }, "TABREF4": { "content": "", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF6": { "content": "
", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF8": { "content": "
", "html": null, "type_str": "table", "text": "The statistics of CLS-DATA", "num": null }, "TABREF9": { "content": "
", "html": null, "type_str": "table", "text": "The statistics of EMOTDATA", "num": null }, "TABREF11": { "content": "
", "html": null, "type_str": "table", "text": "The evaluation results for opinion Tweet extraction task", "num": null }, "TABREF13": { "content": "
", "html": null, "type_str": "table", "text": "The evaluation results for Tweet polarity classification task", "num": null } } } }