{ "paper_id": "S16-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:26:55.126106Z" }, "title": "SteM at SemEval-2016 Task 4: Applying Active Learning to Improve Sentiment Classification", "authors": [ { "first": "Stefan", "middle": [], "last": "R\u00e4biger", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sabanci University", "location": { "settlement": "Istanbul", "country": "Turkey" } }, "email": "stefan@sabanciuniv.edu" }, { "first": "Mishal", "middle": [], "last": "Kazmi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sabanci University", "location": { "settlement": "Istanbul", "country": "Turkey" } }, "email": "mishalkazmi@sabanciuniv.edu" }, { "first": "Y\u00fccel", "middle": [], "last": "Sayg\u0131n", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sabanci University", "location": { "settlement": "Istanbul", "country": "Turkey" } }, "email": "ysaygin@sabanciuniv.edu" }, { "first": "Peter", "middle": [], "last": "Sch\u00fcller", "suffix": "", "affiliation": { "laboratory": "", "institution": "Marmara University", "location": { "settlement": "Istanbul", "country": "Turkey" } }, "email": "peter.schuller@marmara.edu.tr" }, { "first": "Myra", "middle": [], "last": "Spiliopoulou", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our approach to the Se-mEval 2016 task 4, \"Sentiment Analysis in Twitter\", where we participated in subtask A. Our system relies on AlchemyAPI and Senti-WordNet to create 43 features based on which we select a feature subset as final representation. Active Learning then filters out noisy tweets from the provided training set, leaving a smaller set of only 900 tweets which we use for training a Multinomial Naive Bayes classifier to predict the labels of the test set with an F1 score of 0.478.", "pdf_parse": { "paper_id": "S16-1007", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our approach to the Se-mEval 2016 task 4, \"Sentiment Analysis in Twitter\", where we participated in subtask A. Our system relies on AlchemyAPI and Senti-WordNet to create 43 features based on which we select a feature subset as final representation. Active Learning then filters out noisy tweets from the provided training set, leaving a smaller set of only 900 tweets which we use for training a Multinomial Naive Bayes classifier to predict the labels of the test set with an F1 score of 0.478.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Gaining an overview of opinions on recent events or trends is an appealing feature of Twitter. For example, receiving real-time feedback from the public about a politician's speech provides insights to media for the latest polls, analysts and interested individuals including the politician herself. However, detecting tweet sentiment still poses a challenge due to the frequent use of informal language, acronyms, neologisms which constantly change, and the shortness of tweets, which are limited to 140 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "SemEval's subtask 4A (Nakov et al., 2016) deals with the sentiment classification of single tweets into one of the classes \"positive\", \"neutral\" or \"negative\". 
Concretely, a training set of 5481 tweets and a development set of 1799 tweets were given, and the sentiment of 32009 tweets in a test set had to be predicted; predictions were evaluated using the F1-score. The distribution of labels for the given datasets is depicted in Table 1 . Tweets with positive polarity outnumber the other two classes, and negative tweets are the rarest. Initially, the organizers provided 6000 tweets for the training set and 2000 tweets for the development set, but only the tweet IDs were released to abide by the Twitter terms of agreement. By the time we downloaded the data, around 10% of the tweets (519 in the training set, 201 in the development set) were not available anymore. For further details about the labeling process and the datasets see (Nakov et al., 2016) .", "cite_spans": [ { "start": 21, "end": 41, "text": "(Nakov et al., 2016)", "ref_id": "BIBREF11" }, { "start": 958, "end": 978, "text": "(Nakov et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 424, "end": 431, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexicon-based approaches have done very well in this competition over the past years, i.e., the winners of the years 2013-2015 (Mohammad et al., 2013; Miura et al., 2014; Hagen et al., 2015) relied heavily on them. Our goal is to explore alternatives and to complement lexicon-based strategies. The SteM system performs preprocessing, including canonicalization of tweets, and based on that we extract 43 features as our representation; some features are based on AlchemyAPI 1 and SentiWordNet (Esuli and Sebastiani, 2006) . We choose 28 of the features as our final representation and learn three classifiers based on different subsets of these 28 features. We refer to the latter as feature subspaces or subspaces hereafter. However, independently of the subspace we use, we face the fact that tweet datasets inherently contain noise, and all subspaces will, to a greater or lesser extent, be affected by this noise. To alleviate this problem, we propose to concentrate on only a few of the labeled tweets, those likely to be most discriminative. For this purpose, we use Active Learning (AL) (Settles, 2012) , as explained in Section 6. For AL, we set up a \"budget\", translating [Table 1 - Distribution of sentiment labels in the datasets (total / positive / neutral / negative): Train 5481 / 2817 / 1882 / 782; Dev 1799 / 755 / 685 / 359]", "cite_spans": [ { "start": 116, "end": 120, "text": "2013", "ref_id": "BIBREF9" }, { "start": 121, "end": 126, "text": "-2015", "ref_id": "BIBREF2" }, { "start": 127, "end": 149, "text": "(Mohammad et al., 2013", "ref_id": "BIBREF9" }, { "start": 150, "end": 169, "text": "Miura et al., 2014;", "ref_id": "BIBREF8" }, { "start": 170, "end": 189, "text": "Hagen et al., 2015)", "ref_id": "BIBREF2" }, { "start": 493, "end": 521, "text": "(Esuli and Sebastiani, 2006)", "ref_id": "BIBREF1" }, { "start": 1085, "end": 1100, "text": "(Settles, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 1172, "end": 1232, "text": "Train 5481 2817 1882 782 Dev 1799 755 685 359 Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "to the maximum number of tweets for which labels are requested. The term \"budget\" is motivated by the fact that labeling is a costly human activity. In our study, this budget is set to 900. On those 900 tweets we learn a classifier for each of the three feature subspaces to predict the labels of the test set. 
The remainder of this paper is organized according to the pipeline of the SteM system. Section 2 explains the preprocessing steps, Section 3 describes our features, Section 4 describes how we select our feature subset for the final representation, Section 5 gives details about learning the classifiers on the different subspaces, Section 6 describes our Active Learning component, and Section 7 outlines the experiments we performed to select the best model for the competition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We note that while preprocessing tweets, we also extract related features. These features are described in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2" }, { "text": "Removing URLs and mentions, replacing slang and abbreviations: we first remove Twitter handles (@username) and URLs. We remove dates and numbers with regular expressions and canonicalize common abbreviations, slang, and negations using a lexicon we assembled from online resources (http://searchcrm.techtarget.com/definition/Twitter-chat-and-text-messaging-abbreviations). Our list of negations encompasses: don't, mustn't, shouldn't, isn't, aren't, wasn't, weren't, not, couldn't, won't, can't, wouldn't. We replace these with the respective formal forms, and we do the same for their apostrophe-free variants (e.g., 'cant'), which are more likely to occur in hashtags. In total we use 114 abbreviations; some are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 627, "end": 634, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Preprocessing", "sec_num": "2" }, { "text": "Spelling correction: the remaining unknown words are replaced by the most likely alternative according to the PyEnchant dictionary (https://pypi.python.org/pypi/pyenchant). Splitting hashtags into words: instead of removing hashtags, we chunk them as follows. If a hashtag consists of a single word that exists in our dictionary, we only remove the hashtag symbol. In case of camel-case hashtags (#HelloWorld), we split the words at the transitions from upper to lower case or vice versa. Otherwise we try to recover the multiple words in the following manner. If a hashtag contains fewer than 22 characters, we apply an exhaustive search to find a combination of words that all exist in a dictionary. For hashtags longer than 22 characters the exhaustive approach takes too long (about 10s), hence we opt for a greedy algorithm instead: we start at the end of the hashtag and traverse to the front, trying to find the longest words that exist in a dictionary (a sketch of this greedy step is given below). In case not all parts can be resolved, we only keep the recognized words and discard the remainder. Furthermore, we remove emoticons and replace elongated characters by a single occurrence, e.g., woooooow \u2192 wow.", "cite_spans": [ { "start": 132, "end": 133, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2" }, { "text": "Determine POS tags: on the resulting canonicalized text, we determine part-of-speech (POS) tags using the Stanford POS tagger (Manning et al., 2014). 
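The greedy splitting step for long hashtags can be made concrete with a short sketch. The following is a minimal illustration rather than our exact implementation; the dictionary is a plain set of known words standing in for the PyEnchant lookup, and the function name is ours for illustration only:

```python
# Minimal sketch of greedy hashtag splitting for hashtags longer than 22
# characters: traverse from the end to the front, repeatedly taking the
# longest suffix that is a dictionary word; characters that cannot be
# resolved are discarded, as described above.
def split_hashtag_greedy(hashtag, dictionary):
    text = hashtag.lstrip("#").lower()
    words = []
    end = len(text)
    while end > 0:
        for start in range(end):             # start = 0 first, i.e., longest suffix first
            if text[start:end] in dictionary:
                words.append(text[start:end])
                end = start
                break
        else:
            end -= 1                         # no dictionary word ends here: drop one character
    return list(reversed(words))

# Example (assuming these words are in the dictionary):
# split_hashtag_greedy("#helloworldagain", {"hello", "world", "again"})
# returns ['hello', 'world', 'again']
```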
Finally, we eliminate any punctuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2" }, { "text": "Discard unbiased polarity words: since some of the words in the training corpus are uniformly distributed across the three labels, encountering such words in any tweet does not allow us to learn anything about the overall tweet sentiment. Hence, we compile a list of these words and exclude them when calculating tweet polarities. We detect such words with the help of categorical proportional difference (CPD) (Simeon and Hilderman, 2008) , which describes how much a word w contributes to distinguishing the different classes. It is calculated as CPD_w = |A - B| / (A + B), where A is the number of occurrences of the word w in one of the three classes and B is the number of its occurrences in the remaining two classes. We compute this value for each of the three classes separately and take the maximum as the result. Values close to 1 indicate a strong bias towards one of the classes, i.e., the word occurs frequently with a specific class, while values close to 0 signal that w is almost uniformly distributed across the classes, i.e., not particularly associated with any of them. If this value is below a fixed threshold of 0.6, we exclude the word from sentiment computation. Note that we consider only the absolute value of the numerator in our equation, similar to (O'Keefe and Koprinska, 2009) , while this is not the case in the original paper. The reason is that the direction of the association of a word with a class is not important to us.", "cite_spans": [ { "start": 411, "end": 439, "text": "(Simeon and Hilderman, 2008)", "ref_id": "BIBREF16" }, { "start": 1339, "end": 1365, "text": "Keefe and Koprinska, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2" },
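To make the CPD filter just described concrete, the following minimal sketch computes CPD_w from per-class occurrence counts; the function name and the example counts are purely illustrative, and the 0.6 threshold is the one stated above:

```python
# Sketch of the categorical proportional difference (CPD) filter.
# 'counts' maps each class label to the number of occurrences of a word w
# in tweets of that class. For each class c, A is the count in c and B the
# count in the other two classes; CPD_w is the maximum of |A - B| / (A + B).
def cpd(counts):
    total = sum(counts.values())             # A + B is the same for every class
    return max(abs(a - (total - a)) / total for a in counts.values())

# A word occurring almost exclusively with one class is kept:
assert cpd({"positive": 9, "neutral": 1, "negative": 0}) >= 0.6
# A word spread almost evenly across the classes falls below the 0.6
# threshold and is excluded from sentiment computation:
assert cpd({"positive": 4, "neutral": 3, "negative": 3}) < 0.6
```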
{ "text": "In this section we describe the extracted features and motivate our choices. Table 3 presents an overview of the 43 extracted features. Column 'Used' lists the features that comprise our final representation after the feature subset selection described in Section 4. We explain our reasoning for the different feature subspaces (column 'Subspace') in Section 5.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Extracted features", "sec_num": "3" }, { "text": "Since emoticons correlate with sentiment, we exploit this knowledge in our features. To do so, we create a lexicon encompassing 81 common positive and negative emoticons based on Wikipedia. We manually labeled these emoticons as either expressing positive or negative sentiment. While preprocessing we extract the respective emoticon features 1-2. We assign features 3-4 to the same category, as all four of them are easy to identify in a tweet. Hashtags share a similar relationship with sentiment as emoticons, i.e., they correlate with the overall tweet sentiment (Mohammad, 2012). Thus, we extract features 6-16, which describe the sentiment of hashtags. Exclamation marks also hint at amplified overall tweet sentiment, which is covered by feature 17. The length of a tweet affects whether it contains sentiment: longer tweets are more likely to contain mixed polarity and are hence more difficult to label. Features 18-20 deal with this issue. We expect sentiment-bearing tweets to exhibit a different composition of sentence parts, which is reflected in features 21-24.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracted features", "sec_num": "3" }, { "text": "We query AlchemyAPI about sentiment to benefit from a system that is known to yield accurate results, for example for the task of extracting entity-level sentiment (Saif et al., 2012) . Moreover, AlchemyAPI allows retrieving sentiment at different levels of granularity, e.g., for a whole tweet or for single entities within a tweet. Features 25-32 describe the relevant sentiment information from this online resource. The remaining features 33-43 address sentiment on the whole tweet.", "cite_spans": [ { "start": 163, "end": 182, "text": "(Saif et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Extracted features", "sec_num": "3" }, { "text": "Note that we normalize features 6-8, 27, and 33-35 by taking their absolute values and limiting them to the interval [0, 1]. For extracting tweet and hashtag sentiment, we consider negations, diminishers (\"a little\"), and intensifiers (\"very\") when summing up the polarity scores of the separate words. To account for the correlation of emoticons with the overall tweet sentiment, we multiply the respective positive or negative overall tweet sentiment by 1.5 if emoticons are present. If multiple exclamation marks exist in a sentence, the resulting value is additionally multiplied by 1.5 times the number of their occurrences. We query SentiWordNet to determine word polarities, obtaining a triple of positive, neutral, and negative sentiment scores per word. This allows us to classify a word as positive/neutral/negative according to its prevalent sentiment. Similarly, we express the overall tweet sentiment with a triple representing positive, neutral, and negative polarity. Values close to 0 indicate that only little sentiment is contained in a tweet, while larger ones imply stronger sentiment. In case of negations, we employ a simple sliding window approach that switches the positive and negative sentiment of the four succeeding words (a sketch of this windowing scheme is given below). If a tweet ends with \"not\", e.g., \"I like you - not\", we also switch its overall sentiment, as this is a common pattern in tweets indicating sarcasm. The sentiment of capitalized and elongated words is amplified; we consider a word elongated if the same letter occurs more than twice consecutively. In case of intensifiers and diminishers, the sentiment of the four succeeding words is increased in the former case by multiplying it by 1.5 and decreased in the latter case by multiplying it by 0.5. However, if \"a bit\" is encountered, the sentiment of the four preceding words is updated as well, e.g., \"I like you a bit\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracted features", "sec_num": "3" },
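The windowing scheme for negations, intensifiers, and diminishers described above can be sketched as follows. This is a simplified sketch rather than our full implementation: the word lists are illustrative, and the polarity argument is a stand-in for the SentiWordNet lookup (capitalization, elongation, and \"a bit\" handling are omitted for brevity):

```python
# Simplified sketch of the sliding-window handling of negations (flip the
# positive/negative scores of the four succeeding words) and intensifiers/
# diminishers (scale them by 1.5 or 0.5). 'polarity' returns a
# (positive, negative) score pair per word.
NEGATIONS = {"not"}
INTENSIFIERS = {"very"}      # scale the following scores by 1.5
DIMINISHERS = {"slightly"}   # scale the following scores by 0.5
WINDOW = 4

def tweet_polarity(tokens, polarity):
    pos_total = neg_total = 0.0
    flip_left = scale_left = 0
    scale = 1.0
    for tok in tokens:
        if tok in NEGATIONS:
            flip_left = WINDOW               # affect the next four words
            continue
        if tok in INTENSIFIERS or tok in DIMINISHERS:
            scale = 1.5 if tok in INTENSIFIERS else 0.5
            scale_left = WINDOW
            continue
        pos, neg = polarity(tok)
        if scale_left:
            pos, neg, scale_left = pos * scale, neg * scale, scale_left - 1
        if flip_left:
            pos, neg, flip_left = neg, pos, flip_left - 1
        pos_total += pos
        neg_total += neg
    if tokens and tokens[-1] == "not":       # "I like you - not" sarcasm pattern
        pos_total, neg_total = neg_total, pos_total
    return pos_total, neg_total

# Example with a toy lexicon: "i do not like you" yields (0.0, 0.8).
# lookup = lambda w: {"like": (0.8, 0.0)}.get(w, (0.0, 0.0))
# tweet_polarity("i do not like you".split(), lookup)
```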
{ "text": "We first use Weka (Hall et al., 2009) to determine the merit of different feature subsets, computing F1-scores with the help of 10-fold cross-validation on the training set. This leaves us with a feature subset comprising 10 features. However, we must consider the fact that we are using different feature subspaces as opposed to the single representation for which Weka selected the features. Hence, our approach might benefit from features not selected by Weka, while some of the currently selected features could affect the performance of our system negatively. To investigate this, we work with SteM itself and perform 10-fold cross-validation on the training set to monitor the effects on the F1-scores. We first try to reduce the feature subset determined by Weka further by removing features one at a time. This yields a feature subset encompassing 7 features. We then add each feature that Weka discarded back into SteM individually and observe the effects on the F1-scores. Following this procedure, we add 21 more features to our subset, leading to the final tweet representation with 28 features. The features we use are listed in column 'Used' in Table 3 .", "cite_spans": [ { "start": 18, "end": 37, "text": "(Hall et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1137, "end": 1144, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Selecting a feature subset", "sec_num": "4" }, { "text": "We employ Scikit-learn (Pedregosa et al., 2011) for building our classifiers. Since some of our features occur only in a small portion of the dataset, we build classifiers on different feature subspaces which we manually defined (column 'Subspace' in Table 3 ). Otherwise these underrepresented features would not be selected as informative during feature subset selection, although they actually help discriminate the different classes. For example, only 10-15% of the tweets in the training, development, and test set contain hashtags. Likewise, few tweets include emoticons. Hence, we consider emoticons and hashtags as separate feature subspaces. In total we learn three classifiers for three different subspaces: default, default + emoticons, and default + hashtags. We choose Multinomial Naive Bayes (MNB) as our classifier, as it is competitive with Logistic Regression and linear SVM while being fast to train, which allows us to carry out the multiple time-consuming experiments needed to quantify the merit of different AL strategies, feature subsets, etc. With similar reasoning we decide against ensemble methods for now, as we first want to obtain reliable results for single classifiers before studying ensembles.", "cite_spans": [ { "start": 23, "end": 47, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Learning a model", "sec_num": "5" }, { "text": "AL is motivated well in (Settles, 2010) : \"The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns.\" In (Martineau et al., 2014) , the authors apply AL to detect misclassified samples and let experts relabel those instances to reduce noise. We utilize AL in a similar fashion, but instead of relabeling tweets, we discard them. We set a fixed budget for the AL strategy, which indicates for how many tweets the classifier may use the label for training after starting from a small seed set of labeled tweets.", "cite_spans": [ { "start": 42, "end": 57, "text": "(Settles, 2010)", "ref_id": "BIBREF15" }, { "start": 252, "end": 276, "text": "(Martineau et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Active learning", "sec_num": "6" }, { "text": "As AL strategies we pick uncertainty sampling (UC) and certainty sampling (C), and choose MNB as the classifier. 
We calculate certainty/uncertainty according to two different criteria, namely margin and confidence (Li et al., 2012) . In margin-based UC the tweet with the highest margin is selected for labeling, while in confidence-based UC the instance with the least confidence is chosen. Contrary to UC, C always selects the tweet about which the classifier is most confident in the case of confidence, or the tweet with the lowest margin, respectively. We initialize the seed set with approximately the same number of tweets from each of the three classes, choosing the tweets randomly per class. Whenever an AL strategy selects a tweet to be labeled, we reveal its actual label. To identify a fixed budget for our AL strategies, we test different configurations of budgets and seed set sizes on the training set. For each run we perform 10-fold cross-validation and average the results over three executions to account for chance; then the labels of the development set are predicted. As a baseline method we choose random sampling, which selects arbitrary tweets to be labeled. After conducting multiple experiments, we find that initializing the seed set with 500 tweets, setting the budget to 400 tweets, and choosing confidence-based UC yields the highest weighted F1-scores with F1 = 0.48. Our experimental evaluation of the different AL methods is visualized in Figure 1 , using a seed set of 500 tweets and a budget of 400 tweets for which the strategies request the revealed labels. Although margin-based C seems to outperform confidence-based UC in this evaluation, we select the latter strategy: tests on the development set using only the tweets selected by the respective AL strategies revealed that C achieved an F1-score of 0.30 while confidence-based UC achieved around 0.48. This inferiority of C on our data confirms reports from the literature, see, e.g., (Kumar et al., 2010; Ambati et al., 2011) .", "cite_spans": [ { "start": 209, "end": 226, "text": "(Li et al., 2012)", "ref_id": "BIBREF5" }, { "start": 1943, "end": 1963, "text": "(Kumar et al., 2010;", "ref_id": "BIBREF4" }, { "start": 1964, "end": 1984, "text": "Ambati et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 1457, "end": 1465, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Active learning", "sec_num": "6" },
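To make the confidence-based UC loop concrete, the following is a minimal sketch using Scikit-learn's MultinomialNB. The variable names (X_seed, y_seed, X_pool, y_pool) are illustrative, and the feature matrices are assumed non-negative as MNB requires; with a seed set of 500 tweets and the default budget of 400, it would yield the 900 training tweets used in the next section:

```python
# Minimal sketch of confidence-based uncertainty sampling (UC): starting
# from a labeled seed set, repeatedly query the pool tweet whose most
# probable class has the lowest probability, reveal its actual label,
# and retrain, until the budget is exhausted.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def confidence_based_uc(X_seed, y_seed, X_pool, y_pool, budget=400):
    X_train, y_train = list(X_seed), list(y_seed)
    pool_X, pool_y = list(X_pool), list(y_pool)
    clf = MultinomialNB()
    for _ in range(min(budget, len(pool_X))):
        clf.fit(np.asarray(X_train), np.asarray(y_train))
        proba = clf.predict_proba(np.asarray(pool_X))
        confidence = proba.max(axis=1)       # probability of the predicted class
        i = int(confidence.argmin())         # least confident tweet
        X_train.append(pool_X.pop(i))        # reveal its label and add it
        y_train.append(pool_y.pop(i))
    clf.fit(np.asarray(X_train), np.asarray(y_train))
    return clf, X_train, y_train
```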
{ "text": "In this section we evaluate our approach on the development set, as no labels for the test set are available. As AlchemyAPI is a full-fledged system, we use the 8 features extracted from it (features 25-32 in Table 3 ) as a baseline in our experiments and compare it with SteM using a) the full training set and b) the reduced training set obtained by confidence-based UC as explained in the previous section. We then reapply the learning procedure described in Section 5 to obtain our F1-scores. The results are depicted in Table 4 . Initially, our system achieves an F1-score of 0.454 using all training instances. After selecting the 900 most informative tweets with confidence-based UC as described in the previous section, the score increases to 0.473. We observe a similar trend for our baseline and note that it is outperformed by SteM, although the margin shrinks when the number of training tweets is reduced. When analyzing the corresponding confusion matrix of SteM with 900 training tweets in Table 5 , it becomes obvious that SteM fails to properly distinguish neutral sentiment from the other two classes.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 521, "end": 528, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 991, "end": 998, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "In this paper we proposed SteM to predict tweet sentiment. After preprocessing, it extracts 43 features from tweets and selects 28 of these features as an appropriate subset to represent tweets for Multinomial Naive Bayes. One such classifier is trained for each of our three overlapping feature subspaces. To predict the labels of unknown instances, they are passed to the classifier that was trained on the respective feature subspace. Due to the noisy nature of labels in sentiment analysis, we select only a few tweets for our training set by applying Active Learning. Despite utilizing only 26.3% (900 out of 5481) of the provided tweets for training, SteM outperforms an identical system trained on the full training set. Overall, our approach looks promising but has room for improvement. Firstly, we plan to test our approach with ensembles, and secondly, we aim to identify tweets with neutral sentiment more accurately. For this purpose, we plan to incorporate more features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "8" }, { "text": "http://www.alchemyapi.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is partially supported by TUBITAK Grant 114E777.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Active learning with multiple annotations for comparable data classification task", "authors": [ { "first": "Vamshi", "middle": [], "last": "Ambati", "suffix": "" }, { "first": "Sanjika", "middle": [], "last": "Hewavitharana", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web", "volume": "", "issue": "", "pages": "69--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vamshi Ambati, Sanjika Hewavitharana, Stephan Vogel, and Jaime Carbonell. 2011. Active learning with multiple annotations for comparable data classification task. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 69-77. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sentiwordnet: A publicly available lexical resource for opinion mining", "authors": [ { "first": "Andrea", "middle": [], "last": "Esuli", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 5th Conference on Language Resources and Evaluation (LREC'06)", "volume": "", "issue": "", "pages": "417--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. 
In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC'06), pages 417-422.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Webis: An ensemble for twitter sentiment detection", "authors": [ { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Michael", "middle": [], "last": "B\u00fcchner", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "582--589", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Hagen, Martin Potthast, Michael B\u00fcchner, and Benno Stein. 2015. Webis: An ensemble for twitter sentiment detection. In Proceedings of the 9th International Workshop on Semantic Evaluation, pages 582-589.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The weka data mining software: an update", "authors": [ { "first": "Mark", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Holmes", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Pfahringer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Reutemann", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2009, "venue": "ACM SIGKDD explorations newsletter", "volume": "11", "issue": "1", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10-18.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Empirical comparison of active learning strategies for handling temporal drift", "authors": [ { "first": "Mohit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" }, { "first": "Mohak", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2010, "venue": "ACM Transactions on Embedded Computing Systems", "volume": "9", "issue": "4", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Kumar, Rayid Ghani, Mohak Shah, Jaime G Carbonell, and Alexander I Rudnicky. 2010. Empirical comparison of active learning strategies for handling temporal drift. ACM Transactions on Embedded Computing Systems, 9(4):161-168.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Active learning for imbalanced sentiment classification", "authors": [ { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shengfeng", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "139--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shoushan Li, Shengfeng Ju, Guodong Zhou, and Xiaojun Li. 2012. Active learning for imbalanced sentiment classification. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 139-148. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "ACL (System Demonstrations)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demonstrations), pages 55-60.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Active learning with efficient feature weighting methods for improving data quality and classification accuracy", "authors": [ { "first": "Justin", "middle": [], "last": "Martineau", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Doreen", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Sheth", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1104--1112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Martineau, Lu Chen, Doreen Cheng, and Amit Sheth. 2014. Active learning with efficient feature weighting methods for improving data quality and classification accuracy. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1104-1112, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Teamx: A sentiment analyzer with enhanced lexicon mapping and weighting scheme for unbalanced data", "authors": [ { "first": "Yasuhide", "middle": [], "last": "Miura", "suffix": "" }, { "first": "Shigeyuki", "middle": [], "last": "Sakaki", "suffix": "" }, { "first": "Keigo", "middle": [], "last": "Hattori", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohkuma", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)", "volume": "", "issue": "", "pages": "628--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yasuhide Miura, Shigeyuki Sakaki, Keigo Hattori, and Tomoko Ohkuma. 2014. Teamx: A sentiment analyzer with enhanced lexicon mapping and weighting scheme for unbalanced data. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 628-632, Dublin, Ireland, August. 
Association for Computational Linguistics and Dublin City University.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Nrc-canada: Building the state-of-the-art in sentiment analysis of tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013), Atlanta, Georgia, USA, June.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "# emotional tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "246--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M Mohammad. 2012. # emotional tweets. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 246-255. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SemEval-2016 task 4: Sentiment analysis in Twitter", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Fabrizio Sebastiani. 2016. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016). Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Feature selection and weighting methods in sentiment analysis", "authors": [ { "first": "Tim", "middle": [], "last": "O'Keefe", "suffix": "" }, { "first": "Irena", "middle": [], "last": "Koprinska", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 14th Australasian document computing symposium", "volume": "", "issue": "", "pages": "67--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim O'Keefe and Irena Koprinska. 2009. Feature selection and weighting methods in sentiment analysis. In Proceedings of the 14th Australasian document computing symposium, Sydney, pages 67-74. 
Citeseer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Semantic sentiment analysis of twitter", "authors": [ { "first": "Hassan", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" }, { "first": "Harith", "middle": [], "last": "Alani", "suffix": "" } ], "year": 2012, "venue": "The Semantic Web-ISWC 2012", "volume": "", "issue": "", "pages": "508--524", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Saif, Yulan He, and Harith Alani. 2012. Semantic sentiment analysis of twitter. In The Semantic Web-ISWC 2012, pages 508-524. Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Active learning literature survey", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2010, "venue": "11. Burr Settles. 2012. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning", "volume": "52", "issue": "", "pages": "1--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2010. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11. Burr Settles. 2012. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1):1-114.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Categorical proportional difference: A feature selection method for text categorization", "authors": [ { "first": "Mondelle", "middle": [], "last": "Simeon", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Hilderman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 7th Australasian Data Mining Conference", "volume": "87", "issue": "", "pages": "201--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mondelle Simeon and Robert Hilderman. 2008. Categorical proportional difference: A feature selection method for text categorization. 
In Proceedings of the 7th Australasian Data Mining Conference-Volume 87, pages 201-208. Australian Computer Society, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "F1-scores on development set for different AL strategies with a seed set size of 500 tweets and a budget of 400 tweets." }, "TABREF0": { "html": null, "type_str": "table", "content": "Slang    Replacement
c'mon    come on
l8r      later
FB       Facebook", "num": null, "text": "Exemplary slang words to be replaced by our lexicon." }, "TABREF2": { "html": null, "type_str": "table", "content": "
", "num": null, "text": "Overview of our extracted features." }, "TABREF4": { "html": null, "type_str": "table", "content": "
", "num": null, "text": "Comparing F1-scores on development set using SteM and our baseline." }, "TABREF5": { "html": null, "type_str": "table", "content": "             Predicted
             neg   neu   pos
Truth  neg   181    93    85
       neu   206   181   298
       pos   116   124   515", "num": null, "text": "Confusion matrix of SteM with 900 training instances." } } } }