{ "paper_id": "S16-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:24:59.977307Z" }, "title": "NTNUSentEval at SemEval-2016 Task 4: * Combining General Classifiers for Fast Twitter Sentiment Analysis", "authors": [ { "first": "Brage", "middle": [], "last": "Ekroll", "suffix": "", "affiliation": {}, "email": "brageej@stud.ntnu.no" }, { "first": "Jahren", "middle": [ "Valerij" ], "last": "Fredriksen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "", "affiliation": {}, "email": "gamback@idi.ntnu.no" }, { "first": "Lars", "middle": [], "last": "Bungum", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The paper describes experiments on sentiment classification of microblog messages using an architecture allowing general machine learning classifiers to be combined either sequentially to form a multi-step classifier, or in parallel, creating an ensemble classifier. The system achieved very competitive results in the shared task on sentiment analysis in Twitter, in particular on non-Twitter social media data, that is, input it was not specifically tailored to. * Thanks to Mikael Brevik, J\u00f8rgen Faret, Johan Reitan and \u00d8yvind Selmer for their work on two previous NTNU systems.", "pdf_parse": { "paper_id": "S16-1014", "_pdf_hash": "", "abstract": [ { "text": "The paper describes experiments on sentiment classification of microblog messages using an architecture allowing general machine learning classifiers to be combined either sequentially to form a multi-step classifier, or in parallel, creating an ensemble classifier. The system achieved very competitive results in the shared task on sentiment analysis in Twitter, in particular on non-Twitter social media data, that is, input it was not specifically tailored to. 
* Thanks to Mikael Brevik, J\u00f8rgen Faret, Johan Reitan and \u00d8yvind Selmer for their work on two previous NTNU systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As a growing platform for people to express themselves on a global scale, Twitter has become exceedingly attractive as an information source. In addition to text, a tweet comes with metadata such as the sender's location and language, and hashtags, making it possible to quickly gather vast amounts of data regarding a specific product, person or event. With a working Twitter Sentiment Analysis system, companies could get a feel of what consumers think of their products, or politicians could estimate their popularity amongst Twitter users in specific regions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, tweets and other informal texts on social media are quite different from texts elsewhere. They are short in length and contain a lot of abbreviations, misspellings, Internet slang, and creative syntax. 
Although the relative occurrence of nonstandard English syntax is fairly constant among many types of social media (Baldwin et al., 2013) , analysing such texts using traditional language processing systems can be problematic, primarily since the main common denominator of social media text is not that it is informal, but that it describes language in rapid change (Androutsopoulos, 2011; Eisenstein, 2013) , so that resources targeted directly at social media language quickly become outdated.", "cite_spans": [ { "start": 326, "end": 348, "text": "(Baldwin et al., 2013)", "ref_id": null }, { "start": 578, "end": 601, "text": "(Androutsopoulos, 2011;", "ref_id": "BIBREF0" }, { "start": 602, "end": 619, "text": "Eisenstein, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Twitter Sentiment Analysis (TSA) has been a rapidly growing research area in recent years, and a typical approach to TSA has been identified, using a supervised machine learning strategy, consisting of three main steps: preprocessing, feature extraction and classifier training. Preprocessing is used in order to remove noise and standardize the tweet format, for example, by replacing or removing URLs. Desired features of the tweets are then extracted, such as sentiment scores using specific sentiment lexica or the occurrence of different emoticons. Finally, a classifier is trained on the extracted features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since the machine learning algorithms commonly used are supervised, sentiment-annotated data is a prerequisite for training -and the growth of the TSA research field can largely be attributed to the International Workshop on Semantic Evaluation (SemEval) having run shared tasks on this theme since 2013 (Wilson et al., 2013) , annually producing new annotated data. 
The SemEval-2016 version (Task 4) of the TSA task and the data sets are described by Nakov et al. (2016) . Here we will specifically address Subtask A, which is a 3-way sentiment polarity classification problem, attributing the labels 'positive', 'negative' or 'neutral' to tweets.", "cite_spans": [ { "start": 305, "end": 326, "text": "(Wilson et al., 2013)", "ref_id": "BIBREF16" }, { "start": 453, "end": 472, "text": "Nakov et al. (2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is laid out as follows: Section 2 describes a general architecture for building Twitter sentiment classifiers (Figure 1 gives an overview of the core system architecture), drawing on the experiences of developing two previous TSA systems (Selmer et al., 2013; Reitan et al., 2015) . Section 3 reports the application of such a system ('NTNUSentEval') to the SemEval data sets, while Section 4 points to ways that the results could be improved.", "cite_spans": [ { "start": 251, "end": 272, "text": "(Selmer et al., 2013;", "ref_id": "BIBREF13" }, { "start": 273, "end": 293, "text": "Reitan et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To solve the three-way sentiment classification task, a general multi-class classifier, BaseClassifier, was created. Utilizing a general methodology enables the combination of several BaseClassifiers in various ways, either sequentially to create a multi-step classifier, or in parallel, as a classifier ensemble.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classifier Architecture", "sec_num": "2" }, { "text": "The BaseClassifier consists of three steps: preprocessing, feature extraction, and then either classification or training. 
These are handled by a Pipeline object built in the Scikit-Learn Python machine learning library (Pedregosa et al., 2011) . Scikit-Learn Transformer objects are used to extract or generate feature representations of the data. Figure 1 illustrates the overall architecture of the system. When creating a BaseClassifier instance, a set of parameters is specified, including the classification algorithm, the preprocessing functions to use, and options for each of the transformers. The preprocessing methods invoked depend on the transformers and the features they aim to extract.", "cite_spans": [ { "start": 220, "end": 244, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 349, "end": 355, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Sentiment Classifier Architecture", "sec_num": "2" }, { "text": "The preprocessing step modifies the raw tweets before they are passed to feature extraction: noise is filtered out and negation scope is detected. The filtering consists of a chain of simple methods using regular expressions. There are ten basic filters that can be invoked, six of which replace various Twitter-specific objects with the empty string: emoticons, username mentions, RT (retweet) tags, URLs, only hashtag signs (#), and hashtags (incl. the string following the sign). The other four filters transform uppercase characters to lowercase, remove characters that are neither alphabetic nor spaces, limit the maximum repetitions of a single character to three, and perform tokenization using Potts' tweet tokenizer (Potts, 2011) .", "cite_spans": [ { "start": 710, "end": 723, "text": "(Potts, 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "Negation detection uses a simple approach where n words appearing after a negation cue, but before the next punctuation mark, are marked as negated. The negation cues were adopted from Councill et al. 
2010, supplemented by five common misspellings obtained by looking up each negation cue in TweetNLP's Twitter word cluster (Owoputi et al., 2013) : anit, couldnt, dnt, does'nt, and wont.", "cite_spans": [ { "start": 324, "end": 346, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "The feature extraction is implemented as a Scikit-Learn FeatureUnion, a collection of independent transformers (feature extractors) that together build a feature matrix for the classifier. Each feature is represented by a transformer. Eight such transformers have been implemented: two extract the number of punctuation marks (repeated alphabetical and grammatical signs) and the number of happy and sad emoticons found in the tweet. Two other transformers extract TF-IDF values for word n-grams and character n-grams using a bag-of-words vectorizer implementation, which is an extension of Scikit-Learn's default TfidfVectorizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "2.2" }, { "text": "A part-of-speech transformer uses the GATE TwitIE tagger (Derczynski et al., 2013) to assign part-of-speech tags to every token in the text; the tag occurrences are then counted and returned. 
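As a minimal illustration of this transformer-based design (a sketch with a hypothetical emoticon counter and toy data, not the authors' actual transformers), a custom transformer can be combined with a TF-IDF word n-gram extractor in a Scikit-Learn FeatureUnion:

```python
# Sketch: a custom emoticon-count transformer combined with TF-IDF
# word n-grams in a FeatureUnion; patterns and data are illustrative.
import re
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion

class EmoticonCounter(BaseEstimator, TransformerMixin):
    # Counts happy and sad emoticons per tweet (simplified patterns).
    HAPPY = re.compile(r'[:;]-?[)D]')
    SAD = re.compile(r':-?[(]')

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.array([[len(self.HAPPY.findall(t)),
                          len(self.SAD.findall(t))] for t in X])

# Each transformer contributes columns to the combined feature matrix.
features = FeatureUnion([
    ('emoticons', EmoticonCounter()),
    ('word_ngrams', TfidfVectorizer(ngram_range=(1, 2))),
])

tweets = ['I love this phone :)', 'worst service ever :(']
matrix = features.fit_transform(tweets)  # emoticon counts first, then TF-IDF
```

Enabling or disabling a feature then amounts to adding or removing an entry from the FeatureUnion list.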
A word cluster transformer counts the occurrences of different TweetNLP word clusters (Owoputi et al., 2013) , that is, if a word in a tweet is a member of a cluster, a counter for that specific cluster is incremented.", "cite_spans": [ { "start": 57, "end": 82, "text": "(Derczynski et al., 2013)", "ref_id": "BIBREF2" }, { "start": 278, "end": 300, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "2.2" }, { "text": "The last two transformers are essentially lexical: the VADER transformer runs the lexicon-based social media sentiment analysis tool VADER (Hutto and Gilbert, 2014) and extracts its output. VADER (Valence Aware Dictionary and sEntiment Reasoner) goes beyond the bag-of-words model, taking into consideration word order and degree modifiers.", "cite_spans": [ { "start": 139, "end": 164, "text": "(Hutto and Gilbert, 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "2.2" }, { "text": "The lexicon transformer is a single transformer using a combination of six automatically and manually annotated prior polarity sentiment lexica. The automatically annotated lexica used are NRC Sentiment140 and HashtagSentiment (Kiritchenko et al., 2014) , which contain sentiment scores for both unigrams and bigrams, where some are in a negated context. Similarly, two manually annotated lexica, AFINN (Nielsen, 2011) and NRC Emoticon (Mohammad and Turney, 2010), give a sentiment score for each word (AFINN) or each emoticon (NRC Emoticon). However, two further manually annotated lexica, MPQA (Wilson et al., 2005) and Bing Liu (Ding et al., 2008) , do not list sentiment scores for words, but only whether a word contains positive or negative sentiment. For those two lexica, negative and positive word sentiments were mapped to the scores \u22121 or +1, respectively. For all lexica, four different features were extracted from each tweet. 
Following Kiritchenko et al. 2014, the four features for manually annotated lexica are the sums of positive scores and of negative scores for words in both affirmative and negated contexts, while the four features for automatically annotated lexica comprise the number of unigrams or bigrams with sentiment score \u2260 0, the sum of all sentiment scores, the highest sentiment score, and the score of the last unigram or bigram in the tweet.", "cite_spans": [ { "start": 228, "end": 254, "text": "(Kiritchenko et al., 2014)", "ref_id": null }, { "start": 403, "end": 418, "text": "(Nielsen, 2011)", "ref_id": "BIBREF8" }, { "start": 596, "end": 617, "text": "(Wilson et al., 2005)", "ref_id": "BIBREF15" }, { "start": 631, "end": 650, "text": "(Ding et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "2.2" }, { "text": "After all desired features have been extracted, a BaseClassifier instance allows for the use of state-of-the-art classification algorithms such as Support Vector Machines (SVM), Na\u00efve Bayes and Maximum Entropy (MaxEnt). Scikit-Learn includes a series of implementations of the SVM algorithm (Vapnik, 1995). The NTNUSentEval system uses the SVC variant, also known as the C-Support SVM classifier, since it is based on the idea of setting a constant C to penalize incorrectly classified instances. High C values create a narrower margin, enabling more elements to be correctly classified. However, this can lead to overfitting, so it is desirable to perform some kind of parameter optimization to find the best C value. 
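Such a search over C can be sketched as follows; the toy texts and the narrow grid are illustrative stand-ins, not the system's actual training data or search space:

```python
# Sketch: cross-validated grid search over the SVC penalty constant C
# on a TF-IDF text pipeline (toy data in place of annotated tweets).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('svc', SVC(kernel='linear')),
])

# Toy sentiment-labeled texts standing in for the training tweets.
texts = ['good happy great', 'nice lovely fun', 'awesome day', 'love it',
         'bad awful sad', 'terrible mess', 'hate this', 'worst ever']
labels = ['positive'] * 4 + ['negative'] * 4

# The real system grid-searched many more options with stratified
# 5-fold cross-validation; two folds suffice for this tiny example.
search = GridSearchCV(pipeline, {'svc__C': [0.01, 0.1, 1, 10]}, cv=2)
search.fit(texts, labels)
best_C = search.best_params_['svc__C']
```

The fitted search object can then be used directly for prediction with the best-scoring parameter setting.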
For multi-class classification, Scikit-Learn uses a One-vs-One method with a run-time complexity that is more than quadratic in the number of elements; however, this is not a problem for our relatively small (under 10,000 elements) datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification", "sec_num": "2.3" }, { "text": "A single BaseClassifier acts as a one-step classifier, but by chaining BaseClassifiers sequentially, a multi-step classifier can be created (Figure 2). Each classifier can be trained independently on different data, thereby learning a different classification function. Figure 2 illustrates how chaining two BaseClassifiers can create a two-step classifier. The first BaseClassifier is trained only on data labeled as subjective or objective, while the second BaseClassifier is trained only on subjective data, labeled positive or negative. When classifying, if the first BaseClassifier classifies an instance as subjective, the instance is forwarded to the second BaseClassifier to determine if it is positive or negative. The results from both classifiers are then combined and the final classification is returned.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 2", "ref_id": null }, { "start": 307, "end": 315, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Classification", "sec_num": "2.3" }, { "text": "By combining BaseClassifiers in parallel, an ensemble of classifiers can be created. Each of the classifiers is independent of the others and all classify the same instances. In the end, the classifiers vote to decide on the classification of an instance. 
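A reduced sketch of such a voting ensemble (with illustrative member classifiers and toy data, not the actual BaseClassifier configuration) looks as follows:

```python
# Sketch: three independently trained classifiers vote on each instance;
# the members and data are illustrative, not the NTNUSentEval ensemble.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ['good happy', 'great fun', 'love it', 'bad sad', 'awful mess', 'hate it']
labels = ['pos', 'pos', 'pos', 'neg', 'neg', 'neg']

# In the real system each member could use different features and
# preprocessing; here all three share one simple representation.
members = [
    make_pipeline(CountVectorizer(), MultinomialNB()),
    make_pipeline(CountVectorizer(), LinearSVC()),
    make_pipeline(CountVectorizer(), LogisticRegression()),
]
for clf in members:
    clf.fit(texts, labels)

def ensemble_predict(text):
    # Majority vote over the members' individual predictions.
    votes = [clf.predict([text])[0] for clf in members]
    return Counter(votes).most_common(1)[0][0]
```

With an odd number of members, a simple majority vote always produces a single winning label for binary decisions.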
Since the BaseClassifiers are so general, it is possible to create BaseClassifiers that extract different features, do different preprocessing, or use different classification algorithms -and then combine these to create an ensemble system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification", "sec_num": "2.3" }, { "text": "In order to find the optimal parameter values for the NTNUSentEval system, an extensive grid search was performed through the Scikit-Learn framework over all subsets of the training set (shuffled), using stratified 5-fold cross-validation and optimizing on F1-score. During development we were able to find parameters that yielded better results on the complete test set than the parameters from the grid search. However, the optimal parameters are those that perform best on average, and using the parameters identified through development when presented with new data would most likely perform worse than using the parameters identified through grid search. As described in Section 2.2, a total of eight different feature extractors have been implemented, all of which can be enabled or disabled. Each feature extractor utilizes a specific preprocessor setting, as shown in Table 1 . Further, there are three option settings for the SVM algorithm: type, kernel and C, which after grid search were set to SVC, Linear, and 0.1, respectively. In addition to the preprocessor options, there are eleven more feature extractor options, whose grid-searched optimal values are displayed in Table 2 , where n-range gives the lower and upper n-gram sizes, use idf enables Inverse Document Frequency weighting, min df and max df give the proportions of lowest and highest document frequency terms, respectively, to be excluded from the final vocabulary, and negation length gives the maximum number of tokens inside a negation scope.", "cite_spans": [], "ref_spans": [ { "start": 878, "end": 885, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1186, "end": 1193, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Parameter Optimization", "sec_num": "2.4" }, { "text": "The NTNUSentEval TSA system was trained on the Twitter training set (8,748 tweets), using the optimal parameters identified through grid search, and tested on the SemEval Twitter test sets from 2013 and 2014. The complete results on these test sets are shown in Table 4 below, while Nakov et al. (2016) give the results on all test sets, including the unknown 2016 tweet set, in terms of the official evaluation metric, F1^PN, which is the average of the F1-scores on the negative and the positive tweets. Notably, our system performed extremely well on the out-of-domain test sets (i.e., the non-Twitter data), being the best of all 34 participating systems on the 2013-SMS set (with a 0.641 F1^PN score, compared to a 0.190 F1^PN baseline), the 3rd on the 2014-LiveJournal set (F1^PN = 0.719, with a 0.272 baseline), and overall tied for first on the out-of-domain data, supporting our claim that the approach taken is in itself quite general. However, the lack of domain fine-tuning of the system showed in comparison to the best systems on Twitter data, with the NTNUSentEval system consistently placing 11th to 13th on the different test sets, including 11th on the 2016 set (F1^PN = 0.583, with baseline 0.255).", "cite_spans": [ { "start": 283, "end": 302, "text": "Nakov et al. 
(2016)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "In order to detect the overall importance or impact each feature has, a simple ablation study was conducted by removing each feature in turn and checking how the performance of the system was affected. The results of this study are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 248, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.1" }, { "text": "Evidently, the single most important feature is Sentiment Lexica. On the 2013-test set, system accuracy is reduced from 0.7227 to 0.6945 when the feature is removed, while the effect of removing it when testing on the 2014 set is not as apparent. A possible reason for this difference may be that most of the sentiment lexica used were created at the same time as the 2013-test set, and they might thus better reflect the language in that period of time. As noted in Section 1, the language of social media is rapidly changing, so that a lexicon created in 2013 might have reduced value already for data collected a year later. This effect is also noticable when testing the system on the 2014-test set, where the VADER Sentiment feature is the most important one, reducing the accuracy from 0.6905 to 0.6793 when being removed. On the 2013-test set, the VADER Sentiment feature, which was created in 2014, does not have the same impact, again indicating a change in how the language is used and that VADER might better reflect the Twitter language of 2014.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.1" }, { "text": "The second most important contribution comes from the n-gram features. 
Table 3 : Feature ablation study results (F1-scores)", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.1" }, { "text": "Removing either the character n-grams or the word n-grams leads to a degradation in performance. On the 2013-test set the degradation in performance is quite significant, while on the 2014-test set the degradation is more subtle. Another interesting result is the impact of the Emoticons and Punctuation count features. On the 2013-test set, removing them gives a slight reduction in performance, while on the 2014-test set we can observe a slight increase in performance. One possible reason for this could be that the way emoticons and punctuation are used in tweets changes over time, but the most likely cause is merely noise in the data. Although causing slightly increased or decreased performance, the individual count features do not significantly affect the overall results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.1" }, { "text": "Two instances of the BaseClassifier can be chained sequentially, creating a 2-step classifier. Such a classifier was tested on the 2013 and 2014 test sets, as shown in Table 4 . The 2-step classifier performs worse than the 1-step classifier on the 2013 set, while their performances on the 2014 set are comparable, so based on these results it is not clear that 1-step classification is better than 2-step.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Architectural Experiments", "sec_num": "3.2" }, { "text": "The GATE TwitIE part-of-speech tagger uses an underlying model when tagging tweets. In addition to the standard best-performing model, another high-speed model trading 2.5% token accuracy for half the processing time is available. Both models were tested, with results shown in Table 4 . 
Although a slight reduction in performance can be observed compared to using the best tagger model, the high-speed model significantly reduced the total execution time, from 107 to 80 seconds on the 2013-test set and from 53 to 41 seconds on the 2014-test set.", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 203, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Architectural Experiments", "sec_num": "3.2" }, { "text": "Drawing on the experiences from two previous Twitter Sentiment Analysis systems (Selmer et al., 2013; Reitan et al., 2015) , a new TSA system was created using a simplified and generalised architecture, allowing for accurate and fast tweet classification.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Selmer et al., 2013;", "ref_id": "BIBREF13" }, { "start": 102, "end": 122, "text": "Reitan et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" }, { "text": "As seen in the ablation study of Section 3.1, the Sentiment Lexica is the single most important feature, while also being one of the simplest: our implementation is based only on summing up the sentiment value of each word. A possible improvement would thus be to extract more information by considering the order of the words, part-of-speech tags, and degree modifiers, such as 'very', 'really' and 'somewhat', that can affect the sentiment value of the following word. These modifiers are currently not handled by the Sentiment Lexica extractor, yet they clearly carry a lot of sentiment weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" }, { "text": "Another interesting feature of lexicon-based systems is their good run-time performance, which is also confirmed in our system, where the lexicon feature extractor is one of the fastest feature extractors. 
This is a particularly important property for a TSA system to be useful in a real-world setting, as the confidence in the mined aggregate opinions depends on the number of opinions that can be examined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Language change and digital media: a review of conceptions and evidence", "authors": [ { "first": "Jannis", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": ". ; Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mackinlay", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2011, "venue": "6th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "356--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jannis Androutsopoulos. 2011. Language change and digital media: a review of conceptions and evidence. In Kristiansen and Coupland, editors, Standard Languages and Language Standards in a Changing Europe, pages 145-159. Novus, Oslo, Norway, February. Timothy Baldwin, Paul Cook, Marco Lui, Andrew MacKinlay, and Li Wang. 2013. How noisy social media text, how diffrnt social media sources? 
In 6th International Joint Conference on Natural Language Processing, pages 356-364, Nagoya, Japan, October.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "What's great and what's not: learning to classify the scope of negation for improved sentiment analysis", "authors": [ { "first": "G", "middle": [], "last": "Isaac", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Councill", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "", "middle": [], "last": "Velikovich", "suffix": "" } ], "year": 2010, "venue": "48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "51--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac G. Councill, Ryan McDonald, and Leonid Velikovich. 2010. What's great and what's not: learning to classify the scope of negation for improved sentiment analysis. In 48th Annual Meeting of the Association for Computational Linguistics, pages 51-59, Uppsala, Sweden, July. ACL. Workshop on Negation and Speculation in Natural Language Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Twitter part-of-speech tagging for all: Overcoming sparse and noisy data", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2013, "venue": "9th International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "198--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. 
In 9th International Conference on Recent Advances in Natural Language Processing, pages 198-206, Hissar, Bulgaria, September.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A holistic lexicon-based approach to opinion mining", "authors": [ { "first": "Xiaowen", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "2008 International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "231--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In 2008 International Conference on Web Search and Data Mining, pages 231-240, Stanford, California, February. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What to do about bad language on the internet", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2013, "venue": "2013 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "359--369", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein. 2013. What to do about bad language on the internet. In 2013 Conference of the North American Chapter of the Association for Computational Linguistics, pages 359-369, Atlanta, Georgia, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "VADER: A parsimonious rule-based model for sentiment analysis of social media text", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Hutto", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Gilbert", "suffix": "" } ], "year": 2014, "venue": "8th International Conference on Weblogs and Social Media", "volume": "50", "issue": "", "pages": "723--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.J. 
Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In 8th International Conference on Weblogs and Social Media, Ann Arbor, Michigan, June. Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Mohammad. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, 50:723-762, August.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2010, "venue": "2010 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "26--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In 2010 Conference of the North American Chapter of the Association for Computational Linguistics, pages 26-34, Los Angeles, California, June. ACL. 
Workshop on Computational Approaches to Analysis and Generation of Emotion in Text.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "SemEval-2016 Task 4: Sentiment analysis in Twitter", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2016, "venue": "10th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. SemEval-2016 Task 4: Sentiment analysis in Twitter. In 10th International Workshop on Semantic Evaluation, San Diego, California, June. ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A new ANEW: Evaluation of a word list for sentiment analysis in microblogs", "authors": [ { "first": "Finn\u00e5rup", "middle": [], "last": "Nielsen", "suffix": "" } ], "year": 2011, "venue": "1st Workshop on Making Sense of Microposts (#MSM2011)", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finn \u00c5rup Nielsen. 2011. A new ANEW: Evaluation of a word list for sentiment analysis in microblogs. 
In 1st Workshop on Making Sense of Microposts (#MSM2011), pages 93-98, Heraklion, Greece, May.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "2013 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In 2013 Conference of the North American Chapter of the Association for Computational Linguistics, pages 380-390, Atlanta, Georgia, June.
ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Gaël", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "1", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(1):2825-2830.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sentiment symposium tutorial", "authors": [ { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2011, "venue": "Sentiment Analysis Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Potts. 2011. Sentiment symposium tutorial. In Sentiment Analysis Symposium, San Francisco, California, November. Alta Plana Corporation.
sentiment.christopherpotts.net/", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Negation scope detection for Twitter sentiment analysis", "authors": [ { "first": "Johan", "middle": [], "last": "Reitan", "suffix": "" }, { "first": "Jørgen", "middle": [], "last": "Faret", "suffix": "" }, { "first": "Björn", "middle": [], "last": "Gambäck", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Bungum", "suffix": "" } ], "year": 2015, "venue": "Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "99--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Reitan, Jørgen Faret, Björn Gambäck, and Lars Bungum. 2015. Negation scope detection for Twitter sentiment analysis. In 2015 Conference on Empirical Methods in Natural Language Processing, pages 99-108, Lisbon, Portugal, September. 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "NTNU: Domain semi-independent short message sentiment classification", "authors": [ { "first": "Øyvind", "middle": [], "last": "Selmer", "suffix": "" }, { "first": "Mikael", "middle": [], "last": "Brevik", "suffix": "" }, { "first": "Björn", "middle": [], "last": "Gambäck", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Bungum", "suffix": "" } ], "year": 2013, "venue": "2nd Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "2", "issue": "", "pages": "430--437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Øyvind Selmer, Mikael Brevik, Björn Gambäck, and Lars Bungum. 2013. NTNU: Domain semi-independent short message sentiment classification. In 2nd Joint Conference on Lexical and Computational Semantics (*SEM), Vol. 2: 7th International Workshop on Semantic Evaluation, pages 430-437, Atlanta, Georgia, June.
ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "Vladimir", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer, New York, New York.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "OpinionFinder: A system for subjectivity analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Kessler", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" } ], "year": 2005, "venue": "2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "34--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. OpinionFinder: A system for subjectivity analysis. In 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 34-35, Vancouver, British Columbia, October. ACL.
Demonstration Abstracts.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SemEval-2013 Task 2: Sentiment analysis in Twitter", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2013, "venue": "2nd Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Zornitsa Kozareva, Preslav Nakov, Alan Ritter, Sara Rosenthal, and Veselin Stoyanov. 2013. SemEval-2013 Task 2: Sentiment analysis in Twitter. In 2nd Joint Conference on Lexical and Computational Semantics (*SEM), Vol. 2: 7th International Workshop on Semantic Evaluation, Atlanta, Georgia, June. ACL.", "links": null } }, "ref_entries": { "TABREF1": { "content": "
Parameter        Word n-grams   Character n-grams   Lexicon
n-range          (1, 5)         (3, 6)              N/A
use idf          True           True                N/A
min df           0.0            0.0                 N/A
max df           0.5            0.5                 N/A
negation length  4              None                -1
", "type_str": "table", "num": null, "html": null, "text": "Preprocessing used by feature extractors" }, "TABREF2": { "content": "", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF5": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Sentiment classifier performance the tagging speed is available, and the results from testing BaseClassifier using the high-speed tagger model are also shown in" } } } }