{ "paper_id": "S18-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:43:45.229268Z" }, "title": "SSN MLRG1 at SemEval-2018 Task 1: Emotion and Sentiment Intensity Detection Using Rule Based Feature Selection", "authors": [ { "first": "Angel", "middle": [], "last": "Deborah", "suffix": "", "affiliation": {}, "email": "angeldeborahs@ssn.edu.in" }, { "first": "Milton", "middle": [], "last": "Rajendram", "suffix": "", "affiliation": {}, "email": "miltonrs@ssn.edu.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The system developed by the SSN MLRG1 team for Semeval-2018 task 1 on affect in tweets uses rule based feature selection and one-hot encoding to generate the input feature vector. Multilayer Perceptron was used to build the model for emotion intensity ordinal classification, sentiment analysis ordinal classification and emotion classfication subtasks. Support Vector Regression was used to build the model for emotion intensity regression and sentiment intensity regression subtasks.", "pdf_parse": { "paper_id": "S18-1048", "_pdf_hash": "", "abstract": [ { "text": "The system developed by the SSN MLRG1 team for Semeval-2018 task 1 on affect in tweets uses rule based feature selection and one-hot encoding to generate the input feature vector. Multilayer Perceptron was used to build the model for emotion intensity ordinal classification, sentiment analysis ordinal classification and emotion classfication subtasks. Support Vector Regression was used to build the model for emotion intensity regression and sentiment intensity regression subtasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages (Saif and Felipe, 2017) . Tweets are often used to convey ones emotions, opinions towards products, and stance over issues (Nabil et al., 2016) . Automatically detecting emotion intensities in tweets has several applications, including commerce (Jansen et al., 2009) , crisis management (Verma et al., 2011) , tracking brand and product perception, tracking support for issues and policies, and tracking public health and well-being (Chew and Eysenbach, 2010) . The task is challenging because of the informal writing style, the semantic diversity of content as well as the \"unconventional\" grammar. These challenges in building a classification model and regression model can be handled by using proper approaches to feature generation and machine learning.", "cite_spans": [ { "start": 157, "end": 170, "text": "Felipe, 2017)", "ref_id": "BIBREF9" }, { "start": 270, "end": 290, "text": "(Nabil et al., 2016)", "ref_id": "BIBREF7" }, { "start": 392, "end": 413, "text": "(Jansen et al., 2009)", "ref_id": "BIBREF5" }, { "start": 434, "end": 454, "text": "(Verma et al., 2011)", "ref_id": "BIBREF12" }, { "start": 580, "end": 606, "text": "(Chew and Eysenbach, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several machine learning techniques that can be used for sentiment intensity prediction or emotion intensity prediction. Some of the approaches inlclude Artificial Neural Network (ANN) (Sudipta et al., 2017) , Random Forests, Support Vector Machine (SVM), Naive Bayes (NB) (Tabari et al., 2017) , Multi-Kernel Gaussian Process (MKGP) (Angel Deborah et al., 2017a,b) , AdaBoost Regressor (ABR), Bagging Regressor (BR) (Jiang et al., 2017) and Deep Learning (DL) (Pivovarova et al., 2017) .", "cite_spans": [ { "start": 195, "end": 217, "text": "(Sudipta et al., 2017)", "ref_id": "BIBREF10" }, { "start": 283, "end": 304, "text": "(Tabari et al., 2017)", "ref_id": "BIBREF11" }, { "start": 344, "end": 375, "text": "(Angel Deborah et al., 2017a,b)", "ref_id": null }, { "start": 427, "end": 447, "text": "(Jiang et al., 2017)", "ref_id": "BIBREF6" }, { "start": 471, "end": 496, "text": "(Pivovarova et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A Multilayer Perceptron (MLP) as shown in Figure 1 is a feed-forward Neural Network model that maps input data sets onto appropriate output sets. An MLP has many layers of nodes in a directed graph, with each layer connected to the next layer. A neuron is a processing element with activation function (in the input layer the activation function is not applied). The output layer has as many nuerons as the number of class labels in the problem. Each connection has a weight assigned to it. Output of each neuron is calculated by applying the activation function on the weighted sum of the inputs. Linear, sigmoid, tanh, elu, softplus, softmax and relu are some of the commonly used activation functions. The supervised learning problem of the MLP can be solved with the backpropagation algorithm (Haykin, 1998) . The algorithm consists of two steps. In the forward pass, the predicted outputs are calculated for the given inputs . In the backward pass, partial derivatives of the cost function with respect to the weight parameters are propagated back through the network. The chain rule of differentiation gives very similar computational rules for the backward pass as the ones for the forward pass. The network weights can then be adapted using any gradient-based optimisation algorithm.", "cite_spans": [ { "start": 797, "end": 811, "text": "(Haykin, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 42, "end": 48, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Multi-Layer Perceptron", "sec_num": "2" }, { "text": "MLP was used in implementing for the following subtasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Layer Perceptron", "sec_num": "2" }, { "text": "1. EI-oc (an emotion intensity ordinal classification task): Given a tweet and an emotion E, classify the tweet into one of four ordinal intensity classes of E that best represents the mental state of the tweeter. 2. V-oc (a sentiment analysis, ordinal classification task): Given a tweet, classify it into one of seven ordinal classes, corresponding to various levels of positive and negative sentiment intensity, that best represents the mental state of the tweeter. 3. E-c (an emotion classification task): Given a tweet, classify it as neutral or no emotion or as one, or more, of eleven given emotions that best represent the mental state of the tweeter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Layer Perceptron", "sec_num": "2" }, { "text": "Support Vector Machines (SVM) are characterized by usage of kernels, absence of local minima, sparseness of the solution and capacity control obtained by acting on the margin or on number of support vectors, etc. As in classification, support vector regression (SVR) is characterized by the use of kernels, sparse solution, and Vapnik-Chervonenkis control of the margin and the number of support vectors. Although less popular than SVM, SVR has been proven to be an effective tool in real-value function estimation (Awad and Khanna, 2015) . The idea of SVR is based on the computation of a linear regression function in a high dimensional feature space where the input data are mapped via a non-linear function. It contains all the main features that characterize maximum margin algorithm: a non-linear function is learned by a linear learning machine mapping into high dimensional kernel-induced feature space. The capacity of the system is controlled by parameters that do not depend on the dimensionality of feature space. Instead of minimizing the observed training error, Support Vector Regression (SVR) attempts to minimize the generalization error bound so as to achieve generalization perfor-mance. SVR was used in implementing for the following subtasks:", "cite_spans": [ { "start": 515, "end": 538, "text": "(Awad and Khanna, 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "3" }, { "text": "1. EI-reg (an emotion intensity regression task): Given a tweet and an emotion E, determine the intensity of E that best represents the mental state of the tweeter, a real-valued score between 0 (least E) and 1 (most E). 2. V-reg (a sentiment intensity regression task):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "3" }, { "text": "Given a tweet, determine the intensity of sentiment or valence (V) that best represents the mental state of the tweeter, a real-valued score between 0 (most negative) and 1 (most positive).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "3" }, { "text": "The system consists of the following modules: data extraction, preprocessing, rule based feature selection, feature vector generation and building the model -Multilayer Perceptron model for clasification subtasks and Support Vector Regression for regression subtasks. The algorithm for preprocessing of the data is outlined below: Algorithm: Data extraction and Preprocessing. Input: Input dataset. Output: Tokenized words and their parts of speech. begin 1. Separate labels and sentences 2. Perform tokenization using word tokenize, the function for tokenizing in the NLTK toolkit. 3. Perform Parts of Speech tagging using pos tag function from the NLTK toolkit. 4. Return the tokenized words and their parts of speech as inputs to rule based feature selection. end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "4" }, { "text": "The algorithm for rule based feature selection and feature vector generation is outlined below: Algorithm: Rule based feature selection and feature vector generation. Input: Tokenized words and their parts of speech. Output: Feature vector. begin For each of the tokenized words, falling under one of the categories listed in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "4" }, { "text": "We evaluated the system only for English language. The results obtained using MLP and SVR for the subtasks are tabulated in Table 2 to Table 6 . From Table 2 which shows the Pearson scores obtained for SVR, we can infer that SVR predicts joy better compared to anger, fear and sadness. Similarly, from Table 3 which shows the Pearson scores obtained for MLP, we observe that MLP model predicts joy better compared to anger, fear and sadness. The Pearson score for valence intensity regression and sentiment intensity ordinal classification are given in Table 4 and 5 respectively.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 142, "text": "Table 2 to Table 6", "ref_id": "TABREF2" }, { "start": 150, "end": 157, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 302, "end": 309, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 553, "end": 560, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "anger fear joy sadness macro averaged 0.490 0.490 0.502 0.461 0.486 Pearson (all instances) Pearson (gold in 0.5-1) 0.582 0.424 Pearson (all classes) Pearson (some-emotion) 0.427 0.479 The Pearson score r is calculated using the Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "r = n i=1 (Y i \u2212\u0232 )(y i \u2212\u0233) n i=1 (Y i \u2212\u0232 ) 2 n i=1 (y i \u2212\u0233) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "(1) Accuracy Micro-avg F1 Macro-avg F1 0.468 0.595 0.476 Table 6 : Pearson score for E-c (multi-label emotion class) using MLP.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "where Y is actual output and y is predicted output. The accuracy, micro-averaged F score and macro-avearged F score for emotion classification are given in Table 6 . The metrics are defined from Equations 2 to 5.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "Accuracy = 1 |T | t\u2208T |G t \u2229 P t | |G t \u222a P t | (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "where G t is the set of the gold labels for tweet t, P t is the set of the predicted labels for tweet t, and T is the set of tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "Micro-avearge F = 2 \u00d7 micro-P \u00d7 micro-R micro-P \u00d7 micro-R (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "where micro-P is micro-averaged precision and micro-R is micro-averaged recall", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F = 2 \u00d7 P e \u00d7 R e P e \u00d7 R e", "eq_num": "(4)" } ], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "Macro-average F = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "|E| e\u2208E F e (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "where P e is precision, R e is recall and E is the given set of eleven emotions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5" }, { "text": "We have presented the results of using MultiLayer Perceptron for emotion intensity ordinal classification, sentiment analysis ordinal classification and emotion classification. We built a basic MLP, which has an input layer, two hidden layers with 128 and 64 neurons, and an output layer with as many neurons as the number of class labels. We have used nadam optimizer with learning rate as 0.01. We have also presented the results of using Support Vector Regression for emotion intenisty and sentiment intensity regression. It is observed that both MLP and SVR predict joy more accurately when compared to anger, fear and sadness. We analyzed the feature vectors generated for various emotions. Feature vectors generated for joy helps to achieve better results than for other emotions. We used rule based feature selection and one-hot encoding to generate input feature vectors for buliding the models. The results obtained can be enhanced by using different feature selection approaches and incorporating sentiment lexicons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ssn mlrg1 at semeval-2017 task 4: Sentiment analysis in twitter using multi-kernel gaussian process classifier", "authors": [ { "first": "Milton", "middle": [], "last": "S Angel Deborah", "suffix": "" }, { "first": "T T", "middle": [], "last": "Rajendram", "suffix": "" }, { "first": "", "middle": [], "last": "Mirnalinee", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "709--712", "other_ids": {}, "num": null, "urls": [], "raw_text": "S Angel Deborah, S Milton Rajendram, and T T Mir- nalinee. 2017a. Ssn mlrg1 at semeval-2017 task 4: Sentiment analysis in twitter using multi-kernel gaussian process classifier. In Proceedings the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 709-712. ACL,Vancouver, Canada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ssn mlrg1 at semeval-2017 task 5: Fine-grained sentiment analysis using multiple kernel gaussian process regression model", "authors": [ { "first": "Milton", "middle": [], "last": "S Angel Deborah", "suffix": "" }, { "first": "T T", "middle": [], "last": "Rajendram", "suffix": "" }, { "first": "", "middle": [], "last": "Mirnalinee", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "823--826", "other_ids": {}, "num": null, "urls": [], "raw_text": "S Angel Deborah, S Milton Rajendram, and T T Mir- nalinee. 2017b. Ssn mlrg1 at semeval-2017 task 5: Fine-grained sentiment analysis using multiple kernel gaussian process regression model. In Pro- ceedings the 11th International Workshop on Se- mantic Evaluation (SemEval-2017), pages 823-826. ACL,Vancouver, Canada.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Support Vector Regression", "authors": [ { "first": "M", "middle": [], "last": "Awad", "suffix": "" }, { "first": "", "middle": [], "last": "Khanna", "suffix": "" } ], "year": 2015, "venue": "Efficient Learning Machines", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M Awad and R Khanna. 2015. Support Vector Re- gression. In: Efficient Learning Machines. Apress, Berkeley, CA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak", "authors": [ { "first": "C", "middle": [], "last": "Chew", "suffix": "" }, { "first": "G", "middle": [], "last": "Eysenbach", "suffix": "" } ], "year": 2010, "venue": "PloS ONE", "volume": "5", "issue": "11", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chew and G. Eysenbach. 2010. Pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak. PloS ONE, 5(11):1-13.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural Networks -A Comprehensive Foundation", "authors": [ { "first": "Simon", "middle": [], "last": "Haykin", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Haykin. 1998. Neural Networks -A Compre- hensive Foundation. Second Edition, Prentice-Hall, Englewood Cliffs, NJ.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Twitter power: Tweets as electronic word of mouth", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Jansen", "suffix": "" }, { "first": "K", "middle": [], "last": "Sobel", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "A", "middle": [], "last": "Chowdury", "suffix": "" } ], "year": 2009, "venue": "Journal of the American Society for Information Science and Technology", "volume": "60", "issue": "11", "pages": "2169--2188", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. J. Jansen, K. Sobel M. Zhang, and A. Chowdury. 2009. Twitter power: Tweets as electronic word of mouth. Journal of the American Society for Infor- mation Science and Technology, 60(11):2169-2188.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ecnu at semeval-2017 task 5: An ensemble of regression algorithms with effective features for finegrained sentiment analysis in financial domain", "authors": [ { "first": "Mengxiao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Man", "middle": [], "last": "Lan1", "suffix": "" }, { "first": "Yuanbin", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "888--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengxiao Jiang, Man Lan1, and Yuanbin Wu. 2017. Ecnu at semeval-2017 task 5: An ensemble of re- gression algorithms with effective features for fine- grained sentiment analysis in financial domain. In Proceedings the 11th International Workshop on Se- mantic Evaluation (SemEval-2017), pages 888-893. ACL,Vancouver, Canada.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cufe at semeval-2016 task 4: A gated recurrent model for sentiment classification", "authors": [ { "first": "Mahmoud", "middle": [], "last": "Nabil", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Amir", "middle": [ "F" ], "last": "Atiya", "suffix": "" } ], "year": 2016, "venue": "Proceedings of SemEval-2016.ACL", "volume": "", "issue": "", "pages": "52--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud Nabil, Mohamed Aly, and Amir F. Atiya. 2016. Cufe at semeval-2016 task 4: A gated recur- rent model for sentiment classification. In Proceed- ings of SemEval-2016.ACL, Vancouver, Canada., pages 52-57.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hcs at semeval-2017 task 5: Sentiment detection in business news using convolutional neural networks", "authors": [ { "first": "Lidia", "middle": [], "last": "Pivovarova", "suffix": "" }, { "first": "Llorenc", "middle": [], "last": "Escoter", "suffix": "" }, { "first": "Arto", "middle": [], "last": "Klami", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "842--846", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lidia Pivovarova, Llorenc Escoter, Arto Klami, and Roman Yangarber. 2017. Hcs at semeval-2017 task 5: Sentiment detection in business news using con- volutional neural networks. In Proceedings the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 842-846. ACL,Vancouver, Canada.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Emotion intensities in tweets", "authors": [ { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Bravo-Marquez", "middle": [], "last": "Felipe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of sixth joint conference on lexical and computational semantics (*Sem)", "volume": "", "issue": "", "pages": "65--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohammad Saif and Bravo-Marquez Felipe. 2017. Emotion intensities in tweets. In Proceedings of sixth joint conference on lexical and computational semantics (*Sem). Vancouver, Canada., pages 65- 77.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Ritual-uh at semeval-2017 task 5: Sentiment analysis on financial data using neural networks", "authors": [ { "first": "Kar", "middle": [], "last": "Sudipta", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Maharjan", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "877--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kar Sudipta, Suraj Maharjan, and Thamar Solorio. 2017. Ritual-uh at semeval-2017 task 5: Sentiment analysis on financial data using neural networks. In Proceedings the 11th International Workshop on Se- mantic Evaluation (SemEval-2017), pages 877-882. ACL,Vancouver, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sentiheros at semeval-2017 task 5: An application of sentiment analysis on financial tweets", "authors": [ { "first": "Narges", "middle": [], "last": "Tabari", "suffix": "" }, { "first": "Armin", "middle": [], "last": "Seyeditabari", "suffix": "" }, { "first": "Wlodek", "middle": [], "last": "Zadrozny", "suffix": "" } ], "year": 2017, "venue": "Proceedings the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "857--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narges Tabari, Armin Seyeditabari, and Wlodek Zadrozny. 2017. Sentiheros at semeval-2017 task 5: An application of sentiment analysis on finan- cial tweets. In Proceedings the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 857-860. ACL,Vancouver, Canada.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Natural language processing to the rescue extracting situational awareness tweets during mass emergency", "authors": [ { "first": "S", "middle": [], "last": "Verma", "suffix": "" }, { "first": "S", "middle": [], "last": "Vieweg", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Corvey", "suffix": "" }, { "first": "L", "middle": [], "last": "Palen", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "A", "middle": [], "last": "Schram", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Anderson", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Verma, S. Vieweg, W. J. Corvey, L. Palen, J. H. Martin, M. Palmer, A. Schram, and K. M. Ander- son. 2011. Natural language processing to the res- cue extracting situational awareness tweets during mass emergency. In Proceedings of 5th Interna- tional Conference on Web and Social Media.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Multi-Layer Perceptron.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "text": "Return the feature vector generated as the input to build the model.", "num": null, "type_str": "table", "content": "
, do the follow-
", "html": null }, "TABREF1": { "text": "Parts of speech categories.", "num": null, "type_str": "table", "content": "
The Multi-Layer Perceptron (MLP) model is
built by the following algorithm.
Algorithm: Build a Multi-Layer Perceptron model.
Input: Feature vectors and actual output labels. Output: Learned MLP model. begin 1. Represent feature vectors as XTrain and ac-
tual output label as YTrain.
2. Build the initial classification model with two
hidden layers and the output layer, using
relu and softmax activation functions, re-
spectively.
3. Optimize the classification model using
nadam optimizer of keras package
4. Return the learned model.
end The Support Vector Regression(SVR) model is
bulit with the following algorithm.
Algorithm: Build a SVR model. Input: Feature vectors and actual output labels. Output: Learned SVR model. begin 1. Represent feature vectors as XTrain and ac-
", "html": null }, "TABREF2": { "text": "", "num": null, "type_str": "table", "content": "
: Pearson score for EI-reg (emotion intensity regression) using SVR.
anger fearjoysadness macro
averaged
0.365 0.363 0.390 0.3830.375
", "html": null }, "TABREF3": { "text": "", "num": null, "type_str": "table", "content": "
: Pearson score for EI-oc (emotion intensity or-dinal classification) using MLP.
", "html": null }, "TABREF4": { "text": "", "num": null, "type_str": "table", "content": "
: Pearson score for V-reg (valence intensity re-gression) using SVR.
", "html": null }, "TABREF5": { "text": "", "num": null, "type_str": "table", "content": "
: Pearson score for V-oc (Sentiment intensity ordinal classification) using MLP.
", "html": null } } } }