{ "paper_id": "S18-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:45:13.236808Z" }, "title": "Amobee at SemEval-2018 Task 1: GRU Neural Network with a CNN Attention Mechanism for Sentiment Classification", "authors": [ { "first": "Alon", "middle": [], "last": "Rozental", "suffix": "", "affiliation": {}, "email": "alon.rozental@amobee.com" }, { "first": "Daniel", "middle": [], "last": "Fleischer", "suffix": "", "affiliation": {}, "email": "daniel.fleischer@amobee.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the participation of Amobee in the shared sentiment analysis task at SemEval 2018. We participated in all the English sub-tasks and the Spanish valence tasks. Our system consists of three parts: training task-specific word embeddings, training a model consisting of gated-recurrentunits (GRU) with a convolution neural network (CNN) attention mechanism and training stacking-based ensembles for each of the subtasks. Our algorithm reached 3rd and 1st places in the valence ordinal classification subtasks in English and Spanish, respectively.", "pdf_parse": { "paper_id": "S18-1033", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the participation of Amobee in the shared sentiment analysis task at SemEval 2018. We participated in all the English sub-tasks and the Spanish valence tasks. Our system consists of three parts: training task-specific word embeddings, training a model consisting of gated-recurrentunits (GRU) with a convolution neural network (CNN) attention mechanism and training stacking-based ensembles for each of the subtasks. Our algorithm reached 3rd and 1st places in the valence ordinal classification subtasks in English and Spanish, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer. The main motivation is enabling computers to better understand human language, particularly sentiment carried by the speaker. Among the popular sources of textual data for NLP is Twitter, a social network service where users communicate by posting short messages, no longer than 280 characters long-called tweets. Tweets can carry sentimental information when talking about events, public figures, brands or products. Unique linguistic features, such as the use of slang, emojis, misspelling and sarcasm, make Twitter a challenging source for NLP research, attracting the interest of both academia and the industry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semeval is a yearly event in which international teams of researchers work on tasks in a competition format where they tackle open research questions in the field of semantic analysis. We participated in Semeval 2018 task 1, which focuses on sentiment and emotions evaluation in tweets. There were three main problems: identifying the * These authors contributed equally to this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "presence of a given emotion in a tweet (sub-tasks EI-reg, EI-oc), identifying the general sentiment (valence) in a tweet (sub-tasks V-reg, V-oc) and identifying which emotions are expressed in a tweet (sub-task E-c). 
For a complete description of SemEval 2018 task 1, see the official task description (Mohammad et al., 2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We developed an architecture based on gated recurrent units (GRU, Cho et al. (2014)). We used a bi-directional GRU layer, together with a convolutional neural network (CNN) attention mechanism whose input is the hidden states of the GRU layer; lastly, there were two fully connected layers. We will refer to this architecture as the Amobee sentiment classifier (ASC). We used ASC to train word embeddings that incorporate sentiment information and to classify sentiment using annotated tweets. We participated in all the English sub-tasks and in the Spanish valence sub-tasks, achieving competitive results.", "cite_spans": [ { "start": 59, "end": 82, "text": "(GRU, Cho et al. (2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows: section 2 describes our data sources and section 3 describes the data pre-processing pipeline. A description of the main architecture is in section 4. Section 5 describes the word embeddings generation; section 6 describes the extraction of features. In section 7 we describe the performance of our models; finally, in section 8 we review and summarize the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We used four sources of data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "1. Twitter Firehose: we randomly sampled 200 million tweets using the Twitter Firehose service. They were used for training word embeddings and for distant supervision learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "2. SemEval 2017 task 4 datasets of tweets, annotated according to their general sentiment on 3- and 5-level scales; used to train the ASC model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "3. Annotated tweets from an external source 1, annotated on a 3-level scale; used to train the ASC model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "4. Official SemEval 2018 task 1 datasets: used to train task-specific models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "The datasets of SemEval 2017 and the external source were combined with compression 2; the resulting dataset contained 88,623 tweets with the following distribution: positive: 30,097 sentences (34%), neutral: 35,818 (40%), negative: 22,708 (26%). A description of the official SemEval 2018 task 1 datasets can be found in Mohammad et al. (2018); Mohammad and Kiritchenko (2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2" }, { "text": "We started by defining a cleaning pipeline that produces two cleaned versions of an original text; we refer to them as the \"simple\" and \"complex\" versions. Both versions share the same initial cleaning steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "1. Word tokenization using the CoreNLP library (Manning et al., 2014).", "cite_spans": [ { "start": 47, "end": 69, "text": "(Manning et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "2. Part-of-speech (POS) tagging using the Tweet NLP tagger, trained on Twitter data (Owoputi et al., 2013).", "cite_spans": [ { "start": 85, "end": 107, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "3. Grouping similar emojis and replacing them with representative keywords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "4. Regex: replacing URLs with a special keyword, removing duplications, breaking #CamelCasingHashtags into individual words (illustrated in the sketch below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "The complex version contains these additional steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "1. Word lemmatization, using CoreNLP. 2. Named entity recognition (NER) using CoreNLP and replacing the entities with representative keywords, e.g. date, number, brand, etc. 3. Synonym replacement, based on a manually-created dictionary. 4. Word replacement using a Wikipedia dictionary, created by crawling and extracting lists of places, brands and names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "1 https://github.com/monkeylearn/sentiment-analysis-benchmark 2 Transformed 5 labels to 3: {\u22122, \u22121} \u2192 {\u22121}, {1, 2} \u2192 {1}, {0} \u2192 {0}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }, { "text": "As an example, table 1 shows a fictitious tweet and the results after the simple and complex cleaning stages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3" }
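A minimal Python sketch of the regex step above; the exact production rules are not published, so the patterns, the <url> placeholder token and the character de-duplication rule are our illustrative assumptions:

```python
import re

URL_RE = re.compile(r'https?://\S+')
HASHTAG_RE = re.compile(r'#([A-Za-z][A-Za-z0-9]*)')
# splits CamelCase runs: 'CamelCasingHashtags' -> ['Camel', 'Casing', 'Hashtags']
CAMEL_RE = re.compile(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+')

def regex_clean(text):
    text = URL_RE.sub('<url>', text)            # URL -> special keyword
    text = re.sub(r'(.)\1{2,}', r'\1\1', text)  # 'sooooo' -> 'soo' (de-duplication)
    # break #CamelCasingHashtags into individual words
    text = HASHTAG_RE.sub(
        lambda m: ' '.join(CAMEL_RE.findall(m.group(1))), text)
    return text

print(regex_clean('Loving it!!! #BestDayEver http://t.co/xyz'))
# -> 'Loving it!! Best Day Ever <url>'
```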
, { "text": "Our main contribution is an RNN network, based on GRU units with a CNN-based attention mechanism; we will refer to it as the Amobee sentiment classifier (ASC). It comprises four identical sub-models, which differ by the input data each of them receives. Sub-model inputs are composed of word embeddings and embeddings of the POS tags; see section 5 for a description of our embedding procedure. The words were embedded in 200- or 150-dimensional vector spaces and the POS tags were embedded in an 8-dimensional vector space. We pruned the tweets to 40 words, padding shorter sentences with a zero vector. The embeddings form the input layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASC Architecture", "sec_num": "4" }, { "text": "Next we describe the sub-model architecture. The embeddings were fed to a bi-directional GRU layer of dimension 200. Inspired by the attention mechanism introduced in Bahdanau et al. (2014), we extracted the hidden states of the GRU layer; each state corresponds to a decoded word as the GRU reads the tweet word by word. The hidden states were arranged in a matrix of dimension 40 \u00d7 400 for each tweet (the bi-directionality of the GRU layer contributes a factor of 2). We fed the hidden states to a CNN layer, instead of a weighted sum as in the original paper. We used 6 filter sizes [1, 2, 3, 4, 5, 6], with 100 filters for each size. After a max-pooling layer we concatenated all outputs, creating a 600-dimensional vector. Next was a fully connected layer of size 30 with tanh activation, and finally a fully connected layer of size 3 with a softmax activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASC Architecture", "sec_num": "4" }
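A minimal Keras sketch of one such sub-model; the wiring of the pre-computed word and POS-tag embeddings, the layer names and the ReLU activation on the convolutions are our assumptions, with the dimensions taken from the text:

```python
from keras.layers import (Input, Concatenate, Bidirectional, GRU, Conv1D,
                          GlobalMaxPooling1D, Dense)
from keras.models import Model

words = Input(shape=(40, 200))    # 40 tokens, d=200 word embeddings
pos_tags = Input(shape=(40, 8))   # d=8 POS-tag embeddings
x = Concatenate(axis=-1)([words, pos_tags])

# Bi-directional GRU; return_sequences=True keeps all 40 hidden
# states, giving the 40 x 400 matrix used as the CNN "attention" input.
states = Bidirectional(GRU(200, return_sequences=True))(x)

# One Conv1D + max-pool per filter size, 100 filters each (6 x 100 = 600).
pooled = []
for size in [1, 2, 3, 4, 5, 6]:
    conv = Conv1D(100, kernel_size=size, activation='relu')(states)
    pooled.append(GlobalMaxPooling1D()(conv))
features = Concatenate()(pooled)  # the 600-dimensional vector

hidden = Dense(30, activation='tanh')(features)
output = Dense(3, activation='softmax')(hidden)

sub_model = Model(inputs=[words, pos_tags], outputs=output)
sub_model.compile(optimizer='adagrad', loss='categorical_crossentropy')
```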
, { "text": "We defined 4 such sub-models with embedding inputs in the following settings: w2v-200, w2v-150, ft-200, ft-150 (ft = FastText, w2v = Word2Vec; see the discussion in the next section). We combined the four sub-models by extracting their hidden d = 30 layers and concatenating them. Next we added a fully connected d = 25 layer with tanh activation and a final fully connected layer of size 3. See figure 1 for an illustration of the entire architecture. We used the AdaGrad optimizer (Duchi et al., 2011) and a cross-entropy loss function. We used the Keras library (Chollet et al., 2015) and the TensorFlow framework (Abadi et al., 2016).", "cite_spans": [ { "start": 474, "end": 494, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF6" }, { "start": 556, "end": 578, "text": "(Chollet et al., 2015)", "ref_id": null }, { "start": 608, "end": 628, "text": "(Abadi et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "ASC Architecture", "sec_num": "4" }, { "text": "Word embedding is a family of techniques in which words are encoded as real-valued vectors of lower dimensionality. These word representations have been used successfully in sentiment analysis tasks in recent years. Among the popular algorithms are Word2Vec (Mikolov et al., 2013) and FastText (Bojanowski et al., 2016). Word embeddings are useful representations of words and can uncover hidden relationships. However, one disadvantage they have is the typical lack of sentiment information. For example, the word vector for \"good\" can be very close to the word vector for \"bad\" in some trained, off-the-shelf word embeddings. Our goal was to train word embeddings based on Twitter data and then relearn them so they will contain emotion-specific sentiment.", "cite_spans": [ { "start": 258, "end": 280, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF11" }, { "start": 294, "end": 319, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Embeddings Training", "sec_num": "5" }, { "text": "We started with our 200 million tweets dataset; we cleaned the tweets using the pre-processing pipeline (described in section 3) and then trained generic embeddings using the Gensim package (\u0158eh\u016f\u0159ek and Sojka, 2010). We created four embeddings for the words and two embeddings for the POS tags: for each sentence we created a list of corresponding POS tags (there are 25 tags offered by the tagger we used); treating the tags as words, we trained d = 8 embeddings using the word2vec algorithm on the simple and complex cleaned datasets. The embedding parameters are specified in table 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings Training", "sec_num": "5" }
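A minimal sketch of this generic training step using Gensim's word2vec implementation, assuming one cleaned, whitespace-tokenized tweet (or POS-tag sequence) per line; the file names and the hyperparameters not given in the text are illustrative:

```python
from gensim.models import Word2Vec

class LineCorpus:
    """Stream one whitespace-tokenized tweet (or tag sequence) per line."""
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path, encoding='utf-8') as f:
            for line in f:
                yield line.split()

# d = 200 word embeddings on the "simple" cleaned corpus
# (the parameter is `size` in Gensim 3.x, renamed `vector_size` in 4.x)
word_model = Word2Vec(LineCorpus('tweets_simple.txt'),
                      size=200, window=5, min_count=5, workers=4)

# d = 8 POS-tag embeddings over the 25-tag vocabulary:
# each tweet's tag sequence is treated as a sentence
tag_model = Word2Vec(LineCorpus('tags_simple.txt'),
                     size=8, window=5, min_count=1, workers=4)
```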
, { "text": "Following Tang et al. (2014) and Cliche (2017), who explored training word embeddings for sentiment classification, we employed a similar approach. We created distant supervision datasets: first, we manually compiled 4 lists of representative words, one for each emotion: anger, fear, joy and sadness; then, we built two datasets for each emotion, the first containing tweets with the representative words and the second without them. Each list contained about 40 words and each dataset contained roughly 2 million tweets. We used the ASC sub-model architecture (section 4) to train as follows: we first trained for one epoch with the embeddings fixed (untrainable), then trained for 6 epochs in which the embeddings could change. Overall we trained 16 word embeddings: 4 embedding configurations for each emotion. In addition, we decided to use the trained models' final hidden layer (d = 15) as a feature vector in the task-specific architectures; our motivation was using them as emotion and intensity classifiers via transfer learning.", "cite_spans": [ { "start": 10, "end": 28, "text": "Tang et al. (2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Embeddings Training", "sec_num": "5" }, { "text": "In addition to our ASC models, we extracted semantic and syntactic features, based on domain knowledge:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 Number of magnifier and diminisher words, e.g. \"incredibly\", \"hardly\", in each tweet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 Logarithm of the length of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 Existence of elongated words, e.g. \"wowww\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 Fully capitalized words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 The symbols #, @ appearing in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "\u2022 Predictions of external packages: Vader (part of the NLTK library, Hutto and Gilbert, 2014) and TextBlob (Loria et al., 2014).", "cite_spans": [ { "start": 69, "end": 93, "text": "Hutto and Gilbert, 2014)", "ref_id": "BIBREF7" }, { "start": 107, "end": 127, "text": "(Loria et al., 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "Additionally, we compiled a list of 338 emojis and words in 16 categories of emotion, annotated with scores from the set {0.5, 1, 1.5, 2}. For each sentence, we summed up the scores in each category, up to a maximum value of 5, generating 16 features. The categories include: anger, disappointed, fear, hopeful, joy, lonely, love, negative, neutral, positive, sadness and surprise.
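A small sketch of this capped category-score computation; the lexicon format (a token mapped to a (category, score) pair) and the helper name are hypothetical:

```python
from collections import defaultdict

# 12 of the 16 emotion categories are named in the text
CATEGORIES = ['anger', 'disappointed', 'fear', 'hopeful', 'joy', 'lonely',
              'love', 'negative', 'neutral', 'positive', 'sadness', 'surprise']

def category_features(tokens, lexicon, cap=5.0):
    # lexicon: dict mapping an emoji/word to a (category, score) pair,
    # with scores drawn from {0.5, 1, 1.5, 2}
    totals = defaultdict(float)
    for token in tokens:
        if token in lexicon:
            category, score = lexicon[token]
            totals[category] += score
    # per-category sums, capped at 5, one feature per category
    return [min(totals[c], cap) for c in CATEGORIES]
```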
Finally, we used the NRC Affect Intensity lexicon (Mohammad, 2017) containing 5814 entries; each entry is a word with a score between 0 and 1 for one of the following emotions: anger, fear, joy and sadness. We used the lexicon to produce 4 emotion features from the hashtags in the tweets; each feature contained the largest score over all the hashtags in the tweet. For a summary of all features used, see table 6 in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Description", "sec_num": "6" }, { "text": "Our general workflow for the tasks is as follows: for each sub-task, we started by cleaning the datasets, obtaining two cleaned versions. We ran a pipeline that produced all the features we designed: the ASC predictions and the features described in section 6. We removed sparse features (fewer than 8 samples). Next, we defined a shallow neural network with a soft-voting ensemble. We chose the best features and meta-parameters, such as learning rate, batch size and number of epochs, based on the dev dataset. Finally, we generated predictions for the regression tasks. For the classification tasks, we used a grid search method on the regression predictions to optimize the loss. Most model training was conducted on a local machine equipped with an Nvidia GTX 1080 Ti GPU. Our official results are summarized in table 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "In the valence sub-tasks, we identified how intense a general sentiment (valence) is; the score is either on a continuous scale between 0 and 1 or classified into 7 ordinal classes {\u22123, \u22122, \u22121, 0, 1, 2, 3}, and is evaluated using the Pearson correlation coefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Valence Prediction", "sec_num": "7.1" }, { "text": "We started with the regression task and defined the following model: first, we normalized the features to have zero mean and SD = 1. Then, we inserted 300 instances of fully connected layers of size 3, with a softmax activation and no bias term. For each copy, we applied the function f(x) = (x_0 \u2212 x_2)/2 + 0.5, where x_0, x_2 are the 1st and 3rd components of each hidden layer. Our aim was to transform the label predictions of the ASCs (trained on 3-label sentiment annotations) into a regression score such that high certainty in either label (negative, neutral or positive) would produce scores close to 0, 0.5 or 1, respectively. Finally, we calculated the mean of all 300 predictions to get the final node; this is also known as a soft-voting ensemble. We used the Adam optimizer (Kingma and Ba, 2014) with default values, a mean-square-error loss function, a batch size of 400 and 65 epochs of training. For an illustration of the network, see figure 2. We experimented with the dev dataset, testing different subsets of the features. Finally, we produced predictions for the regression sub-task V-reg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Valence Prediction", "sec_num": "7.1" }
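A minimal Keras sketch of this soft-voting head; the input size (212 for the V-reg sub-task, per the caption of figure 2) is taken from the text, while the exact layer wiring is our reconstruction:

```python
from keras.layers import Input, Dense, Lambda, Average
from keras.models import Model

features = Input(shape=(212,))  # normalized feature vector

votes = []
for _ in range(300):
    # size-3 softmax layer with no bias term
    probs = Dense(3, activation='softmax', use_bias=False)(features)
    # f(x) = (x_0 - x_2)/2 + 0.5 collapses the 3-way softmax
    # into a single regression score in [0, 1]
    votes.append(Lambda(lambda t: (t[:, 0:1] - t[:, 2:3]) / 2 + 0.5)(probs))

prediction = Average()(votes)  # mean of the 300 copies: soft voting
model = Model(features, prediction)
model.compile(optimizer='adam', loss='mse')
# training used a batch size of 400 and 65 epochs, per the text
```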
, { "text": "We analyzed the relative contribution of each feature by measuring variable importance using Pratt's (1987) approach. We calculated a score d_i for each feature using the formula d_i = \u03b2\u0302_i \u03c1\u0302_i / R^2, where \u03b2\u0302_i denotes the sample estimate of the i-th regression coefficient, \u03c1\u0302_i is the simple correlation between the labels and the i-th feature, and R^2 is the coefficient of determination (see Thomas et al. 1998). We present the relative contribution of each feature in figure 3 and the top 10 features in table 4. We can see that the ASC models, both general and emotion-specific, contributed about 72% of the total contribution made by all features in this sub-task. For the ordinal classification task, we used the predictions of the regression task on the sentences, which were the same in both tasks. Using a grid search method, we partitioned the regression scores into 7 categories such that the Pearson correlation coefficient was maximized. We submitted the class predictions as sub-task V-oc. Our final scores were 0.843 and 0.813 in the regression and classification sub-tasks, respectively.", "cite_spans": [ { "start": 251, "end": 270, "text": "Thomas et al. 1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Valence Prediction", "sec_num": "7.1" }
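A sketch of this partition search; the candidate threshold grid, the step size and the function names are our choices:

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

def to_classes(scores, thresholds):
    # 6 increasing cut points map [0, 1] regression scores to classes -3..3
    return np.digitize(scores, thresholds) - 3

def best_partition(scores, labels, grid=np.linspace(0.05, 0.95, 19)):
    best_thresholds, best_r = None, -1.0
    for thresholds in itertools.combinations(grid, 6):
        r, _ = pearsonr(to_classes(scores, thresholds), labels)
        if r > best_r:
            best_thresholds, best_r = thresholds, r
    return best_thresholds, best_r
```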
, { "text": "In the emotion intensity sub-tasks, we identified how intense a given emotion is in the given tweets. The four emotions were: anger, fear, joy and sadness; the score is either on a scale between 0 and 1 or classified into 4 ordinal classes {0, 1, 2, 3}. Performance was evaluated using the Pearson correlation coefficient. Our approach was similar to the valence tasks: first we generated features, then we used the same architecture as in the valence sub-tasks, depicted in figure 2. However, in these sub-tasks we used the emotion-specific embeddings for each emotion sub-task. We generated regression predictions and submitted them as the EI-reg sub-tasks; finally, we carried out a grid search for the best partition, maximizing the Pearson correlation, and submitted the class predictions as the EI-oc sub-tasks. For a summary of the training parameters used in the regression sub-tasks, see table 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Intensity", "sec_num": "7.2" }, { "text": "Our system performed as follows: in the regression tasks, the scores were 0.748, 0.670, 0.748 and 0.721 for anger, fear, joy and sadness, respectively, with a macro-average of 0.721. In the classification tasks, the scores were 0.667, 0.536, 0.705 and 0.673 for anger, fear, joy and sadness, respectively, with a macro-average of 0.646.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Intensity", "sec_num": "7.2" }, { "text": "In the multi-label classification sub-task, we had to label tweets with respect to 11 emotions: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise and trust. The score was evaluated using the Jaccard similarity coefficient. We started with the same cleaning and feature-generation pipelines as before, creating an input layer of size 217. We added a fully connected layer of size 100 with tanh activation. Next there were 300 instances of fully connected layers of size 11 with a sigmoid activation function. We calculated the mean of all d = 11 vectors, producing the final d = 11 vector; see figure 4 for an illustration. We used the following loss function, based on the Tanimoto distance: L(y,\u1ef9) = 1 \u2212 y\u2022\u1ef9 / (\u2016y\u2016_1 + \u2016\u1ef9\u2016_1 \u2212 y\u2022\u1ef9 + \u03b5), where \u2016\u2022\u2016_1 is the L1 norm and \u03b5 = 10^\u22127 is used for numerical stability. We trained with a batch size of 10, for 40 epochs, with Adam optimization with default parameters. Our final score was 0.566.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-label Classification", "sec_num": "7.3" }
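A minimal sketch of this loss with the Keras backend API; the function name is ours and the epsilon follows the text:

```python
import keras.backend as K

def tanimoto_loss(y_true, y_pred):
    epsilon = 1e-7  # numerical stability
    intersection = K.sum(y_true * y_pred, axis=-1)   # y . y~
    denominator = (K.sum(K.abs(y_true), axis=-1)     # ||y||_1
                   + K.sum(K.abs(y_pred), axis=-1)   # ||y~||_1
                   - intersection + epsilon)
    return 1.0 - intersection / denominator

# usage: model.compile(optimizer='adam', loss=tanimoto_loss)
```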
, { "text": "We participated in the Spanish valence tasks to examine the current state of neural machine translation (NMT) algorithms. We used the Google Cloud Translation API to translate the Spanish training, development and test datasets for the two valence tasks from Spanish to English. We then treated the tasks the same way as the English valence tasks, using the same cleaning and feature extraction pipelines and the same architecture described in section 7.1 to generate regression and classification predictions. We reached 1st and 2nd places in the classification and regression sub-tasks, with scores of 0.765 and 0.770, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanish Valence Tasks", "sec_num": "7.4" }, { "text": "In this paper we described the system developed to participate in the SemEval 2018 task 1 workshop. We reached 3rd place in the valence ordinal classification sub-task and 5th place in the valence regression sub-task. In the Spanish valence tasks, we reached 1st and 2nd places in the classification and regression sub-tasks, respectively. In the emotion intensity sub-tasks, we reached 4th and 13th places in the classification and regression sub-tasks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review and Conclusions", "sec_num": "8" }, { "text": "Summarizing the methods used: we trained word embeddings based on a Twitter corpus (200M tweets); we developed the Amobee sentiment classifier (ASC) architecture, a bi-directional GRU layer with a CNN-based attention mechanism and an additional hidden layer, used to adjust the embeddings to include emotional context; and finally we trained a shallow feed-forward NN with a stacking-based ensemble of the final hidden layers from all the previous classifiers we trained. This form of transfer learning proved to be important, as the hidden-layer features contributed significantly to minimizing the loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review and Conclusions", "sec_num": "8" }, { "text": "Overall, we had better performance in the valence tasks, both in English and Spanish. We posit this is due to the fact that our annotated supervised training dataset (non task-specific) was based on SemEval 2017 task 4, which focused on valence classification. In addition, the annotations in SemEval 2017 were label-based, lending themselves more easily to the ordinal classification tasks. In the Spanish tasks, we used external translation (Google API) and achieved good results without the use of Spanish-specific features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review and Conclusions", "sec_num": "8" } ], "back_matter": [ { "text": "We thank Zohar Kelrich for assisting in translating the Spanish datasets to English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null }, { "text": "List of features used as inputs for the task-specific models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Features List", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tensorflow: A system for large-scale machine learning", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" } ], "year": 2016, "venue": "OSDI", "volume": "16", "issue": "", "pages": "265--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265-283.", "links": null }
, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.04606" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bb twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms", "authors": [ { "first": "Mathieu", "middle": [], "last": "Cliche", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.06125" ] }, "num": null, "urls": [], "raw_text": "Mathieu Cliche. 2017. Bb twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms. arXiv preprint arXiv:1704.06125.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Hutto", "suffix": "" }, { "first": "E", "middle": [ "E" ], "last": "Gilbert", "suffix": "" } ], "year": 2014, "venue": "Eighth International Conference on Weblogs and Social Media (ICWSM-14)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.J. Hutto and E.E. Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14), Ann Arbor, MI.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Textblob: simplified text processing", "authors": [ { "first": "Steven", "middle": [], "last": "Loria", "suffix": "" }, { "first": "", "middle": [], "last": "Keen", "suffix": "" }, { "first": "", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "", "middle": [], "last": "Yankovsky", "suffix": "" }, { "first": "", "middle": [], "last": "Karesh", "suffix": "" }, { "first": "", "middle": [], "last": "Dempsey", "suffix": "" } ], "year": 2014, "venue": "Secondary TextBlob: Simplified Text Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Loria, P Keen, M Honnibal, R Yankovsky, D Karesh, E Dempsey, et al. 2014. Textblob: simplified text processing. Secondary TextBlob: Simplified Text Processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semeval-2018 Task 1: Affect in tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018), New Orleans, LA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Understanding emotions: A dataset of tweets to study interactions between affect categories", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference, Miyazaki, Japan.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dividing the indivisible: Using simple symmetry to partition variance explained", "authors": [ { "first": "John", "middle": [ "W" ], "last": "Pratt", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the second international Tampere conference in statistics", "volume": "", "issue": "", "pages": "245--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "John W Pratt. 1987. Dividing the indivisible: Using simple symmetry to partition variance explained. In Proceedings of the second international Tampere conference in statistics, 1987, pages 245-260. Department of Mathematical Sciences, University of Tampere.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Radim", "middle": [], "last": "\u0158eh\u016f\u0159ek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1555--1565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1555-1565.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "On variable importance in linear regression", "authors": [ { "first": "D", "middle": [ "Roland" ], "last": "Thomas", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Bruno", "middle": [ "D" ], "last": "Zumbo", "suffix": "" } ], "year": 1998, "venue": "Social Indicators Research", "volume": "45", "issue": "1-3", "pages": "253--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Roland Thomas, Edward Hughes, and Bruno D Zumbo. 1998. On variable importance in linear regression. Social Indicators Research, 45(1-3):253-275.", "links": null } }, "ref_entries": { "FIGREF9": { "text": "Architecture of the final classifier in the valence sub-tasks, where f = (x_0 \u2212 x_2)/2 + 0.5 and the input dimension is 212 for the V-reg sub-task.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF10": { "text": "Relative contribution of features in the valence regression sub-task. Architecture of the multi-label sub-task E-c.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "", "type_str": "table", "text": "An example of tweet processing, producing two cleaned versions.", "num": null, "html": null }, "TABREF3": { "content": "
", "type_str": "table", "text": "", "num": null, "html": null }, "TABREF5": { "content": "
", "type_str": "table", "text": "Summary of results.", "num": null, "html": null }, "TABREF7": { "content": "
EI-reg         Anger      Fear     Joy      Sadness
Features       204        274      150      181
Learning rate  10^\u22124    10^\u22125   10^\u22125   3 \u2022 10^\u22125
Epochs         330        700      700      1000
", "type_str": "table", "text": "Relative contribution of features in the valence regression sub-task.", "num": null, "html": null }, "TABREF8": { "content": "", "type_str": "table", "text": "Summary of training parameters for the emotion intensity regression tasks.", "num": null, "html": null } } } }