{ "paper_id": "S18-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:43:48.857097Z" }, "title": "ISCLAB at SemEval-2018 Task 1: UIR-Miner for Affect in Tweets", "authors": [ { "first": "Meng", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "" }, { "first": "Zhenyuan", "middle": [], "last": "Dong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "zydong@uir.edu.cn" }, { "first": "Zhihao", "middle": [], "last": "Fan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "zhfan@uir.edu.cn" }, { "first": "Kongming", "middle": [], "last": "Meng", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "kmmeng@uir.edu.cn" }, { "first": "Jinghua", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "jhcao@uir.edu.cn" }, { "first": "Guanqi", "middle": [], "last": "Ding", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "gqding@uir.edu.cn" }, { "first": "Yuhan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "" }, { "first": "Jiawei", "middle": [], "last": "Shan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": "jwshan@uir.edu.cn" }, { "first": "Binyang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of International Relations", "location": {} }, "email": 
"byli@uir.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a UIR-Miner system for emotion and sentiment analysis evaluation in Twitter in SemEval 2018. Our system consists of three main modules: preprocessing module, stacking module to solve the intensity prediction of emotion and sentiment, LSTM network module to solve multi-label classification, and the hierarchical attention network module for solving emotion and sentiment classification problem. According to the metrics of SemEval 2018, our system gets the final scores of 0.636, 0.531, 0.731, 0.708, and 0.408 in terms of Pearson Correlation on 5 subtasks, respectively.", "pdf_parse": { "paper_id": "S18-1042", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a UIR-Miner system for emotion and sentiment analysis evaluation in Twitter in SemEval 2018. Our system consists of three main modules: preprocessing module, stacking module to solve the intensity prediction of emotion and sentiment, LSTM network module to solve multi-label classification, and the hierarchical attention network module for solving emotion and sentiment classification problem. According to the metrics of SemEval 2018, our system gets the final scores of 0.636, 0.531, 0.731, 0.708, and 0.408 in terms of Pearson Correlation on 5 subtasks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, social media platforms are becoming more and more popular, such as Twitter microblogging, Facebook, and so on. Through these platforms, online users would like to share their opinions and emotions. 
Therefore, the analysis of \"affect\" information in social media has attracted much interest from both academia and industry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, short texts usually consist of informal expressions with many casual forms and emoticons, which brings great challenges for such research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For this purpose, SemEval organized an evaluation of sentiment analysis on tweets. This year's fifth edition consists of new subtasks: emotion intensity regression, emotion intensity ordinal classification, sentiment intensity regression, sentiment degree ordinal classification, and emotion classification. We participated in SemEval 2018 Task 1 for English, i.e., Affect in Tweets. Our system treats EI-reg and V-reg (subtasks A and C) as regression problems, using regression models to predict emotion intensity and sentiment intensity, while it regards EI-oc and V-oc (subtasks B and D) as categorization problems, classifying each tweet into its corresponding emotion category and sentiment category with hierarchical attention networks. Moreover, subtask E, i.e., E-c, is treated as a multi-label classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. Section 2 overviews the framework of our system. Section 3 describes the methods for subtasks A and C. Section 4 describes the hierarchical attention networks for subtasks B and D. Subtask E is introduced in Section 5. Section 6 presents the evaluation results. 
Section 7 concludes this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The architecture of UIR-Miner is shown in Figure 1 . The UIR-Miner system is comprised of 4 modules:", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 51, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "(1) Preprocessing module: involves data cleaning, topic classification, and tweet embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "(2) Regressor module: creates an ensemble regression model that uses different basic models simultaneously to calculate the emotion intensity and sentiment intensity, i.e. subtasks A and C; (3) Classification module: constructs an LSTM network with a multi-layer attention mechanism for emotion and sentiment categorization, i.e. subtasks B and D; (4) Multi-label Classification module: builds an LSTM network for subtask E.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "Our system first preprocesses the tweet data; the main steps are as follows. \u2022 Delete unrelated text, including IDs, mentions, stop words, and meaningless punctuation combinations. \u2022 Normalize synonymous words, like replacing \"cant\" and \"can't\" with \"cannot\". \u2022 Extract emoticons from tweets through regular expressions, and then retain the emotional ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "In the preprocessing, we used the pre-trained word embedding by Glove (Pennington et al. 2014) , in which each word is represented by a 200-dimensional vector w_ij, with i \u2208 [1, L] and j \u2208 [1, T]. 
Here, i denotes the position of the sentence in the tweet and L is the maximum number of sentences per tweet; j denotes the position of the word in the sentence and T is the maximum number of words per sentence. We set T = 140 and L = 5.", "cite_spans": [ { "start": 70, "end": 94, "text": "(Pennington et al. 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word embedding", "sec_num": "2.2" }, { "text": "This section describes the methods for subtasks A and C. Given a tweet and an emotion E (or a sentiment V), the goal is to determine the intensity of E (or V) that best represents the mental state of the tweeter, as a real-valued score between 0 and 1. We consider both subtasks A and C as regression problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask A and C", "sec_num": "3" }, { "text": "On the whole, we use a stacking framework to enhance the accuracy of the final prediction. The original features, including hashtags, emoticons, and n-gram features, are selected as input to the stacking model. The stacking model is divided into two layers, the base layer and the stacking layer. In the base layer, we choose four basic regressors for their excellent performance. In the stacking layer, we use an SVM model, specifically the NuSVR model, which can control its error rate. Finally, we obtain the final intensity value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask A and C", "sec_num": "3" }, { "text": "Since there are many irregular expressions in tweets, we combine features including emoticons, hashtags, and special punctuation. 
In our system, we mainly select the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Selection", "sec_num": "3.1" }, { "text": "\u2022 Hashtags: the number of hashtags in one tweet; \u2022 Ill format: the presence of ill format with some characters replaced by *; \u2022 Punctuation: the number of contiguous sequences of exclamation marks, question marks, and both exclamation and question marks; whether the last token contains an exclamation or question mark; \u2022 Emoticons: the presence of positive and negative emoticons at any position in the tweet; whether the last token is an emoticon; \u2022 OOV: the ratio of out-of-vocabulary words; \u2022 Elongated words: the presence of sentiment words with one character repeated more than two times, for example, 'cooool'; \u2022 URL: whether the tweet contains a URL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Selection", "sec_num": "3.1" }, { "text": "\u2022 Reply or Retweet: whether the current tweet is a reply or a retweet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Selection", "sec_num": "3.1" }, { "text": "To avoid overfitting, we test 7 basic models to construct our stacking model. \u2022 B: Bayesian Ridge (Hsiang, T.C 1975) \u2022 G: Gradient Boosting Regressor (Jerome H. Friedman, 2001 ) \u2022 K: Kernel Ridge (Zhang Y et. al, 2013) \u2022 L: Lasso Regressor (Tibshirani et al., 1996) \u2022 M: MLP Regression (Pal and Mitra, 1992) \u2022 R: Random Forest Regressor (Ho, 1995) \u2022 S: SVR (Vapnik 1995) To achieve the best performance, we also compare different combinations of our basic models with the metrics of Mean Squared Error (MSE) in the stacking method, and the experimental result is shown in Table 1 . 
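The feature set and the two-layer stacking scheme described above can be sketched with scikit-learn. Everything below is an illustrative assumption where the paper is silent: the regex patterns, the particular base regressors chosen from the list (B, G, K, S), the default hyperparameters, and the toy data standing in for real tweets and gold intensity scores.

```python
import re

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge
from sklearn.svm import NuSVR, SVR

def tweet_features(tweet: str) -> list:
    """Surface features for the base regressors (patterns are assumptions)."""
    return [
        len(re.findall(r"#\w+", tweet)),                         # number of hashtags
        float(bool(re.search(r"\w\*+\w", tweet))),               # ill format, e.g. f**k
        len(re.findall(r"[!?]{2,}", tweet)),                     # punctuation runs
        float(bool(re.search(r"(\w)\1{2,}", tweet))),            # elongated: 'cooool'
        float(bool(re.search(r"https?://\S+", tweet))),          # contains a URL
        float(tweet.startswith("@") or tweet.startswith("RT")),  # reply / retweet
    ]

# Base layer with a NuSVR stacking layer; hyperparameters are sklearn defaults.
model = StackingRegressor(
    estimators=[
        ("B", BayesianRidge()),
        ("G", GradientBoostingRegressor()),
        ("K", KernelRidge()),
        ("S", SVR()),
    ],
    final_estimator=NuSVR(nu=0.5),
)

# Toy data standing in for tweet feature vectors and intensity scores in [0, 1].
tweets = ["sooo happy!!! #joy", "this is awful :( #angry", "meh"] * 20
rng = np.random.default_rng(0)
X = np.array([tweet_features(t) for t in tweets]) + rng.normal(scale=0.01, size=(60, 6))
y = rng.uniform(0, 1, size=60)
model.fit(X, y)
pred = model.predict(X[:3])
```

Fitting the `StackingRegressor` trains the base layer with cross-validated predictions feeding the NuSVR meta-regressor, mirroring the base/stacking split described above.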
\u2022 Baseline: we use SVR as the baseline; \u2022 Stacking1: B+K+S; \u2022 Stacking2: M+K+R; \u2022 Stacking3: B+K+R+S; \u2022 Stacking4: B+G+K+M; \u2022 Stacking5: G+K+L+S; \u2022 Stacking6: B+G+K+S.", "cite_spans": [ { "start": 98, "end": 116, "text": "(Hsiang, T.C 1975)", "ref_id": null }, { "start": 161, "end": 175, "text": "Friedman, 2001", "ref_id": "BIBREF3" }, { "start": 196, "end": 218, "text": "(Zhang Y et. al, 2013)", "ref_id": "BIBREF2" }, { "start": 240, "end": 265, "text": "(Tibshirani et al., 1996)", "ref_id": "BIBREF10" }, { "start": 286, "end": 307, "text": "(Pal and Mitra, 1992)", "ref_id": null }, { "start": 337, "end": 347, "text": "(Ho, 1995)", "ref_id": "BIBREF11" }, { "start": 357, "end": 370, "text": "(Vapnik 1995)", "ref_id": null } ], "ref_spans": [ { "start": 572, "end": 579, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Stacking Model", "sec_num": "3.2" }, { "text": "Since Stacking6 achieves the best performance, we use the same setting in our system. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stacking Model", "sec_num": "3.2" }, { "text": "This section introduces our hierarchical attention model for subtasks B and D. Given a tweet and an emotion category E (or a sentiment category V), the goal is to classify the tweet into one of the ordinal intensity classes of E (or V) that best represents the mental state of the tweeter. Note that the number of classes for E is 4, while that for V is 7. In our system, we consider both subtasks B and D as classification problems. Each tweet contains several sentences, each of which comprises several words. In order to better represent the semantics of emotion or sentiment, we utilize the hierarchical structure of a tweet to capture contextual information both within and across sentences. 
The architecture is shown as Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 718, "end": 726, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Attention Networks for Subtask B and D", "sec_num": "4" }, { "text": "We build a hierarchical model which contains two layers, word layer and sentence layer. Since words and sentences are highly sensitive to the con-texts, recurrent neural networks based on bidirectional long short-term memory (BiLSTM) (Hochreiter and Schmidhuber, 1997) are implemented on both layers to get tweets' representations. Furthermore, since the words in one sentence or different sentences in a given tweet can indicate different emotion intensity or sentiment intensity. To better represent the semantics, attention mechanisms are added to both layers respectively (Xu et. al., 2015) . We then use softmax as the activation", "cite_spans": [ { "start": 234, "end": 268, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF4" }, { "start": 576, "end": 594, "text": "(Xu et. 
al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Attention Networks for Subtask B and D", "sec_num": "4" }, { "text": "A word level BiLSTM (Hochreiter and Schmidhuber, 1997 ", "cite_spans": [ { "start": 20, "end": 53, "text": "(Hochreiter and Schmidhuber, 1997", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "BiLSTM-based Word Encoder", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= ( + \u210e \u22121 + ) (1) = ( + \u210e \u22121 + ) (2) = ( + \u210e \u22121 + ) (3) = tanh( + \u210e \u22121 + ) (4) = \u2a00 + \u2a00 \u22121 (5) \u210e = \u2a00 tanh( )", "eq_num": "(6)" } ], "section": "BiLSTM-based Word Encoder", "sec_num": "4.1" }, { "text": "where , and are the input gate, forget gate and output gate, is the logistic sigmoid function, \u2a00 denotes elementwise multiplication, \u210e is the network output activation function, and softmax is used for categorization. To better support Twitter, we input the word embedding with 200 dimensions, and the max number of words in a sentence as 140.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BiLSTM-based Word Encoder", "sec_num": "4.1" }, { "text": "are given to different words. Attention mechanism (Xu et. al., 2015) is added to the word layer and the sentence can be represented as _ . = tanh( \u210e + )", "cite_spans": [ { "start": 50, "end": 68, "text": "(Xu et. 
al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Different weights", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= exp( ) \u2211 ( )", "eq_num": "(7)" } ], "section": "Different weights", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "_ = \u2211 \u210e (9)", "eq_num": "(8)" } ], "section": "Different weights", "sec_num": null }, { "text": "More specifically, after putting \u210e into a fullyconnected layer, we get . Then calculate weight with a word level context . Finally, we can get the sentence vector through an attention layer by calculating the sum of \u210e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Different weights", "sec_num": null }, { "text": "Similarly, a sentence level BiLSTM (Hochreiter and Schmidhuber, 1997) More specifically, after putting \u210e into a fullyconnected layer, we get . Then calculate weight with a sentence level context . Finally, we can get the tweet vector through an attention layer by calculating the sum of \u210e .", "cite_spans": [ { "start": 35, "end": 69, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Layer Attention", "sec_num": "4.3" }, { "text": "This section will introduce neural network model for subtask E. Given a tweet, classify the tweet as \"neutral or no emotion\" or as one, or more, of eleven given emotions that best represent the mental state of the tweeter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask E", "sec_num": "5" }, { "text": "Each tweet will be classified with different numbers of labels. Since there exists eleven labels each of which may be suitable, considering one of these labels every time is reasonable. 
Our system calculates a score for each of the eleven labels for each tweet, and selects the top 3 as the final result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask E", "sec_num": "5" }, { "text": "We also use an LSTM network for this task, and obtain the classification result using softmax. The other settings of this model are quite similar to those in Section 4, except for the multi-label classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtask E", "sec_num": "5" }, { "text": "In this section, we report our evaluation results in SemEval 2018 based on the given dataset as well as the metrics. The statistics of the dataset are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 171, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiment", "sec_num": "6" }, { "text": "Note that no extra external resources, such as sentiment lexicons, emoticons, or annotated corpora, are used in the evaluation except for the training dataset provided by the organizers. Table 3 shows the results of our UIR-Miner for all the subtasks on both the Dev set and the Test set, together with the final rankings. ", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment", "sec_num": "6" }, { "text": "In this paper, we present a framework for the SemEval 2018 Affect in Tweets task. After preprocessing, we first propose an ensemble method to calculate the intensity scores of emotion and sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Then an LSTM network model with a multi-layer attention mechanism is constructed for emotion and sentiment classification. 
According to the metrics of SemEval 2018, our runs obtained final scores of 0.636, 0.531, 0.731, 0.708, and 0.408 in terms of Pearson correlation on the 5 subtasks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "This paper is funded by the National Natural Science Foundation of China (61502115, 61602326, U1636103, U1536207, 61572043, 61672361, 61632011), the Hong Kong Applied Science and Technology Research Institute Project 7050854, and the Fundamental Research Funds for the Central Universities (3262015T70, 3262017T12).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A tutorial on support vector regression", "authors": [ { "first": "Alex", "middle": [ "J" ], "last": "Smola", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" } ], "year": 2004, "venue": "Kluwer Academic Publishers", "volume": "", "issue": "", "pages": "199--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex J. Smola and Bernhard Sch\u00f6lkopf. 2004. A tutorial on support vector regression. Kluwer Academic Publishers, pages 199-222.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Bayesian View on Ridge Regression", "authors": [ { "first": "T", "middle": [ "C" ], "last": "Hsiang", "suffix": "" } ], "year": 1975, "venue": "In Journal of the Royal Statistical Society", "volume": "", "issue": "", "pages": "267--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsiang, T.C. 1975. A Bayesian View on Ridge Regression. 
In Journal of the Royal Statistical Society, pages 267-268.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Divide and conquer kernel ridge regression", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "M", "middle": [], "last": "Wainwright", "suffix": "" } ], "year": 2013, "venue": "Conference on Learning Theory", "volume": "", "issue": "", "pages": "592--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang Y, Duchi J, Wainwright M. 2013. Divide and conquer kernel ridge regression. In Conference on Learning Theory, pages 592-617.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Greedy function approximation: a gradient boosting machine", "authors": [ { "first": "Jerome", "middle": [ "H" ], "last": "Friedman", "suffix": "" } ], "year": 2001, "venue": "Annals of Statistics", "volume": "", "issue": "", "pages": "1189--1232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome H. Friedman. 2001. Greedy function approximation: a gradient boosting machine. In Annals of Statistics, pages 1189-1232.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "In Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
In Neural computation, 9(8): 1735-1780.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SemEval-2018 Task1: Affect in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding emotions: A dataset of tweets to study interactions between affect categories", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11 th Edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. 
In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing, pages 1532-1543.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhudinov", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2048--2057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. 
In International Conference on Machine Learning, pages 2048-2057.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480-1489.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Least absolute shrinkage and selection operator", "authors": [ { "first": "R", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "P", "middle": [], "last": "Bickel", "suffix": "" }, { "first": "Y", "middle": [], "last": "Ritov", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tibshirani R, Bickel P, Ritov Y, et al. Least absolute shrinkage and selection operator. 
1996.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Random decision forests[C]//Document analysis and recognition", "authors": [ { "first": "T", "middle": [], "last": "Ho", "suffix": "" } ], "year": 1995, "venue": "proceedings of the third international conference on", "volume": "1", "issue": "", "pages": "278--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ho T K. Random decision forests[C]//Document anal- ysis and recognition, 1995, proceedings of the third international conference on. IEEE, 1995, 1: 278- 282.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multilayer perceptron, fuzzy sets, and classification", "authors": [ { "first": "S K", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Mitra", "middle": [ "S" ], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pal S K, Mitra S. Multilayer perceptron, fuzzy sets, and classification[J].", "links": null } }, "ref_entries": { "FIGREF0": { "text": "System architecture.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "BiLSTM network with multi-layer attention mechanism.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": ") is used to represent each word. The BiLSTM consists of the forward LSTM and the backward LSTM. Forward LSTM reads the sentence from 1 to and represents the word as \u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7 ( ), \u2208 [1, ]. Backward LSTM reads the sentence from to 1 and represents the word as \u20d6\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7 ( ), \u2208 [ , 1]. Then word can be annotated by combining both forward information and backward information, \u210e = [ \u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7 ( ), \u20d6\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7\u20d7 ( ) ] . 
The equations are listed as follows:", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "can be used to represent sentence i by adding sentence-level context information. We then add weights to different sentences: taking h_i as input, we obtain the tweet representation v through an attention layer.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "content": "
Method Ang Fear Joy Sad Ave
Baseline 9.774 8.390 9.055 9.086 9.076
Stacking1 9.404 7.926 8.352 8.629 8.578
Stacking2 9.596 7.849 8.192 8.520 8.539
Stacking3 9.351 7.900 8.206 8.536 8.500
Stacking4 9.557 7.715 8.045 8.454 8.443
Stacking5 9.381 7.790 8.170 8.387 8.432
Stacking6 9.
", "html": null, "num": null, "text": "Evaluation on different combinations in stacking method." }, "TABREF1": { "type_str": "table", "content": "
Training setDev setTest set
EI-reg anger: 1701anger: 388anger: 17939
fear: 2252fear: 389fear: 17923
joy: 1616joy: 290joy: 18042
sadness: 1533sadness: 397sadness: 17912
EI-oc anger: 1701anger: 388anger: 1002
fear: 2252fear: 389fear: 986
joy: 1616joy: 290joy: 1105
sadness: 1533sadness: 397sadness: 975
V-reg 118144917874
V-oc 1181449937
E-c68388863259
", "html": null, "num": null, "text": "Statistics of the dataset." }, "TABREF2": { "type_str": "table", "content": "
Score in Dev Score in Test Ranking
EI-reg 0.5760.63628/48
EI-oc 0.4950.53115/39
V-reg 0.7290.78121/38
V-oc0.6940.70816/37
E-c0.4210.40723/35
", "html": null, "num": null, "text": "The results on different datasets." } } } }