{ "paper_id": "S18-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:45:11.895058Z" }, "title": "DMCB at SemEval-2018 Task 1: Transfer Learning of Sentiment Classification Using Group LSTM for Emotion Intensity prediction", "authors": [ { "first": "Youngmin", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "Data mining and Computational Biology Lab", "institution": "Gwangju Institute of Science and Technology", "location": { "settlement": "Gwangju", "country": "Korea" } }, "email": "" }, { "first": "Hyunju", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "Data mining and Computational Biology Lab", "institution": "Gwangju Institute of Science and Technology", "location": { "settlement": "Gwangju", "country": "Korea" } }, "email": "hyunjulee@gist.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a system attended in the SemEval-2018 Task 1 \"Affect in tweets\" that predicts emotional intensities. We use Group LSTM with an attention model and transfer learning with sentiment classification data as a source data (SemEval 2017 Task 4a). A transfer model structure consists of a source domain and a target domain. Additionally, we try a new dropout that is applied to LSTMs in the Group LSTM. Our system ranked 8th at the subtask 1a (emotion intensity regression). We also show various results with different architectures in the source, target and transfer models.", "pdf_parse": { "paper_id": "S18-1044", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a system attended in the SemEval-2018 Task 1 \"Affect in tweets\" that predicts emotional intensities. We use Group LSTM with an attention model and transfer learning with sentiment classification data as a source data (SemEval 2017 Task 4a). A transfer model structure consists of a source domain and a target domain. Additionally, we try a new dropout that is applied to LSTMs in the Group LSTM. Our system ranked 8th at the subtask 1a (emotion intensity regression). We also show various results with different architectures in the source, target and transfer models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sentiment analysis is one of the most famous Natural Language Process (NLP) task. In this study, we perform a task that predicts emotional intensities of anger, joy, fear and sadness with tweet messages, where intensity values range from 0 to 1. This task is competed at SemEval-2018 Task 1 (Mohammad et al., 2018) . In previous studies, neural networks with word embedding and affective lexicons were widely used (Goel et al., 2017; He et al., 2017) . Also, many studies employed support vector regression (Duppada and Hiray, 2017; Akhtar et al., 2017) .", "cite_spans": [ { "start": 291, "end": 314, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF15" }, { "start": 414, "end": 433, "text": "(Goel et al., 2017;", "ref_id": "BIBREF8" }, { "start": 434, "end": 450, "text": "He et al., 2017)", "ref_id": "BIBREF9" }, { "start": 507, "end": 532, "text": "(Duppada and Hiray, 2017;", "ref_id": "BIBREF7" }, { "start": 533, "end": 553, "text": "Akhtar et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Transfer learning was recently proposed as an effecive approach to have higher performance, when data is not abundant. 
Using a pre-trained deep-learning model built on an abundant data set has become popular and shows good results in various tasks (Donahue et al., 2014; Conneau et al., 2017). It is especially effective for medical imaging tasks, where medical data are scarce (Tajbakhsh et al., 2016). Just as humans can learn new things better with their past knowledge, neural networks can also be trained on target domains by transferring knowledge from the source domain.", "cite_spans": [ { "start": 242, "end": 264, "text": "(Donahue et al., 2014;", "ref_id": "BIBREF6" }, { "start": 265, "end": 286, "text": "Conneau et al., 2017)", "ref_id": "BIBREF4" }, { "start": 379, "end": 403, "text": "(Tajbakhsh et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We build a transfer model that can be divided into a source model and a target model. The source model is constructed following (Baziotis et al., 2017), which uses an LSTM with attention. However, we introduce the Group LSTM (GLSTM) (Kuchaiev and Ginsburg, 2017) with a new dropout. We then build the target model with an LSTM.", "cite_spans": [ { "start": 136, "end": 159, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF2" }, { "start": 253, "end": 282, "text": "(Kuchaiev and Ginsburg, 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the results section, we compare LSTM and GLSTM in the source model and report results of various pre-trained word embeddings with the target model. Finally, we discuss the results of the transfer model, which combines the source and target models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For transfer learning, we use source data provided by SemEval 2017 Task 4(a) (Rosenthal et al., 2017). The task of the source domain is to classify sentences as positive, negative or neutral. The training data consist of 44,613 sentences (10% are used as a development set), and the test data consist of 12,284 sentences for the source model evaluation. For transfer learning in this study, all training and test data are used as training data.", "cite_spans": [ { "start": 79, "end": 103, "text": "(Rosenthal et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "For the target domain, the training data consist of about 2,000 sentences for each emotion. Although the main task is regression, we convert it to distribution prediction (Tai et al., 2015). In this way, we treat it as a classification problem. Intensity scores y are converted to labels t satisfying:", "cite_spans": [ { "start": 169, "end": 187, "text": "(Tai et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "t_i = y' \u2212 \u230ay'\u230b if i = \u230ay'\u230b + 1; \u230ay'\u230b \u2212 y' + 1 if i = \u230ay'\u230b; 0 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "where i = [1, 2, 3, 4, 5] and y' = 4y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "The size of the final output is 5. 
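To make this conversion concrete, here is a minimal Python sketch of the label construction (our illustration, not the authors' code; it uses 0-based array indexing, whereas the paper indexes the bins from 1 to 5):

import numpy as np

def intensity_to_distribution(y, bins=5):
    # Map an intensity score y in [0, 1] to a 5-bin target distribution t.
    yp = y * (bins - 1)                   # y' = 4y when there are 5 bins
    lower = int(np.floor(yp))             # 0-based index of the lower neighbouring bin
    t = np.zeros(bins)
    t[lower] = np.floor(yp) - yp + 1      # mass assigned to the lower bin
    if lower + 1 < bins:
        t[lower + 1] = yp - np.floor(yp)  # mass assigned to the upper bin
    return t

def distribution_to_intensity(t):
    # Recover the score as the dot product of t with r = [0, 0.25, 0.5, 0.75, 1].
    r = np.linspace(0, 1, len(t))
    return float(np.dot(t, r))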
For example, if an intensity score y is 0.7, label t is [0, 0, 0.2, 0.8, 0].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "Given r = [0, 0.25, 0.5, 0.75, 1], the score y can be recovered by the dot product of t and r (0.7 = 0.2*0.5 + 0.8*0.75).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Label", "sec_num": "2.1" }, { "text": "To normalize words and remove noise in sentences, we use the ekphrasis library (Baziotis et al., 2017). It provides a social-media tokenizer, spell correction, word segmentation and various other preprocessing steps. We normalize time expressions and numbers, and omit URLs, email addresses and user tags. Annotations are added to hashtags, emphasized words and repeated words. We annotate hashtags as a single group because they often appear together (see Table 1 ). Lastly, emoticons are replaced with words that represent them. #letsdance #dancinginthemoonlight #singing \u21d2 hashtag lets dance dancing in the moonlight singing /hashtag ", "cite_spans": [ { "start": 75, "end": 98, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 407, "end": 414, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Text preprocessing", "sec_num": "2.2" }, { "text": "We try five pre-trained word embeddings to choose the best one for the target model. Two are trained with GloVe (Pennington et al., 2014) using different data sets: one 1 is trained on a very large Common Crawl corpus, and the other 2 is trained on tweets (Baziotis et al., 2017). The other word embedding methods are fastText 3 (Bojanowski et al., 2016), word2vec 4 (Mikolov et al., 2013) and LexVec 5 (Salle et al., 2016). LexVec is a hybrid of GloVe and word2vec. All of them have 300 dimensions. Among them, the tweet-trained GloVe is used for the source and transfer models. Emoji can be good features, but most emoji ideograms are not contained in the embedding vocabulary. Hence, we convert an emoji to a phrase with the Python 'emoji' library. For example, \ud83d\ude04 is decoded to \"Smiling Face with Open Mouth and Smiling Eyes\". Because such a phrase is quite long, the embedding vector of an emoji is set to the mean of the vectors of the decoded words. In this way, we reduce out-of-vocabulary tokens and prevent sentences from lengthening.", "cite_spans": [ { "start": 112, "end": 137, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" }, { "start": 256, "end": 279, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF2" }, { "start": 326, "end": 351, "text": "(Bojanowski et al., 2016)", "ref_id": "BIBREF3" }, { "start": 365, "end": 387, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF14" }, { "start": 401, "end": 421, "text": "(Salle et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Word embedding", "sec_num": "2.3" }, { "text": "A Recurrent Neural Network (RNN) works well on sequence data such as language because it handles sequences of arbitrary length (Tai et al., 2015). However, RNNs are difficult to optimize because of the vanishing gradient problem. 
To address this problem, the LSTM introduced a cell state and gates that act as bridges to control the flow of error (Hochreiter and Schmidhuber, 1997).", "cite_spans": [ { "start": 111, "end": 129, "text": "(Tai et al., 2015)", "ref_id": "BIBREF20" }, { "start": 306, "end": 340, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "LSTM and GLSTM", "sec_num": "2.4" }, { "text": "A GLSTM is simply a group of several LSTMs whose outputs are concatenated. The idea is that an LSTM can be divided into several sub-LSTMs (Kuchaiev and Ginsburg, 2017). This model has some advantages over the original LSTM. The number of parameters is reduced while preserving the feature size. Also, it can be parallelized and computation time is reduced because each sub-LSTM is computed independently.", "cite_spans": [ { "start": 141, "end": 170, "text": "(Kuchaiev and Ginsburg, 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "LSTM and GLSTM", "sec_num": "2.4" }, { "text": "To avoid overfitting and achieve generality, we use three types of dropout. The first is normal dropout between layers (Srivastava et al., 2014). If the layer output is sequential, the dropout mask is shared along the sequential axis. The second is dropout inside the LSTM cells. In each LSTM cell, the same dropout mask is applied to the hidden values that come from the previous cell (Zaremba et al., 2014). Applying a different dropout mask at each cell can corrupt memory and information. With the same dropout mask, however, the LSTM cell drops nodes consistently so that the model can forget or memorize information stably. The last is dropout between sub-LSTMs. To gain more generality, we drop several sub-LSTMs in the GLSTM. For example, if a GLSTM consists of five sub-LSTMs, we drop two of them and use only the remaining three.", "cite_spans": [ { "start": 113, "end": 137, "text": "(Srivastava et al., 2014", "ref_id": "BIBREF19" }, { "start": 370, "end": 392, "text": "(Zaremba et al., 2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Dropout", "sec_num": "2.5" }, { "text": "3 Model structure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dropout", "sec_num": "2.5" }, { "text": "For the source model, the tweet-trained GloVe vectors are used as input vectors of the embedding layer. After the embedding layer, two GLSTM layers are stacked. Each GLSTM is made of 5 sub-LSTMs with a feature size of 40. Additionally, we concatenate forward and backward GLSTMs to make the model bidirectional, so the hidden size of each recurrent layer is 400 (= 5 \u00d7 40 \u00d7 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source model", "sec_num": "3.1" }, { "text": "Next is an attention layer, which calculates the importance of each time step. The attention mechanism shows good performance on sequential tasks such as machine translation (Bahdanau et al., 2014) and sentiment analysis (Baziotis et al., 2017). It helps the model concentrate on positions related to emotion. 
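As a rough sketch of how such an attention pooling over the hidden states can be implemented (our illustrative Python code with assumed shapes, not the authors' implementation; it follows the formulas given next):

import numpy as np

def attention_pool(H, W_h, b):
    # H: (timesteps, hidden) matrix of hidden states from the recurrent layer.
    # W_h: (hidden,) scoring weights and b: (timesteps,) bias -- assumed shapes.
    e = H @ W_h + b                      # score e_t for every time step
    a = np.exp(e - e.max())              # softmax over time steps (shifted for numerical stability)
    a = a / a.sum()                      # attention weights a_t, which sum to 1
    return (a[:, None] * H).sum(axis=0)  # weighted sum -> fixed-size sentence representation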
Attention values are calculated as:", "cite_spans": [ { "start": 163, "end": 186, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF1" }, { "start": 210, "end": 233, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Source model", "sec_num": "3.1" }, { "text": "e_t = W_h h_t + b_t,   a_t = exp(e_t) / \u2211_{i=1}^{l} exp(e_i),   \u2211_{t=1}^{l} a_t = 1, where h_t is the hidden state at time step t and l is the number of time steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source model", "sec_num": "3.1" }, { "text": "The calculated attention values are multiplied by the corresponding hidden states, and the results are summed. After passing through the attention layer, the output becomes a non-sequential representation vector. It enters a fully connected softmax layer as the final classification layer, where the size of the layer is 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source model", "sec_num": "3.1" }, { "text": "Unlike the source model, a normal bi-LSTM with a feature size of 100 is used. Attention and output layers are then stacked on top. The size of the output layer is 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target model", "sec_num": "3.2" }, { "text": "For transfer learning, the outputs of several layers of the source model are used as additional features. The LSTM layer of the target model takes as input the concatenation of the embedding layer output and the output of the first LSTM layer of the source model. Similarly, after the attention layer, the outputs of the attention and final layers of the source model are concatenated and fed into the final layer as input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target model", "sec_num": "3.2" }, { "text": "At the embedding layer, Gaussian noise with sigma = 0.2 is applied. It helps the model to be robust by avoiding overfitting to specific features of words. Dropout with probability p = 0.3 is used between all layers except before the final layer, where p = 0.5 is applied. Additionally, LSTM dropout with p = 0.3 was applied to every LSTM layer. The dropout probability for the GLSTM in the source model is 0.3. Also, we use L2 regularization. It prevents weights from becoming large by adding a weight penalty to the loss. We set it to 0.001 for the source model and 0.0001 for the target model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "3.3" }, { "text": "For the source and target models, categorical cross-entropy is used as the loss function. For updating weights, we apply the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 0.001. While training the transfer model, since we want to preserve the target model weight parameters with only a little updating, we decrease the gradient flow of backpropagation from the source model to the target model by a factor of 0.05 (see the large arrows in Figure 1 ). Because there are many parameters in the final model, we apply this constraint to prevent overfitting. Figure 2 shows the results of GLSTM and normal LSTM on the source model for sentiment classification (SemEval 2017 Task 4a). We tried various feature sizes. The number of sub-LSTMs in the GLSTM is fixed at 5 and the feature size of each sub-LSTM is varied. As the feature size increases, the performance of GLSTM increases. On the other hand, although the performance of LSTM gradually improves with larger feature sizes, it starts to decrease rapidly after 100. Thus, we infer that GLSTM with dropout handles overfitting more effectively than LSTM at larger feature sizes. 
Based on this result, we use GLSTM for the source model.", "cite_spans": [], "ref_spans": [ { "start": 430, "end": 439, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 545, "end": 553, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Training", "sec_num": "3.4" }, { "text": "We tested five different word embedding vectors using the target model to choose the best embedding. To compare the performances of the embeddings, the embedding layer was not trained (static). [Figure 2: Performance comparison between GLSTM and LSTM on the source model for sentiment classification. The dotted line is the result of (Baziotis et al., 2017).] [Table 2: Pearson correlation on the Dev set with the target model for SemEval-2018 Task 1(a).]", "cite_spans": [], "ref_spans": [ { "start": 333, "end": 340, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Various Embedding", "sec_num": "4.2" }, { "text": "Note that we did not use transfer learning in this experiment. Table 2 shows the Pearson correlation between the given emotion intensities and the intensities predicted by the models on the development set. Tweet GloVe had the best score and Common GloVe showed the second best score. Hence, we decided to do transfer learning with Tweet GloVe and Common GloVe.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Various Embedding", "sec_num": "4.2" }, { "text": "Our main task results are described in Table 3 . There are four models. Tweet GloVe and Common GloVe were picked based on the conclusion of Section 4.2, and we tried two approaches: training the embedding layer or not (non-static or static) (Kim, 2014). Tweet GloVe with static showed the best performance as a single model, and it is almost the same as non-static. However, the non-static method had a higher score than the static one for the Common GloVe embedding. In addition, an ensemble obtained by averaging all single models showed better performance than the single models. We also found that, compared to the scores without transfer learning on the dev set (Table 2) , there were significant performance improvements when transfer learning was used (Table 3 ).", "cite_spans": [ { "start": 233, "end": 244, "text": "(Kim, 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 39, "end": 46, "text": "Table 3", "ref_id": null }, { "start": 641, "end": 650, "text": "(Table 2)", "ref_id": null }, { "start": 732, "end": 739, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Transfer", "sec_num": "4.3" }, { "text": "This paper described the system submitted to SemEval-2018 Task 1: Affect in tweets and an analysis of various models. Various embedding vectors were tried, and we chose Tweet GloVe with static. The main method is an LSTM with attention and transfer learning that uses sentiment classification as the source domain. In future work, we will perform transfer learning with labeled data sets such as SNLI or SST. 
Also, training tagging or tree parsing can be used for transfer learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://nlp.stanford.edu/projects/glove/ 2 https://github.com/cbaziotis/datastories-semeval2017-task43 https://github.com/facebookresearch/fastText 4 https://code.google.com/archive/p/word2vec/ 5 https://github.com/alexandres/lexvec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by the Bio-Synergy Research Project (NRF-2016M3A9C4939665) of the Ministry of Science, ICT and Future Planning through the National Research Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Iitp at emoint-2017: Measuring intensity of emotions using sentence embeddings and optimized features", "authors": [ { "first": "Palaash", "middle": [], "last": "Md Shad Akhtar", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Sawant", "suffix": "" }, { "first": "Jyoti", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Pawar", "suffix": "" }, { "first": "", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "212--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Shad Akhtar, Palaash Sawant, Asif Ekbal, Jyoti Pawar, and Pushpak Bhattacharyya. 2017. Iitp at emoint-2017: Measuring intensity of emotions us- ing sentence embeddings and optimized features. In Proceedings of the 8th Workshop on Computa- tional Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 212-218.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis", "authors": [ { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" }, { "first": "Nikos", "middle": [], "last": "Pelekis", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Doulkeridis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "747--754", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017), pages 747-754, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.04606" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Loic", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.02364" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Experiment results of the transfer model on SemEval-2018 Task 1(a) Emotional Intensity regression", "authors": [], "year": null, "venue": "The submitted system to the task is Tweet GloVe with static", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Table 3: Experiment results of the transfer model on SemEval-2018 Task 1(a) Emotional Intensity regres- sion. The submitted system to the task is Tweet GloVe with static.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Decaf: A deep convolutional activation feature for generic visual recognition", "authors": [ { "first": "Jeff", "middle": [], "last": "Donahue", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Tzeng", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2014, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "647--655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoff- man, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. Decaf: A deep convolutional activation fea- ture for generic visual recognition. 
In International conference on machine learning, pages 647-655.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Seernet at emoint-2017: Tweet emotion intensity estimator", "authors": [ { "first": "Venkatesh", "middle": [], "last": "Duppada", "suffix": "" }, { "first": "Sushant", "middle": [], "last": "Hiray", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.06185" ] }, "num": null, "urls": [], "raw_text": "Venkatesh Duppada and Sushant Hiray. 2017. Seernet at emoint-2017: Tweet emotion intensity estimator. arXiv preprint arXiv:1708.06185.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Prayas at emoint 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets", "authors": [ { "first": "Pranav", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Devang", "middle": [], "last": "Kulshreshtha", "suffix": "" }, { "first": "Prayas", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Kaushal Kumar", "middle": [], "last": "Shukla", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "58--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Goel, Devang Kulshreshtha, Prayas Jain, and Kaushal Kumar Shukla. 2017. Prayas at emoint 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets. In Pro- ceedings of the 8th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 58-65.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Yzu-nlp at emoint-2017: Determining emotion intensity using a bi-directional lstmcnn model", "authors": [ { "first": "Yuanye", "middle": [], "last": "He", "suffix": "" }, { "first": "Liang-Chih", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Weiyi", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "238--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanye He, Liang-Chih Yu, K Robert Lai, and Weiyi Liu. 2017. Yzu-nlp at emoint-2017: Determin- ing emotion intensity using a bi-directional lstm- cnn model. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sen- timent and Social Media Analysis, pages 238-242.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. 
Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Factorization tricks for lstm networks", "authors": [ { "first": "Oleksii", "middle": [], "last": "Kuchaiev", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Ginsburg", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.10722" ] }, "num": null, "urls": [], "raw_text": "Oleksii Kuchaiev and Boris Ginsburg. 2017. Factor- ization tricks for lstm networks. arXiv preprint arXiv:1703.10722.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semeval-2018 Task 1: Affect in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semeval-2017 task 4: Sentiment analysis in twitter", "authors": [ { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Noura", "middle": [], "last": "Farra", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "502--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Matrix factorization using window sampling and negative sampling for improved word representations", "authors": [ { "first": "Alexandre", "middle": [], "last": "Salle", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Idiart", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Villavicencio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.00819" ] }, "num": null, "urls": [], "raw_text": "Alexandre Salle, Marco Idiart, and Aline Villavicencio. 2016. Matrix factorization using window sampling and negative sampling for improved word represen- tations. arXiv preprint arXiv:1606.00819.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.00075" ] }, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. 
arXiv preprint arXiv:1503.00075.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Convolutional neural networks for medical image analysis: Full training or fine tuning?", "authors": [ { "first": "Nima", "middle": [], "last": "Tajbakhsh", "suffix": "" }, { "first": "Jae", "middle": [ "Y" ], "last": "Shin", "suffix": "" }, { "first": "R", "middle": [], "last": "Suryakanth", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Gurudu", "suffix": "" }, { "first": "", "middle": [], "last": "Hurst", "suffix": "" }, { "first": "B", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Kendall", "suffix": "" }, { "first": "B", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Jianming", "middle": [], "last": "Gotway", "suffix": "" }, { "first": "", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "IEEE transactions on medical imaging", "volume": "35", "issue": "5", "pages": "1299--1312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nima Tajbakhsh, Jae Y Shin, Suryakanth R Gurudu, R Todd Hurst, Christopher B Kendall, Michael B Gotway, and Jianming Liang. 2016. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE transactions on medi- cal imaging, 35(5):1299-1312.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Recurrent neural network regularization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.2329" ] }, "num": null, "urls": [], "raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Structure of models. For the transfer model, connections between source and target models are used. Large arrows are paths of reduced gradient flow during backpropagation." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Figure 2:" }, "TABREF0": { "html": null, "text": "", "num": null, "type_str": "table", "content": "" } } } }