{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:18:42.594713Z" }, "title": "YNU-HPCC at SemEval-2020 Task 8: Using a Parallel-Channel Model for Memotion Analysis", "authors": [ { "first": "Li", "middle": [], "last": "Yuan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yunnan University Kunming", "location": { "country": "China" } }, "email": "" }, { "first": "Jin", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yunnan University Kunming", "location": { "country": "China" } }, "email": "wangjin@ynu.edu.cn" }, { "first": "Xuejie", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yunnan University Kunming", "location": { "country": "China" } }, "email": "xjzhang@ynu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent years, the growing ubiquity of Internet memes on social media platforms, such as Facebook, Instagram, and Twitter, has become a topic of immense interest. However, the classification and recognition of memes is much more complicated than that of social text since it involves visual cues and language understanding. To address this issue, this paper proposed a parallel-channel model to process the textual and visual information in memes and then analyze the sentiment polarity of memes. In the shared task of identifying and categorizing memes, we preprocess the dataset according to the language behaviors on social media. Then, we adapt and fine-tune the Bidirectional Encoder Representations from Transformers (BERT), and two types of convolutional neural network models (CNNs) were used to extract the features from the pictures. We applied an ensemble model that combined the BiLSTM, BIGRU, and Attention models to perform cross domain suggestion mining. The officially released results show that our system performs better than the baseline algorithm. Our team won nineteenth place in subtask A (Sentiment Classification). The code of this paper is availabled at : https://github.com/YuanLi95/Semveal2020-Task8-emotion-analysis.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In recent years, the growing ubiquity of Internet memes on social media platforms, such as Facebook, Instagram, and Twitter, has become a topic of immense interest. However, the classification and recognition of memes is much more complicated than that of social text since it involves visual cues and language understanding. To address this issue, this paper proposed a parallel-channel model to process the textual and visual information in memes and then analyze the sentiment polarity of memes. In the shared task of identifying and categorizing memes, we preprocess the dataset according to the language behaviors on social media. Then, we adapt and fine-tune the Bidirectional Encoder Representations from Transformers (BERT), and two types of convolutional neural network models (CNNs) were used to extract the features from the pictures. We applied an ensemble model that combined the BiLSTM, BIGRU, and Attention models to perform cross domain suggestion mining. The officially released results show that our system performs better than the baseline algorithm. Our team won nineteenth place in subtask A (Sentiment Classification). 
The code for this paper is available at: https://github.com/YuanLi95/Semveal2020-Task8-emotion-analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, memes that combine pictures and text have been widely used in social media. Using memes can help users to express richer meaning and emotion than using text or images alone; hence, it is worthwhile to analyze the sentiment expressions of memes. Moreover, recognizing and analyzing the meaning and sentiment of memes is much more difficult than analyzing social texts or pictures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In SemEval-2020 Task 8: Memotion Analysis (Sharma et al., 2020) , the organizers hoped that the task would increase the research attention given to the topic. The task is divided into three subtasks.", "cite_spans": [ { "start": 41, "end": 62, "text": "(Sharma et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Task A-Sentiment Classification: Given an Internet meme, the first task is to classify its sentiment polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Task B-Humor Classification: Given an Internet meme, the system has to identify the type of humor expressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Task C-Scales of Semantic Classes: The third task is to quantify the extent to which a particular effect is being expressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Memes have attracted increasing attention from researchers. In a previous study, Borth (2013) pioneered the sentiment analysis of visual content with SentiBank. Another study implemented Optical Character Recognition (OCR) to extract the text captions of memes and then classified the sentiment polarity of the text using the Naive Bayes algorithm (Amalia et al., 2018) . For a similar meme sentiment analysis task, Zhao (2019) developed a multimodal sentiment analysis method for image-text posts, and their experiments showed that this method achieves excellent performance on the Flickr benchmark dataset. Hu and Flaxman (2018) used GloVe to map the text to a high-dimensional space and extracted image features with a fine-tuned Inception model (a pretrained deep convolutional neural network). In this paper, we propose a parallel-channel model that includes a text channel to process the text in memes and an image channel to analyze the images. The text channel implements BiLSTM, BiGRU, and BiLSTM-with-attention models. For the image channel, a multilayer CNN model and ResNet152 (He et al., 2016) were applied to capture the image features. Then, the information from the two modalities is concatenated and combined by a dense layer. 
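To make the fused architecture concrete, the following is a minimal Keras sketch of one parallel-channel configuration. All layer sizes, input shapes, and the small CNN standing in for the image channel are illustrative assumptions rather than the exact configuration used in the system.

```python
# A minimal sketch of the parallel-channel design; sizes are illustrative assumptions.
from tensorflow.keras import layers, models

text_in = layers.Input(shape=(128, 768), name="bert_tokens")  # 128 tokens of 768-dim BERT
img_in = layers.Input(shape=(224, 224, 3), name="meme_image")

# Text channel: one of the RNN variants (here, a BiLSTM).
h_text = layers.Bidirectional(layers.LSTM(128))(text_in)

# Image channel: a small CNN standing in for the multilayer CNN / ResNet152 features.
h_img = layers.Conv2D(32, 3, activation="relu")(img_in)
h_img = layers.MaxPooling2D()(h_img)
h_img = layers.GlobalAveragePooling2D()(h_img)

# Fuse the two modalities with concatenation and a dense classifier.
fused = layers.Concatenate()([h_text, h_img])
out = layers.Dense(3, activation="softmax")(fused)  # e.g., negative/neutral/positive

model = models.Model(inputs=[text_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```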
The experimental results show that our approach achieved good performance.", "cite_spans": [ { "start": 354, "end": 375, "text": "(Amalia et al., 2018)", "ref_id": "BIBREF0" }, { "start": 615, "end": 636, "text": "Hu and Flaxman (2018)", "ref_id": "BIBREF6" }, { "start": 1102, "end": 1119, "text": "(He et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. Section 2 describes the proposed parallel-channel model, and Section 3 presents the implementation details and experimental results. The conclusions of this study are presented in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As illustrated in Figure 1 , the proposed model consists of two channels: the image channel and the text channel. In the text channel, we use two types of pretrained vectors and three different models; in the image channel, we use two different models to extract the picture features. We combine the multiple models with a soft-voting mechanism and output the results. For an input meme, w_s represents the extracted text and I is the image. Then, the proposed model can be expressed as follows:", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Overview", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_i^T = f_i^T(w_s), \\quad h_j^I = f_j^I(I), \\quad f(w_s, I) = \\mathrm{voting}\\big(h_i^T \\oplus h_j^I\\big)", "eq_num": "(1)" } ], "section": "Overview", "sec_num": "2.1" }, { "text": "where i \u2208 {1, 2, 3, 4} and j \u2208 {1, 2}; f^T and f^I represent the functions used to obtain the text and image features. h_i^T and h_j^I are the text vector and the image vector, respectively, and f(w_s, I) is the final result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" }, { "text": "Embedding Layer. The embedding layer is the first layer of the text channel. We construct the word vectors from 768-dimensional BERT embeddings. The word-vector matrix is loaded into the embedding layer and then fed into the different hidden layers. For longer posts, we keep only the first 128 words, which is a reasonable choice since 90% of the posts in the dataset contain fewer than 128 words. We also use 768-dimensional sentence-level BERT vectors as text features and feed them into the fully connected layer. Bidirectional Long Short-Term Memory (BiLSTM) (Greff et al., 2017 ) is a special recurrent neural network. The LSTM model can better capture long-distance dependencies. Various novel models are based on LSTM; for instance, Wang (2020) proposed a tree-structured regional CNN-LSTM model for valence-arousal (VA) prediction, and a capsule tree-LSTM model introduces a dynamic routing algorithm to construct sentence representations (Wang et al., 2019) , with experiments showing that the method improves the performance of the tree LSTM and the basic LSTM model. BiLSTM is based on LSTM and can better capture forward and backward semantic dependencies. 
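Because several such text-channel and image-channel models are combined by soft voting, as in Eq. (1), the combination step itself is simple. A minimal NumPy sketch, assuming each model outputs a probability distribution over the sentiment classes:

```python
import numpy as np

def soft_vote(prob_list):
    """Average the class-probability vectors produced by the individual
    channel models and return the winning class index."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return int(np.argmax(avg))

# Example: three text-channel models and one image-channel model on a single meme.
preds = [np.array([0.2, 0.5, 0.3]),
         np.array([0.1, 0.6, 0.3]),
         np.array([0.3, 0.4, 0.3]),
         np.array([0.2, 0.3, 0.5])]
print(soft_vote(preds))  # -> 1, the class with the highest averaged probability
```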
We show how a memory block calculates the hidden state h_t^T and the cell state C_t using the following equations.", "cite_spans": [ { "start": 585, "end": 604, "text": "(Greff et al., 2017", "ref_id": "BIBREF4" }, { "start": 975, "end": 994, "text": "(Wang et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Text Channel", "sec_num": "2.2" }, { "text": "\u2022 Gates", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Channel", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_t = \\sigma(W_f \\cdot [h_{t-1}^T, x_t] + b_f), \\quad i_t = \\sigma(W_i \\cdot [h_{t-1}^T, x_t] + b_i), \\quad o_t = \\sigma(W_o \\cdot [h_{t-1}^T, x_t] + b_o)", "eq_num": "(2)" } ], "section": "Text Channel", "sec_num": "2.2" }, { "text": "\u2022 Transformation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Channel", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{C}_t = \\tanh(W_c \\cdot [h_{t-1}^T, x_t] + b_c)", "eq_num": "(3)" } ], "section": "Text Channel", "sec_num": "2.2" }, { "text": "\u2022 Status update", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Channel", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_t = f_t \\ast C_{t-1} + i_t \\ast \\tilde{C}_t, \\quad h_t^T = o_t \\ast \\tanh(C_t)", "eq_num": "(4)" } ], "section": "Text Channel", "sec_num": "2.2" }, { "text": "Here, x_t is the input vector; C_t is the cell state vector; W and b are the cell parameters; f_t, i_t, and o_t are the gate vectors; and \u03c3 denotes the sigmoid function. The Gated Recurrent Unit (GRU) (Cho et al., 2014 ) is a variant of LSTM that combines the forget gate and the input gate into a single update gate. It also merges the cell state and the hidden state. The resulting model is simpler than the standard LSTM, achieves similar performance with fewer parameters, and is less prone to overfitting. The attention mechanism (Bahdanau et al., 2015) removes the limitation of the traditional encoder-decoder structure, which depends on a fixed-length vector for encoding and decoding. It retains the intermediate outputs of the LSTM encoder over the input sequence and trains the model to selectively attend to these inputs when producing the output sequence. Attention mechanisms have been widely used in various NLP fields such as the Transformer (Vaswani et al., 2017) , neural machine translation (Yang et al., 2016) and aspect-level sentiment analysis (Tang et al., 2019) .", "cite_spans": [ { "start": 189, "end": 206, "text": "(Cho et al., 2014", "ref_id": "BIBREF3" }, { "start": 516, "end": 539, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 980, "end": 1002, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF11" }, { "start": 1032, "end": 1051, "text": "(Yang et al., 2016)", "ref_id": "BIBREF14" }, { "start": 1088, "end": 1107, "text": "(Tang et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Text Channel", "sec_num": "2.2" }, { "text": "Convolutional neural networks (CNNs) (Krizhevsky et al., 2012) are often used to extract image representations. A CNN is usually divided into convolution layers and pooling layers. 
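A minimal Keras sketch of such a convolution-plus-pooling stack is shown below; the filter counts and kernel sizes are arbitrary assumptions, not the configuration used in the system.

```python
# A toy convolution-plus-pooling stack; filter counts and sizes are arbitrary.
from tensorflow.keras import layers, models

image_cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),            # keep the strongest response per region
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),   # image feature vector for later fusion
])
```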
The convolution layers are used to extract local features from the picture pixels. Pooling selects a region of the input matrix and chooses a representative value for that region; the max pooling layer keeps the maximum feature. The ResNet model (He et al., 2016) is one of the most widely used image recognition models, and its residual connections alleviate the vanishing gradient problem in deep networks. The basic structure of the residual block is shown in Figure 2 . We used PyTorch's pretrained ResNet152 model for feature extraction from the pictures.", "cite_spans": [ { "start": 37, "end": 62, "text": "(Krizhevsky et al., 2012)", "ref_id": "BIBREF7" }, { "start": 420, "end": 437, "text": "(He et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 589, "end": 597, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Image Channel", "sec_num": "2.3" }, { "text": "In this section, experiments were conducted to evaluate the proposed models on both subtasks. We also report the official evaluation results. The details of the experiments are described as follows. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "3" }, { "text": "The organizers provided 7K human-annotated Internet memes labeled along semantic dimensions, namely, sentiment and the type of humor (sarcastic, humorous, or offensive). For subtask A and subtask B, the data distributions are somewhat unbalanced, which makes the tasks much harder. We randomly used 20% of the memes from the provided data as the dev set to fine-tune the parameters. The Stanford tokenizer toolkit was employed to process the meme text into an array of tokens. Before the token arrays are fed into any neural network, they are preprocessed with the following procedures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "3.1" }, { "text": "\u2022 Punctuation marks, website URLs, and e-mail addresses are removed, \u2022 Common nonstandard expressions are restored, and \u2022 Non-English letters are treated as unknown words represented by a special unknown token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "3.1" }, { "text": "This experiment used Keras with the TensorFlow backend. For subtask A, we used two different pretrained word vectors and introduced the models described above. We tried different batch sizes and numbers of training epochs, and the results are shown in Figure 3 . The best batch size is 60, the best number of training epochs is 14, and the learning rate is set to 1e-5. We use Scikit-Learn (Pedregosa et al., 2011) to execute a grid search over the hyperparameters, through which we can find the best parameters for the system. The searched parameters are as follows: the time steps of the RNN for hidden layers 1 and 2 (h_{1,2}) and hidden layer 3 (h_3); the dimension of the dense layer (d); and the dropout rate (r). For the image channel, we also tune the number of convolution layers (c), the number of filters (m), the length of the filter (l), and the pooling size (p). 
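A runnable sketch of such a grid search with macro-F1 scoring is given below; the estimator is a simple stand-in for the wrapped neural model, and the grid values are illustrative rather than the ones actually searched.

```python
# Illustrative grid search with macro-F1 scoring; the estimator and grid are stand-ins.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X = np.random.rand(200, 32)             # toy features standing in for the fused vectors
y = np.random.randint(0, 3, size=200)   # three sentiment classes

param_grid = {
    "hidden_layer_sizes": [(64,), (128,)],  # analogous to tuning the dense dimension d
    "alpha": [1e-4, 1e-3],                  # regularization, analogous to the dropout rate r
}
search = GridSearchCV(MLPClassifier(max_iter=300), param_grid,
                      scoring="f1_macro", cv=3)
search.fit(X, y)
print(search.best_params_)
```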
Table 1 summarizes these fine-tuned parameters.", "cite_spans": [ { "start": 402, "end": 426, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 239, "end": 247, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 851, "end": 858, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.2" }, { "text": "Submissions to subtask A and subtask B are evaluated based on the macro-F1 score. The F1-score is often used as an evaluation metric for unbalanced data and is defined as follows, where P denotes the precision and R denotes the recall. A higher F1-score indicates better classification performance. Table 2 shows the detailed results of our proposed model compared with the other baseline models on our dev set.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 334, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_1 = \\frac{2 \\cdot P \\cdot R}{P + R}", "eq_num": "(5)" } ], "section": "Evaluation Metrics", "sec_num": "3.3" }, { "text": "Subtask A. Our system achieved a score that was 0.115 higher than the baseline score (0.2176). The results show that our proposed system significantly outperforms the baseline models. The main reason is that we combined a variety of information from the memes and used BERT word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.4" }, { "text": "Subtask B. Our model's score was lower than the baseline score of 0.5118. We suspect that this may be caused by inconsistent data distributions between the dev set and the test set, so we need to do more research on class imbalance in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.4" }, { "text": "In this paper, we describe the system that we submitted to SemEval-2020 Task 8: Memotion Analysis. We propose a parallel-channel model with two channels. In the text channel, we use three RNN models and two types of pretrained vectors. In the image channel, we use a pretrained ResNet152 model and a CNN model. We participated in subtasks A and B, and obtained nineteenth place in subtask A. In future work, we will test more novel fusion methods so that the picture features can be better combined with the token embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61966038, 61702443, and 61762091. 
The authors would like to thank the anonymous reviewers for their constructive comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Meme opinion categorization by using optical character recognition (OCR) and Na\u00efve Bayes algorithm", "authors": [ { "first": "Amalia", "middle": [], "last": "Amalia", "suffix": "" }, { "first": "Arner", "middle": [], "last": "Sharif", "suffix": "" }, { "first": "Fikri", "middle": [], "last": "Haisar", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Gunawan", "suffix": "" }, { "first": "Benny", "middle": [ "B" ], "last": "Nasution", "suffix": "" } ], "year": 2018, "venue": "2018 Third International Conference on Informatics and Computing (ICIC)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amalia Amalia, Arner Sharif, Fikri Haisar, Dani Gunawan, and Benny B Nasution. 2018. Meme opinion categorization by using optical character recognition (OCR) and Na\u00efve Bayes algorithm. In 2018 Third International Conference on Informatics and Computing (ICIC), pages 1-5.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyung", "middle": [ "Hyun" ], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations, ICLR 2015 -Conference Track Proceedings", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015 -Conference Track Proceedings, pages 1-15.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Large-scale visual sentiment ontology and detectors using adjective noun pairs", "authors": [ { "first": "Damian", "middle": [], "last": "Borth", "suffix": "" }, { "first": "Rongrong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Breuel", "suffix": "" }, { "first": "Shih-Fu", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2013, "venue": "MM 2013 -Proceedings of the 2013 ACM Multimedia Conference", "volume": "", "issue": "", "pages": "223--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damian Borth, Rongrong Ji, Tao Chen, Thomas Breuel, and Shih-Fu Chang. 2013. Large-scale visual sentiment ontology and detectors using adjective noun pairs. 
In MM 2013 -Proceedings of the 2013 ACM Multimedia Conference, pages 223-232.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "EMNLP 2014 -2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP 2014 -2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 1724-1734.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "LSTM: A search space odyssey", "authors": [ { "first": "Klaus", "middle": [], "last": "Greff", "suffix": "" }, { "first": "Rupesh", "middle": [ "K." ], "last": "Srivastava", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Koutnik", "suffix": "" }, { "first": "Bas", "middle": [ "R." ], "last": "Steunebrink", "suffix": "" }, { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2017, "venue": "IEEE Transactions on Neural Networks and Learning Systems", "volume": "28", "issue": "10", "pages": "2222--2232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Greff, Rupesh K. Srivastava, Jan Koutnik, Bas R. Steunebrink, and Jurgen Schmidhuber. 2017. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10):2222-2232.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. 
In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 770-778.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multimodal sentiment analysis to explore the structure of emotions", "authors": [ { "first": "Anthony", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Flaxman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "350--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Hu and Seth Flaxman. 2018. Multimodal sentiment analysis to explore the structure of emotions. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 350-358.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "Advances in Neural Information Processing Systems 25", "volume": "", "issue": "", "pages": "1097--1105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Brucher", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ron Weiss, and Matthieu Brucher. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SemEval-2020 Task 8: Memotion Analysis-The Visuo-Lingual Metaphor!", "authors": [ { "first": "Chhavi", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Deepesh", "middle": [], "last": "Bhageria", "suffix": "" }, { "first": "William", "middle": [], "last": "Paka", "suffix": "" }, { "first": "", "middle": [], "last": "Scott", "suffix": "" }, { "first": "P Y K L", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Amitava", "middle": [], "last": "Das", "suffix": "" }, { "first": "Tanmoy", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Viswanath", "middle": [], "last": "Pulabaigari", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chhavi Sharma, Deepesh Bhageria, William Paka, Scott, Srinivas P Y K L, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 Task 8: Memotion Analysis-The Visuo-Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, Sep. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Progressive self-supervised attention learning for aspect-level sentiment analysis", "authors": [ { "first": "Jialong", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Ziyao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jinsong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yubin", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "557--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jialong Tang, Ziyao Lu, Jinsong Su, Yubin Ge, and Linfeng Song. 2019. Progressive self-supervised attention learning for aspect-level sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 557-566. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "2017", "issue": "", "pages": "5999--6009", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 2017-December, pages 5999-6009.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Investigating dynamic routing in tree-structured LSTM for sentiment analysis", "authors": [ { "first": "Jin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang-Chih", "middle": [], "last": "Yu", "suffix": "" }, { "first": "K", "middle": [ "Robert" ], "last": "Lai", "suffix": "" }, { "first": "Xuejie", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "3430--3435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin Wang, Liang-Chih Yu, K. Robert Lai, and Xuejie Zhang. 2019. Investigating dynamic routing in tree-structured LSTM for sentiment analysis. pages 3430-3435.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tree-structured regional CNN-LSTM model for dimensional sentiment analysis", "authors": [ { "first": "Jin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang-Chih", "middle": [], "last": "Yu", "suffix": "" }, { "first": "K", "middle": [ "Robert" ], "last": "Lai", "suffix": "" }, { "first": "Xuejie", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "IEEE/ACM Transactions on Audio Speech and Language Processing", "volume": "28", "issue": "", "pages": "581--591", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin Wang, Liang-Chih Yu, K. Robert Lai, and Xuejie Zhang. 
2020. Tree-structured regional CNN-LSTM model for dimensional sentiment analysis. IEEE/ACM Transactions on Audio Speech and Language Processing, 28:581-591.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A character-aware encoder for neural machine translation", "authors": [ { "first": "Zhen", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, number 95", "volume": "", "issue": "", "pages": "3063--3070", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2016. A character-aware encoder for neural machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, number 95, pages 3063-3070.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An image-text consistency driven multimodal sentiment analysis approach for social media", "authors": [ { "first": "Ziyuan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Huiying", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Matthew", "middle": [ "Chin" ], "last": "Heng Chua", "suffix": "" }, { "first": "M", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Information Processing and Management", "volume": "56", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziyuan Zhao, Huiying Zhu, Z. Xue, Zhao Liu, Jing Tian, Matthew Chin Heng Chua, and M. Liu. 2019. An image-text consistency driven multimodal sentiment analysis approach for social media. Information Processing and Management, 56(6):102097.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "The multimodal architecture of the parallel-channel model.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Residual learning: a building block.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Fine-tuning of epochs and batch size.", "num": null, "type_str": "figure" }, "TABREF3": { "content": "", "html": null, "num": null, "text": "Experimental results on the dev set.", "type_str": "table" } } } }