{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:20:53.843689Z" }, "title": "HUB@DravidianLangTech-EACL2021: Meme Classification for Tamil Text-Image Fusion", "authors": [ { "first": "Bo", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yunnan University", "location": { "settlement": "Yunnan", "country": "P.R. China" } }, "email": "" }, { "first": "Yang", "middle": [], "last": "Bai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yunnan University", "location": { "settlement": "Yunnan", "country": "P.R. China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This article describes our system for task Dra-vidianLangTech-EACL2021: Meme classification for Tamil. In recent years, we have witnessed the rapid development of the Internet and social media. Compared with traditional TV and radio media platforms, there are not so many restrictions on the use of online social media for individuals and many functions of online social media platforms are free. Based on this feature of social media, it is difficult for people's posts/comments on social media to be strictly and effectively controlled like TV and radio content. Therefore, the detection of negative information in social media has attracted attention from academic and industrial fields in recent years. The task of classifying memes is also driven by offensive posts/comments prevalent on social media. The data of the meme classification task is the fusion data of text and image information. To identify the content expressed by the meme, we develop a system that combines Bi-GRU and CNN. It can fuse visual features and text features to achieve the purpose of using multi-modal information from memetic data. 
In this article, we discuss our methods, models, experiments, and results.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This article describes our system for the DravidianLangTech-EACL2021 shared task: meme classification for Tamil. In recent years, we have witnessed the rapid development of the Internet and social media. Compared with traditional TV and radio platforms, online social media places far fewer restrictions on individual users, and many of its functions are free. Because of this, people's posts and comments on social media are difficult to control as strictly and effectively as TV and radio content. The detection of negative information in social media has therefore attracted attention from academia and industry in recent years. The meme classification task is likewise motivated by the offensive posts and comments prevalent on social media. The data for this task fuses text and image information. To identify the content expressed by a meme, we develop a system that combines Bi-GRU and CNN; it fuses visual and textual features to exploit the multi-modal information in meme data. In this article, we discuss our methods, models, experiments, and results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A meme is a kind of multimedia document built on an image: a picture that contains text. Netizens can express their feelings, opinions, interests, and so on through it (Yus, 2018a) . A meme is an image with some kind of caption text embedded in the image pixels. In the past few years, memes have become very popular and are used in many different contexts, especially among young people. However, this form is also used to produce and spread hate speech in the form of black humor. 
The term meme was proposed by biologist Richard Dawkins. It was used to describe the flow, mutation, and evolution of culture, as a cultural counterpart of the gene (Milner, 2012) . But the meaning of this term has changed in public life. In social media, a meme is a kind of self-media work, produced and spread by large numbers of Internet users through social media networks (Casta\u00f1o D\u00edaz, 2013) . Negative information detection on the Internet has become a core social challenge, and the detection of negative information in social media has become more and more intelligent (Chatzakou et al., 2017; Pfeffer et al., 2014) .", "cite_spans": [ { "start": 179, "end": 191, "text": "(Yus, 2018a)", "ref_id": "BIBREF22" }, { "start": 672, "end": 686, "text": "(Milner, 2012)", "ref_id": "BIBREF10" }, { "start": 904, "end": 915, "text": "D\u00edaz, 2013)", "ref_id": "BIBREF2" }, { "start": 1102, "end": 1126, "text": "(Chatzakou et al., 2017;", "ref_id": "BIBREF3" }, { "start": 1127, "end": 1148, "text": "Pfeffer et al., 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Background", "sec_num": "1" }, { "text": "However, due to the multimodal nature of memes, it is difficult to automatically identify whether a meme is aggressive (Suryawanshi et al., 2020a) . Especially in India, recent memes containing hate messages have threatened people's lives several times (Suryawanshi et al., 2020b) . Because of the large population and the mixing of multiple languages, a large number of Indian memes are difficult to monitor, as intelligent systems for the specific languages are lacking. This adds another serious challenge to the problem of meme classification. In this work, we participated in the shared task on meme classification in Tamil (Suryawanshi and Chakravarthi, 2021) . 
Tamil is an official language of the Indian state of Tamil Nadu and of two sovereign countries, Singapore and Sri Lanka. Tamil was the first Indian language to be listed as a classical language, and it is still one of the world's oldest classical languages. Dravidian civilisations are believed to have flourished in the Indus Valley civilization (3,300-1,900 BCE), situated in the northwestern Indian subcontinent; this period is considered the second Sangam period in Tamil tradition. Tamil is India's oldest language. Tamil, Pali, and Prakrit all contributed words, texts, and grammar to Sanskrit. We are committed to making our contribution to Tamil meme classification.", "cite_spans": [ { "start": 120, "end": 147, "text": "(Suryawanshi et al., 2020a)", "ref_id": "BIBREF17" }, { "start": 259, "end": 286, "text": "(Suryawanshi et al., 2020b)", "ref_id": "BIBREF18" }, { "start": 636, "end": 672, "text": "(Suryawanshi and Chakravarthi, 2021)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Background", "sec_num": "1" }, { "text": "In theory, a meme is a piece of humorous content on the Internet, copied or modified and then spread to other users. But sometimes the information in Internet memes can have a negative impact (Yus, 2018b) . Research on memes progresses far more slowly than memes propagate on the Internet. The work of Wang et al. showed that combining text and vision can help identify popular meme descriptions (Wang and Wen, 2015) . The automatic meme generation system implemented by Vyalla et al., based on the transformer model, allows users to generate memes of their choice (Vyalla and Udandarao, 2020). Compared with the text domain, there is less research on sentiment analysis of meme data. 
In recent years, deep learning has attracted researchers' attention, especially for sentiment analysis tasks, where it significantly outperforms traditional methods. Zhang et al. used deep learning techniques for sentiment analysis on text datasets (Zhang et al., 2018) and Poria et al. used deep learning models for sentiment analysis on image and video datasets (Poria et al., 2017a) . The work of Sabat et al. found that the visual modality provides more information for hateful meme detection than the language modality (Sabat et al., 2019). Suryawanshi et al. shared their datasets and methods for detecting offensive content in multimodal meme data (Suryawanshi et al., 2020a) . Kumar et al. (Kumar et al., 2020 ) used a multimodal approach to determine the sentiment of a meme.", "cite_spans": [ { "start": 214, "end": 226, "text": "(Yus, 2018b)", "ref_id": "BIBREF23" }, { "start": 459, "end": 479, "text": "(Wang and Wen, 2015)", "ref_id": null }, { "start": 1015, "end": 1035, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF25" }, { "start": 1130, "end": 1151, "text": "(Poria et al., 2017a)", "ref_id": "BIBREF13" }, { "start": 1429, "end": 1456, "text": "(Suryawanshi et al., 2020a)", "ref_id": "BIBREF17" }, { "start": 1459, "end": 1491, "text": "Kumar et al. (Kumar et al., 2020", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Using both text and visual information to improve performance has also proven effective (Guillaumin et al., 2010; Zahavy et al., 2016) . Generally speaking, strategies for fusing text and visual information fall into two categories (Corchs et al., 2019 ). On the one hand, some models use feature-level fusion, where the text or image input sources are processed to extract sets of features. The feature sets are then combined for the final decision (Atrey et al., 2007; . 
On the other hand, other models fully process each input source and perform fusion at the decision level (Atreya V et al., 2013; Poria et al., 2017b) . The results we submitted were predicted using the second strategy.", "cite_spans": [ { "start": 108, "end": 133, "text": "(Guillaumin et al., 2010;", "ref_id": "BIBREF6" }, { "start": 134, "end": 154, "text": "Zahavy et al., 2016)", "ref_id": "BIBREF24" }, { "start": 272, "end": 292, "text": "(Corchs et al., 2019", "ref_id": "BIBREF4" }, { "start": 512, "end": 532, "text": "(Atrey et al., 2007;", "ref_id": "BIBREF0" }, { "start": 664, "end": 687, "text": "(Atreya V et al., 2013;", "ref_id": "BIBREF1" }, { "start": 688, "end": 708, "text": "Poria et al., 2017b)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Data And Methods", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The training set provided by the task organizer consists of two types of data: a text dataset and an image dataset. The text dataset contains 2300 entries, of which \"Not Troll\" and \"Troll\" entries account for 44% and 56% of the total data, respectively. The text content of some entries is \"No Captions\", which means that no Tamil text annotation appears on the meme image corresponding to that entry; only a few meme images in the dataset lack such annotations. The remaining text data is the text appearing on the corresponding meme image. We present the text training set provided by the task organizer as a word cloud diagram. 
It is not difficult to see that the most frequent words in the text data are mainly demonstrative pronouns and modal particles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Description and Analysis", "sec_num": "3.1" }, { "text": "The work of Suryawanshi et al. on the task dataset describes the source of the 2969 meme images we used in the training phase (Suryawanshi et al., 2020b; Suryawanshi and Chakravarthi, 2021) . Images labeled \"Not Troll\" and \"Troll\" account for 0.66 and 0.34 of the meme image dataset, respectively. Whether or not a meme image contains text, its content can express \"Troll\" or \"Not Troll\" information. These meme images mainly come from popular social media platforms (such as YouTube, Facebook, WhatsApp, and Instagram), so their sizes and styles also differ. Comparing the full text data with the meme image data shows that the relationship between them is not one-to-one: their numbers are not equal. This property of the data distribution is unfavorable for us.", "cite_spans": [ { "start": 147, "end": 174, "text": "(Suryawanshi et al., 2020b;", "ref_id": "BIBREF18" }, { "start": 175, "end": 210, "text": "Suryawanshi and Chakravarthi, 2021)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data Description and Analysis", "sec_num": "3.1" }, { "text": "Combining our analysis and understanding of the task description and dataset, our system must process text data and meme image data at the same time. Therefore, we choose the BiGRU network, which can process text data, and the CNN network, which can process image data, as the basic components of our system model. 
BiGRU can learn the contextual semantic information in the text through the encoded word vectors. [Figure caption: the word cloud of the text training set provided by the task organizer. The word \"Caption\" cannot be used as reference information, because it mainly comes from the annotation of the text data, not from the text that appears in the meme pictures.] CNN can learn the information in the pictures through convolution and pooling operations. We use stacking to combine the BiGRU and CNN blocks into the overall architecture of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.2" }, { "text": "The CNN block is mainly composed of three two-dimensional convolutional layers and three max-pooling layers. The convolution kernel size is set to 2. After the image has been processed by the third convolutional layer (Conv2d 2) and the third pooling layer (MaxPooling2D 2), the multi-dimensional tensor is converted into a low-dimensional tensor by a flattening (Flatten) operation. The result is then used as the input of two dense layers (Dense 0, Dense 1). Finally, the output of the CNN block is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.2" }, { "text": "Text data We use the Tamil pre-trained word vectors provided by fasttext 1 to encode the text data (Grave et al., 2018) . The encoded text vectors are fed into the BiGRU network, and the BiGRU output is then passed through a Dense layer to obtain the final output of the BiGRU block. The outputs of the CNN block and the BiGRU block are concatenated to obtain a new tensor, which is fed into a Dense layer to produce the final result of the model. The effect our system model needs to achieve is to merge text and image information. 
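The CNN block described above can be traced layer by layer in plain Python: three Conv2D layers with kernel size 2 ('valid' padding assumed), each followed by 2x2 max pooling, applied to a (300, 300) meme image and then flattened. The filter counts (16, 32, 64) are illustrative assumptions, since the paper does not state them:

```python
# Walk-through of the tensor shapes in the CNN block: Conv2d 0-2 (kernel
# size 2) each followed by MaxPooling2D 0-2, then Flatten before Dense 0.
# Filter counts (16, 32, 64) are assumed for illustration only.

def conv_out(size, kernel=2, stride=1):
    # output size of a 'valid' convolution along one spatial dimension
    return (size - kernel) // stride + 1

def pool_out(size, pool=2):
    # output size of non-overlapping max pooling along one dimension
    return size // pool

side = 300  # meme images are uniformly resized to (300, 300)
shapes = []
for filters in (16, 32, 64):
    side = pool_out(conv_out(side))   # Conv2d i -> MaxPooling2D i
    shapes.append((side, side, filters))

flattened = side * side * 64          # units entering Dense 0 after Flatten
print(shapes)      # [(149, 149, 16), (74, 74, 32), (36, 36, 64)]
print(flattened)   # 82944
```

Under these assumptions, Flatten hands a vector of 82944 units to the two dense layers (Dense 0, Dense 1), which compress it into the CNN block's output.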
We ", "cite_spans": [ { "start": 103, "end": 123, "text": "(Grave et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.2" }, { "text": "The work in the data preprocessing stage is mainly for text data. We split the text training set released by the task organizer into a new text training set and a text verification set under the premise of ensuring the same data distribution. The text data is divided by spaces to get each word, and then encoded using the fasttext we mentioned in the Methods section. Use the results predicted by the model on the validation set to evaluate our model system and adjust the parameters of the model system. For meme image data, we set their size uniformly to (300, 300).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preprocessing", "sec_num": "4.1" }, { "text": "2 https://github.com/Hub-Lucas/hub-at-meme Figure 6 : The CNN block, BiGRU block, and Dense layer together constitute the main part of our system.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Data Preprocessing", "sec_num": "4.1" }, { "text": "F1 Score Precision Recall Top1 results 0.55 0.57 0.6 Our results 0.4 0.5 0.54 Validation set 0.50 0.52 0.56 Table 1 : The result score of the top1 team on the test set. We submit the test set prediction result score. The score of our system on the validation set.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "In our experiment, the optimizer uniformly uses Adam optimizer. Because Adam uses momentum and adaptive learning rate to speed up convergence. The number of BiGRU layers is set to 2, and the word embedding vector uses 300 dimensions. epoch, learning rate, and batch seize are set to 10, 0.003, and 32 respectively. 
The activation function used between the three layers in the CNN block is ReLU, and the dense layer in the BiGRU block uses Softmax. The outputs of the CNN block and the BiGRU block are concatenated as the input of the classifier (a Dense layer), which produces the output result. Figure 6 shows the architecture of our system.", "cite_spans": [], "ref_spans": [ { "start": 612, "end": 620, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Experiment setting", "sec_num": "4.2" }, { "text": "The leaderboard results announced by the task organizer are ranked by weighted average F1 score. The Precision and Recall scores of all participants' submissions are also announced on the leaderboard. Table 1 shows the final result of our system on the test set, the score of the top1 team's system on the test set, and the score of our model on the validation set. Comparing the scores in the table, our system's results on the validation set differ considerably from its results on the test set, and there is also a gap between our test set score and the top1 score. Our team's solution ranked 9th on the final leaderboard. The amount of data available to us is not large, and we randomly selected part of it as the validation set, which left even less data for training the model. In addition, there is no mechanism in our model to restrain overfitting. These are the shortcomings of our system.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis of Results", "sec_num": "4.3" }, { "text": "This article introduces the method and model system used by our team in the Meme Classification for Tamil shared task. We use a text-image fusion scheme to detect meme categories in the Tamil language environment. We analyzed the deficiencies of our system. 
These shortcomings are what we need to improve in our future work, and we also have considerable room for improvement in methods and systems. For example, in image processing we could try other network models such as MobileNet (Howard et al., 2017) or ResNet (He et al., 2016) , and in text processing we could try pre-trained language models.", "cite_spans": [ { "start": 482, "end": 514, "text": "Mo-bileNet (Howard et al., 2017)", "ref_id": null }, { "start": 525, "end": 542, "text": "(He et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Considering that meme pictures spread quickly on social media and can express negative information across languages, we believe that dealing with such issues in social media is very valuable and meaningful. Similar issues in the social media of small-language communities also deserve our attention and study. In addition to improving our models and methods in future work, we will continue to follow the research and development of meme analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://fasttext.cc/docs/en/crawl-vectors.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Goal-oriented optimal subset selection of correlated multimedia streams", "authors": [ { "first": "K", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "", "middle": [], "last": "Atrey", "suffix": "" }, { "first": "S", "middle": [], "last": "Mohan", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Kankanhalli", "suffix": "" }, { "first": "", "middle": [], "last": "Oommen", "suffix": "" } ], "year": 2007, "venue": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "volume": "3", "issue": "", "pages": "", 
"other_ids": {}, "num": null, "urls": [], "raw_text": "Pradeep K Atrey, Mohan S Kankanhalli, and John B Oommen. 2007. Goal-oriented optimal subset selec- tion of correlated multimedia streams. ACM Trans- actions on Multimedia Computing, Communications, and Applications (TOMM), 3(1):2-es.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Structure cognizant pseudo relevance feedback", "authors": [ { "first": "Arjun", "middle": [], "last": "Atreya", "suffix": "" }, { "first": "V", "middle": [], "last": "", "suffix": "" }, { "first": "Yogesh", "middle": [], "last": "Kakde", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Ganesh", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "982--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arjun Atreya V, Yogesh Kakde, Pushpak Bhat- tacharyya, and Ganesh Ramakrishnan. 2013. Struc- ture cognizant pseudo relevance feedback. In Pro- ceedings of the Sixth International Joint Conference on Natural Language Processing, pages 982-986, Nagoya, Japan. Asian Federation of Natural Lan- guage Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Defining and characterizing the concept of internet meme", "authors": [ { "first": "Carlos Mauricio Casta\u00f1o", "middle": [], "last": "D\u00edaz", "suffix": "" } ], "year": 2013, "venue": "Ces Psicolog\u00eda", "volume": "6", "issue": "2", "pages": "82--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Mauricio Casta\u00f1o D\u00edaz. 2013. Defining and characterizing the concept of internet meme. 
Ces Psicolog\u00eda, 6(2):82-104.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mean birds: Detecting aggression and bullying on twitter", "authors": [ { "first": "Despoina", "middle": [], "last": "Chatzakou", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Kourtellis", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Emiliano", "middle": [], "last": "De Cristofaro", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Stringhini", "suffix": "" }, { "first": "Athena", "middle": [], "last": "Vakali", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on web science conference", "volume": "", "issue": "", "pages": "13--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, and Athena Vakali. 2017. Mean birds: Detecting aggression and bullying on twitter. In Proceedings of the 2017 ACM on web science conference, pages 13-22.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ensemble learning on visual and textual data for social image emotion classification", "authors": [ { "first": "Silvia", "middle": [], "last": "Corchs", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Gasparini", "suffix": "" } ], "year": 2019, "venue": "International Journal of Machine Learning and Cybernetics", "volume": "10", "issue": "8", "pages": "2057--2070", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silvia Corchs, Elisabetta Fersini, and Francesca Gasparini. 2019. Ensemble learning on visual and textual data for social image emotion classification. 
International Journal of Machine Learning and Cybernetics, 10(8):2057-2070.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning word vectors for 157 languages", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Prakhar", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multimodal semi-supervised learning for image classification", "authors": [ { "first": "Matthieu", "middle": [], "last": "Guillaumin", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Verbeek", "suffix": "" }, { "first": "Cordelia", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2010, "venue": "2010 IEEE Computer society conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "902--909", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthieu Guillaumin, Jakob Verbeek, and Cordelia Schmid. 2010. Multimodal semi-supervised learning for image classification. In 2010 IEEE Computer society conference on computer vision and pattern recognition, pages 902-909. 
IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficient convolutional neural networks for mobile vision applications", "authors": [ { "first": "G", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Menglong", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Weijun", "middle": [], "last": "Kalenichenko", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Weyand", "suffix": "" }, { "first": "Hartwig", "middle": [], "last": "Andreetto", "suffix": "" }, { "first": "", "middle": [], "last": "Adam", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04861" ] }, "num": null, "urls": [], "raw_text": "Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mo- bilenets: Efficient convolutional neural networks for mobile vision applications. 
arXiv preprint arXiv:1704.04861.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data", "authors": [ { "first": "Akshi", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Kathiravan", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Wen-Huang", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Albert", "middle": [ "Y" ], "last": "Zomaya", "suffix": "" } ], "year": 2020, "venue": "Information Processing & Management", "volume": "57", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akshi Kumar, Kathiravan Srinivasan, Wen-Huang Cheng, and Albert Y Zomaya. 2020. Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data. Information Processing & Management, 57(1):102141.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The world made meme: Discourse and identity in participatory media", "authors": [ { "first": "M", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "", "middle": [], "last": "Milner", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan M Milner. 2012. The world made meme: Discourse and identity in participatory media. Ph.D. thesis, University of Kansas.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hate speech in pixels: Detection of offensive memes towards automatic moderation. 
arXiv e-prints", "authors": [ { "first": "", "middle": [], "last": "Benet Oriol", "suffix": "" }, { "first": "Cristian Canton", "middle": [], "last": "Sabat", "suffix": "" }, { "first": "Xavier Giro-I", "middle": [], "last": "Ferrer", "suffix": "" }, { "first": "", "middle": [], "last": "Nieto", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benet Oriol Sabat, Cristian Canton Ferrer, and Xavier Giro-i Nieto. 2019. Hate speech in pixels: Detection of offensive memes towards automatic moderation. arXiv e-prints, pages arXiv-1910.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Understanding online firestorms: Negative word-of-mouth dynamics in social media networks", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Pfeffer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Zorbach", "suffix": "" }, { "first": "Kathleen", "middle": [ "M" ], "last": "Carley", "suffix": "" } ], "year": 2014, "venue": "Journal of Marketing Communications", "volume": "20", "issue": "1-2", "pages": "117--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Pfeffer, Thomas Zorbach, and Kathleen M Car- ley. 2014. Understanding online firestorms: Nega- tive word-of-mouth dynamics in social media net- works. 
Journal of Marketing Communications, 20(1-2):117-128.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Context-dependent sentiment analysis in user-generated videos", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th annual meeting of the association for computational linguistics", "volume": "1", "issue": "", "pages": "873--883", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017a. Context-dependent sentiment analysis in user-generated videos. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873-883.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Context-dependent sentiment analysis in user-generated videos", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "873--883", "other_ids": { "DOI": [ "10.18653/v1/P17-1081" ] }, "num": null, "urls": [], "raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017b. Context-dependent sentiment analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873-883, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hate speech in pixels: Detection of offensive memes towards automatic moderation", "authors": [ { "first": "", "middle": [], "last": "Benet Oriol", "suffix": "" }, { "first": "Cristian Canton", "middle": [], "last": "Sabat", "suffix": "" }, { "first": "Xavier Giro-I", "middle": [], "last": "Ferrer", "suffix": "" }, { "first": "", "middle": [], "last": "Nieto", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.02334" ] }, "num": null, "urls": [], "raw_text": "Benet Oriol Sabat, Cristian Canton Ferrer, and Xavier Giro-i Nieto. 2019.
Hate speech in pixels: Detection of offensive memes towards automatic moderation. arXiv preprint arXiv:1910.02334.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Findings of the shared task on Troll Meme Classification in Tamil", "authors": [ { "first": "Shardul", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardul Suryawanshi and Bharathi Raja Chakravarthi. 2021. Findings of the shared task on Troll Meme Classification in Tamil. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multimodal meme dataset (multioff) for identifying offensive content in image and text", "authors": [ { "first": "Shardul", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "Mihael", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Arcan", "suffix": "" }, { "first": "", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying", "volume": "", "issue": "", "pages": "32--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardul Suryawanshi, Bharathi Raja Chakravarthi, Mihael Arcan, and Paul Buitelaar. 2020a. Multimodal meme dataset (multioff) for identifying offensive content in image and text.
In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 32-41.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A dataset for troll classification of TamilMemes", "authors": [ { "first": "Shardul", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Mihael", "middle": [], "last": "Verma", "suffix": "" }, { "first": "John", "middle": [], "last": "Arcan", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Philip Mccrae", "suffix": "" }, { "first": "", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the WILDRE5-5th Workshop on Indian Language Data: Resources and Evaluation", "volume": "", "issue": "", "pages": "7--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardul Suryawanshi, Bharathi Raja Chakravarthi, Pranav Verma, Mihael Arcan, John Philip McCrae, and Paul Buitelaar. 2020b. A dataset for troll classification of TamilMemes. In Proceedings of the WILDRE5-5th Workshop on Indian Language Data: Resources and Evaluation, pages 7-13, Marseille, France. European Language Resources Association (ELRA).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Memeify: A large-scale meme generation system", "authors": [ { "first": "Reddy", "middle": [], "last": "Suryatej", "suffix": "" }, { "first": "Vishaal", "middle": [], "last": "Vyalla", "suffix": "" }, { "first": "", "middle": [], "last": "Udandarao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 7th ACM IKDD CoDS and 25th COMAD", "volume": "", "issue": "", "pages": "307--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suryatej Reddy Vyalla and Vishaal Udandarao. 2020. Memeify: A large-scale meme generation system. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pages 307-311.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "I can has cheezburger?
A nonparanormal approach to combining textual and visual information for predicting and generating popular meme descriptions", "authors": [ { "first": "Yang", "middle": [], "last": "William", "suffix": "" }, { "first": "Miaomiao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Wen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "355--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Yang Wang and Miaomiao Wen. 2015. I can has cheezburger? A nonparanormal approach to combining textual and visual information for predicting and generating popular meme descriptions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 355-365.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "EANN: Event adversarial neural networks for multi-modal fake news detection", "authors": [ { "first": "Yaqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fenglong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhiwei", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Guangxu", "middle": [], "last": "Xun", "suffix": "" }, { "first": "Kishlay", "middle": [], "last": "Jha", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "849--857", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018.
EANN: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 849-857.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Identity-related issues in meme communication", "authors": [ { "first": "Francisco", "middle": [], "last": "Yus", "suffix": "" } ], "year": 2018, "venue": "Internet Pragmatics", "volume": "1", "issue": "1", "pages": "113--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francisco Yus. 2018a. Identity-related issues in meme communication. Internet Pragmatics, 1(1):113-133.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Identity-related issues in meme communication", "authors": [ { "first": "Francisco", "middle": [], "last": "Yus", "suffix": "" } ], "year": 2018, "venue": "Internet Pragmatics", "volume": "1", "issue": "1", "pages": "113--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francisco Yus. 2018b. Identity-related issues in meme communication. Internet Pragmatics, 1(1):113-133.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Is a picture worth a thousand words? A deep multi-modal fusion architecture for product classification in e-commerce", "authors": [ { "first": "Tom", "middle": [], "last": "Zahavy", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Magnani", "suffix": "" }, { "first": "Abhinandan", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Shie", "middle": [], "last": "Mannor", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.09534" ] }, "num": null, "urls": [], "raw_text": "Tom Zahavy, Alessandro Magnani, Abhinandan Krishnan, and Shie Mannor. 2016. Is a picture worth a thousand words? A deep multi-modal fusion architecture for product classification in e-commerce.
arXiv preprint arXiv:1611.09534.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep learning for sentiment analysis: A survey", "authors": [ { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "volume": "8", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1253.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Labels distribution of the Tamil training set and validation set. In the training set, Troll: 55.7%, Not troll: 44.3%. In the validation set, Troll: 55.94%, Not troll: 44.06%.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "No text annotations in the Tamil language appear on the meme picture.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Figure 3: The word cloud image of the text training set provided by the task organizer. The word \"Caption\" cannot be used as reference information, because it mainly comes from the annotation of the text data, not the text content that appears in the meme picture.", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "BiGRU structure and data flow.", "uris": null, "type_str": "figure" }, "FIGREF4": { "num": null, "text": "The CNN block in our system.", "uris": null, "type_str": "figure" } } } }