{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:53:29.518752Z" }, "title": "Modeling Intra and Inter-modality Incongruity for Multi-Modal Sarcasm Detection", "authors": [ { "first": "Hongliang", "middle": [], "last": "Pan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences Bejing", "location": { "settlement": "Bejing", "country": "China, China" } }, "email": "panhongliang@iie.ac.cn" }, { "first": "Zheng", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences Bejing", "location": { "settlement": "Bejing", "country": "China, China" } }, "email": "linzheng@iie.ac.cn" }, { "first": "Peng", "middle": [], "last": "Fu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences Bejing", "location": { "settlement": "Bejing", "country": "China, China" } }, "email": "fupeng@iie.ac.cn" }, { "first": "Yatao", "middle": [], "last": "Qi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences Bejing", "location": { "settlement": "Bejing", "country": "China, China" } }, "email": "qiyatao@iie.ac.cn" }, { "first": "Weiping", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences Bejing", "location": { "settlement": "Bejing", "country": "China, China" } }, "email": "wangweiping@iie.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sarcasm is a pervasive phenomenon in today's social media platforms such as Twitter and Reddit. These platforms allow users to create multi-modal messages, including texts, images, and videos. Existing multi-modal sarcasm detection methods either simply concatenate the features from multi modalities or fuse the multi modalities information in a designed manner. 
However, they ignore the incongruous character of sarcastic utterances, which is often manifested between modalities or within a modality. Inspired by this, we propose a BERT architecture-based model that concentrates on both intra and inter-modality incongruity for multi-modal sarcasm detection. To be specific, we are inspired by the idea of the self-attention mechanism and design inter-modality attention to capture inter-modality incongruity. In addition, the co-attention mechanism is applied to model the contradiction within the text. The incongruity information is then used for prediction. The experimental results demonstrate that our model achieves state-of-the-art performance on a public multi-modal sarcasm detection dataset.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Sarcasm is a pervasive phenomenon on today's social media platforms such as Twitter and Reddit. These platforms allow users to create multi-modal messages, including texts, images, and videos. Existing multi-modal sarcasm detection methods either simply concatenate the features from multiple modalities or fuse the information from multiple modalities in a designed manner. However, they ignore the incongruous character of sarcastic utterances, which is often manifested between modalities or within a modality. Inspired by this, we propose a BERT architecture-based model that concentrates on both intra and inter-modality incongruity for multi-modal sarcasm detection. To be specific, we are inspired by the idea of the self-attention mechanism and design inter-modality attention to capture inter-modality incongruity. In addition, the co-attention mechanism is applied to model the contradiction within the text. The incongruity information is then used for prediction. 
The experimental results demonstrate that our model achieves state-of-the-art performance on a public multi-modal sarcasm detection dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sarcasm is a form of figurative language in which the literal meaning of words does not hold and, instead, the opposite interpretation is intended (Joshi et al., 2017) . Sarcasm is prevalent on today's social media platforms, and it can completely flip the polarity of a sentiment or opinion. Thus, an effective sarcasm detector is beneficial to applications like sentiment analysis, opinion mining (Pang and Lee, 2007) , and other tasks that require people's real sentiment. However, the figurative nature of sarcasm makes detecting it a challenging task (Liu, 2010) . Figure 1 : Examples of the image modality aiding sarcasm detection. (a) \"such a packed game . it is amazing we even got a seat . # pelicans\": the text \"it is amazing we even got a seat\" contradicts the many unoccupied seats in the image. (b) \"well that looks appetising ... # ubereats\": the food in the image does not look as appetising as the text describes. Scholars have noticed that sarcasm is often associated with a concept called incongruity, which refers to a distinction between reality and expectation (Gibbs Jr et al., 1994) . 
Consequently, many approaches for sarcasm detection have been proposed that capture the incongruity within text (Riloff et al., 2013; Joshi et al., 2015; Tay et al., 2018; Xiong et al., 2019) .", "cite_spans": [ { "start": 144, "end": 164, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF8" }, { "start": 394, "end": 414, "text": "(Pang and Lee, 2007)", "ref_id": "BIBREF16" }, { "start": 541, "end": 552, "text": "(Liu, 2010)", "ref_id": "BIBREF12" }, { "start": 713, "end": 726, "text": "(Gibbs Jr (a)", "ref_id": null }, { "start": 1104, "end": 1117, "text": "et al., 1994)", "ref_id": null }, { "start": 1232, "end": 1253, "text": "(Riloff et al., 2013;", "ref_id": "BIBREF20" }, { "start": 1254, "end": 1273, "text": "Joshi et al., 2015;", "ref_id": "BIBREF9" }, { "start": 1274, "end": 1291, "text": "Tay et al., 2018;", "ref_id": "BIBREF22" }, { "start": 1292, "end": 1311, "text": "Xiong et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 843, "end": 851, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More and more applications like Twitter allow users to post multi-modal messages. Accordingly, only modeling the incongruity within the text modality is not enough to identify sarcasm that arises from inter-modality contradictions. Consider the examples given in Figure 1 ; people cannot recognize the sarcasm merely from the text unless they find the contradiction between the text and the image. As a result, capturing the incongruity between modalities is significant for multi-modal sarcasm detection.", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 255, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the existing models for multi-modal sarcasm detection either concatenate the features from multiple modalities (Schifanella et al., 2016) or fuse the information from different modalities in a designed manner (Cai et al., 2019) . 
Previous multi-modal sarcasm detection approaches neglect the incongruous character of sarcasm. We believe that it is meaningful to capture both intra and inter-modality incongruity for multi-modal sarcasm detection.", "cite_spans": [ { "start": 117, "end": 143, "text": "(Schifanella et al., 2016)", "ref_id": "BIBREF21" }, { "start": 215, "end": 233, "text": "(Cai et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We treat images and text as two modalities in this work and propose a novel BERT architecture-based model for multi-modal sarcasm detection. BERT is a pre-trained language model proposed by Devlin et al. (2019) that can be used to produce outstanding representations of text. For this reason, we utilize BERT to acquire the representations of the text and of the hashtags (words prefixed with a '#' that indicate the topic of the tweet) within the text. We notice that hashtags might contain information that contrasts with the text. Maynard and Greenwood (2014) also study sentiment and sarcasm with the help of hashtags. Consequently, we apply a co-attention matrix to model the incongruity between the text and its hashtags as the intra-modality incongruity. Besides, the self-attention mechanism considers the interaction between keys and queries, and the inter-modality incongruity information can also be treated as an interaction between text and images. As a result, inspired by the key idea of self-attention, we design the inter-modality attention, which treats textual features as queries and image features as keys and values to capture the inter-modality incongruity. The intra and inter-modality incongruity information is then combined for prediction.", "cite_spans": [ { "start": 190, "end": 210, "text": "Devlin et al. 
(2019)", "ref_id": "BIBREF3" }, { "start": 533, "end": 561, "text": "Maynard and Greenwood (2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of our work can be summarised as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a novel BERT architecture-based model for multi-modal sarcasm detection, aiming to address the problem that existing multi-modal sarcasm detection models do not consider the incongruity character of sarcasm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design the inter-modality attention to model the incongruity between modalities and apply the co-attention mechanism to model the incongruity within text modality for multimodal sarcasm detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct a series of experiments to show our model's effectiveness and our model achieves a 2.74% improvement on F1 score than state-of-the-art method. Furthermore, we find that considering the text on the images can bring significant improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we first define the multi-modal sarcasm detection task. We then briefly present the background of the BERT model and describe the architecture of our proposed model in detail. Figure 2 gives an overview of our model.", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 202, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Multi-modal sarcasm detection aims to identify if a given text associated with an image has sarcastic meaning. 
Formally, given a set of multi-modal samples D, each sample d \u2208 D contains a sentence T with n words { t 1 , t 2 , t 3 , . . . , t n } and an associated image I. The goal of our model is to learn a multi-modal sarcasm detection classifier that correctly predicts the labels of unseen samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2.1" }, { "text": "Language model pre-training has been proven to be useful for many natural language processing tasks (Peters et al., 2018; Howard and Ruder, 2018) . BERT, proposed by Devlin et al. (2019) , is designed to pre-train deep bidirectional representations from large unlabelled data by jointly conditioning on both left and right context in all layers. The pre-training procedure gives BERT the capacity to acquire good representations of text. The BERT model consists of multi-layer bi-directional transformer encoders (Vaswani et al., 2017) . Devlin et al. (2019) propose two BERT models in their work: a Base BERT model with 12 transformer blocks, feed-forward networks with 768 hidden units, and 12 attention heads, and a Large BERT model with 24 transformer blocks, feed-forward networks with 1024 hidden units, and 16 attention heads. In our work, we apply a pre-trained Base BERT model to obtain text representations.", "cite_spans": [ { "start": 99, "end": 120, "text": "(Peters et al., 2018;", "ref_id": "BIBREF18" }, { "start": 121, "end": 144, "text": "Howard and Ruder, 2018)", "ref_id": "BIBREF7" }, { "start": 168, "end": 188, "text": "Devlin et al. (2019)", "ref_id": "BIBREF3" }, { "start": 525, "end": 547, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF23" }, { "start": 550, "end": 570, "text": "Devlin et al. 
(2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2.2" }, { "text": "Our model can be divided into three parts: the Image and Text Processing module, the intermodality attention module, and the intra-modality attention module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "2.3" }, { "text": "For text processing, given a sequence of words ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image and Text Processing", "sec_num": null }, { "text": "X = {x 1 , x 2 , . . . , x N }, where x i \u2208 R d is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image and Text Processing", "sec_num": null }, { "text": "! \" # $ Co-attention Matrix ! \" # $ \u210e! \u210e# \u210e\" \u210e$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image and Text Processing", "sec_num": null }, { "text": "Figure 2: Overview of our proposed model. A pre-trained BERT model encodes a given sequence and the hashtags within it. ResNet is used to obtain the image representation. We apply intra-modality attention to model the incongruity within the text and inner-modality attention to model the incongruity between text and images. The incongruity information is then combined and used to predict.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "As for image processing, given an image I, we first resize it to 224*224 pixels, and then we use ResNet-152 (He et al., 2016) to obtain the representation of the image. To be specific, we chop off the last fully-connected (FC) layer and obtain the output of the last convolutional layer:", "cite_spans": [ { "start": 108, "end": 125, "text": "(He et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "ResN et(I) = {r i |r i \u2208 R 2048 , i = 1, 2, . . . 
, 49}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "(1) where each r i is a 2048-dimensional vector representing a region on the image. Consequently, an image I can be represented as ResN et(I) \u2208 R 2048 * 49 . Finally, in order to project the visual features into the same dimension of textual features, we conduct a linear transformation on the encoded image representation ResN et(I) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G = W v ResN et(I)", "eq_num": "(2)" } ], "section": "Concat", "sec_num": null }, { "text": "where Inter-modality Attention Self-attention can be used to generate an internal representation of a sequence. The internal representation considers the interaction between each pair of tokens in the sequence. Inter-modality incongruity information can be represented as a kind of interaction between the features of multi modalities. Particularly, the input tokens will give high attention values to the image regions contradicting them as incongruity is a key character of sarcasm. Hence, we borrow the idea from the self-attention mechanism and design a text-image matching layer to capture the incongruity information between text and images. Our text-image matching layer accepts the text features H \u2208 R d * N as queries, and the image features G \u2208 R d * 49 as keys and values. In this way, the text features can guide the model to pay more attention to the incongruous image regions. 
Specifically, for the ith head of the textimage matching layer, it has the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "W v \u2208 R d *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "AT Ti(H, G) = sof tmax( [W Q i H] T [W K i G] \u221a d k )[W V i G] T", "eq_num": "(3)" } ], "section": "Concat", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "d k \u2208 R d/h , AT T i (H, G) \u2208 R N * d k , and {W Q i , W K i , W V i } \u2208 R d k * d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "are learnable parameters. The outputs of h heads are then concatenated and followed by a linear transformation as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M AT T (H, G) = [AT T1(H, G), . . . , AT T h (H, G)]W o", "eq_num": "(4)" } ], "section": "Concat", "sec_num": null }, { "text": "where W o \u2208 R d * d is a learnable parameter. After that, a residual connection is worked on the text feature H and the output of self-attention layer M AT T (H, G) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "Z = LN (H + M AT T (H, G)) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "where LN is the layer normalization operation proposed by Ba et al. (2016) . 
After that, a feed-forward network (a.k.a. MLP) and another residual connection are employed on Z to obtain the output of the first transformer encoder:", "cite_spans": [ { "start": 58, "end": 74, "text": "Ba et al. (2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "TIM(H, G) = LN(Z + MLP(Z)) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "TIM(H, G) \u2208 R N * d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "is the output of the first text-image matching layer. We stack l m such text-image matching layers and take TIM lm (H, G) as the output of the last layer, where TIM lm (H, G) \u2208 R N * d and l m is a predefined hyper-parameter. The final representation of the inter-modality incongruity can be described as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "H G \u2208 R d , which is the encoding of the [CLS] token in TIM lm (H, G).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "Intra-modality Attention As the incongruity might only appear within the text (e.g., a sarcastic text associated with an unrelated image), it is necessary to consider the intra-modality incongruity. Social media like Twitter allow users to add hashtags to indicate the topic or their real feelings. Maynard and Greenwood (2014) point out that hashtags are useful when analyzing a user's real sentiment (e.g., \"I am happy that I woke up at 5:15 this morning. # not\"). Accordingly, we take the contradiction between the original text and the hashtags within it as the intra-modality incongruity (for those samples without hashtags, we use a special token instead). 
Intuitively, we could use the same approach as the inter-modality attention to obtain the intra-modality incongruity information. However, we find that it does not bring much improvement even though it contains more parameters. Hence, inspired by Lu et al. (2016) 's work, we introduce an affinity matrix C to model the interaction between the text and the hashtags. C is calculated by:", "cite_spans": [ { "start": 295, "end": 323, "text": "Maynard and Greenwood (2014)", "ref_id": "BIBREF14" }, { "start": 885, "end": 901, "text": "Lu et al. (2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C = tanh(H T W b T )", "eq_num": "(7)" } ], "section": "Concat", "sec_num": null }, { "text": "where H \u2208 R d * N and T \u2208 R d * M represent the text features and the hashtag features, respectively. N and M are pre-defined hyper-parameters denoting the max lengths of the input sequence and of the hashtags, respectively. W b \u2208 R d * d is a learnable parameter matrix. After computing the affinity matrix C \u2208 R N * M , we maximize the affinity matrix over the locations of the text features to get the hashtag attention. To be specific, we compute a weight vector a \u2208 R M by applying a column-wise max-pooling operation on the matrix C. Tay et al. (2018) argue that the words that contribute to the incongruity (usually accompanied by a high attention value) should be highlighted. Therefore, a more discriminative pooling operator like max-pooling is desirable in our case. Finally, the intra-modality incongruity is computed as:", "cite_spans": [ { "start": 523, "end": 540, "text": "Tay et al. 
(2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H T = aT T", "eq_num": "(8)" } ], "section": "Concat", "sec_num": null }, { "text": "where H T \u2208 R d contains the intra-modality incongruity information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concat", "sec_num": null }, { "text": "After obtaining the intra-modality incongruity representation H T and inter-modality incongruity representation H G , we concatenate them for prediction. The prediction part consists of a linear layer to reduce the dimension and a Sof tmax function to distribute probabilities to each category. Our model will classify the given text into the category with the highest probability. This procedure can be described as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = Sof tmax(W [H G : H T ] + b)", "eq_num": "(9)" } ], "section": "Prediction", "sec_num": "2.4" }, { "text": "where W \u2208 R 2d is learnable parameter training along with the model.\u0177 is the classification result of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "2.4" }, { "text": "Cross-entropy loss function is used in our work for optimizing the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training objectives", "sec_num": "2.5" }, { "text": "J = \u2212 N i=1 [y i log\u0177 i + (1 \u2212 y i ) log(1 \u2212\u0177 i )] + \u03bbR (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training objectives", "sec_num": "2.5" }, { "text": "where J is the cost function.\u0177 i is the prediction result of our model 
for sample i, and y i is the true label for sample i. N is the size of the training data. R is the standard L2 regularization term, and \u03bb is the weight of R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training objectives", "sec_num": "2.5" }, { "text": "This section first describes the dataset, experimental settings, baseline models, and experimental results. Then, we conduct a series of ablative experiments to verify the effectiveness of the components in our model. Finally, we present a model visualization on several sarcastic cases and perform an analysis of the wrongly predicted samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "3" }, { "text": "We evaluate our model on a publicly available multi-modal sarcasm detection dataset, 1 which is collected by Cai et al. (2019) . Each sample in the dataset consists of a sequence of text and an associated image. Tweets containing words like sarcasm, sarcastic, irony, ironic, or URLs are discarded during data pre-processing. Cai et al. (2019) divide the data into a training set, a development set, and a testing set with a ratio of 80%:10%:10%. They also manually check the development set and the testing set to ensure the accuracy of the labels. Detailed statistics are summarized in Table 1.", "cite_spans": [ { "start": 109, "end": 126, "text": "Cai et al. (2019)", "ref_id": "BIBREF1" }, { "start": 334, "end": 351, "text": "Cai et al. 
(2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "We divide the baseline models into three categories: visual modality models, Text modality models, and Text+Visual modality models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "\u2022 Visual modality models: Image-Only: The image feature G is directly used to predict the results after an average pooling operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "\u2022 Text modality models: TextCNN: It is proposed by Kim (2014) , which is a deep learning model based on CNN for addressing text classification tasks. SIARN: SIARN is proposed by Tay et al. (2018) . It employs inner-attention for textual sarcasm detection to overcome the weakness of previous sequential models such as RNNs, which cannot capture the interaction between word pairs and hampers the ability to explicitly model incongruity.", "cite_spans": [ { "start": 51, "end": 61, "text": "Kim (2014)", "ref_id": "BIBREF10" }, { "start": 178, "end": 195, "text": "Tay et al. (2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "SMSD: Following the work of (Tay et al., 2018) , Xiong et al. (2019) propose a selfmatching network to capture sentence incongruity information by exploring word-to-word interaction.", "cite_spans": [ { "start": 28, "end": 46, "text": "(Tay et al., 2018)", "ref_id": "BIBREF22" }, { "start": 49, "end": 68, "text": "Xiong et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "BERT: BERT as a pre-trained model proposed by Devlin et al. (2019) , which achieves state-of-the-art results in many NLP tasks. 
We consider it a baseline to investigate whether the performance gain comes from BERT or from our proposed method.", "cite_spans": [ { "start": 46, "end": 66, "text": "Devlin et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "We train on the training set and save the model that has the best performance on the validation set. The full parameters are listed in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.2" }, { "text": "We compare our model with the baseline models on the standard metrics, including precision, recall, F1 score, and accuracy. 3 The results are shown in Table 3 . The experimental results illustrate that our model achieves the best performance among the baseline models. Specifically, our model obtains a 2.74% improvement in terms of F1 score compared with the state-of-the-art Hierarchical Fusion Model (HFM) proposed by Cai et al. (2019) . Our model also outperforms the fine-tuned BERT model with a 2.7% improvement, which shows our model's effectiveness and the important role of the images. We can see from Table 3 that the model only using image features does not perform well, which demonstrates that images cannot be treated independently for the multi-modal sarcasm detection task. The methods based on the text modality achieve better performance than the method based on the image modality. Consequently, text information is more useful than image information for sarcasm detection. It is worth noticing that the fine-tuned BERT model performs far better than the other text-based, non-pre-trained models, which supports our motivation that pre-trained models like BERT can improve our task. 
The models belonging to the Visual+Text modality generally achieve better results than the others, indicating that images are useful for enhancing performance.", "cite_spans": [ { "start": 124, "end": 125, "text": "3", "ref_id": null }, { "start": 422, "end": 439, "text": "Cai et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.4" }, { "text": "Looking at the models inside the text modality, both SIARN (Tay et al., 2018) and SMSD (Xiong et al., 2019) take incongruity information into consideration. 3 We implement the metrics by using sklearn.metrics.", "cite_spans": [ { "start": 55, "end": 73, "text": "(Tay et al., 2018)", "ref_id": "BIBREF22" }, { "start": 83, "end": 103, "text": "(Xiong et al., 2019)", "ref_id": "BIBREF24" }, { "start": 146, "end": 147, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.4" }, { "text": "They accordingly outperform TextCNN. Hence, the incongruity information is beneficial for identifying sarcasm. Our proposed method achieves better results than Res-bert, proving that modeling both intra and inter-modality incongruity is more effective than a simple concatenation of modalities for multi-modal sarcasm detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "To evaluate the effectiveness of the components in our model, we conduct a series of ablative experiments. We first remove the intra-modality attention and get model(w\\o intra), which only uses H G for prediction. Then, we eliminate the inter-modality attention and get model(w\\o inter). This model concatenates H and H T and feeds them to the classifier layer, as the experimental results indicate that H T only plays a supporting role in our model. Table 4 gives the results of the ablative experiments. 
It shows that our proposed model achieves the best performance when including both the intra and inter-modality attention modules. The absence of the inter-modality attention leads to decreased results, proving that considering the contradiction between modalities is meaningful for multi-modal sarcasm detection. Removing the intra-modality attention also impedes the performance. As a result, both intra and inter-modality attention play an indispensable role in our model. Figure 3: The figure illustrates the attention visualization of some sarcastic tweets. We find our model is capable of focusing its attention on the incongruous regions, marked by a bright colour.", "cite_spans": [], "ref_spans": [ { "start": 435, "end": 442, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.5" }, { "text": "The impact of the number of text-image matching layers l m : We measure the model performance on the F1 score for a range of the text-image matching layer number l m from 1 to 7. As we can see in Figure 4 , the F1 score increases until reaching a peak when l m equals 3. Our model achieves the best performance at this point. Then, the model performance begins to decrease as l m continues to grow. We conjecture that the performance worsens due to the increase in the number of model parameters, suggesting that adding more text-image matching layers might not enhance but rather impede the performance. ", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 206, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Analysis", "sec_num": "3.6" }, { "text": "In this section, we visualize the text-image attention distributions. Our model is designed to capture the incongruity information. Therefore, incongruous regions on the images are more likely to be attended to by our model. 
We present several sarcastic examples collected from the dataset:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model visualization:", "sec_num": null }, { "text": "\u2022 \"such a packed game . it's amazing we even got a seat . # pelicans\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model visualization:", "sec_num": null }, { "text": "\u2022 \"well that looks appetising ... # ubereats\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model visualization:", "sec_num": null }, { "text": "\u2022 \"good thing my 2nd graders aren not distracted by chainsaws , falling trees , and chippers !\" Figure 3 illustrates that our model is highly effective at attending to the incongruous regions. In the first example, our model attends to the regions indicating \"lots of unoccupied seats,\" which contradicts the text \"it's amazing we even got a seat.\" Similar patterns can also be observed in the second and third examples. ", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Model visualization:", "sec_num": null }, { "text": "We also perform a qualitative analysis of the wrongly predicted samples. We examine approximately 50 misclassified instances and find that our model tends to misclassify samples whose images contain essential textual information (see Figure 5 ). Consequently, considering the text on the images might bring improvements for the multi-modal sarcasm detection task. Based on this observation, we further conduct an experiment in which the text on the images is taken into account. Specifically, we apply a General Character Recognition API to extract the text from the images and use a co-attention matrix to model the incongruity between the original tweet and the extracted text. Table 5 shows that our model achieves a significant improvement when considering the text on the images. 
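The co-attention step between the tweet and the OCR-extracted text can be sketched as follows. The bilinear form and shapes are our assumptions based on the description above; `co_attention` is an illustrative helper, not the paper's implementation, and the character-recognition call is omitted:

```python
import numpy as np

def co_attention(text_feats, ocr_feats, W):
    """Co-attention between tweet token features (d x n) and OCR token
    features (d x m) via a trainable bilinear map W (d x d):
    C = tanh(T^T W O), then a row-wise softmax so each tweet token gets
    a distribution over OCR tokens."""
    C = np.tanh(text_feats.T @ W @ ocr_feats)          # (n, m)
    e = np.exp(C - C.max(axis=1, keepdims=True))       # numerically stable
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
d, n, m = 8, 5, 3  # illustrative dimensions
A = co_attention(rng.normal(size=(d, n)),   # tweet features
                 rng.normal(size=(d, m)),   # OCR-text features
                 rng.normal(size=(d, d)))   # trainable bilinear weight
```

Rows of `A` that attend to OCR tokens contradicting the tweet would then carry the incongruity signal used for prediction.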
In addition, we find that our model might struggle on instances requiring external knowledge, such as a speaker's facial expression or contextual information. Thus, external information is also essential for sarcasm detection. 4 Related Work", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 256, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 690, "end": 697, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Error analysis:", "sec_num": null }, { "text": "The existing text-based approaches can be classified into three categories: rule-based approaches, feature-based machine learning approaches, and deep learning-based approaches (Joshi et al., 2017) . Rule-based methods aim to spot sarcasm by detecting fixed patterns. Riloff et al. (2013) observe that in a common form of sarcasm, a positive sentiment and a negative situation appear simultaneously. Inspired by this, they develop a bootstrapping algorithm that iteratively expands positive and negative phrase sets. The learned phrases are then used to detect sarcasm. Maynard and Greenwood (2014) design a hashtag tokenizer to analyze the sentiment and sarcasm within hashtags. They also compile a set of rules to determine the sentiment polarity when sarcasm is known to be present. However, rule-based methods rely strongly on the collected patterns, and it is challenging to identify sarcasm caused by uncollected patterns. Accordingly, researchers begin to design various textual features and apply machine learning methods to recognize sarcasm. Joshi et al. (2015) develop a system considering lexical features, pragmatic features, and incongruity features. SVM is used as their classifier. Ghosh et al. (2015) also apply SVM as their classifier and treat sarcasm detection as a word sense disambiguation problem.", "cite_spans": [ { "start": 177, "end": 197, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF8" }, { "start": 273, "end": 293, "text": "Riloff et al. 
(2013)", "ref_id": "BIBREF20" }, { "start": 577, "end": 605, "text": "Maynard and Greenwood (2014)", "ref_id": "BIBREF14" }, { "start": 1051, "end": 1070, "text": "Joshi et al. (2015)", "ref_id": "BIBREF9" }, { "start": 1197, "end": 1216, "text": "Ghosh et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Text-based Sarcasm detection", "sec_num": "4.1" }, { "text": "Though machine learning approaches have achieved significant improvements, feature extraction is a time-consuming task. Recent works are mainly based on deep learning methods, as these can automatically extract features and obtain promising results. Poria et al. (2016) use pre-trained CNNs to extract sentiment, emotion, and personality features for sarcasm detection. Both Tay et al. (2018) and Xiong et al. (2019) explicitly model the incongruity between word pairs using an attention mechanism and achieve satisfactory results.", "cite_spans": [ { "start": 260, "end": 279, "text": "Poria et al. (2016)", "ref_id": "BIBREF19" }, { "start": 384, "end": 401, "text": "Tay et al. (2018)", "ref_id": "BIBREF22" }, { "start": 406, "end": 425, "text": "Xiong et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Text-based Sarcasm detection", "sec_num": "4.1" }, { "text": "It is worth noting that there are also some valuable works concentrating on multi-modal sarcasm detection. Schifanella et al. (2016) first consider both textual and visual features for sarcasm detection and propose two alternative frameworks. Mishra et al. (2017) propose a cognitive NLP system for sentiment and sarcasm classification. They introduce a framework to automatically extract cognitive features from eye-movement/gaze data and use a CNN to encode both gaze-based and textual features for classification. Castro et al. (2019) propose a new sarcasm dataset compiled from TV shows. 
They treat text features, speech features, and video features as three modalities and use SVM as the classifier. Cai et al. (2019) introduce a hierarchical fusion model that takes image features, image attribute features, and text features as three modalities. The features of the three modalities are reconstructed and fused for prediction.", "cite_spans": [ { "start": 109, "end": 134, "text": "Schifanella et al. (2016)", "ref_id": "BIBREF21" }, { "start": 245, "end": 265, "text": "Mishra et al. (2017)", "ref_id": "BIBREF15" }, { "start": 523, "end": 543, "text": "Castro et al. (2019)", "ref_id": "BIBREF2" }, { "start": 712, "end": 729, "text": "Cai et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-modal Sarcasm detection", "sec_num": "4.2" }, { "text": "In this paper, we propose a novel BERT architecture-based model to address the issue that existing multi-modal sarcasm detection approaches do not consider the incongruity characteristic of sarcasm. To be specific, our model considers both intra- and inter-modality incongruity and achieves state-of-the-art performance on a public multi-modal sarcasm detection dataset. Besides, we conduct a series of experiments to verify the effectiveness of our model. 
Finally, we perform error analysis and find that the text on the images is essential for multi-modal sarcasm detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/headacheboy/ data-of-multimodal-sarcasm-detection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/transformers/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Multimodal sarcasm detection in twitter with hierarchical fusion model", "authors": [ { "first": "Yitao", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Huiyu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "2506--2515", "other_ids": { "DOI": [ "10.18653/v1/p19-1239" ] }, "num": null, "urls": [], "raw_text": "Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multi- modal sarcasm detection in twitter with hierarchical fusion model. 
In Proceedings of the 57th Confer- ence of the Association for Computational Linguis- tics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2506-2515.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards multimodal sarcasm detection (an obviously perfect paper)", "authors": [ { "first": "Santiago", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Ver\u00f3nica", "middle": [], "last": "P\u00e9rez-Rosas", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "", "issue": "", "pages": "4619--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santiago Castro, Devamanyu Hazarika, Ver\u00f3nica P\u00e9rez- Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sarcasm detection (an obviously perfect paper). In Proceed- ings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 4619- 4629.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/n19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2015, "venue": "EMNLP 2015", "volume": "", "issue": "", "pages": "1003--1012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In EMNLP 2015, pages 1003-1012.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The poetics of mind: Figurative thought, language, and understanding", "authors": [ { "first": "Raymond", "middle": [ "W" ], "last": "Raymond W Gibbs", "suffix": "" }, { "first": "Jr", "middle": [], "last": "Gibbs", "suffix": "" }, { "first": "", "middle": [], "last": "Gibbs", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond W Gibbs Jr, Raymond W Gibbs, and Jr Gibbs. 1994. The poetics of mind: Figurative thought, language, and understanding. 
Cambridge University Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": { "DOI": [ "10.1109/CVPR.2016.90" ] }, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "1", "issue": "", "pages": "328--339", "other_ids": { "DOI": [ "10.18653/v1/P18-1031" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. 
In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328-339.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic sarcasm detection: A survey", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mark", "middle": [ "James" ], "last": "Carman", "suffix": "" } ], "year": 2017, "venue": "ACM Comput. Surv", "volume": "50", "issue": "5", "pages": "", "other_ids": { "DOI": [ "10.1145/3124420" ] }, "num": null, "urls": [], "raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark James Carman. 2017. Automatic sarcasm detection: A sur- vey. ACM Comput. Surv., 50(5):73:1-73:22.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Harnessing context incongruity for sarcasm detection", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Vinita", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics ACL 2015", "volume": "", "issue": "", "pages": "757--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. 
In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics ACL 2015, pages 757-762.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/d14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sentiment analysis and subjectivity", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "http://www.crcnetbase.com/doi/abs/10.1201/9781420085938-c26" ] }, "num": null, "urls": [], "raw_text": "Bing Liu. 2010. 
Sentiment analysis and subjectivity. In Handbook of Natural Language Processing, Sec- ond Edition.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Hierarchical question-image co-attention for visual question answering", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016", "volume": "", "issue": "", "pages": "289--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neu- ral Information Processing Systems 29: Annual Conference on Neural Information Processing Sys- tems 2016, December 5-10, 2016, Barcelona, Spain, pages 289-297.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis", "authors": [ { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Mark", "middle": [ "A" ], "last": "Greenwood", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik", "volume": "", "issue": "", "pages": "4238--4243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Maynard and Mark A. Greenwood. 2014. Who cares about sarcastic tweets? investigating the im- pact of sarcasm on sentiment analysis. 
In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation, LREC 2014, Reyk- javik, Iceland, May 26-31, 2014, pages 4238-4243.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network", "authors": [ { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "377--387", "other_ids": { "DOI": [ "10.18653/v1/P17-1035" ] }, "num": null, "urls": [], "raw_text": "Abhijit Mishra, Kuntal Dey, and Pushpak Bhat- tacharyya. 2017. Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In ACL 2017, pages 377-387.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2007, "venue": "", "volume": "2", "issue": "", "pages": "1--135", "other_ids": { "DOI": [ "10.1561/1500000011" ] }, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2007. Opinion mining and sentiment analysis. 
Foundations and Trends in In- formation Retrieval, 2(1-2):1-135.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "K\u00f6pf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor 
Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32: Annual Conference on Neu- ral Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8024-8035.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/n18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. 
In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2018, New Or- leans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A deeper look into sarcastic tweets using deep convolutional neural networks", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" } ], "year": 2016, "venue": "COLING 2016", "volume": "", "issue": "", "pages": "1601--1612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, and Prateek Vij. 2016. A deeper look into sarcastic tweets using deep convolutional neural networks. In COLING 2016, pages 1601-1612.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sarcasm as contrast between a positive sentiment and negative situation", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Surve", "suffix": "" }, { "first": "Lalindra De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalin- dra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sen- timent and negative situation. 
In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP 2013, pages 704- 714.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Detecting sarcasm in multimodal social platforms", "authors": [ { "first": "Rossano", "middle": [], "last": "Schifanella", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "De Juan", "suffix": "" }, { "first": "Joel", "middle": [ "R" ], "last": "Tetreault", "suffix": "" }, { "first": "Liangliang", "middle": [], "last": "Cao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 ACM Conference on Multimedia Conference, MM 2016", "volume": "", "issue": "", "pages": "1136--1145", "other_ids": { "DOI": [ "10.1145/2964284.2964321" ] }, "num": null, "urls": [], "raw_text": "Rossano Schifanella, Paloma de Juan, Joel R. Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In Proceedings of the 2016 ACM Conference on Multimedia Conference, MM 2016, Amsterdam, The Netherlands, October 15-19, 2016, pages 1136-1145.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Reasoning with sarcasm by reading inbetween", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Luu", "suffix": "" }, { "first": "Siu", "middle": [ "Cheung" ], "last": "Hui", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "", "issue": "", "pages": "1010--1020", "other_ids": { "DOI": [ "10.18653/v1/P18-1093" ] }, "num": null, "urls": [], "raw_text": "Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Reasoning with sarcasm by reading in- between. 
In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics, ACL 2018, pages 1010-1020.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Sarcasm detection with self-matching networks and low-rank bilinear pooling", "authors": [ { "first": "Tao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Peiran", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongbo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yihui", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference, WWW 2019", "volume": "", "issue": "", "pages": "2115--2124", "other_ids": { "DOI": [ "10.1145/3308558.3313735" ] }, "num": null, "urls": [], "raw_text": "Tao Xiong, Peiran Zhang, Hongbo Zhu, and Yihui Yang. 2019. Sarcasm detection with self-matching networks and low-rank bilinear pooling. In The World Wide Web Conference, WWW 2019, pages 2115-2124.", "links": null } }, "ref_entries": { "FIGREF2": { "uris": null, "type_str": "figure", "text": "The performance curves with a variety of l m from 1 to 7.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Wrongly classified samples with important textual information on the image.", "num": null }, "TABREF0": { "html": null, "type_str": "table", "num": null, "text": "", "content": "
[Figure: model architecture. A ResNet image encoder feeds stacked Transformer-style blocks, each consisting of Self Attention over Q, K, V followed by Add & Norm and a Feed Forward sublayer with Add & Norm; \u2a01 denotes the element-wise sum of embeddings.]
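The attention blocks named in the figure above follow the standard Transformer layer (Vaswani et al., 2017). A minimal NumPy sketch of one block, with identity Q/K/V projections and a toy feed-forward sublayer for brevity:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each row to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def self_attention_block(x, d):
    """One Transformer-style block: scaled dot-product self-attention
    (queries, keys, values), Add & Norm, then a feed-forward sublayer
    with its own Add & Norm. Linear projections are identity here."""
    q = k = v = x                                  # (seq, d)
    scores = q @ k.T / np.sqrt(d)                  # (seq, seq)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)       # softmax rows
    x = layer_norm(x + attn @ v)                   # Add & Norm
    ff = np.maximum(0.0, x)                        # toy feed-forward (ReLU)
    return layer_norm(x + ff)                      # Add & Norm

out = self_attention_block(np.random.default_rng(1).normal(size=(4, 8)), d=8)
```

The inter-modality attention in the model reuses this pattern, letting queries from one modality attend over keys and values from the other.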
the sum of word, segment, and position embeddings, where N is the maximum length of the sequence and d is the embedding size. We apply the pre-trained BERT model to this input to acquire text representations. The encoded text can be depicted as H \u2208 R d * N , which is the output of the last layer of the BERT encoder, where d is the hidden size of BERT.
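A minimal sketch of the two encodings described here, assuming BERT-base dimensions and a 7×7 ResNet feature map; the trainable projection W is our reading of the description, and random arrays stand in for real encoder outputs:

```python
import numpy as np

d, N = 768, 75          # BERT hidden size, maximum text length
rng = np.random.default_rng(0)

# H: output of the last BERT encoder layer, one d-dim vector per token.
H = rng.normal(size=(d, N))

# ResNet yields a 7x7 grid of 2048-dim region features; flatten it to
# 2048 x 49 and project each region into the d-dim text space with a
# trainable matrix W so the two modalities can interact.
regions = rng.normal(size=(2048, 7 * 7))
W = rng.normal(size=(d, 2048)) * 0.01   # trainable parameter
G = W @ regions                          # encoded visual features, d x 49
```

With H and G in the same d-dimensional space, the inter-modality attention can compare text tokens against image regions directly.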
" }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "2048 is a trainable parameter and d is the dimension of the textual features. G \u2208 R d * 49 is the encoded representation of the visual features.", "content": "" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "text": "", "content": "
: Hyper-parameters
3.3 Experimental Settings
Our model is implemented in PyTorch (Paszke et al., 2019), running on an NVIDIA TITAN RTX GPU. The pre-trained BERT model is available from the Transformers toolkit released by Hugging Face. 2 We adopt Adam (Kingma and Ba, 2015) as our optimizer and set the initial learning rate to 5e-5 with a warmup rate of 0.2. The batch size is fixed to 32 for training. The maximum length is 75 for text and 10 for hashtags, respectively. Our model is fine-tuned for eight epochs on
" }, "TABREF5": { "html": null, "type_str": "table", "num": null, "text": "Experiment results on the multi-modal sarcasm detection dataset. The best results are in bold.", "content": "" }, "TABREF7": { "html": null, "type_str": "table", "num": null, "text": "Ablation experiment results. The best results are in bold.", "content": "
" }, "TABREF9": { "html": null, "type_str": "table", "num": null, "text": "Experiment results when involving the text on the image in our model.", "content": "
" } } } }