{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:42.441253Z" }, "title": "A Report on the 2020 Sarcasm Detection Shared Task", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "", "affiliation": {}, "email": "dghosh@ets.org" }, { "first": "Avijit", "middle": [], "last": "Vajpayee", "suffix": "", "affiliation": {}, "email": "avajpayee@ets.org" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a popular research problem in natural language processing. As the community working on computational approaches for sarcasm detection is growing, it is imperative to conduct benchmarking studies to analyze the current state-of-the-art, facilitating progress in this area. We report on the shared task on sarcasm detection we conducted as a part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a popular research problem in natural language processing. As the community working on computational approaches for sarcasm detection is growing, it is imperative to conduct benchmarking studies to analyze the current state-of-the-art, facilitating progress in this area. We report on the shared task on sarcasm detection we conducted as a part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sarcasm and verbal irony are a type of figurative language where the speakers usually mean the opposite of what they say. Recognizing whether a speaker is ironic or sarcastic is essential to downstream applications for correctly understanding speakers' intended sentiments and beliefs. Consequently, in the last decade, the problem of irony and sarcasm detection has attracted a considerable interest from computational linguistics researchers. The task has been usually framed as a binary classification task (sarcastic vs. 
non-sarcastic) using either the utterance in isolation or adding contextual information such as conversation context, author context, visual context, or cognitive features (Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011; Riloff et al., 2013; Maynard and Greenwood, 2014; Wallace et al., 2014; Ghosh et al., 2015; Muresan et al., 2016; Amir et al., 2016; Mishra et al., 2016; Ghosh and Veale, 2017; Felbo et al., 2017; Hazarika et al., 2018; Tay et al., 2018; Oprea and Magdy, 2019; Majumder et al., 2019; Castro et al., 2019; Ghosh et al., 2019).", "cite_spans": [ { "start": 697, "end": 726, "text": "Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011;", "ref_id": "BIBREF22" }, { "start": 727, "end": 747, "text": "Riloff et al., 2013;", "ref_id": "BIBREF52" }, { "start": 748, "end": 776, "text": "Maynard and Greenwood, 2014;", "ref_id": "BIBREF44" }, { "start": 777, "end": 798, "text": "Wallace et al., 2014;", "ref_id": "BIBREF61" }, { "start": 799, "end": 818, "text": "Ghosh et al., 2015;", "ref_id": "BIBREF19" }, { "start": 819, "end": 840, "text": "Muresan et al., 2016;", "ref_id": "BIBREF47" }, { "start": 841, "end": 859, "text": "Amir et al., 2016;", "ref_id": "BIBREF1" }, { "start": 860, "end": 880, "text": "Mishra et al., 2016;", "ref_id": "BIBREF46" }, { "start": 881, "end": 903, "text": "Ghosh and Veale, 2017;", "ref_id": "BIBREF16" }, { "start": 904, "end": 923, "text": "Felbo et al., 2017;", "ref_id": "BIBREF14" }, { "start": 924, "end": 946, "text": "Hazarika et al., 2018;", "ref_id": "BIBREF24" }, { "start": 947, "end": 964, "text": "Tay et al., 2018;", "ref_id": "BIBREF56" }, { "start": 965, "end": 987, "text": "Oprea and Magdy, 2019;", "ref_id": "BIBREF48" }, { "start": 988, "end": 1010, "text": "Majumder et al., 2019;", "ref_id": null }, { "start": 1011, "end": 1031, "text": "Castro et al., 2019;", "ref_id": "BIBREF7" }, { "start": 1032, "end": 1051, "text": "Ghosh et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we report on the shared task on sarcasm detection that we conducted as part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Table 1 (Turns / Message). Context 1: The [govt] just confiscated a $180 million boat shipment of cocaine from drug traffickers. Context 2: People think 5 tonnes is not a lot of cocaine. Response: Man, I've seen more than that on a Friday night!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "The task aims to study the role of conversation context for sarcasm detection. Two types of social media content are used as training data for the two tracks: a microblogging platform (Twitter) and an online discussion forum (Reddit). Table 1 and Table 2 show examples of three-turn dialogues, where Response is the sarcastic reply. Without the conversation context Context_i, it is difficult to identify the sarcastic intent expressed in the Response. The shared task is designed to benchmark the usefulness of modeling the entire conversation context (i.e., all the prior dialogue turns) for sarcasm detection.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 276, "text": "Table 1 and Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Turns", "sec_num": null },
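{ "text": "Concretely, each instance in both tracks consists of an ordered list of prior turns plus the response to classify. The snippet below is an illustrative sketch only; the field names (context, response, label) and label strings are ours and not necessarily those of the released data files.
instance = {
    'context': [
        'The [govt] just confiscated a $180 million boat shipment of cocaine from drug traffickers.',
        'People think 5 tonnes is not a lot of cocaine.',
    ],  # oldest turn first
    'response': \"Man, I've seen more than that on a Friday night!\",
    'label': 'SARCASM',  # vs. 'NOT_SARCASM'
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null },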
{ "text": "Section 2 discusses the current state of research on sarcasm detection with a focus on the role of context. Section 3 describes the shared task, datasets, and metrics. Section 4 contains brief summaries of each of the participating systems, whereas Section 5 reports a comparative evaluation of the systems and our observations about trends in the designs and performance of the systems that participated in the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "Table 2 (Turns / Message). Context 1: This is the greatest video in the history of college football. Context 2: Hes gonna have a short career if he keeps smoking . Not good for your health. Response: Awesome !!! Everybody does it. That's the greatest reason to do something.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "A considerable amount of work on sarcasm detection has considered the utterance in isolation when predicting the sarcastic or non-sarcastic label. Initial approaches used feature-based machine learning models that rely on different types of features, from lexical (e.g., sarcasm markers, word embeddings) to pragmatic (e.g., emoticons, or learned patterns of contrast between positive sentiment and negative situations) (Veale and Hao, 2010; Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011; Liebrecht et al., 2013; Riloff et al., 2013; Maynard and Greenwood, 2014; Ghosh et al., 2015). More recently, deep learning methods have been applied to this task (Ghosh and Veale, 2016; Tay et al., 2018). For excellent surveys on sarcasm and irony detection, see (Wallace, 2015; Joshi et al., 2017). However, even humans sometimes have difficulty recognizing sarcastic intent when considering an utterance in isolation (Wallace et al., 2014). Recently, an increasing number of researchers have started to explore the role of contextual information for irony and sarcasm analysis. The term context loosely refers to any information that is available beyond the utterance itself (Joshi et al., 2017).
A few researchers have examined author context (Bamman and Smith, 2015; Khattri et al., 2015; Rajadesingan et al., 2015; Amir et al., 2016; Ghosh and Veale, 2017), multi-modal context (Schifanella et al., 2016; Cai et al., 2019; Castro et al., 2019), eye-tracking information (Mishra et al., 2016), or conversation context (Bamman and Smith, 2015; Wang et al., 2015; Joshi et al., 2016; Zhang et al., 2016; Ghosh and Veale, 2017).", "cite_spans": [ { "start": 418, "end": 438, "text": "Veale and Hao, 2010;", "ref_id": "BIBREF59" }, { "start": 439, "end": 468, "text": "Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011;", "ref_id": "BIBREF22" }, { "start": 469, "end": 492, "text": "Liebrecht et al., 2013;", "ref_id": "BIBREF39" }, { "start": 493, "end": 513, "text": "Riloff et al., 2013;", "ref_id": "BIBREF52" }, { "start": 514, "end": 542, "text": "Maynard and Greenwood, 2014;", "ref_id": "BIBREF44" }, { "start": 543, "end": 562, "text": "Ghosh et al., 2015;", "ref_id": "BIBREF19" }, { "start": 629, "end": 652, "text": "(Ghosh and Veale, 2016;", "ref_id": "BIBREF15" }, { "start": 653, "end": 670, "text": "Tay et al., 2018)", "ref_id": "BIBREF56" }, { "start": 730, "end": 745, "text": "(Wallace, 2015;", "ref_id": "BIBREF60" }, { "start": 746, "end": 765, "text": "Joshi et al., 2017)", "ref_id": "BIBREF27" }, { "start": 894, "end": 916, "text": "(Wallace et al., 2014)", "ref_id": "BIBREF61" }, { "start": 1152, "end": 1172, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF27" }, { "start": 1222, "end": 1246, "text": "(Bamman and Smith, 2015;", "ref_id": "BIBREF4" }, { "start": 1247, "end": 1268, "text": "Khattri et al., 2015;", "ref_id": "BIBREF32" }, { "start": 1269, "end": 1295, "text": "Rajadesingan et al., 2015;", "ref_id": "BIBREF51" }, { "start": 1296, "end": 1314, "text": "Amir et al., 2016;", "ref_id": "BIBREF1" }, { "start": 1315, "end": 1337, "text": "Ghosh and Veale, 2017)", "ref_id": "BIBREF16" }, { "start": 1360, "end": 1386, "text": "(Schifanella et al., 2016;", "ref_id": "BIBREF53" }, { "start": 1387, "end": 1404, "text": "Cai et al., 2019;", "ref_id": "BIBREF6" }, { "start": 1405, "end": 1425, "text": "Castro et al., 2019)", "ref_id": "BIBREF7" }, { "start": 1453, "end": 1474, "text": "(Mishra et al., 2016)", "ref_id": "BIBREF46" }, { "start": 1501, "end": 1525, "text": "(Bamman and Smith, 2015;", "ref_id": "BIBREF4" }, { "start": 1526, "end": 1544, "text": "Wang et al., 2015;", "ref_id": "BIBREF62" }, { "start": 1545, "end": 1564, "text": "Joshi et al., 2016;", "ref_id": "BIBREF29" }, { "start": 1565, "end": 1584, "text": "Zhang et al., 2016;", "ref_id": "BIBREF66" }, { "start": 1585, "end": 1607, "text": "Ghosh and Veale, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Related to shared tasks on figurative language analysis, Van Hee et al. (2018) recently conducted a SemEval task on irony detection in Twitter, focusing on utterances in isolation. Besides the binary classification task of identifying the ironic tweet, the authors also conducted a multi-class irony classification to identify the specific type of irony: whether it contains verbal irony, situational irony, or other types of irony. In our case, the shared task aims to study the role of conversation context for sarcasm detection.
In particular, we focus on benchmarking the effectiveness of modeling the conversation context (e.g., all the prior dialogue turns or a subset of the prior dialogue turns) for sarcasm detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The design of our shared task is guided by two specific considerations. First, we plan to leverage a particular type of context, the entire prior conversation context, for sarcasm detection. Second, we plan to investigate the systems' performance on conversations from two types of social media platforms: Twitter and Reddit. Both of these platforms allow writers to mark whether their messages are sarcastic (e.g., the #sarcasm hashtag on Twitter and the \"/s\" marker on Reddit).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "The competition is organized in two phases: training and evaluation. By making available common datasets and frameworks for evaluation, we hope to contribute to the consolidation and strengthening of the growing community of researchers working on computational approaches to sarcasm analysis. Khodak et al. (2017) introduced the Self-Annotated Reddit Corpus, a very large collection of sarcastic and non-sarcastic posts (over one million) curated from different subreddits such as politics, religion, sports, technology, etc. This corpus contains self-labeled sarcastic posts, where users label their posts as sarcastic by appending \"/s\" to the end of the post. For any such sarcastic post, the corpus also provides the full conversation context, i.e., all the prior turns that took place in the dialogue.", "cite_spans": [ { "start": 294, "end": 314, "text": "Khodak et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "We select the training data for the Reddit track from Khodak et al. (2017) based on a couple of criteria. First, we choose sarcastic responses with at least two prior turns; note that for many responses in our training corpus the number of turns is much higher. Second, we curate sarcastic responses from a variety of subreddits such that no single subreddit (e.g., politics) dominates the training corpus. In addition, we avoid responses from subreddits that we believe are too specific and narrow (e.g., a subreddit dedicated to a specific video game) and thus might not generalize well. The non-sarcastic partition of the training dataset is collected from the same set of subreddits that are used to collect the sarcastic responses. We end up selecting 4,400 posts (as well as their conversation contexts) for the training dataset, equally balanced between sarcastic and non-sarcastic posts.", "cite_spans": [ { "start": 54, "end": 74, "text": "Khodak et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Reddit Training Dataset", "sec_num": "3.1.1" }, { "text": "For the Twitter dataset, we have relied upon the annotations that users assign to their tweets using hashtags. The sarcastic tweets were collected using the hashtags #sarcasm and #sarcastic. As non-sarcastic utterances, we consider sentiment tweets, i.e., we adopt the methodology proposed in related work (Muresan et al., 2016). Such sentiment tweets do not contain the sarcasm hashtags but include hashtags that contain positive or negative sentiment words. The positive tweets express direct positive sentiment and are collected based on tweets with positive hashtags such as #happy, #love, #lucky. Likewise, the negative tweets express direct negative sentiment and are collected based on tweets with negative hashtags such as #sad, #hate, #angry. Classifying sarcastic utterances against sentiment utterances is a considerably harder task than classifying against random objective tweets, since many sarcastic utterances also contain sentiment terms. Because we rely on self-labeled tweets, it is always possible that sarcastic tweets were mislabeled with sentiment hashtags or that users did not use the #sarcasm hashtag at all; we manually evaluated around 200 sentiment tweets and found very few such cases in the training corpus. Similar to the Reddit dataset, we apply a couple of criteria while selecting the training dataset. First, we select sarcastic or non-sarcastic tweets only when they appear in a dialogue (i.e., begin with the \"@\"-user symbol) and have at least two prior turns as conversation context. Second, for the non-sarcastic posts, we maintain a strict upper limit (i.e., not greater than 10%) for any sentiment hashtag. Third, we apply heuristics such as avoiding short tweets, discarding tweets with only multiple URLs, etc. We end up selecting 5,000 tweets for training, balanced between sarcastic and non-sarcastic tweets. Figure 1 plots the number of training utterances by context length for the Reddit and Twitter tracks. We notice that, although the numbers are comparable for utterances with context length two or three, the Twitter corpus contains many more utterances with longer contexts (i.e., more prior turns).", "cite_spans": [ { "start": 302, "end": 324, "text": "(Muresan et al., 2016)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Twitter Training Dataset", "sec_num": "3.1.2" },
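{ "text": "For concreteness, the hashtag-based distant labeling described above can be sketched as follows. This is our own simplified illustration, not the exact collection pipeline, and the hashtag inventories contain only the examples named in the text.
SARCASM_TAGS = {'#sarcasm', '#sarcastic'}
POSITIVE_TAGS = {'#happy', '#love', '#lucky'}
NEGATIVE_TAGS = {'#sad', '#hate', '#angry'}

def distant_label(tweet_text):
    # Collect the hashtags present in the (lowercased) tweet.
    tags = {tok for tok in tweet_text.lower().split() if tok.startswith('#')}
    if tags & SARCASM_TAGS:
        return 'SARCASM'
    if tags & (POSITIVE_TAGS | NEGATIVE_TAGS):
        return 'NOT_SARCASM'  # sentiment tweet, used as the negative class
    return None  # discarded: neither sarcasm nor sentiment hashtags", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Twitter Training Dataset", "sec_num": "3.1.2" },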
{ "text": "The Twitter data for evaluation is curated similarly to the training data. For Reddit, rather than drawing from Khodak et al. (2017), we collected new sarcastic and non-sarcastic responses from Reddit. First, for sarcastic responses we utilize the same set of subreddits used in the training dataset, thus keeping the same genre between the evaluation and training data. For the non-sarcastic partition, we utilized the same set of subreddits and submission threads as the sarcastic partition. For both tracks the evaluation dataset contains 1,800 instances, partitioned equally between the sarcastic and the non-sarcastic categories.", "cite_spans": [ { "start": 101, "end": 121, "text": "Khodak et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Data", "sec_num": "3.1.3" }, { "text": "In the first phase, data is released for training and/or development of sarcasm detection models (both Reddit and Twitter). Participants can choose to further partition the training data into a validation set for preliminary evaluations and/or tuning of hyper-parameters. Likewise, they can also elect to perform cross-validation on the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Phase", "sec_num": "3.2" }, { "text": "In the second phase, instances for evaluation are released. Each participating system generated predictions for the evaluation instances, for up to N models. 1 Predictions are submitted to the CodaLab site and evaluated automatically against the gold labels. CodaLab is an established platform for organizing shared tasks (Leong et al., 2018) because it is easy to use, provides easy communication with the participants (e.g., allows mass-emailing), and tracks all the submissions, updating the leaderboard in real time. The metric used for evaluation is the average F1 score between the two categories, sarcastic and non-sarcastic. The leaderboards displayed the Precision, Recall, and F1 scores in descending order of the F1 scores, separately for the two tracks, Twitter and Reddit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Phase", "sec_num": "3.3" },
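{ "text": "For replication, the ranking metric can be computed with scikit-learn as in the sketch below; the label strings are placeholders of our choosing.
from sklearn.metrics import f1_score

def shared_task_f1(y_true, y_pred, labels=('SARCASM', 'NOT_SARCASM')):
    # Average (macro) F1 over the two classes, as used for the leaderboards.
    return f1_score(y_true, y_pred, labels=list(labels), average='macro')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Phase", "sec_num": "3.3" },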
{ "text": "The shared task started on January 19, 2020, when the training data was made available to all the registered participants. We released the evaluation data on February 25, 2020. Submissions were accepted until March 16, 2020. Overall, we received an overwhelming number of submissions: 655 for the Reddit track and 1,070 for the Twitter track. The CodaLab leaderboard showcases results from 39 systems for the Reddit track and 38 systems for the Twitter track, respectively. Out of all submissions, 14 shared task system papers were submitted. In the following section we summarize each system paper. We also put forward a comparative analysis based on the systems' performance and choice of features/models in Section 5. Interested readers can refer to the individual teams' papers for more details. But first, we discuss the baseline classification model that we used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4" }, { "text": "As the baseline, we use prior published work that used conversation context to detect sarcasm on social media platforms such as Twitter and Reddit. That work proposed a dual LSTM architecture with hierarchical attention, where one LSTM models the conversation context and the other models the sarcastic response. The hierarchical attention (Yang et al., 2016) implements two levels of attention, one at the word level and another at the sentence level. We used their system based on only the immediate conversation context (i.e., the immediate prior turn). 2 This is denoted as LSTM_attn in Table 3 and Table 4 .", "cite_spans": [ { "start": 327, "end": 346, "text": "(Yang et al., 2016)", "ref_id": "BIBREF64" } ], "ref_spans": [ { "start": 579, "end": 598, "text": "Table 3 and Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baseline Classifier", "sec_num": "4.1" },
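{ "text": "For intuition, a compact PyTorch sketch of a dual-LSTM classifier with word- and sentence-level attention in the spirit of this baseline follows. It is our own minimal rendition (dimensions, pooling, and the classifier head are assumptions), not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    # Additive attention that pools a sequence of vectors into one vector.
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, h):  # h: (batch, steps, dim)
        weights = F.softmax(self.score(torch.tanh(self.proj(h))), dim=1)
        return (weights * h).sum(dim=1)  # (batch, dim)

class DualLSTMSketch(nn.Module):
    # One BiLSTM reads the context turns, another reads the response.
    # Word-level attention pools each turn; sentence-level attention pools turns.
    def __init__(self, vocab_size, emb_dim=100, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
        self.resp_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPool(2 * hid)
        self.sent_attn = AttentionPool(2 * hid)
        self.resp_attn = AttentionPool(2 * hid)
        self.out = nn.Linear(4 * hid, 2)  # sarcastic vs. non-sarcastic

    def forward(self, context_turns, response):
        # context_turns: (batch, turns, words); response: (batch, words)
        b, t, w = context_turns.shape
        words, _ = self.word_lstm(self.emb(context_turns.view(b * t, w)))
        turn_vecs = self.word_attn(words).view(b, t, -1)
        turns, _ = self.sent_lstm(turn_vecs)
        ctx_vec = self.sent_attn(turns)
        resp_states, _ = self.resp_lstm(self.emb(response))
        resp_vec = self.resp_attn(resp_states)
        return self.out(torch.cat([ctx_vec, resp_vec], dim=-1))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Classifier", "sec_num": "4.1" },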
{ "text": "We describe the participating systems in the following section (in alphabetical order).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "abaruah (Baruah et al., 2020) : Fine-tuned a BERT model (Devlin et al., 2018) and reported results for varying maximum sequence lengths (corresponding to varying levels of context inclusion, from just the response to the entire context). They also reported results for a BiLSTM with FastText embeddings (of the response and the entire context) and for an SVM based on character n-gram features (again on both the response and the entire context). One interesting result was that the SVM with discrete features performed better than the BiLSTM. They achieved their best results with BERT on the response and the most immediate context.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Baruah et al., 2020)", "ref_id": "BIBREF5" }, { "start": 56, "end": 77, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "ad6398 (Kumar and Anand, 2020) : Reported results comparing multiple transformer architectures (BERT, SpanBERT (Joshi et al., 2020), RoBERTa), both in single-sentence classification (with the context and response concatenated into one string) and in sentence-pair classification (with the context and the response as separate inputs to a Siamese-type architecture). Their best result was with a RoBERTa + LSTM model. aditya604 (Avvaru et al., 2020) : Used BERT on a simple concatenation of the last-k context texts and the response text. The authors included details of data cleaning (de-emojification, hashtag text extraction, apostrophe expansion) as well as experiments on other architectures (LSTM, CNN, XLNet) and varying sizes of context (5, 7, complete) in their report. The best results were obtained by BERT with a context of length 7 for the Twitter dataset and a context of length 5 for the Reddit dataset.", "cite_spans": [ { "start": 7, "end": 30, "text": "(Kumar and Anand, 2020)", "ref_id": "BIBREF34" }, { "start": 109, "end": 129, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF30" }, { "start": 409, "end": 430, "text": "(Avvaru et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "amitjena40 (Jena et al., 2020): Used a time-series-analysis-inspired approach for integrating context. Each text in the conversational thread (context and response) was individually scored using BERT, and Simple Exponential Smoothing (SES) was utilized to get the probability of the final response being sarcastic. They used the final response label as a pseudo-label for scoring the context entries, which is not theoretically grounded: if the final response is sarcastic, the previous context dialogue cannot be assumed to be sarcastic (with respect to its preceding dialogue). However, the effect of this error is attenuated due to the exponentially decreasing contribution of the context to the final label under the SES scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" },
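{ "text": "To make the smoothing step concrete, the sketch below applies SES to per-turn sarcasm probabilities; it is our own illustration of the idea, and the smoothing factor is an arbitrary choice.
def ses_probability(turn_probs, alpha=0.6):
    # turn_probs: per-turn sarcasm scores, oldest context turn first, response last.
    # SES weighs later turns more; earlier turns decay exponentially.
    smoothed = turn_probs[0]
    for p in turn_probs[1:]:
        smoothed = alpha * p + (1 - alpha) * smoothed
    return smoothed

# ses_probability([0.2, 0.4, 0.9]) -> 0.668: the response score dominates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" },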
{ "text": "AnandKumaR (Khatri and P, 2020) : Experimented with traditional ML classifiers like SVM and Logistic Regression over embeddings from BERT and GloVe (Pennington et al., 2014). Using BERT as a feature-extraction method, as opposed to fine-tuning it, was not beneficial, and Logistic Regression over GloVe embeddings outperformed it in their experiments. Context was used in their best model, but no details were available about the depth of context usage (full vs. immediate). Additionally, they experimented only with Twitter data, and no submission was made to the Reddit track. They provided details of the data cleaning measures for their experiments, which involved stopword removal, lowercasing, stemming, punctuation removal, and spelling normalization.", "cite_spans": [ { "start": 710, "end": 730, "text": "(Khatri and P, 2020)", "ref_id": "BIBREF31" }, { "start": 857, "end": 882, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "andy3223 (Dong et al., 2020) : Used transformer-based architectures for sarcasm detection, reporting the performance of three architectures: BERT, RoBERTa, and ALBERT (Lan et al., 2019). They considered two models: the target-oriented, where only the target (i.e., the sarcastic response) is modeled, and the context-aware, where the context is modeled together with the target. The authors conducted an extensive hyper-parameter search, setting the learning rate to 3e-5 and the number of epochs to 30, and using different seed values (21, 42, 63) for three runs. Additionally, they set the maximum sequence length to 128 for the target-oriented models, while it is set to 256 for the context-aware models.", "cite_spans": [ { "start": 9, "end": 28, "text": "(Dong et al., 2020)", "ref_id": "BIBREF13" }, { "start": 169, "end": 187, "text": "(Lan et al., 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "burtenshaw (Lemmens et al., 2020) : Employed an ensemble of four models: an LSTM (on word, emoji, and hashtag representations), a CNN-LSTM (on GloVe embeddings with discrete punctuation and sentiment features), an MLP (on sentence embeddings through Infersent (Conneau et al., 2017)), and an SVM (on character and stylometric features). The first three models (i.e., all except the SVM) used the last two immediate contexts along with the response.", "cite_spans": [ { "start": 11, "end": 33, "text": "(Lemmens et al., 2020)", "ref_id": "BIBREF37" }, { "start": 251, "end": 273, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "duke DS (Gregory et al., 2020) : The authors conducted an extensive set of experiments using discrete features, DNNs, as well as transformer models, reporting results only on the Twitter track. Regarding discrete features, one of the novelties in their approach is including a predictor to identify whether the tweet is political or not, since many sarcastic tweets are on political topics. Regarding the models, the best performing model is an ensemble of five transformers: BERT-base-uncased, RoBERTa-base, XLNet-base-cased, RoBERTa-large, and ALBERT-base-v2.", "cite_spans": [ { "start": 8, "end": 30, "text": "(Gregory et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions", "sec_num": "4.2" }, { "text": "Compared traditional machine learning classifiers (e.g., Logistic Regression/Random Forest/XGBoost/Linear SVC/Gaussian Naive Bayes) on discrete bag-of-words features/Doc2Vec features with LSTM models on Word2Vec embeddings (Mikolov et al., 2013) and BERT models. For context usage, they report results on using the isolated response, the isolated context, and context-response combined (it is unclear how deep the context usage is). The best performance in their experiments was by BERT on the isolated response.", "cite_spans": [ { "start": 223, "end": 245, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },
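{ "text": "For reference, a generic discrete-feature pipeline of the kind compared by such systems can be sketched with scikit-learn as below. The exact features used by kalaivani.A differed (bag-of-words, Doc2Vec), so this is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # bag-of-ngrams features
    LogisticRegression(max_iter=1000),
)
# clf.fit(train_texts, train_labels); predictions = clf.predict(eval_texts)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },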
{ "text": "miroblog (Lee et al., 2020) : Implemented a classifier composed of BERT followed by a BiLSTM and NeXtVLAD (Lin et al., 2018) (a differentiable pooling mechanism which empirically performed better than mean/max pooling). 3 They employed an ensembling approach for including varying lengths of context and reported that gains in F1 after a context of length three are negligible. With these two contributions alone, their model outperformed all others. Additionally, they devised a novel data augmentation approach (Contextual Response Augmentation) that exploits unlabelled conversational contexts based on the next sentence prediction (NSP) confidence score of BERT. Leveraging large-scale unlabelled conversation data from the web, their model outperformed the second-best system by 14% and 8.4% absolute F1 score for Twitter and Reddit, respectively.", "cite_spans": [ { "start": 9, "end": 27, "text": "(Lee et al., 2020)", "ref_id": "BIBREF36" }, { "start": 104, "end": 121, "text": "(Lin et al., 2018)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },
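{ "text": "The NSP-scoring step of this augmentation can be sketched with the Hugging Face transformers API roughly as follows. This is our own illustration, not the authors' released code, and the threshold-based use of the score is an assumption.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

def nsp_confidence(context, candidate_response):
    # Probability that the candidate follows the context (NSP class 0 = 'is next').
    inputs = tokenizer(context, candidate_response, return_tensors='pt', truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 0].item()

# Unlabelled context-response pairs whose confidence clears a threshold can be
# pseudo-labelled and added to the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },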
{ "text": "nclabj (Jaiswal, 2020): Used a majority-voting ensemble of RoBERTa models with different weight initializations and different levels of context length. Their report shows that the previous 3 turns of dialogue had the best performance in isolation. Additionally, they present results comparing other sentence embedding architectures like the Universal Sentence Encoder (Cer et al., 2018), ELMo (Peters et al., 2018), and BERT.", "cite_spans": [ { "start": 359, "end": 376, "text": "(Cer et al., 2018)", "ref_id": "BIBREF8" }, { "start": 377, "end": 405, "text": "(Peters et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "salokr/vaibhav (Srivastava et al., 2020) : Employed a CNN-LSTM-based architecture on BERT embeddings to utilize the full context thread and the response. The entire context, after encoding through BERT, is passed through CNN and LSTM layers to get a representation of the context. Convolution and dense layers over this summarized context representation and the BERT encoding of the response make up the final classifier.", "cite_spans": [ { "start": 15, "end": 40, "text": "(Srivastava et al., 2020)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "taha (ataei et al., 2020) : Reported experiments comparing an SVM on character n-gram features, LSTM-CNN models, and Transformer models, as well as a novel usage of aspect-based sentiment classification approaches like Interactive Attention Networks (IAN) (Ma et al., 2017), Local Context Focus (LCF)-BERT (Zeng et al., 2019), and the BERT Attentional Encoder Network (AEN) (Song et al., 2019). For the aspect-based approaches, they viewed the last dialogue of the conversational context as the aspect of the target response. LCF-BERT was their best model for the Twitter track, but due to computational resource limitations they were not able to try it for the Reddit track (where BERT on just the response text performed best).", "cite_spans": [ { "start": 5, "end": 25, "text": "(ataei et al., 2020)", "ref_id": null }, { "start": 247, "end": 264, "text": "(Ma et al., 2017)", "ref_id": "BIBREF42" }, { "start": 297, "end": 316, "text": "(Zeng et al., 2019)", "ref_id": "BIBREF65" }, { "start": 360, "end": 379, "text": "(Song et al., 2019)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "tanvidadu (Dadu and Pant, 2020) : Fine-tuned a RoBERTa-large model (355 million parameters, with over a 50K vocabulary size) on the response and its two immediate contexts. They reported results on three different types of inputs: a response-only model, concatenation of the two immediate contexts with the response, and using an explicit separator token between the response and the final context. The best result is reported in the setting where they used the separator token.", "cite_spans": [ { "start": 10, "end": 31, "text": "(Dadu and Pant, 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "5 Results and Discussions Table 3 and Table 4 present the results for the Reddit track and the Twitter track, respectively. We show the rank of the submitted systems (best result from their submitted reports) both in terms of the system submissions (out of 14) and their rank on the CodaLab leaderboard. Note that for a couple of entries we observe a discrepancy between their best reported system(s) and the leaderboard entries; for the sake of fairness, in such cases we selected the leaderboard entries to present in Table 3 and Table 4 . 4 Also, out of the 14 system descriptions, duke DS and AnandKumaR report performance on the Twitter dataset only. For the overall results on both tracks, we observe that the majority of the models outperformed the LSTM_attn baseline. Almost all the submitted systems used the transformer architecture, which seems to perform better than RNN architectures, even without any task-specific fine-tuning. Although most of the models are similar and perform comparably, one particular system, miroblog, outperformed the other models in both tracks, posting an improvement over the 2nd-ranked system of more than 7% F1 score in the Reddit track and 14% F1 score in the Twitter track. [Table 4: Performance of the best system per team and baseline for the Twitter track. We include two ranks: ranks from the submitted systems as well as the leaderboard ranks from the CodaLab site.]", "cite_spans": [ { "start": 548, "end": 549, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 38, "end": 45, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 525, "end": 545, "text": "Table 3 and Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "In the following paragraphs, we inspect the performance of the different systems more closely. We discuss a couple of particular aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "Context Usage: One of the prime motivating factors for conducting this shared task was to investigate the role of contextual information. We notice that the most common approach for integrating context was simply concatenating it with the response text. Novel approaches include using an explicit separator token between the context and the response when concatenating (tanvidadu); a sketch of both input-construction variants follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },
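{ "text": "A minimal sketch of the two common input-construction variants (plain concatenation of the last k turns vs. an explicit context/response separator) is shown below; the separator string is model-dependent, and the example choice here is an assumption (RoBERTa uses '</s></s>' between paired sequences).
def build_input(context_turns, response, k=3, sep=None):
    # Keep only the k most recent turns; deeper context rarely helped.
    recent = context_turns[-k:]
    if sep is None:
        return ' '.join(recent + [response])       # plain concatenation
    return sep.join([' '.join(recent), response])  # explicit separator variant", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null },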
{ "text": "Depth of Context: Results suggest that beyond three context turns, gains from context information are negligible, and more context may even reduce performance due to the sparsity of long context threads. The depth of context required depends on the architecture, and the CNN-LSTM-based summarization of the context thread (salokr) was the only approach that effectively used the whole dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "Discrete vs. Embedding Features: The leaderboard was dominated by Transformer-based architectures, and we saw submissions using BERT, RoBERTa, and other variants. Other sentence embedding architectures like Infersent, as well as CNN/LSTM over word embeddings, were also used but had middling performance. Discrete features were involved in only two submissions (burtenshaw and duke DS) and were a focus of the burtenshaw system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "Leveraging other datasets: The large difference between the best model (miroblog) and the other systems can be attributed to their dataset augmentation strategies. Using just the context thread as a negative example when the context+response is a positive example is a straightforward approach for augmentation from labeled dialogues. Their novel contribution lies in leveraging large-scale unlabelled dialogue threads, showing another use of BERT by using the NSP confidence score for assigning pseudo-labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "Analysis of predictions: Finally, we conducted an error analysis based on the predictions of the systems. We particularly focused on two questions. First, we investigated whether any particular pattern exists in the evaluation instances that are wrongly classified by the majority of the systems. Second, we compared the predictions of the top-performing systems to identify instances correctly classified by one system but missed by the remaining systems; here, we attempt to recognize specific characteristics that are unique to a model, if any. Instead of looking at the predictions of all the systems, we decided to analyze only the top-three submissions in both tracks because of their high performance. We identified 80 instances (30 sarcastic) from the Reddit evaluation dataset and 20 instances (10 sarcastic) from the Twitter evaluation set, respectively, that are missed by all the top-performing systems. Our interpretation of this finding is that these test instances belong to a variety of topics, including sarcastic remarks on baseball teams, internet bills, vaccination, etc., that probably do not generalize well during training. For both Twitter and Reddit, we also found many sarcastic examples that contain common non-sarcastic markers such as laughs (e.g., \"haha\") and jokes, and, in the Twitter track, positive-sentiment emoticons (e.g., :)). We did not find any correlation with context length; most of the instances have varied context lengths, from two to six.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "While analyzing the predictions of the individual systems, we noted that miroblog correctly identifies the largest number of instances for both tracks. In fact, miroblog successfully predicted over two hundred examples (with an almost equal distribution of sarcastic and non-sarcastic instances) that were missed by the second-ranked and third-ranked systems in both tracks.
As stated earlier, this can be attributed to their data augmentation strategies, which helped miroblog's models generalize best. However, we still notice that instances with subtle humor or positive sentiment are missed by even the best-performing models, although they are pretrained on very large-scale corpora. We foresee that models able to detect subtle humor or witty wordplay will perform even better on the sarcasm detection task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kalaivani.A (kalaivani A and D, 2020):", "sec_num": null }, { "text": "This paper summarizes the results of the shared task on sarcasm detection using conversations from two social media platforms (Reddit and Twitter), organized as part of the 2nd Workshop on Figurative Language Processing at ACL 2020. This shared task aimed to investigate the role of conversation context for sarcasm detection. The goal was to understand how much conversation context is needed or helpful for sarcasm detection. For Reddit, the training data was sampled from the standard corpus of Khodak et al. (2017) , whereas we curated a new evaluation dataset. For Twitter, both the training and the test datasets are new and collected using standard hashtags. We received 655 submissions (from 39 unique participants) and 1,070 submissions (from 38 unique participants) for the Reddit and Twitter tracks, respectively. We provided brief descriptions of each of the participating systems that submitted a shared task paper (14 systems).", "cite_spans": [ { "start": 503, "end": 523, "text": "Khodak et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We notice that almost every submitted system used transformer-based architectures, such as BERT, RoBERTa, and other variants, emphasizing the increasing popularity of pre-trained language models for various classification tasks. The best systems, however, employed a clever mix of ensemble techniques and/or data augmentation setups, which seems to be a promising direction for future work. We hope that some of the teams will make their implementations publicly available, which would facilitate further research on improving performance on the sarcasm detection task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "N is set to 999. 2 https://github.com/Alex-Fabbri/deep_learning_nlp_sarcasm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "VLAD is an acronym of \"Vector of Locally Aggregated Descriptors\" (Lin et al., 2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Also, for such cases (e.g., abaruah), under the Approach column we reported the approach described in the system paper, which does not necessarily reflect the scores of Table 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sarcasm identification and detection in conversion context using BERT", "authors": [ { "first": "D", "middle": [], "last": "Thenmozhi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "and Thenmozhi D. 2020.
Sarcasm identification and detection in conversion context using BERT. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Modelling context with user embeddings for sarcasm detection in social media", "authors": [ { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" }, { "first": "C", "middle": [], "last": "Byron", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Paula Carvalho M\u00e1rio J", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "", "middle": [], "last": "Silva", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.00976" ] }, "num": null, "urls": [], "raw_text": "Silvio Amir, Byron C Wallace, Hao Lyu, and Paula Carvalho M\u00e1rio J Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. arXiv preprint arXiv:1607.00976.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Applying Transformers and aspect-based sentiment analysis approaches on sarcasm detection", "authors": [ { "first": "Taha", "middle": [], "last": "Shangipour Ataei", "suffix": "" }, { "first": "Soroush", "middle": [], "last": "Javdan", "suffix": "" }, { "first": "Behrouz", "middle": [], "last": "Minaei-Bidgoli", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taha Shangipour ataei, Soroush Javdan, and Behrouz Minaei-Bidgoli. 2020. Applying Transformers and aspect-based sentiment analysis approaches on sarcasm detection. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Detecting sarcasm in conversation context using Transformer based model", "authors": [ { "first": "Adithya", "middle": [], "last": "Avvaru", "suffix": "" }, { "first": "Sanath", "middle": [], "last": "Vobilisetty", "suffix": "" }, { "first": "Radhika", "middle": [], "last": "Mamidi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adithya Avvaru, Sanath Vobilisetty, and Radhika Mamidi. 2020. Detecting sarcasm in conversation context using Transformer based model. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Contextualized sarcasm detection on twitter", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Ninth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman and Noah A Smith. 2015. Contextualized sarcasm detection on twitter.
In Ninth International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Context-aware sarcasm detection using BERT", "authors": [ { "first": "Arup", "middle": [], "last": "Baruah", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ferdous", "middle": [], "last": "Barbhuiya", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Dey", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arup Baruah, Kaushik Das, Ferdous Barbhuiya, and Kuntal Dey. 2020. Context-aware sarcasm detection using BERT. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multimodal sarcasm detection in twitter with hierarchical fusion model", "authors": [ { "first": "Yitao", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Huiyu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2506--2515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multimodal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506-2515.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Towards multimodal sarcasm detection (an obviously perfect paper)", "authors": [ { "first": "Santiago", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Ver\u00f3nica", "middle": [], "last": "P\u00e9rez-Rosas", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4619--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santiago Castro, Devamanyu Hazarika, Ver\u00f3nica P\u00e9rez-Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sarcasm detection (an obviously perfect paper).
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4619-4629.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Universal sentence encoder", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11175" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Loic", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.02364" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sarcasm detection using context separators in online discourse", "authors": [ { "first": "Tanvi", "middle": [], "last": "Dadu", "suffix": "" }, { "first": "Kartikey", "middle": [], "last": "Pant", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanvi Dadu and Kartikey Pant. 2020. Sarcasm detection using context separators in online discourse. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semi-supervised recognition of sarcastic sentences in twitter and amazon", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon.
In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Transformer-based context-aware sarcasm detection in conversation threads from social media", "authors": [ { "first": "Xiangjue", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Changmao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangjue Dong, Changmao Li, and Jinho D. Choi. 2020. Transformer-based context-aware sarcasm detection in conversation threads from social media. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", "authors": [ { "first": "Bjarke", "middle": [], "last": "Felbo", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Mislove", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Iyad", "middle": [], "last": "Rahwan", "suffix": "" }, { "first": "Sune", "middle": [], "last": "Lehmann", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1615--1625", "other_ids": { "DOI": [ "10.18653/v1/D17-1169" ] }, "num": null, "urls": [], "raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615-1625. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Fracking sarcasm using neural network", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "161--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network.
In Proceedings of NAACL-HLT, pages 161-169.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "482--491", "other_ids": { "DOI": [ "10.18653/v1/D17-1050" ] }, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 482-491. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Sarcasm analysis using conversation context", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Alexander R Fabbri", "suffix": "" }, { "first": "", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.07531" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. arXiv preprint arXiv:1808.07531.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The role of conversation context for sarcasm detection in online interactions", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Alexander", "middle": [ "Richard" ], "last": "Fabbri", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.06226" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Alexander Richard Fabbri, and Smaranda Muresan. 2017. The role of conversation context for sarcasm detection in online interactions. arXiv preprint arXiv:1707.06226.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1003--1012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1003-1012, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "with 1 follower i must be awesome: P\".
Exploring the role of irony markers in irony recognition", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.05253" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh and Smaranda Muresan. 2018. \"With 1 follower I must be awesome :P\". Exploring the role of irony markers in irony recognition. arXiv preprint arXiv:1804.05253.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Interpreting verbal irony: Linguistic strategies and the connection to the type of semantic incongruity", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Musi", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.00891" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Elena Musi, Kartikeya Upasani, and Smaranda Muresan. 2019. Interpreting verbal irony: Linguistic strategies and the connection to the type of semantic incongruity. arXiv preprint arXiv:1911.00891.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Identifying sarcasm in twitter: A closer look", "authors": [ { "first": "Roberto", "middle": [], "last": "Gonz\u00e1lez-Ib\u00e1\u00f1ez", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" } ], "year": 2011, "venue": "ACL (Short Papers)", "volume": "", "issue": "", "pages": "581--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1\u00f1ez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: A closer look. In ACL (Short Papers), pages 581-586. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Transformer approach to contextual sarcasm detection in twitter", "authors": [ { "first": "Hunter", "middle": [], "last": "Gregory", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pouya", "middle": [], "last": "Mohammadi", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Tarn", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Ballantyne", "suffix": "" }, { "first": "Cynthia", "middle": [], "last": "Rudin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hunter Gregory, Steven Li, Pouya Mohammadi, Natalie Tarn, Rachel Ballantyne, and Cynthia Rudin. 2020. A Transformer approach to contextual sarcasm detection in twitter.
In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Cascade: Contextual sarcasm detection in online discussion forums", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Sruthi", "middle": [], "last": "Gorantla", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1837--1848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devamanyu Hazarika, Soujanya Poria, Sruthi Gorantla, Erik Cambria, Roger Zimmermann, and Rada Mihalcea. 2018. Cascade: Contextual sarcasm detection in online discussion forums. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1837-1848. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Neural sarcasm detection using conversation context", "authors": [ { "first": "Nikhil", "middle": [], "last": "Jaiswal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Jaiswal. 2020. Neural sarcasm detection using conversation context. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "C-net: Contextual network for sarcasm detection", "authors": [ { "first": "Amit", "middle": [ "Kumar" ], "last": "Jena", "suffix": "" }, { "first": "Aman", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Rohit", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Kumar Jena, Aman Sinha, and Rohit Agarwal. 2020. C-net: Contextual network for sarcasm detection. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Automatic sarcasm detection: A survey", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mark", "middle": [ "J" ], "last": "Carman", "suffix": "" } ], "year": 2017, "venue": "ACM Computing Surveys (CSUR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey.
ACM Computing Surveys (CSUR), page 73.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Harnessing context incongruity for sarcasm detection", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Vinita", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "757--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 757-762.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends'", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Vaibhav", "middle": [], "last": "Tripathi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Carman", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Vaibhav Tripathi, Pushpak Bhattacharyya, and Mark Carman. 2016. Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends'. CoNLL 2016, page 146.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "SpanBERT: Improving pre-training by representing and predicting spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "64--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Sarcasm detection in tweets with BERT and GloVe embeddings", "authors": [ { "first": "Akshay", "middle": [], "last": "Khatri", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "P", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akshay Khatri and Pranav P. 2020. Sarcasm detection in tweets with BERT and GloVe embeddings.
In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Your sentiment precedes you: Using an author's historical tweets to predict sarcasm", "authors": [ { "first": "Anupam", "middle": [], "last": "Khattri", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Carman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anupam Khattri, Aditya Joshi, Pushpak Bhattacharyya, and Mark Carman. 2015. Your sentiment precedes you: Using an author's historical tweets to predict sarcasm. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 25-30, Lisboa, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A large self-annotated corpus for sarcasm", "authors": [ { "first": "Mikhail", "middle": [], "last": "Khodak", "suffix": "" }, { "first": "Nikunj", "middle": [], "last": "Saunshi", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Vodrahalli", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05579" ] }, "num": null, "urls": [], "raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2017. A large self-annotated corpus for sarcasm. arXiv preprint arXiv:1704.05579.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Transformers on sarcasm detection with context", "authors": [ { "first": "Amardeep", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Anand", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amardeep Kumar and Vivek Anand. 2020. Transformers on sarcasm detection with context. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations.
arXiv preprint arXiv:1909.11942.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Augmenting data for sarcasm detection with unlabeled conversation context", "authors": [ { "first": "Hankyol", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Youngjae", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hankyol Lee, Youngjae Yu, and Gunhee Kim. 2020. Augmenting data for sarcasm detection with unlabeled conversation context. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sarcasm detection using an ensemble approach", "authors": [ { "first": "Jens", "middle": [], "last": "Lemmens", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Burtenshaw", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Lotfi", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Markov", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jens Lemmens, Ben Burtenshaw, Ehsan Lotfi, Ilia Markov, and Walter Daelemans. 2020. Sarcasm detection using an ensemble approach. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A report on the 2018 VUA metaphor detection shared task", "authors": [ { "first": "Chee", "middle": [ "Wee" ], "last": "Leong", "suffix": "" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "56--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chee Wee Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56-66.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The perfect solution for detecting sarcasm in tweets #not", "authors": [ { "first": "C", "middle": [ "C" ], "last": "Liebrecht", "suffix": "" }, { "first": "F", "middle": [ "A" ], "last": "Kunneman", "suffix": "" }, { "first": "A", "middle": [ "P", "J" ], "last": "Van Den Bosch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CC Liebrecht, FA Kunneman, and APJ van den Bosch. 2013. The perfect solution for detecting sarcasm in tweets #not.
In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "NeXtVLAD: An efficient neural network to aggregate frame-level features for large-scale video classification", "authors": [ { "first": "Rongcheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jianping", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "0--0", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongcheng Lin, Jing Xiao, and Jianping Fan. 2018. NeXtVLAD: An efficient neural network to aggregate frame-level features for large-scale video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0-0.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Interactive attention networks for aspect-level sentiment classification", "authors": [ { "first": "Dehong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.00893" ] }, "num": null, "urls": [], "raw_text": "Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. arXiv preprint arXiv:1709.00893.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "
Sentiment and sarcasm classification with multitask learning", "authors": [ { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Haiyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Niyati", "middle": [], "last": "Chhaya", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": 2019, "venue": "IEEE Intelligent Systems", "volume": "34", "issue": "3", "pages": "38--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navonil Majumder, Soujanya Poria, Haiyun Peng, Niyati Chhaya, Erik Cambria, and Alexander Gelbukh. 2019. Sentiment and sarcasm classification with multitask learning. IEEE Intelligent Systems, 34(3):38-43.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis", "authors": [ { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Mark", "middle": [ "A" ], "last": "Greenwood", "suffix": "" } ], "year": 2014, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In Proceedings of LREC.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Harnessing cognitive features for sarcasm detection", "authors": [ { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Diptesh", "middle": [], "last": "Kanojia", "suffix": "" }, { "first": "Seema", "middle": [], "last": "Nagar", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1095--1104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016. Harnessing cognitive features for sarcasm detection.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1095-1104, Berlin, Germany.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Identification of nonliteral language in social media: A case study on sarcasm", "authors": [ { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Gonzalez-Ibanez", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" } ], "year": 2016, "venue": "Journal of the Association for Information Science and Technology", "volume": "67", "issue": "11", "pages": "2725--2737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smaranda Muresan, Roberto Gonzalez-Ibanez, Debanjan Ghosh, and Nina Wacholder. 2016. Identification of nonliteral language in social media: A case study on sarcasm. Journal of the Association for Information Science and Technology, 67(11):2725-2737.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Exploring author context for detecting intended vs perceived sarcasm", "authors": [ { "first": "Silviu", "middle": [], "last": "Oprea", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2854--2859", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silviu Oprea and Walid Magdy. 2019. Exploring author context for detecting intended vs perceived sarcasm. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2854-2859.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations.
arXiv preprint arXiv:1802.05365.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Sarcasm detection on twitter: A behavioral modeling approach", "authors": [ { "first": "Ashwin", "middle": [], "last": "Rajadesingan", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "97--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 97-106. ACM.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Sarcasm as contrast between a positive sentiment and negative situation", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Surve", "suffix": "" }, { "first": "Lalindra De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 704-714.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Detecting sarcasm in multimodal social platforms", "authors": [ { "first": "Rossano", "middle": [], "last": "Schifanella", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "De Juan", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Liangliang", "middle": [], "last": "Cao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 ACM on Multimedia Conference", "volume": "", "issue": "", "pages": "1136--1145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rossano Schifanella, Paloma de Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In Proceedings of the 2016 ACM on Multimedia Conference, pages 1136-1145. ACM.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Attentional encoder network for targeted sentiment classification", "authors": [ { "first": "Youwei", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jiahai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Zhiyue", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yanghui", "middle": [], "last": "Rao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.09314" ] }, "num": null, "urls": [], "raw_text": "Youwei Song, Jiahai Wang, Tao Jiang, Zhiyue Liu, and Yanghui Rao. 2019. Attentional encoder network for targeted sentiment classification.
arXiv preprint arXiv:1902.09314.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "A novel hierarchical BERT architecture for sarcasm detection", "authors": [ { "first": "Himani", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Vaibhav", "middle": [], "last": "Varshney", "suffix": "" }, { "first": "Surabhi", "middle": [], "last": "Kumari", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Srivastava", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Himani Srivastava, Vaibhav Varshney, Surabhi Kumari, and Saurabh Srivastava. 2020. A novel hierarchical BERT architecture for sarcasm detection. In Proceedings of the Second Workshop on Figurative Language Processing, Seattle, WA, USA.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Reasoning with sarcasm by reading in-between", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Luu", "suffix": "" }, { "first": "Siu", "middle": [ "Cheung" ], "last": "Hui", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1010--1020", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Reasoning with sarcasm by reading in-between. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1010-1020. Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "ICWSM - a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews", "authors": [ { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM - a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In ICWSM.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "SemEval-2018 task 3: Irony detection in English tweets", "authors": [ { "first": "Cynthia", "middle": [], "last": "Van Hee", "suffix": "" }, { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "39--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2018. SemEval-2018 task 3: Irony detection in English tweets.
In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 39-50.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Detecting ironic intent in creative comparisons", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Yanfen", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2010, "venue": "European Conference on Artificial Intelligence", "volume": "215", "issue": "", "pages": "765--770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Veale and Yanfen Hao. 2010. Detecting ironic intent in creative comparisons. In European Conference on Artificial Intelligence, volume 215, pages 765-770, Lisbon, Portugal.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Computational irony: A survey and new perspectives", "authors": [ { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2015, "venue": "Artificial Intelligence Review", "volume": "43", "issue": "4", "pages": "467--483", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron C Wallace. 2015. Computational irony: A survey and new perspectives. Artificial Intelligence Review, 43(4):467-483.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Humans require context to infer ironic intent (so computers probably do, too)", "authors": [ { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "Do", "middle": [ "Kook" ], "last": "Choe", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Kertz", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2014, "venue": "ACL (2)", "volume": "", "issue": "", "pages": "512--516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron C Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In ACL (2), pages 512-516.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Twitter sarcasm detection exploiting a context-based model", "authors": [ { "first": "Zelin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhijian", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ruimin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yafeng", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2015, "venue": "International Conference on Web Information Systems Engineering", "volume": "", "issue": "", "pages": "77--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zelin Wang, Zhijian Wu, Ruimin Wang, and Yafeng Ren. 2015. Twitter sarcasm detection exploiting a context-based model. In International Conference on Web Information Systems Engineering, pages 77-91, Miami, Florida.
Springer.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Russ", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754-5764.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL-HLT, pages 1480-1489.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "LCF: A local context focus mechanism for aspect-based sentiment classification", "authors": [ { "first": "Biqing", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruyang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xuli", "middle": [], "last": "Han", "suffix": "" } ], "year": 2019, "venue": "Applied Sciences", "volume": "9", "issue": "16", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biqing Zeng, Heng Yang, Ruyang Xu, Wu Zhou, and Xuli Han. 2019. LCF: A local context focus mechanism for aspect-based sentiment classification. Applied Sciences, 9(16):3389.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Tweet sarcasm detection using deep neural network", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, The 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2449--2460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network.
In Proceedings of COLING 2016, The 26th International Conference on Computational Linguistics: Technical Papers, pages 2449-2460.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Plot of Reddit (blue) and Twitter (orange) training datasets on the basis of context length. X-axis represents context length (i.e., number of prior turns) and Y-axis represents the % of training utterances." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "1. Taking immediate context as aspect for response in Aspect-based Sentiment Classification architectures (taha) 2. CNN-LSTM based summarization of entire context thread (salokr) 3. Time-series fusion with proxy labels for context (amitjena40) 4. Ensemble of multiple models with different depth of context (miroblog)" }, "TABREF0": { "text": "Sarcastic replies to conversation context in Reddit. Response turn is a reply to Context 2 turn, which is a reply to Context 1 turn.", "html": null, "num": null, "type_str": "table", "content": "
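The four context-handling strategies summarized in FIGREF1 differ chiefly in how much of the prior dialogue is exposed to the classifier. The sketch below illustrates only the fourth idea, an ensemble over models trained with different context depths; it is not any participant's actual code, and the separator string, the stand-in scorers, and the uniform probability averaging are all assumptions made for illustration.

```python
from typing import Callable, Dict, List

SEP = " <SEP> "  # assumed turn separator; real systems use model-specific tokens

def build_input(context: List[str], response: str, depth: int) -> str:
    """Keep only the last `depth` prior turns before the response."""
    kept = context[-depth:] if depth > 0 else []
    return SEP.join(kept + [response])

def ensemble_predict(models: Dict[int, Callable[[str], float]],
                     context: List[str], response: str) -> float:
    """Average P(sarcastic) over classifiers trained with different context depths."""
    probs = [scorer(build_input(context, response, depth))
             for depth, scorer in models.items()]
    return sum(probs) / len(probs)

# Toy stand-ins for fine-tuned Transformer classifiers, one per context depth.
models = {0: lambda text: 0.40, 1: lambda text: 0.70, 2: lambda text: 0.65}
context = ["The [govt] just confiscated a $180 million boat shipment of cocaine.",
           "People think 5 tonnes is not a lot of cocaine."]
response = "Man, I've seen more than that on a Friday night!"
print(round(ensemble_predict(models, context, response), 3))  # 0.583
```

Averaging probabilities rather than hard labels lets models that see deeper context outvote the response-only model gracefully when the sarcastic cue lies in an earlier turn.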