{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:15:03.980044Z" }, "title": "Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Meaney", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "jameaney@ed.ac.uk" }, { "first": "Steven", "middle": [ "R" ], "last": "Wilson", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "steven.wilson@ed.ac.uk" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "wmagdy@inf.ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information learned. ERNIE's knowledge masking pretraining task is a unique method for learning about named entities, and we hypothesise that it may be of use in a dataset which is built on news headlines and which contains many named entities. We optimize the hyperparameters of a regression model and a classification model, and find that our chosen hyperparameters yielded larger gains for the classification model than for the regression model.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. 
Our entry to SemEval uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information learned. ERNIE's knowledge masking pretraining task is a unique method for learning about named entities, and we hypothesise that it may be of use in a dataset which is built on news headlines and which contains many named entities. We optimize the hyperparameters of a regression model and a classification model, and find that our chosen hyperparameters yielded larger gains for the classification model than for the regression model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Verbal humor uses a variety of linguistic features, such as synonymy, wordplay, and phonological similarities, as well as non-linguistic features like world knowledge, to produce a comic effect. That such a broad set of skills is required to understand humor has led several researchers to deem computational humor an AI-complete problem (Binsted et al., 2006). There is a relatively longstanding body of research into humor detection in a limited domain, such as knock-knock jokes (Taylor and Mazlack, 2004), one-liners (Mihalcea and Strapparava, 2006) and humorous news articles from the satirical news publication The Onion (Mihalcea and Pulman, 2007). However, the use of shared tasks has attracted more attention and interest in the field since 2017. While previous challenges have focused on collecting Twitter data (Potash et al., 2017; Castro et al., 2018), SemEval 2020 (Hossain et al., 2020) took an original approach and generated the data by collecting news headlines and then asking annotators to edit one word in the headline to make it humorous (Hossain et al., 2019). These headlines emulate those of The Onion. The edits shown below indicate the location of the substitution and the word to be inserted. 
The edited headlines were then rated for humor by subsequent annotators. Sub-task A was to predict the mean funniness score of the edited headline. In sub-task B, the systems saw two edits of the same headline, and predicted which one had achieved the higher mean funniness score.", "cite_spans": [ { "start": 346, "end": 367, "text": "Binsted et al., 2006)", "ref_id": "BIBREF1" }, { "start": 490, "end": 516, "text": "(Taylor and Mazlack, 2004)", "ref_id": "BIBREF22" }, { "start": 530, "end": 562, "text": "(Mihalcea and Strapparava, 2006)", "ref_id": "BIBREF12" }, { "start": 636, "end": 663, "text": "(Mihalcea and Pulman, 2007)", "ref_id": "BIBREF11" }, { "start": 832, "end": 853, "text": "(Potash et al., 2017;", "ref_id": "BIBREF18" }, { "start": 854, "end": 874, "text": "Castro et al., 2018)", "ref_id": "BIBREF2" }, { "start": 890, "end": 912, "text": "(Hossain et al., 2020)", "ref_id": "BIBREF7" }, { "start": 1071, "end": 1093, "text": "(Hossain et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Excluding work on puns, there have been three humor detection shared tasks in recent years: SemEval 2017 (Potash et al., 2017), HAHA 2018 (Castro et al., 2018) and HAHA 2019 (Chiruzzo et al., 2019). As the tasks and data have varied between them, direct comparison is not possible. However, a comparison of approaches to the tasks shows some interesting trends.", "cite_spans": [ { "start": 105, "end": 126, "text": "(Potash et al., 2017)", "ref_id": "BIBREF18" }, { "start": 139, "end": 160, "text": "(Castro et al., 2018)", "ref_id": "BIBREF2" }, { "start": 175, "end": 198, "text": "(Chiruzzo et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "SemEval 2017's entries were evenly divided between feature engineering approaches and deep learning systems, with both achieving competitive results. 
The highest-ranking team in the official results for task A, SVNIT (Mahajan and Zaveri, 2017), used an SVM with incongruity, ambiguity and stylistic features. The second highest-ranking team, Datastories (Baziotis et al., 2017), opted for a Siamese bi-LSTM with attention. Interestingly, a remarkably simple system prevailed in task B: Duluth (Yan and Pedersen, 2017) used the probability assigned to the text by a bigram language model, rather than the output of a classifier, to make predictions.", "cite_spans": [ { "start": 217, "end": 243, "text": "(Mahajan and Zaveri, 2017)", "ref_id": "BIBREF10" }, { "start": 355, "end": 378, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF0" }, { "start": 493, "end": 517, "text": "(Yan and Pedersen, 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Entries to HAHA 2018 were divided along similar lines. The winning system used Naive Bayes and ridge regression models optimized with an evolutionary algorithm (Ortiz-Bejar et al., 2018), with the runner-up using a bi-LSTM with attention (Ortega-Bueno et al., 2018).", "cite_spans": [ { "start": 237, "end": 264, "text": "(Ortega-Bueno et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "HAHA 2019 saw a sea change towards the use of transfer learning models, such as BERT (Devlin et al., 2018) and ULMFiT (Howard and Ruder, 2018). These models leverage large amounts of data and transformer attention models to learn contextual relations between words. Adilism (Ismailov, 2019) used multilingual BERT base uncased and extended the language model training without labels, before finetuning their system with the dataset labels. The second-place system used an ensemble of a BERT model and ULMFiT, with Naive Bayes and SVM classifiers. 
The majority of the top entries to this task used BERT in some way, although one noted that it did not improve performance as expected (Ortega-Bueno et al., 2019).", "cite_spans": [ { "start": 85, "end": 106, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" }, { "start": 118, "end": 142, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF8" }, { "start": 275, "end": 291, "text": "(Ismailov, 2019)", "ref_id": "BIBREF9" }, { "start": 683, "end": 710, "text": "(Ortega-Bueno et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "3 System Overview 3.1 Why ERNIE 2.0?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "As BERT models are trained on masked language modelling and next-sentence prediction tasks, they capture mainly word-level and sentence-level information. By comparison, ERNIE 2.0 (Sun et al., 2020), henceforth ERNIE, aims to capture more lexical, syntactic and semantic information in corpora, by training on eight different tasks in a continual pre-training framework. Knowledge masking features among these eight tasks, and is implemented by masking a phrase or entity as an entire unit, instead of masking the constituent words individually. The distinction between BERT and ERNIE is illustrated by how each learns the following sentence: Harry Potter is a series of fantasy novels written by J. K. Rowling. BERT captures co-occurrence information of 'J' with 'K' and 'Rowling'; however, it does not capture information about the entity J. K. Rowling. By modelling this entity as a single unit, ERNIE claims to be capable of extrapolating the relationship between Harry Potter and J. K. Rowling (Sun et al., 2019). Furthermore, ERNIE is trained on a wide variety of domains, including encyclopedias and news articles, giving the model extensive knowledge of named entities. 
This is of great benefit in the Funlines dataset, which is built on news headlines and therefore features a large number of named entities, particularly politicians. This may help the model to infer the relationship between Mitch McConnell and Trump in the example from Table 1.", "cite_spans": [ { "start": 981, "end": 999, "text": "(Sun et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "The dataset featured the original headline, with the word which had been replaced marked in angle brackets, and the substitute word provided separately. We rendered the edited headlines by replacing the word in angle brackets with the substitute word. This meant that our model did not have explicit access to the keyword, or to the original headlines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Preprocessing", "sec_num": "3.2" }, { "text": "For ERNIE models, we preprocessed the data as follows: we lowercased the texts and tokenized them into word pieces using a greedy longest-match-first algorithm over the vocabulary. As is conventional for ERNIE, we then added a [CLS] token to the start of each text, and a [SEP] token to the end of each text, with an additional [SEP] replacing the [CLS] at the start of the second text for pairs of texts (e.g. task 2). We also padded sequences to a maximum length of 128.", "cite_spans": [ { "start": 364, "end": 369, "text": "[SEP]", "ref_id": null }, { "start": 384, "end": 389, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Preprocessing", "sec_num": "3.2" }, { "text": "For task 1, we created two baselines: one which predicted a constant value, and the other which predicted the mean value, using scikit-learn (Pedregosa et al., 2011). For task 2, we created three baselines. In the first, we always predicted the same label. 
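These trivial baselines can be sketched as follows; this is a minimal illustration with toy values rather than our exact code (the constant and mean predictors correspond to scikit-learn's dummy estimators):

```python
# Minimal sketch of the trivial baselines: predict a constant value or the
# training mean for task 1, and always the same label for task 2.
# Toy values are illustrative, not taken from the competition data.
def constant_baseline(value):
    # task 1: predict the same funniness score for every headline
    return lambda headline: value

def mean_baseline(train_scores):
    # task 1: predict the mean funniness score of the training set
    mean = sum(train_scores) / len(train_scores)
    return lambda headline: mean

def fixed_label_baseline(label):
    # task 2: always predict that the same edit (e.g. the first) is funnier
    return lambda headline_pair: label

predict = mean_baseline([0.8, 1.2, 2.0, 0.4])
print(predict('An example edited headline'))  # 1.1
```

Baselines of this kind ignore the input entirely, which is what makes them useful floors for the learned models.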
The second baseline was a trigram language model built with KenLM (Heafield, 2011), using a dataset containing around 200,000 news headlines from 2012-2018 editions of the Huffington Post 1 . Similar to the approach taken by the Duluth team (Yan and Pedersen, 2017) in SemEval 2017, we reasoned that the funnier of the two headlines would be the less similar to real news headlines, so we selected the headline with the lower log probability according to the model. However, this performed worse than the first baseline.", "cite_spans": [ { "start": 140, "end": 164, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF17" }, { "start": 321, "end": 337, "text": "(Heafield, 2011)", "ref_id": "BIBREF5" }, { "start": 499, "end": 523, "text": "(Yan and Pedersen, 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.3" }, { "text": "The third baseline was a trigram model built on the headlines labelled as sarcastic from a sarcastic news dataset (Misra and Arora, 2019). These headlines came from The Onion, which the competition dataset seeks to emulate. Here we reasoned that the funnier headline would have a higher log probability under this language model. Predicting labels in this way was an improvement over the other two baselines, suggesting that the unique data generation methods in this challenge succeeded in emulating satirical headlines in some way. ", "cite_spans": [ { "start": 111, "end": 134, "text": "(Misra and Arora, 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.3" }, { "text": "For the transfer learning models, we used ERNIE base, which has 12 layers, a hidden size of 768 and 12 self-attention heads. We used a maximum sequence length of 128, a dropout probability of 0.1 and the Adam optimizer. To finetune for task 1, we built a fully connected layer with mean squared error as the loss function. 
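The task 1 regression head can be sketched as follows; this is an illustrative NumPy stand-in under assumed shapes, not our actual ERNIE finetuning code (the pooled vector simulates an encoder output, and a single plain gradient step stands in for the Adam optimizer we used in practice):

```python
# Sketch of the task 1 finetuning head: one fully connected layer mapping the
# encoder's pooled representation to a scalar funniness score, trained with
# mean squared error. The 768-dim input matches ERNIE base's hidden size;
# the random vector below stands in for a real encoder output.
import numpy as np

rng = np.random.default_rng(0)
hidden_size = 768
pooled = rng.normal(size=hidden_size)         # stand-in for the [CLS] vector
W = rng.normal(scale=0.01, size=hidden_size)  # fully connected layer weights
b = 0.0

def forward(x):
    return float(x @ W + b)                   # predicted funniness score

def mse(pred, target):
    return (pred - target) ** 2

target = 1.4                                  # a mean funniness label
loss_before = mse(forward(pooled), target)

# one plain gradient step (illustration only; we used Adam in practice)
grad = 2 * (forward(pooled) - target)
W -= 1e-4 * grad * pooled
b -= 1e-4 * grad

loss_after = mse(forward(pooled), target)
print(loss_after < loss_before)               # the step reduces the error
```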
For task 2, after the fully connected layer, we added a softmax layer and used cross entropy as the loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Configuration", "sec_num": "3.4" }, { "text": "We experimented with optimizing three hyperparameters: learning rate (1e-06, 0.0001 or 0.001), batch size (16, 32 or 64) and number of epochs (3, 4 or 5). For the sake of brevity, we report only the three highest and lowest results for each task. The results reported are the mean of 5 runs, with standard deviation in parentheses. We noticed remarkably little variation in the task 1 results, regardless of the hyperparameter tweaking. The same learning rate is observed in both high- and low-scoring systems, and there is no observable pattern in terms of batch size, which suggests that another hyperparameter or variable may be needed to achieve better results. By contrast, in task 2, we saw much more variation, with a jump of almost 11% from the lowest to the highest-scoring configuration. A small learning rate of 0.0001, along with a relatively large batch size of 64, featured in all three top results, and the number of epochs was decisive, bringing a 5% increase at the optimal number of epochs, 4. We observed that the lowest learning rate also achieved the lowest scores: with too small a learning rate, the network appears not to converge, and varying the other hyperparameters does not change this. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "While transfer learning models have achieved very impressive results on a variety of NLP tasks, the performance on this humor task was not as high as anticipated. Perhaps in a multi-task learning setup we would have seen better performance. 
Nonetheless, our work demonstrates the importance of optimizing the hyperparameters of the finetuning layers, which achieved improvements on both tasks, particularly the classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis", "authors": [ { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" }, { "first": "Nikos", "middle": [], "last": "Pelekis", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Doulkeridis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "747--754", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulkeridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 747-754, Vancouver, Canada, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Computational humor", "authors": [ { "first": "K", "middle": [], "last": "Binsted", "suffix": "" }, { "first": "", "middle": [], "last": "Hendler", "suffix": "" }, { "first": "S", "middle": [], "last": "Van Den Bergen", "suffix": "" }, { "first": "Antinus", "middle": [], "last": "Coulson", "suffix": "" }, { "first": "", "middle": [], "last": "Nijholt", "suffix": "" }, { "first": "", "middle": [], "last": "Stock", "suffix": "" }, { "first": "", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "", "middle": [], "last": "Ritchie", "suffix": "" }, { "first": "H", "middle": [], "last": "Manurung", "suffix": "" }, { "first": "", "middle": [], "last": "Pain", "suffix": "" } ], "year": 2006, "venue": "IEEE intelligent systems", "volume": "21", "issue": "", "pages": "59--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "K Binsted, J Hendler, B van den Bergen, S Coulson, Antinus Nijholt, O Stock, C Strapparava, G Ritchie, R Manu- rung, H Pain, et al. 2006. Computational humor. IEEE intelligent systems, 21(suppl 2/2):59-69.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Overview of the haha task: Humor analysis based on human annotation at ibereval 2018", "authors": [ { "first": "Santiago", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Aiala", "middle": [], "last": "Ros\u00e1", "suffix": "" } ], "year": 2018, "venue": "IberEval@ SEPLN", "volume": "", "issue": "", "pages": "187--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santiago Castro, Luis Chiruzzo, and Aiala Ros\u00e1. 2018. Overview of the haha task: Humor analysis based on human annotation at ibereval 2018. 
In IberEval@ SEPLN, pages 187-194.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Overview of haha at iberlef 2019: Humor analysis based on human annotation", "authors": [ { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Etcheverry", "suffix": "" }, { "first": "Juan Jos\u00e9", "middle": [], "last": "Garat", "suffix": "" }, { "first": "Aiala", "middle": [], "last": "Prada", "suffix": "" }, { "first": "", "middle": [], "last": "Ros\u00e1", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Iberian Languages Evaluation Forum", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis Chiruzzo, S Castro, Mathias Etcheverry, Diego Garat, Juan Jos\u00e9 Prada, and Aiala Ros\u00e1. 2019. Overview of haha at iberlef 2019: Humor analysis based on human annotation. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain (9 2019).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Kenlm: Faster and smaller language model queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the sixth workshop on statistical machine translation", "volume": "", "issue": "", "pages": "187--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation, pages 187-197. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "President Vows to Cut Hair\": Dataset and Analysis of Creative Text Editing for Humorous Headlines", "authors": [ { "first": "Nabil", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "John", "middle": [], "last": "Krumm", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"President Vows to Cut Hair\": Dataset and Analysis of Creative Text Editing for Humorous Headlines. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semeval-2020 Task 7: Assessing humor in edited news headlines", "authors": [ { "first": "Nabil", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "John", "middle": [], "last": "Krumm", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Kautz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020. Semeval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.06146" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. 
arXiv preprint arXiv:1801.06146.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Humor analysis based on human annotation challenge at iberlef 2019: First-place solution", "authors": [ { "first": "Adilzhan", "middle": [], "last": "Ismailov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Iberian Languages Evaluation Forum", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adilzhan Ismailov. 2019. Humor analysis based on human annotation challenge at iberlef 2019: First-place solution. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Pro- ceedings, CEUR-WS, Bilbao, Spain (9 2019).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Svnit@ semeval 2017 task-6: Learning a sense of humor using supervised approach", "authors": [ { "first": "Rutal", "middle": [], "last": "Mahajan", "suffix": "" }, { "first": "Mukesh", "middle": [], "last": "Zaveri", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "411--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rutal Mahajan and Mukesh Zaveri. 2017. Svnit@ semeval 2017 task-6: Learning a sense of humor using super- vised approach. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 411-415.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Characterizing humour: An exploration of features in humorous texts", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Pulman", "suffix": "" } ], "year": 2007, "venue": "International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "337--347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Stephen Pulman. 2007. 
Characterizing humour: An exploration of features in humorous texts. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 337- 347. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to laugh (automatically): Computational models for humor recognition", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Computational Intelligence", "volume": "22", "issue": "2", "pages": "126--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Carlo Strapparava. 2006. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126-142.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Sarcasm detection using hybrid neural network", "authors": [ { "first": "Rishabh", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Prahal", "middle": [], "last": "Arora", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07414" ] }, "num": null, "urls": [], "raw_text": "Rishabh Misra and Prahal Arora. 2019. Sarcasm detection using hybrid neural network. 
arXiv preprint arXiv:1908.07414.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Uo upv: Deep linguistic humor detection in spanish social media", "authors": [ { "first": "Reynier", "middle": [], "last": "Ortega-Bueno", "suffix": "" }, { "first": "Carlos", "middle": [ "E" ], "last": "Muniz-Cuza", "suffix": "" }, { "first": "Jos\u00e9 E Medina", "middle": [], "last": "Pagola", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reynier Ortega-Bueno, Carlos E Muniz-Cuza, Jos\u00e9 E Medina Pagola, and Paolo Rosso. 2018. Uo upv: Deep linguistic humor detection in spanish social media. In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2018).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Uo upv2 at haha 2019: Bigru neural network informed with linguistic features for humor recognition", "authors": [ { "first": "Reynier", "middle": [], "last": "Ortega-Bueno", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Jos\u00e9 E Medina", "middle": [], "last": "Pagola", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Iberian Languages Evaluation Forum", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reynier Ortega-Bueno, Paolo Rosso, and Jos\u00e9 E Medina Pagola. 2019. Uo upv2 at haha 2019: Bigru neural network informed with linguistic features for humor recognition. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). 
CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain (9 2019).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ingeotec at ibereval 2018 task haha: \u00b5tc and evomsa to detect and score humor in texts", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Ortiz-Bejar", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Salgado", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Moctezuma", "suffix": "" }, { "first": "Sabino", "middle": [], "last": "Miranda-Jim\u00e9nez", "suffix": "" }, { "first": "Eric", "middle": [ "S" ], "last": "Tellez", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Ortiz-Bejar, Vladimir Salgado, Mario Graff, Daniela Moctezuma, Sabino Miranda-Jim\u00e9nez, and Eric S Tellez. 2018. Ingeotec at ibereval 2018 task haha: \u00b5tc and evomsa to detect and score humor in texts. 
In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2018).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Scikitlearn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. 
Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semeval-2017 task 6:# hashtagwars: Learning a sense of humor", "authors": [ { "first": "Peter", "middle": [], "last": "Potash", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "49--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Semeval-2017 task 6:# hashtagwars: Learning a sense of humor. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 49-57.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Laughing with hahacronym, a computational humor system", "authors": [ { "first": "Oliviero", "middle": [], "last": "Stock", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "21st conference of American Association for Artificial Intelligence (AAAI-06)", "volume": "", "issue": "", "pages": "1675--1678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliviero Stock and Carlo Strapparava. 2006. Laughing with hahacronym, a computational humor system. In 21st conference of American Association for Artificial Intelligence (AAAI-06), pages 1675-1678. 
AAAI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "ERNIE 2.0: A continual pre-training framework for language understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.12412" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. ERNIE 2.0: A continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "ERNIE 2.0: A continual pre-training framework for language understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu-Kun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "8968--8975", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A continual pre-training framework for language understanding.
In AAAI, pages 8968-8975.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Computationally recognizing wordplay in jokes", "authors": [ { "first": "Julia", "middle": [ "M" ], "last": "Taylor", "suffix": "" }, { "first": "Lawrence", "middle": [ "J" ], "last": "Mazlack", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Annual Meeting of the Cognitive Science Society", "volume": "26", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia M Taylor and Lawrence J Mazlack. 2004. Computationally recognizing wordplay in jokes. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 26.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Duluth at SemEval-2017 task 6: Language models in humor detection", "authors": [ { "first": "Xinru", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.08390" ] }, "num": null, "urls": [], "raw_text": "Xinru Yan and Ted Pedersen. 2017. Duluth at SemEval-2017 task 6: Language models in humor detection. arXiv preprint arXiv:1704.08390.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "type_str": "table", "text": "Example of Funlines Headline", "html": null, "content": "
Type | Text
" }, "TABREF1": { "num": null, "type_str": "table", "text": "Learned by BERT: [mask] Potter is a series [mask] fantasy novels [mask] by J.", "html": null, "content": "" }, "TABREF2": { "num": null, "type_str": "table", "text": "Baselines for Task 1", "html": null, "content": "
System | RMSE
Predict Constant Value | 0.7214
Predict Mean Value | 0.6968
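The two Task 1 baselines (always predict a fixed constant, always predict the training-set mean) can be sketched in a few lines. This is an illustrative sketch only: the `gold` ratings below are toy values, not the organisers' data, and the constant 1.0 is an arbitrary choice.

```python
import numpy as np

def rmse(pred, gold):
    """Root-mean-squared error between predicted and gold funniness ratings."""
    pred, gold = np.asarray(pred, dtype=float), np.asarray(gold, dtype=float)
    return float(np.sqrt(np.mean((pred - gold) ** 2)))

# Toy gold ratings (the real task rates edited headlines on a 0-3 scale).
gold = np.array([0.2, 1.4, 0.8, 2.6, 1.0])

# "Predict Mean Value": always output the training-set mean rating.
mean_baseline = rmse(np.full_like(gold, gold.mean()), gold)

# "Predict Constant Value": always output one fixed value, here 1.0.
constant_baseline = rmse(np.full_like(gold, 1.0), gold)
```

Since the mean minimises squared error, the mean baseline can never score worse than any constant baseline on the same data, which matches the ordering in the table.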
" }, "TABREF3": { "num": null, "type_str": "table", "text": "Baselines for Task 2", "html": null, "content": "
System | Accuracy
Predict Constant Value | 0.4475
200k Huff Post Headlines | 0.4314
28k Onion Headlines | 0.4546
" }, "TABREF4": { "num": null, "type_str": "table", "text": "Highest and Lowest Performing Parameters for Task 1", "html": null, "content": "
Learning Rate | Batch Size | Epochs | RMSE (SD)
0.0001 | 16 | 3 | 0.5806 (0.011)
0.0001 | 64 | 3 | 0.5817 (0.005)
0.0001 | 32 | 3 | 0.5829 (0.003)
0.001 | 16 | 3 | 0.5966 (0)
0.0001 | 16 | 5 | 0.6008 (0.010)
1e-06 | 64 | 3 | 0.6009 (0.001)
" }, "TABREF5": { "num": null, "type_str": "table", "text": "Highest and Lowest Performing Parameters for Task 2", "html": null, "content": "
Learning Rate | Batch Size | Epochs | Mean Accuracy (SD)
0.0001 | 64 | 4 | 0.59408 (0.017)
0.0001 | 64 | 5 | 0.54644 (0.051)
0.0001 | 64 | 3 | 0.5150 (0.053)
1e-06 | 32 | 3 | 0.4911 (0.006)
1e-06 | 64 | 5 | 0.4880 (0.005)
1e-06 | 64 | 3 | 0.4846 (0.002)
" } } } }
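The hyperparameter sweeps summarised in the two tables above amount to a grid search over learning rate, batch size, and epochs, reporting the mean and standard deviation over repeated runs. A minimal sketch of that loop, where `train_and_eval` is a hypothetical stand-in (the actual experiments fine-tuned ERNIE 2.0, not this toy scoring function):

```python
import itertools
import statistics

def train_and_eval(lr, batch_size, epochs, seed):
    # Hypothetical stand-in for fine-tuning and scoring on the dev set.
    # Deterministic toy formula so the sketch runs end to end.
    return 0.5 + 100 * lr - 0.001 * batch_size + 0.01 * epochs + 0.001 * seed

# Grid values taken from the tables above.
grid = {"lr": [1e-6, 1e-4, 1e-3], "batch_size": [16, 32, 64], "epochs": [3, 4, 5]}
seeds = [0, 1, 2]  # repeated runs per configuration

results = []
for lr, bs, ep in itertools.product(grid["lr"], grid["batch_size"], grid["epochs"]):
    scores = [train_and_eval(lr, bs, ep, s) for s in seeds]
    results.append((statistics.mean(scores), statistics.stdev(scores), lr, bs, ep))

# Rank configurations by mean dev score, as the tables do.
results.sort(reverse=True)
best_mean, best_sd, best_lr, best_bs, best_ep = results[0]
```

For the ranking task one would sort ascending on RMSE instead of descending on accuracy; everything else in the loop is unchanged.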