{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:20:28.827912Z" }, "title": "Hitachi at SemEval-2020 Task 7: Stacking at Scale with Heterogeneous Language Models for Humor Recognition", "authors": [ { "first": "Terufumi", "middle": [], "last": "Morishita", "suffix": "", "affiliation": {}, "email": "terufumi.morishita.wp@hitachi.com" }, { "first": "Gaku", "middle": [], "last": "Morio", "suffix": "", "affiliation": {}, "email": "gaku.morio.vn@hitachi.com" }, { "first": "Hiroaki", "middle": [], "last": "Ozaki", "suffix": "", "affiliation": {}, "email": "hiroaki.ozaki.yu@hitachi.com" }, { "first": "Toshinori", "middle": [ "Miyoshi" ], "last": "Hitachi", "suffix": "", "affiliation": {}, "email": "toshinori.miyoshi.pd@hitachi.com" }, { "first": "Ltd", "middle": [], "last": "Resarch", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the winning system for SemEval-2020 task 7: Assessing Humor in Edited News Headlines. Our strategy is Stacking at Scale (SaS) with heterogeneous pre-trained language models (PLMs) such as BERT and GPT-2. SaS first performs fine-tuning on numbers of PLMs with various hyperparameters and then applies a powerful stacking ensemble on top of the fine-tuned PLMs. Our experimental results show that SaS outperforms a naive average ensemble, leveraging weaker PLMs as well as high-performing PLMs. Interestingly, the results show that SaS captured non-funny semantics. Consequently, the system was ranked 1st in all subtasks by significant margins compared with other systems.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the winning system for SemEval-2020 task 7: Assessing Humor in Edited News Headlines. Our strategy is Stacking at Scale (SaS) with heterogeneous pre-trained language models (PLMs) such as BERT and GPT-2. SaS first performs fine-tuning on numbers of PLMs with various hyperparameters and then applies a powerful stacking ensemble on top of the fine-tuned PLMs. Our experimental results show that SaS outperforms a naive average ensemble, leveraging weaker PLMs as well as high-performing PLMs. Interestingly, the results show that SaS captured non-funny semantics. Consequently, the system was ranked 1st in all subtasks by significant margins compared with other systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The recognition of humor in text has been receiving much attention (Barbieri and Saggion, 2014; Hossain et al., 2019) . 
Accordingly, SemEval-2020 task 7, Assessing Humor in Edited News Headlines (Hossain et al., 2020a) , which aims at automatically recognizing humor in hand-edited news headlines, was held with two subtasks: Subtask 1, which aims at predicting a funny score for an edited news headline, and Subtask 2, which aims at predicting the funnier headline of two given edited headlines.", "cite_spans": [ { "start": 67, "end": 95, "text": "(Barbieri and Saggion, 2014;", "ref_id": "BIBREF1" }, { "start": 96, "end": 117, "text": "Hossain et al., 2019)", "ref_id": "BIBREF6" }, { "start": 195, "end": 218, "text": "(Hossain et al., 2020a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we pursue humor recognition with a large-scale stacking ensemble (hereafter Stacking at Scale or SaS), by leveraging pre-trained language models (PLMs). SaS is based on an ensemble method where a meta-estimator is trained to predict labels from the outputs of base models, finding the best combinations of the base models (Wolpert, 1992) . Hence, there are two steps in SaS: (i) fine-tuning numbers of heterogeneous PLMs, including BERT (Devlin et al., 2019) , GPT-2 (Radford et al., 2019) , RoBERTa (Liu et al., 2019) , Transformer-XL , XLNet , and XLM (Lample and Conneau, 2019) , with various hyperparameters, obtaining rich and diverse models, and (ii) training a meta-estimator on top of these PLMs.", "cite_spans": [ { "start": 337, "end": 352, "text": "(Wolpert, 1992)", "ref_id": "BIBREF18" }, { "start": 452, "end": 473, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 476, "end": 504, "text": "GPT-2 (Radford et al., 2019)", "ref_id": null }, { "start": 515, "end": 533, "text": "(Liu et al., 2019)", "ref_id": "BIBREF11" }, { "start": 569, "end": 595, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments, fusing up to 1750 PLMs in total, indicate that SaS successfully leverages weaker PLMs as well as high-performing PLMs. Consequently, our system is ranked 1st on both subtasks with significant margins to others. Interestingly, analyses show that SaS learned (relatively) non-funny semantics while still struggling to understand the funniest semantics. To the best of our knowledge, this is the first experiment that involves thousands of diverse of PLMs, revealing the current strengths and limitations of PLMs in automatic humor recognition. We also provide useful insights obtained from rich analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Work related to humor recognition has been done in recent years (Khodak et al., 2018 ; Barbieri and Saggion, 2014; Reyes et al., 2012) . Khodak et al. (2018) introduced a large-scale annotated corpus of sarcasm and provided baseline systems for sarcasm detection. Barbieri and Saggion (2014) widely investigated features for automatically detecting irony and humor. SemEval-2020 task 7 (Hossain et al., 2020a) aims at automatically detecting humor in hand-edited news headlines and was introduced by (Hossain et al., 2019) . 
We worked to solve this problem by utilizing a number of PLMs with stacking.", "cite_spans": [ { "start": 64, "end": 84, "text": "(Khodak et al., 2018", "ref_id": "BIBREF9" }, { "start": 87, "end": 114, "text": "Barbieri and Saggion, 2014;", "ref_id": "BIBREF1" }, { "start": 115, "end": 134, "text": "Reyes et al., 2012)", "ref_id": "BIBREF15" }, { "start": 137, "end": 157, "text": "Khodak et al. (2018)", "ref_id": "BIBREF9" }, { "start": 264, "end": 291, "text": "Barbieri and Saggion (2014)", "ref_id": "BIBREF1" }, { "start": 386, "end": 409, "text": "(Hossain et al., 2020a)", "ref_id": "BIBREF7" }, { "start": 500, "end": 522, "text": "(Hossain et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "As described above, Subtask 1 aims at predicting a \"funny score\", a real value in the range [0, 3] (0 = \"Not\", 1 = \"Slightly\", 2 = \"Moderately\", 3 = \"Funny\"), for an edited headline. We formalized the task as sentence-pair regression. Subtask 2 aims at predicting the funnier of two edited headlines originating from the same headline. We reuse the model of Subtask 1, that is, we estimate the scores of the two edited headlines and choose the one with the higher score. Figure 1 shows an overview of our proposed model architecture. Given a pair of edited and original headlines, we successively apply a PLM, BiLSTM layers, a dot-product attention layer, a pooling layer, and a feed-forward layer to predict funny scores. Preprocessing: We concatenate the two headlines. Tokenization is conducted by a PLM-specific tokenizer. We surround the edited tokens with two special marking tokens, \"<\" and \">\". We insert special tokens (e.g., [CLS] and [SEP]) as required for each PLM. The implementation is described in detail in Section 6.1.", "cite_spans": [ { "start": 975, "end": 980, "text": "[CLS]", "ref_id": null }, { "start": 985, "end": 990, "text": "[SEP]", "ref_id": null } ], "ref_spans": [ { "start": 517, "end": 525, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Task Formalization", "sec_num": "3" }, { "text": "To recognize intra-headline semantics, we first apply a headline-wise multi-layered BiLSTM (Graves et al., 2013) as follows:", "cite_spans": [ { "start": 89, "end": 110, "text": "(Graves et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Intra-and Inter-Headline Encoding", "sec_num": "4.1" }, { "text": "h^{(\mathrm{BiLSTM})}_i = \begin{cases} \mathrm{BiLSTM}\big(h^{(\mathrm{PLM})}_{\mathrm{start\_edit}}, \ldots, h^{(\mathrm{PLM})}_{\mathrm{end\_edit}}\big)_i, & \text{if } \mathrm{start\_edit} \le i \le \mathrm{end\_edit} \\ \mathrm{BiLSTM}\big(h^{(\mathrm{PLM})}_{\mathrm{start\_origin}}, \ldots, h^{(\mathrm{PLM})}_{\mathrm{end\_origin}}\big)_i, & \text{if } \mathrm{start\_origin} \le i \le \mathrm{end\_origin} \end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-and Inter-Headline Encoding", "sec_num": "4.1" }, { "text": "where h^{(\mathrm{PLM})}_i / h^{(\mathrm{BiLSTM})}_i are the PLM/BiLSTM representations of the i-th token, and (start_edit, end_edit) / (start_origin, end_origin) represent the starting/ending positions of the edited/original headlines. Next, the h^{(\mathrm{BiLSTM})}_i are fed into a global dot-product attention layer to capture inter-headline semantics, producing the final hidden embeddings h_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-and Inter-Headline Encoding", "sec_num": "4.1" }, { "text": "We employ a headline-wise pooling layer and predict the funny score with a feed-forward network (FFN):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Funny Score Regression", "sec_num": "4.2" }, { "text": "y = v^{\top} \, \mathrm{FFN}\big(\mathrm{POOLING}_{\mathrm{PLM}}(h_{\mathrm{start\_edit}:\mathrm{end\_edit}}) \oplus \mathrm{POOLING}_{\mathrm{PLM}}(h_{\mathrm{start\_origin}:\mathrm{end\_origin}})\big),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Funny Score Regression", "sec_num": "4.2" }, { "text": "where \oplus is the concatenation operation and POOLING_PLM is a PLM-specific embedding pooling function. For example, for BERT, it takes the embeddings of the first tokens of the two headlines (\"[CLS]\" and \"[SEP]\"). The details are in Table 7 of Appendix B. We trained the model with a mean squared error loss.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 229, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Funny Score Regression", "sec_num": "4.2" },
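{ "text": "To make the architecture concrete, the following is a minimal PyTorch-style sketch of Section 4. It is an illustrative sketch only: our actual implementation is built on jiant, and the hidden sizes, mean pooling, and the batch-shared span convention here are simplifying assumptions.
```python
import torch
import torch.nn as nn

class FunnyScoreHead(nn.Module):
    # Headline-wise BiLSTM -> dot-product attention -> pooling -> FFN (Sec. 4).
    def __init__(self, plm_dim=1024, lstm_dim=256):
        super().__init__()
        self.bilstm = nn.LSTM(plm_dim, lstm_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(4 * lstm_dim, lstm_dim), nn.ReLU())
        self.v = nn.Linear(lstm_dim, 1)  # maps the FFN output to a scalar funny score

    def forward(self, h_plm, edit_span, origin_span):
        # h_plm: (batch, seq_len, plm_dim) token embeddings from a fine-tuned PLM;
        # edit_span / origin_span: (start, end) token positions, shared across the batch.
        h_edit, _ = self.bilstm(h_plm[:, edit_span[0]:edit_span[1]])
        h_orig, _ = self.bilstm(h_plm[:, origin_span[0]:origin_span[1]])
        # Global dot-product attention between the two headlines (inter-headline semantics).
        attn = torch.softmax(h_edit @ h_orig.transpose(1, 2), dim=-1)
        h_edit = h_edit + attn @ h_orig
        # Mean pooling stands in for the PLM-specific pooling of Table 7.
        pooled = torch.cat([h_edit.mean(dim=1), h_orig.mean(dim=1)], dim=-1)
        return self.v(self.ffn(pooled)).squeeze(-1)  # trained with MSE loss
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Funny Score Regression", "sec_num": "4.2" },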
{ "text": "We further propose a large-scale ensemble method, called Stacking at Scale (SaS), based on a two-layer stacking ensemble (Wolpert, 1992), where the first-layer models (i.e., base models) are fine-tuned PLMs with different hyperparameter sets, and the second-layer model (i.e., meta-estimator) is another regression model. The meta-estimator can select the best combinations of the base models to produce more robust predictions. Figure 2 shows a schematic view and the algorithm steps of SaS. 
The key attributes are (i) using heterogeneous PLMs for the base models, (ii) generating diverse hyperparameter sets for the base models, and (iii) performing cross-validation (CV) during the whole process.", "cite_spans": [ { "start": 113, "end": 128, "text": "(Wolpert, 1992)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 408, "end": 416, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Stacking at Scale", "sec_num": "5" }, { "text": "Figure 2: Simplified example of Stacking at Scale with 3-fold cross-validation. The algorithm shown in the figure proceeds as follows. Train Base Models -- Step 1: divide Humicroedit into k folds. For each PLM type τ: Step 2: initialize the hyperparameter optimizer; then, for b = 1 ... B: Step 3: get a hyperparameter-set suggestion from the optimizer; Step 4: run k-fold cross-validation, fine-tuning the PLM with the suggested set on the training folds plus FunLines, and predicting funny scores on the held-out validation fold; Step 5: build a CV-averaging model that averages the predictions of the k fold-wise models, and concatenate the k validation-fold predictions into one leakage-free prediction vector over the whole training data; Step 6: feed the measured performance back to the optimizer. Select Base Models: for each PLM type, keep the top-performing n models and their corresponding prediction vectors. Train Meta Estimator -- Step 1: concatenate the kept prediction vectors into a matrix S of size (data size) x |M|; Step 2: train the meta-estimator on S. Apply Meta Estimator -- Step 1: predict funny scores on the test data with each of the |M| selected models; Step 2: concatenate the predictions into S'; Step 3: apply the meta-estimator to S' to obtain the final prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stacking at Scale", "sec_num": "5" }, { "text": "CV is used for accumulating label-leakage-free prediction data over the whole training dataset, which is used for the meta-estimator training, as well as for measuring accurate model performances so as to select better base models. 
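To make the recipe concrete, here is a minimal sketch of two-layer stacking with leakage-free out-of-fold predictions. It is a hedged illustration, not our pipeline: generic scikit-learn regressors stand in for fine-tuned PLMs, and refitting on the full data replaces SaS's CV-averaging of the fold-wise models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def stack(base_models, X, y, X_test, k=5):
    # Out-of-fold predictions over the whole training set: each row is predicted
    # by a model that never saw that row, so the meta-estimator input is leakage-free.
    S = np.column_stack([cross_val_predict(m, X, y, cv=k) for m in base_models])
    meta = RidgeCV().fit(S, y)  # linear meta-estimator, as in Section 5.2
    # At test time, SaS averages the k fold-wise models; refitting is a simplification.
    S_test = np.column_stack([m.fit(X, y).predict(X_test) for m in base_models])
    return meta.predict(S_test)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.uniform(0, 3, size=200)
scores = stack([GradientBoostingRegressor(), RandomForestRegressor()],
               X, y, rng.normal(size=(20, 8)))
```
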
Since SaS requires enormous computations, discussions on complexity are given in Appendix A.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Stacking at Scale", "sec_num": "5" }, { "text": "We pursue diversity for the base models by generating a large number of hyperparameter sets. To generate sets with reasonable performances in a relatively small number of trials, we utilize a hyperparameter optimization framework. It seeks the best hyperparameter set by performing an iterative search, (i) suggesting (possibly better) sets given the sets already found and their performances, and (ii) measuring the performances of the newly suggested sets (see Train Base Models in Figure 2). The performance of each set is measured on the basis of the mean squared errors (MSEs) averaged over the k validation folds of CV.", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Base Model Hyperparameter Generation", "sec_num": "5.1" }, { "text": "Since our purpose here is not only to find the best hyperparameter sets but to collect diverse sets with reasonable performances, we keep all the sets suggested during the search. After the search, we select the top-performing n sets from each PLM type (see Select Base Models in Figure 2).", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 288, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Base Model Hyperparameter Generation", "sec_num": "5.1" }, { "text": "Since the non-linearity in the dataset should already have been captured by the PLMs, we use simple linear regression models for the meta-estimator. Suppose that the scores for a headline predicted by the N base models are \hat{y}_1, \ldots, \hat{y}_N. The meta-estimator learns the weights w_i in the linear regression problem", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Estimator Training and Inference", "sec_num": "5.2" }, { "text": "\hat{y}_{\mathrm{meta}} = w_0 + w_1 \hat{y}_1 + \cdots + w_N \hat{y}_N \qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Estimator Training and Inference", "sec_num": "5.2" }, { "text": "by using an MSE loss with a regularization term. The input dimensionality N is (# of PLM types × n) because we pick the top n hyperparameter sets for each PLM type. For example, for Train Meta Estimator in Figure 2, |M| = |S| = (# of PLM types × n).", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Meta-Estimator Training and Inference", "sec_num": "5.2" },
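{ "text": "As a worked example with the numbers of our submitted system (SaS-Ridge with n = 20; see Section 6.2), the meta-estimator has N = 7 \times 20 = 140 inputs, fed by 7 \times 20 \times 5 = 700 fine-tuned PLMs once the CV variants are counted: \hat{y}_{\mathrm{meta}} = w_0 + \sum_{i=1}^{140} w_i \hat{y}_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Estimator Training and Inference", "sec_num": "5.2" },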
{ "text": "Overall, to predict funny scores with SaS, we (i) feed a headline pair into the (# of PLM types × n × k) base models, obtaining (# of PLM types × n × k) predictions in total, (ii) take the CV-wise average of the predictions, reducing the dimension to (# of PLM types × n), and (iii) feed the CV-averaged predictions into the meta-estimator to get the final prediction (see Apply Meta Estimator in Figure 2).", "cite_spans": [], "ref_spans": [ { "start": 389, "end": 397, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Meta-Estimator Training and Inference", "sec_num": "5.2" }, { "text": "Offline Performance Measurements: Throughout our experiments, we estimated the performances of the models using the root mean squared error (RMSE) aggregated over the validation data of k=5-fold cross-validation 1 (hereafter RMSE-CV). Note that the aggregation is done over k=5 different sets of validation data, so we can measure the performances robustly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "6.1" }, { "text": "Base Models: Table 1 shows the seven employed PLMs. We employed Optuna (Akiba et al., 2019) as the hyperparameter optimization framework. We generated 50 hyperparameter sets for each PLM. Therefore, in total, 350 models (= 7 types of PLMs × 50 sets of hyperparameters), or 1750 models including the CV variants (×5 CV-folds), were built for the experiments. Details of the hyperparameters are given in Appendix B. We tried many choices of n, ranging from 1 to 50, to minimize the RMSE-CV. Meta-Estimators: Two types of meta-estimators were employed: (i) Lasso regression (Tibshirani, 1996), i.e., linear regression with L1 regularization β|w|, and (ii) Ridge regression (Hoerl and Kennard, 1970), i.e., linear regression with L2 regularization β||w||^2. The strength parameter β was chosen from the default search values of scikit-learn (Pedregosa et al., 2011) to minimize the RMSE-CV. Data: We used Humicroedit (Hossain et al., 2019) and FunLines (Hossain et al., 2020b), which are distributed officially. They have the same data format; however, they have slightly different label distributions (Hossain et al., 2020b). 
The official splits (i.e., train and dev) of Humicroedit are all concatenated into a single dataset, on which the cross-validation folds are built.", "cite_spans": [ { "start": 71, "end": 91, "text": "(Akiba et al., 2019)", "ref_id": "BIBREF0" }, { "start": 570, "end": 588, "text": "(Tibshirani, 1996)", "ref_id": "BIBREF16" }, { "start": 670, "end": 695, "text": "(Hoerl and Kennard, 1970)", "ref_id": "BIBREF5" }, { "start": 839, "end": 863, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF12" }, { "start": 915, "end": 937, "text": "(Hossain et al., 2019)", "ref_id": "BIBREF6" }, { "start": 951, "end": 974, "text": "(Hossain et al., 2020b)", "ref_id": "BIBREF8" }, { "start": 1101, "end": 1123, "text": "(Hossain et al., 2020b", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Settings", "sec_num": "6.1" }, { "text": "For the training folds, we used both relevant datasets, Humicroedit (Hossain et al., 2019) and FunLines (Hossain et al., 2020b), to maximally capture funny semantics. However, for the validation folds, we used only Humicroedit, because the test data instances were taken only from Humicroedit and we wanted to approximate the model performances on the test set. Implementation: We implemented the base models with jiant (Pruksachatkun et al., 2020), a transfer learning framework, which in turn utilizes Hugging Face's Transformers library (Wolf et al., 2019) for its implementations of the PLMs, the PLM-specific tokens (e.g., \"[CLS]\" and \"[SEP]\" for BERT), and the PLM-specific tokenizers. We implemented the meta-estimators using scikit-learn (Pedregosa et al., 2011). We employed the RidgeCV and LassoCV functions for the Ridge/Lasso regressions. Both functions automatically find the best regularization strengths β. Computational Resource: We employed up to 800 Volta (16-GB) GPUs offered by ABCI 2 .", "cite_spans": [ { "start": 64, "end": 86, "text": "(Hossain et al., 2019)", "ref_id": "BIBREF6" }, { "start": 100, "end": 123, "text": "(Hossain et al., 2020b)", "ref_id": "BIBREF8" }, { "start": 420, "end": 448, "text": "(Pruksachatkun et al., 2020)", "ref_id": null }, { "start": 541, "end": 560, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF17" }, { "start": 736, "end": 760, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "6.1" },
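{ "text": "For illustration, a minimal sketch of the meta-estimator step with scikit-learn (hedged: S is the matrix of CV-averaged base-model predictions from Figure 2, and the shapes and random data here are placeholders, not our results):
```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV

rng = np.random.default_rng(0)
S = rng.uniform(0.0, 3.0, size=(1000, 140))  # (# samples, 7 PLM types x n=20) stacked predictions
y = rng.uniform(0.0, 3.0, size=1000)         # gold funny scores

ridge = RidgeCV().fit(S, y)                   # picks the regularization strength internally
lasso = LassoCV(cv=5).fit(S, y)               # picks its alpha by cross-validation
active = int(np.sum(np.abs(lasso.coef_) >= 0.01))  # 'active weights', as counted for Figure 4
final_scores = ridge.predict(S)
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "6.1" },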
{ "text": "Official Ranking: We submitted the SaS-Ridge (n=20) system, i.e., SaS with the Ridge meta-estimator using n=20 hyperparameter sets per PLM type, which performed the best in our pre-submission experiments. The model utilized 700 base models (= 7 types of PLMs × 20 sets of hyperparameters × 5 CV-folds). The official ranking presented in Table 2 shows that our system is ranked 1st on both subtasks by significant margins over the other systems. Hereafter, we analyze our system on Subtask 1, since we tuned our systems on it. How Powerful is SaS?: We show ablation results for each PLM for the n = 1 systems in Table 3. Most of the stacking models (shown as \"SaS\") performed better than the single models (\"single\"), showing the effectiveness of fusing heterogeneous PLMs. Removing a PLM from SaS almost always degrades the performance, regardless of the native performance of the removed model. This implies that not only the strongest PLMs but also the weaker PLMs are important for SaS. Hereafter, we use the single RoBERTa model as our baseline, which is the strongest among the single models. Note that this baseline is competitive, since it uses the best hyperparameters found in the 50-step hyperparameter optimization. Figure 3 shows the change in performance with the total number of base models without CV variants (i.e., 7 types of PLMs × n). SaS-Ridge achieved its best performance at around 100 models, and SaS-Lasso kept its performance high beyond the stacking of 100 models, while the naive average ensemble got worse. This implies that nearly 100 or more PLMs are required to achieve the best performance with SaS. Also, SaS successfully utilized weaker models without harming the performance, while the naive average ensemble failed to do so. To validate this, we plotted the numbers of active weights (i.e., the number of w_i (i ≥ 1) in eq. (1) that meet the condition |w_i| ≥ a threshold of 0.01) in Figure 4. Since Lasso is a sparse linear model, it constantly activated 80 to 100 PLMs, while Ridge's active weights increased linearly. This result indicates that a sparse model can automatically adjust the number of PLMs to be used.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 338, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 590, "end": 597, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1209, "end": 1217, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "Footnote 1: Let MSE_i be the MSE and n_i the number of instances of the i-th-fold validation data. The aggregated RMSE is \mathrm{RMSE\text{-}CV} = \sqrt{ \frac{1}{\sum_{i=1}^{k} n_i} \sum_{i=1}^{k} n_i \times \mathrm{MSE}_i }. Footnote 2: AI Bridging Cloud Infrastructure provided by the National Institute of Advanced Industrial Science and Technology (AIST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "Table 1 (PLM / model): BERT (Devlin et al., 2019) large-uncased; GPT-2 (Radford et al., 2019) medium / large; RoBERTa (Liu et al., 2019) large; Transformer-XL wt103; XLNet large-cased; XLM (Lample and Conneau, 2019) en-2048.", "cite_spans": [ { "start": 135, "end": 156, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 177, "end": 199, "text": "(Radford et al., 2019)", "ref_id": "BIBREF14" }, { "start": 223, "end": 241, "text": "(Liu et al., 2019)", "ref_id": "BIBREF11" }, { "start": 291, "end": 317, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "Table 4: Some sample headlines on which our best system, SaS-Lasso (n=50), reduced absolute errors by large margins (top) and by small (or sometimes negative) margins (bottom). Besides headlines, we show the gold funny score (\"gold\"), the prediction made by our system (\"SaS\") and by single RoBERTa (\"RoBERTa\"), and the error reduction over baseline RoBERTa (\"reduction\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "Which Type of PLM is Useful?: We obtained contribution scores for each PLM type via the meta-estimator's weights. Figure 5 shows the PLM-wise sums of absolute weights, \sum_{i \in \mathrm{PLM}} |w_i|, where w_i are the weights in eq. (1). 
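This bookkeeping is small; a hedged sketch (coef and plm_of are hypothetical names — in practice the weights come from the fitted meta-estimator, e.g., lasso.coef_):

```python
import numpy as np

def plm_contributions(coef, plm_of):
    # Figure 5-style score: sum of |w_i| over the base models of each PLM type.
    scores = {}
    for w, plm in zip(coef, plm_of):
        scores[plm] = scores.get(plm, 0.0) + abs(w)
    return scores

# e.g., plm_contributions(lasso.coef_, ["RoBERTa"] * 20 + ["GPT-2"] * 20 + ...)
```
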
RoBERTa and GPT-2 seemed to be the most preferable models, consistent with the results of excluding the models shown in Table 3 (shown as w/o). However, the plot also indicates that the stacking succeeded in leveraging weaker models as well as the best models. What Did SaS Solve?: Figure 6 shows a sample distribution over the gold funny scores (top) and the mean absolute error (i.e., e SaS = |\u0177 meta \u2212 y gold |) and the mean absolute error reductions (i.e., e RoBERTa \u2212 e SaS ) over the single RoBERTa baseline (bottom). SaS improved performance for not-to slightly-funny ([0-1.5]) headlines, while having similar or degraded performance for the funnier ([1.5-3.0]) headlines. In short, SaS learned (relatively) non-funny semantics. Since these headlines are the majority, SaS also gained overall performance improvements.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 314, "end": 322, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "Case Study: Why Did SaS Learn Non-Funny Semantics? Table 4 shows sample headlines. The top rows show samples on which our best system, SaS-Lasso (n=50), reduced the errors over the single RoBERTa baseline by large margins. These headlines had small funny scores, and it seems that we can understand the non-funniness from the headline text itself without needing much external knowledge, and, in particular, some of the funniness comes only from the bizarreness or incongruity of the edited headlines. It is natural for PLMs to detect these types of non-funniness because they are trained on large amounts of corpora and could have learned to detect the unnaturalness of the given texts. We estimate that SaS enhanced this ability by combining the heterogeneous PLMs. The bottom rows show headlines with small (or sometimes negative) error reductions. These headlines had large funny scores and seemed to be expressing irony. Irony does not express intentions directly in text and rather relies on a reader's inference using sufficient common sense or background knowledge, especially on current topics. Given that SaS could have chosen the best combination of PLMs and that even SaS had no performance gain for such headlines, it is likely that such knowledge is not contained in any of the PLMs. This suggests the current limitation of PLMs on the humor recognition tasks.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6.2" }, { "text": "In this paper, we proposed a top performing model for the task of humor recognition. We fused thousands of pre-trained language models by Stacking at Scale. Experimental results showed the incredible performance of the Stacking at Scale, and at the same time, also revealed the current limitation of pre-trained language models. For future work, we will explore injecting common sense or background knowledge into models to understand humor better. parameter value P 7 Seven types of PLM are used. (see Table 1 ) B 50 50 hyperparameter sets per PLM types are generated. k 5 5-fold cross-validation is employed. 
n 50 Our best system, Lasso(n = 50) uses 50 base models per PLM type.", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 510, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Humicroedit + FunLines The concatenation is used. About 17k instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtrain", "sec_num": null }, { "text": "Official test data About 3k instances. In our setting, the number of models trained (N train ) was as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "N train = P \u00d7 B \u00d7 k = 7 \u00d7 50 \u00d7 5 = 1750", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "Thus, the total training time could be estimated theoretically as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "T base train (D train ) = N train \u00d7 \u03c4 base train (D train ) \u223c 1750 \u00d7 0.5 hours = 875 hours As we observed, with 200 Volta GPUs, it took \u223c 5 hours to train the whole SaS model, which is of the same order as the theoretically predicted training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "The number of models engaged in the ensemble (N infer ) was estimated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "N infer = P \u00d7 n \u00d7 k = 7 \u00d7 50 \u00d7 5 = 1750", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "We have not measured the total inference time since the inference was executed at the same time as the training with our implementation. Therefore, we show only the theoretically expected inference time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "T base train (D train ) = N train \u00d7 \u03c4 base train (D train ) \u223c 1750 \u00d7 20 secs \u223c 10 hours B Base-Model Hyperparameter Generation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "In this section, we describe the setup in detail and the results of the base-model hyperparameter generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dtest", "sec_num": null }, { "text": "We generated hyperparameter sets using Optuna (Akiba et al., 2019) , a hyperparameter optimization framework. We used version 1.10. We started each optimization process using Optuna's default seed. For each PLM type, we generated 50 hyperparameter sets. At each step of the optimization process, we tried 5 hyperparameter sets in parallel. Therefore, in total, 10 steps were needed to try 50 hyperparameter sets. Table 6 and Table 7 show the specific hyperparameter setups. Table 6 shows the hyperparameters and their (i) search range, (ii) the initial values, and (iii) the Optuna sampling functions used. 
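The search loop itself is small; the following is a hedged Optuna sketch of how the 50 sets per PLM type were generated. The search space shown is abridged from Table 6 (the learning-rate range is illustrative), fine_tune_cv is a hypothetical stand-in for fine-tuning one PLM with k-fold CV and returning the CV-averaged MSE, and exact API details may differ across Optuna versions:

```python
import optuna

def fine_tune_cv(hparams):
    # Hypothetical stand-in: pretend-evaluate a hyperparameter set.
    # The real code fine-tunes a PLM on the training folds and returns the CV-averaged MSE.
    return abs(hparams["lr"] - 3e-5) * (1 + 0.1 * (hparams["max_epochs"] == 4))

def objective(trial):
    hparams = {
        "optimizer": trial.suggest_categorical("optimizer", ["adam", "bert_adam"]),
        "max_epochs": trial.suggest_categorical("max_epochs", [4, 8, 16]),
        "lr": trial.suggest_float("lr", 1e-6, 1e-3, log=True),
    }
    return fine_tune_cv(hparams)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)        # 50 hyperparameter sets per PLM type
all_sets = [t.params for t in study.trials]   # keep every suggested set, not only the best
```
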
vlin et al., 2019) \"adam\" or \"bert_adam\" \"adam\" categorical gradient clipping The value of gradient L2-norm clipping -5.0 (fixed) max epochs", "cite_spans": [ { "start": 46, "end": 66, "text": "(Akiba et al., 2019)", "ref_id": "BIBREF0" }, { "start": 607, "end": 625, "text": "vlin et al., 2019)", "ref_id": null } ], "ref_spans": [ { "start": 413, "end": 420, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 425, "end": 432, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 474, "end": 481, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "B.1 Setup", "sec_num": null }, { "text": "The max trainin epochs. 4, 8 or 16 8 categorical early stopping patience The patience of early stopping. The validation check is done with the intervals of 200 gradient steps. (Devlin et al., 2019) large-uncased 1 first token GPT-2 (Radford et al., 2019) medium / large 16 / 2 last token RoBERTa (Liu et al., 2019) large 16 average Transformer-XL wt103 4 average XLNet large-cased 16 last token XLM (Lample and Conneau, 2019) en-2048 1 average Table 7 : PLM-specific fixed hyperparameters. Batch size and embedding pooling function mentioned in Section 4.2 are shown. \"First token\" takes first token embedding from headline, \"last token\" takes last token embedding, and \"average\" takes average of all token embeddings in headline.", "cite_spans": [ { "start": 176, "end": 197, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 232, "end": 254, "text": "(Radford et al., 2019)", "ref_id": "BIBREF14" }, { "start": 296, "end": 314, "text": "(Liu et al., 2019)", "ref_id": "BIBREF11" }, { "start": 399, "end": 425, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 444, "end": 451, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "B.1 Setup", "sec_num": null }, { "text": "For the readers' convenience, we show the best hyperparameters found in our search in Table 8 . Note that readers can reproduce these results by executing the hyperparameter search with the experimental setup described in the previous section.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "B.2 Results -The Best Values -", "sec_num": null }, { "text": "BERT ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Results -The Best Values -", "sec_num": null } ], "back_matter": [ { "text": "Computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by National Institute of Advanced Industrial Science and Technology (AIST) was used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "In this section, we discuss the time complexity of the Stacking at Scale (SaS) algorithm. We (i) first induce the theoretical time complexity of SaS and (ii) show measurements of the actual running time observed in our experiments, which is in accordance with those predicted by the theory.The discussions are not that rigorous or exhaustive; however, we believe they are enough to offer readers rough estimations of the time complexity of SaS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Time Complexity of Stacking at Scale", "sec_num": null }, { "text": "We estimate the time complexity of SaS, expressed by that of a single base-model system. The training phase complexity [eq. (2) ] and the inference phase complexity [eq. (3) ] are induced. 
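Restated in one place (a hedged reconstruction of eqs. (2) and (3) from the dominant-term approximations derived below):

$$T^{\mathrm{SaS}}_{\mathrm{train}}(D_{\mathrm{train}}) \approx P B k \, \tau^{\mathrm{base}}_{\mathrm{train}}(D_{\mathrm{train}}) \quad (2), \qquad T^{\mathrm{SaS}}_{\mathrm{infer}}(D_{\mathrm{test}}) \approx P n k \, \tau^{\mathrm{base}}_{\mathrm{infer}}(D_{\mathrm{test}}) \quad (3)$$

where, in our setting, $P B k = P n k = 7 \times 50 \times 5 = 1750$.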
In both cases, the dominant term comes from the base-model hyperparameter generation or inference. Thus, the SaS time complexity is (# of base models engaged in a phase) times larger than that of a single base-model system.We first decompose the SaS algorithm into several steps and induce the time complexity of each step independently. Then, we aggregate the complexities to calculate the overall complexity of SaS.", "cite_spans": [ { "start": 119, "end": 127, "text": "[eq. (2)", "ref_id": null }, { "start": 165, "end": 173, "text": "[eq. (3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Theoretical Expressions", "sec_num": null }, { "text": "Let \u03c4 base train (D train ) be the time needed to train a single base model on the training data D train with a specific setup (say, a specific PLM type, number of epochs, specific machine resource used, etc.). Let N train be the number of base models (i.e., the number of unique hyperparameter sets) to be trained. The time complexity of base-model hyperparameter generation T base train (D train ) is estimated as follows. Figure 2 , N train is expressed as:where P is the number of PLM types, B the hyperparameter-optimization step budget per PLM type, and k the number of cross-validation folds.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 433, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A.1.1 Base-Model Hyperparameter Generation", "sec_num": null }, { "text": "Let \u03c4 base infer (D test ) be the time needed to execute inference with a single base model over the test data D test with a specific setup. The time complexity of base model inference T base infer (D test ) is estimated as follows.the number of models per PLM type that are engaged in SaS. Then, N infer is expressed as follows.Since the inputs of the meta-estimators are the predictions over the training data D train made by the base models, we must execute the inference of the base model over the training data D train beforehand. Therefore, the time complexity of meta-estimator training T meta train is expressed as:is the time needed to train a meta-estimator with a specific setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.2 Base Model Inference", "sec_num": null }, { "text": "The time complexity of meta-estimator inference T meta infer is expressed as:is the time needed to execute the inference of the meta-estimator for a specific setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.4 Meta-Estimator Inference", "sec_num": null }, { "text": "Overall, the time complexity of SaS training T SaS train (D train ) is as follows.\u03c4 base train (D train ) In many cases, the second term in the brackets is negligible given that n B \u2264 1 and that \u03c4 base infer (D train )/\u03c4 base train (D train ) 1 often holds since, (i) in the training phase, we iterate over the dataset for several times, while, in the inference phase, we iterate only once, and, (ii) in the training phase, we need to back-propagate the gradients, while, in the inference, we do not. The third term can be negligible in the case where there are numbers of base models to train (N train 1) or the meta-estimator is \"lighter\" than the base models (\u03c4 meta train (D train )/\u03c4 base train (D train ) 1; this indeed holds for our experiments since the base models are large neural networks, while the meta-estimators are just linear regressions. 
Thus, T SaS train (D train ) can be approximated only by the first term (i.e., base model training) as follows.Thus, the overall training complexity of SaS is PBk times larger than that of a base model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.5 Overall SaS Training", "sec_num": null }, { "text": "The time complexity of SaS inference T SaS infer (D test ) is the same as that of the meta-estimator's inferenceAgain, the second term can be negligible in the case where there are numbers of base models engaged in SaS (N infer 1) or the meta-estimator is lighter than the base models (\u03c4 meta train (D test )/\u03c4 base train (D test ) 1). Thus, T SaS infer (D test ) can be approximated only by the first term (i.e., base model inference) as follows.Thus, the overall inference complexity of SaS is Pnk times larger than that of a base model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.6 Overall SaS Inference", "sec_num": null }, { "text": "In this section, we show the observed running times of the SaS algorithm. Please note that the results are not rigorous or exhaustive. The purpose here is rather to offer readers a taste of the order estimations of the computational time and resources needed to reproduce the SaS experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Measurements of Running Times", "sec_num": null }, { "text": "We reposit the setting of the experiments in Table 5 . For computational resources, we trained our base models using Volta (16-GB) GPUs (single model per single GPU). Some large models [i.e., and XLM] were trained on Volta (32-GB) GPUs. On average, the training time seemed to be about 30 minutes, that is:\u03c4 base train (D train ) \u223c 0.5 hours, Note that this estimation is really rough since \u03c4 base train (D train ) depends on many factors including the PLM type, number of training epochs (mostly 8 or 16), batch size (1 to 16), and that the above value is only the average (or \"marginal\") actual running times.With the same setting as the training, the inference time was observed to be:\u03c4 base infer (D test ) \u223c 20 secs, which is much smaller than th training time, as expected.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "A.2.1 Measurement of \u03c4", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Optuna: A nextgeneration hyperparameter optimization framework", "authors": [ { "first": "Takuya", "middle": [], "last": "Akiba", "suffix": "" }, { "first": "Shotaro", "middle": [], "last": "Sano", "suffix": "" }, { "first": "Toshihiko", "middle": [], "last": "Yanase", "suffix": "" }, { "first": "Takeru", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Koyama", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19", "volume": "", "issue": "", "pages": "2623--2631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next- generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, pages 2623-2631, New York, NY, USA. 
ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic detection of irony and humour in twitter", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2014, "venue": "ICCC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesco Barbieri and Horacio Saggion. 2014. Automatic detection of irony and humour in twitter. In ICCC.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2978--2988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer- XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy, July. Association for Compu- tational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Speech recognition with deep recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Abdel", "middle": [ "Rahman" ], "last": "Mohamed", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "6645--6649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex. Graves, Abdel rahman. Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. 
In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645-6649.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ridge regression: Biased estimation for nonorthogonal problems", "authors": [ { "first": "A", "middle": [ "E" ], "last": "Hoerl", "suffix": "" }, { "first": "R", "middle": [ "W" ], "last": "Kennard", "suffix": "" } ], "year": 1970, "venue": "", "volume": "12", "issue": "", "pages": "55--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. E. Hoerl and R. W. Kennard. 1970. Ridge regression: Biased estimation for nonorthogonal problems. Techno- metrics, 12:55-67.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "president vows to cut hair\": Dataset and analysis of creative text editing for humorous headlines", "authors": [ { "first": "Nabil", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "John", "middle": [], "last": "Krumm", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"president vows to cut hair\": Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142, Minneapolis, Minnesota, June. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semeval-2020 Task 7: Assessing humor in edited news headlines", "authors": [ { "first": "Nabil", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "John", "middle": [], "last": "Krumm", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Kautz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020a. Semeval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stimulating creativity with FunLines: A case study of humor generation in headlines", "authors": [ { "first": "Nabil", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "John", "middle": [], "last": "Krumm", "suffix": "" }, { "first": "Tanvir", "middle": [], "last": "Sajed", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Kautz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "256--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz. 2020b. Stimulating creativity with FunLines: A case study of humor generation in headlines. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics: System Demonstrations, pages 256-262, Online, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A large self-annotated corpus for sarcasm", "authors": [ { "first": "Mikhail", "middle": [], "last": "Khodak", "suffix": "" }, { "first": "Nikunj", "middle": [], "last": "Saunshi", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Vodrahalli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018. A large self-annotated corpus for sarcasm. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RoBERTa: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scikitlearn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "2020. jiant: A software toolkit for research on general-purpose text understanding models", "authors": [ { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Yeres", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Tenney", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": null, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "109--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yada Pruksachatkun, Phil Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, and Samuel R. Bowman. 2020. jiant: A software toolkit for research on general-purpose text understanding models. In Pro- ceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 109-117, Online, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "From humor recognition to irony detection: The figurative language of social media", "authors": [ { "first": "Antonio", "middle": [], "last": "Reyes", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Davide", "middle": [], "last": "Buscaldi", "suffix": "" } ], "year": 2012, "venue": "Data Knowledge Engineering", "volume": "74", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonio Reyes, Paolo Rosso, and Davide Buscaldi. 2012. From humor recognition to irony detection: The figurative language of social media. Data Knowledge Engineering, 74:1-12, 04.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Regression shrinkage and selection via the Lasso", "authors": [ { "first": "R", "middle": [], "last": "Tibshirani", "suffix": "" } ], "year": 1996, "venue": "Journal of the Royal Statistical Society (Series B)", "volume": "58", "issue": "", "pages": "267--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Tibshirani. 1996. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society (Series B), 58:267-288.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "HuggingFace's transformers: State-of-theart natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's transformers: State-of-the- art natural language processing. 
ArXiv, abs/1910.03771.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Stacked generalization", "authors": [ { "first": "David", "middle": [ "H" ], "last": "Wolpert", "suffix": "" } ], "year": 1992, "venue": "Neural Networks", "volume": "5", "issue": "", "pages": "241--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5:241-259.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Russ", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753-5763. Curran Associates, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Overview of proposed model: fine-tuning a pre-trained language model (PLM) on sentence-pair regression with PLM-specific pooling (example input: [CLS] President Vows to Cut < Hair > . [SEP] President Vows to Cut Taxes .)", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Change in number of active weights of SaS for number of base models without CV variants. Models were trained on our k=5 CV's training data. Values shown are averages over CV variant models. (Legend: RoBERTa, GPT-2 (L), BERT, GPT-2.)", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "PLM-wise sum of absolute weights (Σ_{i∈PLM} |w_i|) of best SaS models, i.e., SaS-Lasso (n=50) and SaS-Ridge (n=20). Models were trained on our k=5 CV's training data. Values shown are averages over CV variant models.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Top: Gold funny-score distribution. Bottom: Mean absolute error and mean absolute error reduction of our best model, SaS-Lasso (n=50), over single RoBERTa. Bin width was 0.2.", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "content": "
Algorithm: Stacking at Scale (pseudo-code; only the input specification below is recoverable from the extraction)
", "html": null, "text": "Input: Humicroedit data, FunLines data, k (cross-validation folds), , B (trial budget), HyperparamOptimizer, PLMs (a set of PLM types)", "num": null }, "TABREF2": { "type_str": "table", "content": "
\"model\" represents specific pre-trained
model variants of HuggingFace's Trans-
formers library (Wolf et al., 2019)
.
Subtask 1                | Subtask 2
team            RMSE     | team            accuracy
Hitachi (ours)  0.49725  | Hitachi (ours)  0.67428
Amobee          0.50726  | Amobee          0.66058
YNU-HPCC        0.51737  | YNU-HPCC        0.65906
MLEngineer      0.51966  | lmml            0.64688
lmml            0.52027  | PALI            0.64460
", "html": null, "text": "Seven PLMs employed in SaS.", "num": null }, "TABREF3": { "type_str": "table", "content": "
Performances on official test data are shown.
modelRMSE-CV
SaS-Ridge (n=1)0.4998
average ensemble0.5071
SaS-Ridge (n=1) w/o BERT0.5004
SaS-Ridge (n=1) w/o GPT-2 (M)0.5003
SaS-Ridge (n=1) w/o GPT-2 (L)0.5014
SaS-Ridge (n=1) w/o RoBERTa0.5052
SaS-Ridge (n=1) w/o Transformer-XL 0.4998
SaS-Ridge (n=1) w/o XLNet0.4999
SaS-Ridge (n=1) w/o XLM0.5001
single BERT0.5237
single GPT-2 (M)0.5217
single GPT-2 (L)0.5168
single RoBERTa0.5109
single Transformer-XL0.5565
single XLNet0.5536
single XLM0.5349
", "html": null, "text": "Official results for top five teams.", "num": null }, "TABREF4": { "type_str": "table", "content": "
Performance comparison of various models. RMSE aggregated over k=5 CV's validation data (RMSE-CV) is shown. Top: ensemble models (SaS and average ensemble) over n = 1 base models from each PLM type. Middle: SaS models without a specific PLM. Bottom: single base models.
", "html": null, "text": "", "num": null }, "TABREF5": { "type_str": "table", "content": "
headline | gold | SaS | RoBERTa | reduction
U.S. ambassador to U.N. says Russia tearing down global order delivery | 0.00 | 0.63 | 1.11 | 0.48
North Carolina Governor Says He 'll Issue Executive Order For Full LGBTQ Rights alphabet | 1.40 | 1.12 | 0.62 | 0.50
Hillary Clinton : Democrats Who Are Pro-Life Must Vote strip to Promote Abortion | 1.60 | 1.38 | 0.46 | 0.92
'I certainly meant no disrespect respect' : Kellyanne Conway addresses her pose in the Oval Office photo | 2.40 | 0.77 | 0.74 | 0.03
Rocks falling into oceans , not climate change , causing sea levels to rise according to one congressman toddler | 2.60 | 0.85 | 0.82 | 0.03
Trump 's ' strategy ' on Afghanistan Presidency : Let the next president figure it out | 2.60 | 0.80 | 0.99 | -0.19
Hillary Clinton Supporters Filed a Complaint Against Bernie Sanders themselves - And Lost | 3.00 | 0.76 | 0.91 | -0.15
", "html": null, "text": "Example headlines with gold funny score, SaS prediction, single-RoBERTa prediction, and the resulting error reduction of SaS over RoBERTa.", "num": null }, "TABREF6": { "type_str": "table", "content": "
A.2.2 Measurement of T
", "html": null, "text": "Setup of our experiments", "num": null }, "TABREF7": { "type_str": "table", "content": "
", "html": null, "text": "", "num": null }, "TABREF9": { "type_str": "table", "content": "
PLM type | batch size | pooling
BERT
", "html": null, "text": "Setup of each hyperparameter. Search range, initial value, and Optuna sampling function used are shown. Note that some hyperparameters are not searched for but fixed.", "num": null }, "TABREF11": { "type_str": "table", "content": "", "html": null, "text": "Best hyperparameters found in hyperparameter search. Performance was measured by RMSE-CV.", "num": null } } } }