{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:15.608812Z" }, "title": "Unsupervised Domain Adaptation in Cross-corpora Abusive Language Detection", "authors": [ { "first": "Tulika", "middle": [], "last": "Bose", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "postCode": "F-54000", "settlement": "Nancy", "region": "Inria, LORIA", "country": "France" } }, "email": "tulika.bose@loria.fr" }, { "first": "Irina", "middle": [], "last": "Illina", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "postCode": "F-54000", "settlement": "Nancy", "region": "Inria, LORIA", "country": "France" } }, "email": "illina@loria.fr" }, { "first": "Dominique", "middle": [], "last": "Fohr", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "postCode": "F-54000", "settlement": "Nancy", "region": "Inria, LORIA", "country": "France" } }, "email": "dominique.fohr@loria.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The state-of-the-art abusive language detection models report great in-corpus performance, but underperform when evaluated on abusive comments that differ from the training scenario. As human annotation involves substantial time and effort, models that can adapt to newly collected comments can prove to be useful. In this paper, we investigate the effectiveness of several Unsupervised Domain Adaptation (UDA) approaches for the task of cross-corpora abusive language detection. In comparison, we adapt a variant of the BERT model, trained on large-scale abusive comments, using Masked Language Model (MLM) fine-tuning. Our evaluation shows that the UDA approaches result in sub-optimal performance, while the MLM fine-tuning does better in the cross-corpora setting. 
Detailed analysis reveals the limitations of the UDA approaches and emphasizes the need to build efficient adaptation methods for this task.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The state-of-the-art abusive language detection models report great in-corpus performance, but underperform when evaluated on abusive comments that differ from the training scenario. As human annotation involves substantial time and effort, models that can adapt to newly collected comments can prove to be useful. In this paper, we investigate the effectiveness of several Unsupervised Domain Adaptation (UDA) approaches for the task of cross-corpora abusive language detection. In comparison, we adapt a variant of the BERT model, trained on large-scale abusive comments, using Masked Language Model (MLM) fine-tuning. Our evaluation shows that the UDA approaches result in sub-optimal performance, while the MLM fine-tuning does better in the cross-corpora setting. Detailed analysis reveals the limitations of the UDA approaches and emphasizes the need to build efficient adaptation methods for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Social networking platforms have been used as a medium for expressing opinions, ideas, and feelings. This has resulted in serious concerns about abusive language, which is commonly described as hurtful, obscene, or toxic towards an individual or a group sharing common societal characteristics such as race, religion, gender, etc. The huge amount of comments generated every day on these platforms makes it increasingly infeasible for manual moderators to review every comment for its abusive content. As such, automated abuse detection mechanisms are employed to assist moderators.
We consider the variations of online abuse, toxicity, hate speech, and offensive language as abusive language, and this work addresses the detection of abusive versus non-abusive comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supervised classification approaches for abuse detection require a large amount of expensive annotated data (Lee et al., 2018) . Moreover, models already trained on the available annotated corpus report degraded performance on new content (Yin and Zubiaga, 2021; Swamy et al., 2019; Wiegand et al., 2019) . This is due to phenomena like the change of topics discussed in social media, and differences across corpora, such as varying sampling strategies, targets of abuse, abusive language forms, etc. These call for approaches that can adapt to newly seen content outside the original training corpus. Annotating such content is non-trivial and may require substantial time and effort (Poletto et al., 2019; Ombui et al., 2019) . Thus, Unsupervised Domain Adaptation (UDA) methods, which can adapt without the target domain labels (Ramponi and Plank, 2020) , turn out to be attractive for this task. Given an automatic text classification or tagging task, such as abusive language detection, a coherent corpus can be considered a domain (Ramponi and Plank, 2020; Plank, 2011) . Under this condition, domain adaptation approaches can be applied in cross-corpora evaluation setups.
This motivates us to explore UDA for cross-corpora abusive language detection.", "cite_spans": [ { "start": 108, "end": 126, "text": "(Lee et al., 2018)", "ref_id": "BIBREF16" }, { "start": 239, "end": 262, "text": "(Yin and Zubiaga, 2021;", "ref_id": "BIBREF34" }, { "start": 263, "end": 282, "text": "Swamy et al., 2019;", "ref_id": "BIBREF28" }, { "start": 283, "end": 304, "text": "Wiegand et al., 2019)", "ref_id": "BIBREF32" }, { "start": 680, "end": 702, "text": "(Poletto et al., 2019;", "ref_id": "BIBREF21" }, { "start": 703, "end": 722, "text": "Ombui et al., 2019)", "ref_id": "BIBREF19" }, { "start": 824, "end": 849, "text": "(Ramponi and Plank, 2020)", "ref_id": "BIBREF23" }, { "start": 1035, "end": 1060, "text": "(Ramponi and Plank, 2020;", "ref_id": "BIBREF23" }, { "start": 1061, "end": 1073, "text": "Plank, 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A task related to abuse detection is sentiment classification (Bauwelinck and Lefever, 2019; Rajamanickam et al., 2020) , which involves an extensive body of work on domain adaptation. In this work, we analyze whether the problem of cross-corpora abusive language detection can be addressed by existing advancements in domain adaptation. Alongside different UDA approaches, we also evaluate the effectiveness of the recently proposed HateBERT model (Caselli et al., 2021) , which fine-tunes BERT (Devlin et al., 2019) on a large corpus of abusive language from Reddit using the Masked Language Model (MLM) objective. Furthermore, we perform the MLM fine-tuning of HateBERT on the target corpus, which can be considered a form of unsupervised adaptation.
Our contributions are summarised below:", "cite_spans": [ { "start": 62, "end": 92, "text": "(Bauwelinck and Lefever, 2019;", "ref_id": "BIBREF2" }, { "start": 93, "end": 119, "text": "Rajamanickam et al., 2020)", "ref_id": "BIBREF22" }, { "start": 446, "end": 468, "text": "(Caselli et al., 2021)", "ref_id": "BIBREF5" }, { "start": 493, "end": 514, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We investigate some of the best-performing UDA approaches, originally proposed for cross-domain sentiment classification, and analyze their performance on the task of cross-corpora abusive language detection. We provide some insights into the sub-optimal performance of these approaches. To the best of our knowledge, this is the first work that analyzes UDA approaches for cross-corpora abuse detection. \u2022 We analyze the performance of HateBERT in our cross-corpora evaluation set-up. In particular, we use the Masked Language Model (MLM) objective to further fine-tune HateBERT over the unlabeled target corpus, and subsequently perform supervised fine-tuning over the source corpus.", "cite_spans": [ { "start": 394, "end": 404, "text": "detection.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows: Section 2 discusses the shifts across different abusive corpora. Section 3 surveys some recently proposed UDA models for sentiment classification and discusses the main differences in the approaches. Section 4 presents the experimental settings used in our evaluation. The results of our evaluation and a discussion of the performance of the different approaches are presented in Section 5. Finally, Section 6 concludes the paper and highlights some future work. Saha and Sindhwani (2012) have detailed the problem of changing topics in social media with time.
Hence, temporal or contextual shifts are commonly witnessed across different abusive corpora. For example, the datasets by Waseem and Hovy (2016) ; were collected in or before 2016, and during 2018, respectively, and also involve different contexts of discussion.", "cite_spans": [ { "start": 508, "end": 533, "text": "Saha and Sindhwani (2012)", "ref_id": "BIBREF26" }, { "start": 729, "end": 751, "text": "Waseem and Hovy (2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, sampling strategies across datasets also introduce bias in the data (Wiegand et al., 2019) , and could be a cause for differences across datasets. For instance, Davidson et al. (2017) sample tweets containing keywords from a hate speech lexicon, which has resulted in the corpus having a major proportion (83%) of abusive content. As mentioned by Waseem et al. (2018) , tweets in Davidson et al. (2017) originate from the United States, whereas Waseem and Hovy (2016) sample them without such a demographic constraint.", "cite_spans": [ { "start": 78, "end": 100, "text": "(Wiegand et al., 2019)", "ref_id": "BIBREF32" }, { "start": 171, "end": 193, "text": "Davidson et al. (2017)", "ref_id": "BIBREF7" }, { "start": 357, "end": 377, "text": "Waseem et al. (2018)", "ref_id": "BIBREF31" }, { "start": 390, "end": 412, "text": "Davidson et al. (2017)", "ref_id": "BIBREF7" }, { "start": 455, "end": 477, "text": "Waseem and Hovy (2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Shifts in Abusive Language Corpora", "sec_num": "2" }, { "text": "Apart from sampling differences, the targets and types of abuse may vary across datasets. For instance, even though women are targeted both in Waseem and Hovy (2016) and Davidson et al. (2017) , the former involves more subtle and implicit forms of abuse, while the latter involves explicit abuse with profane words.
Besides, religious minorities are the other targeted groups in Waseem and Hovy (2016) , while African Americans are targeted in Davidson et al. (2017) . Owing to these differences across corpora, abusive language detection in a cross-corpora setting remains a challenge. This has been empirically validated by Wiegand et al. (2019) ; Arango et al. (2019) ; Swamy et al. (2019) ; Karan and \u0160najder (2018) , who report performance degradation across cross-corpora evaluation settings. Thus, it can be concluded that the different collection time frames, sampling strategies, and targets of abuse would induce a shift in the data.", "cite_spans": [ { "start": 143, "end": 165, "text": "Waseem and Hovy (2016)", "ref_id": "BIBREF30" }, { "start": 170, "end": 192, "text": "Davidson et al. (2017)", "ref_id": "BIBREF7" }, { "start": 389, "end": 411, "text": "Waseem and Hovy (2016)", "ref_id": "BIBREF30" }, { "start": 454, "end": 476, "text": "Davidson et al. (2017)", "ref_id": "BIBREF7" }, { "start": 636, "end": 657, "text": "Wiegand et al. (2019)", "ref_id": "BIBREF32" }, { "start": 660, "end": 680, "text": "Arango et al. (2019)", "ref_id": "BIBREF0" }, { "start": 683, "end": 702, "text": "Swamy et al. (2019)", "ref_id": "BIBREF28" }, { "start": 705, "end": 729, "text": "Karan and \u0160najder (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Shifts in Abusive Language Corpora", "sec_num": "2" }, { "text": "As discussed by Ramponi and Plank (2020) ; Plank (2011), a coherent type of corpus can typically be considered a domain for tasks such as automatic text classification. We, therefore, decide to apply domain adaptation methods for our task of cross-corpora abuse detection. In this setting, UDA methods aim to adapt a classifier learned on the source domain D S to the target domain D T , where only the unlabeled target domain samples X T and the labeled source domain samples X S are assumed to be available. We denote the source labels by Y S .
In this work, we use the unlabeled samples X T for adaptation and evaluate the performance over the remaining unseen target samples from D T .", "cite_spans": [ { "start": 16, "end": 40, "text": "Ramponi and Plank (2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Domain Adaptation", "sec_num": "3" }, { "text": "There is a vast body of research on UDA for the related task of cross-domain sentiment classification. Amongst them, the feature-centric approaches typically construct an aligned feature space either using pivot features (Blitzer et al., 2006) or using Autoencoders (Glorot et al., 2011; Chen et al., 2012) . Besides these, domain adversarial training is used widely as a loss-centric approach to maximize the confusion in domain identification and align the source and target representations (Ganin et al., 2016; Ganin and Lempitsky, 2015 ). Owing to their success in cross-domain sentiment classification, we decide to apply the following pivot-based and domain-adversarial UDA approaches to the task of cross-corpora abusive language detection.", "cite_spans": [ { "start": 221, "end": 243, "text": "(Blitzer et al., 2006)", "ref_id": "BIBREF4" }, { "start": 266, "end": 287, "text": "(Glorot et al., 2011;", "ref_id": "BIBREF12" }, { "start": 288, "end": 306, "text": "Chen et al., 2012)", "ref_id": "BIBREF6" }, { "start": 493, "end": 513, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF11" }, { "start": 514, "end": 539, "text": "Ganin and Lempitsky, 2015", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Survey of UDA Approaches", "sec_num": "3.1" }, { "text": "Pivot-based approaches: Following Blitzer et al. (2006) , pivot-based approaches extract a set of common shared features, called pivots, across domains that are (i) frequent in X S and X T ; and (ii) highly correlated with Y S . 
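The two pivot criteria above can be made concrete with a small sketch: candidate unigrams are kept if they are frequent in both corpora, and are then ranked by mutual information with the binary source labels. The following is a pure-Python illustration under simplifying assumptions (whitespace tokenization, unigram pivots only, and hypothetical helper names; it is not the authors' implementation):

```python
from collections import Counter
from math import log

def select_pivots(src_docs, src_labels, tgt_docs, min_count=10, num_pivots=100):
    """Rank candidate unigram pivots: frequent in both corpora,
    ordered by mutual information with the binary source labels."""
    # Document frequency of each word in source and target.
    src_freq = Counter(w for d in src_docs for w in set(d.split()))
    tgt_freq = Counter(w for d in tgt_docs for w in set(d.split()))
    candidates = [w for w in src_freq
                  if src_freq[w] >= min_count and tgt_freq[w] >= min_count]

    n = len(src_docs)
    p_pos = sum(src_labels) / n

    def mi(word):
        # Mutual information between word presence and the source label.
        score = 0.0
        for w_val in (0, 1):
            p_w = sum(1 for d in src_docs if (word in d.split()) == w_val) / n
            for y_val in (0, 1):
                joint = sum(1 for d, y in zip(src_docs, src_labels)
                            if (word in d.split()) == w_val and y == y_val) / n
                p_y = p_pos if y_val else 1 - p_pos
                if joint > 0:
                    score += joint * log(joint / (p_w * p_y))
        return score

    return sorted(candidates, key=mi, reverse=True)[:num_pivots]
```

PBLM and PERL additionally use bigram pivots; the same frequency-and-affinity ranking applies with bigram counts.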
Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018) has outperformed the Autoencoder-based pivot prediction (Ziser and Reichart, 2017) . It performs representation learning by employing a Long Short-Term Memory (LSTM) based language model to predict the pivots using the non-pivot features in the input samples from both X S and X T . Convolutional Neural Networks (CNN) and LSTM-based classifiers are subsequently employed for the final supervised training with X S and Y S . Pivot-based Encoder Representation of Language (PERL) (Ben-David et al., 2020), a recently proposed UDA model, integrates BERT (Devlin et al., 2019) with pivot-based fine-tuning using the MLM objective. It involves prediction of the masked unigram/bigram pivots from the non-pivots of the input samples from both X S and X T . This is followed by supervised task training with a convolution, average pooling, and a linear layer over the encoded representations of the input samples from X S . During the supervised task training, the encoder weights are kept frozen. Both PBLM and PERL use unigrams and bigrams as pivots, although higher-order n-grams can also be used.", "cite_spans": [ { "start": 34, "end": 55, "text": "Blitzer et al. (2006)", "ref_id": "BIBREF4" }, { "start": 266, "end": 292, "text": "(Ziser and Reichart, 2018)", "ref_id": "BIBREF36" }, { "start": 349, "end": 375, "text": "(Ziser and Reichart, 2017)", "ref_id": "BIBREF35" }, { "start": 848, "end": 869, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Survey of UDA Approaches", "sec_num": "3.1" }, { "text": "Domain adversarial approaches: Hierarchical Attention Transfer Network (HATN) (Li et al., 2017 (Li et al., , 2018 employs the domain classification based adversarial training using X S and X T , along with an attention mechanism using X S and Y S to automate the pivot construction.
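The gradient reversal at the heart of this domain adversarial training can be illustrated with a toy scalar example: the forward pass is the identity, while the backward pass multiplies the incoming gradient by a negative factor, so the feature extractor is updated to confuse the domain classifier. A minimal manual-gradient sketch (the scalar setup and all names are illustrative assumptions, not the HATN implementation):

```python
import math

LAMBDA = 1.0  # reversal strength

def grl_forward(x):
    # Identity in the forward direction.
    return x

def grl_backward(grad_output, lam=LAMBDA):
    # Reversed (sign-flipped, scaled) gradient flowing back to the features.
    return -lam * grad_output

# Toy domain classifier on a scalar feature f with weight w:
# p = sigmoid(w * f); loss = -log(p) for a sample labeled "source".
w, f = 0.5, 2.0
p = 1.0 / (1.0 + math.exp(-w * grl_forward(f)))
dloss_dlogit = p - 1.0                     # d(-log sigmoid(z))/dz at z = w * f
grad_to_features = grl_backward(dloss_dlogit * w)
```

The domain classifier itself receives the unreversed gradient (dloss_dlogit * f), so it keeps learning to separate domains while the shared features drift towards domain confusion.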
The Gradient Reversal Layer (GRL) (Ganin and Lempitsky, 2015) is used in the adversarial training to ensure that the learned pivots are domain-shared, and the attention mechanism ensures that they are useful for the end task. During training, the pivots are predicted using the non-pivots while jointly performing the domain adversarial training and the supervised end-task training. Recently, BERT-based approaches for UDA have been proposed by Du et al. (2020) ; Ryu and Lee (2020) , which also apply domain adversarial training. Adversarial Adaptation with Distillation (AAD) (Ryu and Lee, 2020) is one such domain adversarial approach applied over BERT. Unlike HATN, in AAD, the domain adversarial training is done within the framework of Adversarial Discriminative Domain Adaptation (ADDA) (Tzeng et al., 2017) , using X S and X T . This aims to make the source and target representations similar. Moreover, it leverages knowledge distillation (Hinton et al., 2015) as an additional loss function during adaptation.", "cite_spans": [ { "start": 78, "end": 94, "text": "(Li et al., 2017", "ref_id": "BIBREF18" }, { "start": 95, "end": 113, "text": "(Li et al., , 2018", "ref_id": "BIBREF17" }, { "start": 317, "end": 344, "text": "(Ganin and Lempitsky, 2015)", "ref_id": "BIBREF10" }, { "start": 722, "end": 738, "text": "Du et al. (2020)", "ref_id": "BIBREF9" }, { "start": 741, "end": 759, "text": "Ryu and Lee (2020)", "ref_id": "BIBREF25" }, { "start": 856, "end": 875, "text": "(Ryu and Lee, 2020)", "ref_id": "BIBREF25" }, { "start": 1080, "end": 1100, "text": "(Tzeng et al., 2017)", "ref_id": "BIBREF29" }, { "start": 1234, "end": 1255, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Survey of UDA Approaches", "sec_num": "3.1" }, { "text": "Model Fine-tuning with HateBERT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptation through Masked Language", "sec_num": "3.2" }, { "text": "Rietzler et al. 
(2020); Xu et al. (2019) show that the language model fine-tuning of BERT (using the MLM and the Next Sentence Prediction task) incorporates domain-specific knowledge into the model and is useful for cross-domain adaptation. This step does not require task-specific labels. The recently proposed HateBERT model (Caselli et al., 2021) extends the pre-trained BERT model using the MLM objective over a large corpus of unlabeled abusive comments from Reddit. This is expected to shift the pre-trained BERT model towards abusive language. It is shown by Caselli et al. (2021) that HateBERT is more portable across abusive language datasets compared to BERT. We, thus, decide to perform further analysis over HateBERT for our task.", "cite_spans": [ { "start": 22, "end": 38, "text": "Xu et al. (2019)", "ref_id": "BIBREF33" }, { "start": 337, "end": 359, "text": "(Caselli et al., 2021)", "ref_id": "BIBREF5" }, { "start": 576, "end": 597, "text": "Caselli et al. (2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptation through Masked Language", "sec_num": "3.2" }, { "text": "In particular, we begin with the HateBERT model and perform MLM fine-tuning on the unlabeled train set from the target corpus. We hypothesize that performing this step should incorporate the variations in the abusive language present in the target corpus into the model. For the classification task, supervised fine-tuning is performed over the MLM fine-tuned model obtained from the previous step, using X S and Y S . We present experiments over three different publicly available abusive language corpora from Twitter as they cover different forms of abuse, namely Davidson (Davidson et al., 2017) , Waseem (Waseem and Hovy, 2016) and HatEval .
Following the precedent of other works on cross-corpora abuse detection (Wiegand et al., 2019; Swamy et al., 2019; Karan and \u0160najder, 2018) , we target a binary classification task with two classes: abusive and non-abusive. We randomly split Davidson and Waseem into train (80%), development (10%), and test (10%), whereas in the case of HatEval, we use the standard partition of the shared task. Statistics of the train-test splits of these datasets are listed in Table 1 .", "cite_spans": [ { "start": 578, "end": 610, "text": "Davidson (Davidson et al., 2017)", "ref_id": "BIBREF7" }, { "start": 619, "end": 642, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF30" }, { "start": 728, "end": 750, "text": "(Wiegand et al., 2019;", "ref_id": "BIBREF32" }, { "start": 751, "end": 770, "text": "Swamy et al., 2019;", "ref_id": "BIBREF28" }, { "start": 771, "end": 795, "text": "Karan and \u0160najder, 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 1117, "end": 1124, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Adaptation through Masked Language", "sec_num": "3.2" }, { "text": "During pre-processing, we remove the URLs and retain the frequently occurring Twitter handles (user names) present in the datasets, as they could provide important information. 1 The words contained in hashtags are split using the tool Crazy-Tokenizer 2 and the words are converted into lowercase.", "cite_spans": [ { "start": 177, "end": 178, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Adaptation through Masked Language", "sec_num": "3.2" }, { "text": "Given the three corpora listed above, we experiment with all six pairs of X S and X T for our cross-corpora analysis. The UDA approaches leverage the respective unlabeled train sets in D T for adaptation, along with the train sets in D S . The abusive language classifier is subsequently trained on the labeled train set in D S and evaluated on the test set in D T .
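The evaluation protocol above can be sketched as a loop over all ordered corpus pairs; the driver functions below are hypothetical placeholders for the adaptation, supervised training, and evaluation steps, and the path strings are illustrative:

```python
from itertools import permutations

corpora = ["Davidson", "Waseem", "HatEval"]

# All six ordered (source, target) pairs used in the cross-corpora setup.
pairs = list(permutations(corpora, 2))

def run_pair(source, target, adapt_fn, train_fn, eval_fn):
    """Hypothetical driver: adapt on the unlabeled target train set,
    fit the classifier on the labeled source train set, score on the
    target test set."""
    model = adapt_fn(unlabeled=f"{target}/train")       # UDA / MLM step
    model = train_fn(model, labeled=f"{source}/train")  # supervised step
    return eval_fn(model, test=f"{target}/test")
```

Keeping the adaptation step a pluggable function mirrors the comparison in the paper: the same loop covers the UDA models, plain HateBERT, and HateBERT with target-side MLM fine-tuning.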
In the \"no adaptation\" case, the HateBERT model is fine-tuned in a supervised manner on the labeled source corpus train set, and evaluated on the target test set. Unsupervised adaptation using HateBERT involves training the HateBERT model on the target corpus train set using the MLM objective. This is followed by supervised fine-tuning on the source corpus train set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Setup", "sec_num": "4.2" }, { "text": "We use the original implementations of the UDA models 3 and the pre-trained HateBERT 4 model for our experiments. We select the best model checkpoints by performing early-stopping of the training while evaluating the performance on the respective development sets in D S . FastText 5 word vectors, pre-trained over Wikipedia, are used for word embedding initialization for both HATN and PBLM. 1 E.g., the Twitter handle @realDonaldTrump. 2 https://redditscore.readthedocs.io/en/master/tokenizing.html 3 PBLM: https://github.com/yftah89/PBLM-Domain-Adaptation, HATN: https://github.com/hsqmlzno1/HATN, PERL: https://github.com/eyalbd2/PERL, AAD: https://github.com/bzantium/bert-AAD 4 https://osf.io/tbd58/ 5 https://fasttext.cc/ PERL and AAD are initialized with the BERT base-uncased model. 6 In PBLM, we employ the LSTM-based classifier. 7 For both PERL and PBLM, words with the highest mutual information with respect to the source labels and occurring at least 10 times in both the source and target corpora are considered as pivots (Ziser and Reichart, 2018) . Our evaluation reports the mean and standard deviation of macro-averaged F1 scores, obtained by an approach, over five runs with different random initializations. We first present the in-corpus performance of the HateBERT model in Table 2 , obtained after supervised fine-tuning on the respective datasets, along with the frequent abuse-related words.
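The reported metric can be reproduced from confusion counts; a minimal pure-Python sketch of macro-averaged F1 over the two classes, with mean and standard deviation over seeded runs (the confusion-count interface is an assumption for illustration):

```python
from statistics import mean, stdev

def macro_f1(tp, fp, fn, tn):
    """Macro-averaged F1 over the abusive (positive) and
    non-abusive (negative) classes, from confusion counts."""
    def f1(tp_, fp_, fn_):
        p = tp_ / (tp_ + fp_) if tp_ + fp_ else 0.0
        r = tp_ / (tp_ + fn_) if tp_ + fn_ else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0
    # For the negative class, tn plays the role of tp, and fn/fp swap.
    return (f1(tp, fp, fn) + f1(tn, fn, fp)) / 2

def report(scores):
    """Mean and standard deviation over runs with different seeds."""
    return mean(scores), stdev(scores)
```

Macro averaging weights both classes equally, which matters here because the abusive/non-abusive class balance differs sharply across the three corpora.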
As shown in Table 2 , the in-corpus performance is high for Davidson and Waseem, but not for HatEval. The HatEval shared task presents a challenging test set, and similar performance has been reported in prior work (Caselli et al., 2021) . Cross-corpora performance of HateBERT and the UDA models discussed in Section 3.1 is presented in Table 3. Comparing Table 2 and Table 3 , substantial degradation of performance is observed across the datasets in the cross-corpora setting. This highlights the challenge of cross-corpora performance in abusive language detection.", "cite_spans": [ { "start": 793, "end": 794, "text": "6", "ref_id": null }, { "start": 1038, "end": 1064, "text": "(Ziser and Reichart, 2018)", "ref_id": "BIBREF36" }, { "start": 1630, "end": 1652, "text": "(Caselli et al., 2021)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1298, "end": 1305, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1431, "end": 1438, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1753, "end": 1791, "text": "Table 3. Comparing Table 2 and Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation Setup", "sec_num": "4.2" }, { "text": "Cross-corpora evaluation in Table 3 shows that all the UDA methods experience a drop in average performance when compared to the no-adaptation case of supervised fine-tuning of HateBERT. However, the additional step of MLM fine-tuning of HateBERT on the unlabeled train set from the target corpus results in improved performance in most of the cases.
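The MLM fine-tuning step relies on BERT-style token corruption; a minimal sketch of the standard scheme (select roughly 15% of positions; of those, 80% become [MASK], 10% a random token, 10% stay unchanged), assuming whitespace-tokenized comments and a toy vocabulary rather than the actual HateBERT tokenizer:

```python
import random

def mask_for_mlm(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption. Labels keep the original token only
    at selected positions; unselected positions carry no loss."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok
            r = rng.random()
            if r < 0.8:
                inputs[i] = "[MASK]"        # 80%: mask token
            elif r < 0.9:
                inputs[i] = rng.choice(vocab)  # 10%: random token
            # else 10%: keep the original token unchanged
    return inputs, labels
```

Running this corruption over the unlabeled target train set, and training the model to recover the labels, is the unsupervised signal that pulls the language model towards the target corpus.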
In the following sub-sections, we perform a detailed analysis to get further insights into the sub-optimal performance of the UDA approaches for our task.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation Setup", "sec_num": "4.2" }, { "text": "To understand the performance of the pivot-based models, we probe the characteristics of the pivots used by these models as they control the transfer of information across source and target corpora. As mentioned in Section 3.1, one of the criteria for pivot selection is their affinity to the available labels. Accordingly, if the adaptation results in better performance, a higher proportion of pivots would have more affinity to one of the two classes. In the following, we aim to study this particular characteristic across the source train set and the target test set. To compute class affinities, we obtain a ratio of the class membership of every pivot p i :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "r i = (#abusive comments with p i ) / (#non-abusive comments with p i ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "The ratios obtained for the train set of the source and the test set of the target, for the pivot p i , are denoted as r i s and r i t , respectively.
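The ratio of Equation (1), together with the threshold test and percentage introduced next in Equations (2) and (3), can be sketched in pure Python; the handling of pivots that never occur in non-abusive comments is an assumption:

```python
def class_ratio(pivot, abusive_docs, non_abusive_docs):
    """Equation (1): ratio of abusive to non-abusive comments
    containing the pivot (whitespace tokenization assumed)."""
    a = sum(1 for d in abusive_docs if pivot in d.split())
    n = sum(1 for d in non_abusive_docs if pivot in d.split())
    return a / n if n else float("inf")

def similar_affinity(r_s, r_t, th=0.3):
    """Equation (2): both ratios lean towards the same class
    beyond the threshold th."""
    return (r_s < 1 - th and r_t < 1 - th) or (r_s > 1 + th and r_t > 1 + th)

def perc_shared_affinity(ratios, th=0.3):
    """Equation (3): percentage of pivots with similar class affinity,
    given (r_s, r_t) pairs for every pivot."""
    kept = sum(1 for r_s, r_t in ratios if similar_affinity(r_s, r_t, th))
    return 100.0 * kept / len(ratios)
```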
A pivot p i with similar class affinities in both the source train and target test should satisfy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "r i s , r i t < 1 \u2212 th or r i s , r i t > 1 + th (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "Here, th denotes the threshold. Ratios less than (1 \u2212 th) indicate affinity towards the non-abusive class, while those greater than (1 + th) indicate affinity towards the abusive class. For every source \u2192 target pair, we select the pivots that satisfy Equation (2) with threshold th = 0.3, and calculate the percentage of the selected pivots as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "perc s\u2192t = (#pivots satisfying Equation (2) / #total pivots) \u00d7 100 (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "This indicates the percentage of pivots having similar affinity towards one of the two classes. We now analyze this percentage in the best-case and worst-case scenarios of PBLM. 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "Worst case: For the worst case of Waseem \u2192Davidson, Equation (3) yields a low perc s\u2192t of 18.8%. This indicates that the percentage of pivots having similar class affinities across the source and the target remains low in the worst-performing pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "Best case: The best case in PBLM corresponds to HatEval \u2192Davidson. 
In this case, Equation (3) yields a relatively higher perc s\u2192t of 51.4%. This is because the pivots extracted in this case involve many profane words. Since in Davidson the majority of abusive content involves the use of profane words (as also reflected in Table 2 ), the pivots extracted by PBLM can represent the target corpus well. ", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 336, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Pivot Characteristics in Pivot-based Approaches", "sec_num": "5.1" }, { "text": "On average, the adversarial approach of HATN performs slightly better than AAD. In order to analyze the difference, we investigate the representation spaces of the two approaches for the best case of HATN, i.e., HatEval \u2192Davidson. To this end, we apply Principal Component Analysis (PCA) to obtain a two-dimensional visualization of the feature spaces from the train set of the source corpus HatEval and the test set of the target corpus Davidson. The PCA plots are shown in Figure 1. Adversarial training in both the HATN and AAD models tends to bring the representation regions of the source and target corpora close to each other. At the same time, separation of the abusive and non-abusive classes in the source train set emerges in both models. However, in the representation space of AAD, samples corresponding to the abusive and non-abusive classes in the target test set do not follow the class separation seen in the source train set. In contrast, in the representation space of HATN, samples in the target test set appear to follow the class separation exhibited by the source train set.
Considering the abusive class as positive, this is reflected in the higher number of True Positives in HATN compared to AAD for this pair (#TP for HATN: 1393, #TP for AAD: 1105), while the True Negatives remain almost the same (#TN for HATN: 370, #TN for AAD: 373).", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 488, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Domain Adversarial Approaches", "sec_num": "5.2" }, { "text": "One of the limitations of these domain adversarial approaches is the class-agnostic alignment of the common source-target representation space. As discussed in Saito et al. (2018) , methods that do not consider class boundary information while aligning the source and target distributions often yield ambiguous and non-discriminative target domain features near class boundaries. Besides, such an alignment can be achieved without having access to the target domain class labels (Saito et al., 2018) . As such, an effective alignment should also attempt to minimize the intra-class and maximize the inter-class domain discrepancy (Kang et al., 2019) .", "cite_spans": [ { "start": 160, "end": 179, "text": "Saito et al. (2018)", "ref_id": "BIBREF27" }, { "start": 495, "end": 515, "text": "(Saito et al., 2018)", "ref_id": "BIBREF27" }, { "start": 647, "end": 666, "text": "(Kang et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Adversarial Approaches", "sec_num": "5.2" }, { "text": "It is evident from Table 3 that the MLM fine-tuning of HateBERT, before the subsequent supervised fine-tuning over the source corpus, results in improved performance in the majority of cases. We investigated the MLM fine-tuning over different combinations of the source and target corpora, in order to identify the best configuration. 
These include: combining the train sets of all three corpora, combining the source and target train sets, and using only the target train set. Table 4 shows that MLM fine-tuning over only the unlabeled target corpus yields the best overall performance. This agrees with Rietzler et al. (2020), who observe that fine-tuning only on the target domain better captures domain-specific knowledge.
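The MLM objective optimized during this fine-tuning can be sketched with BERT-style masking (15% of tokens selected; of those, 80% replaced by [MASK], 10% by a random token, 10% left unchanged). This is a simplified, self-contained illustration, assuming HateBERT's MLM fine-tuning follows the standard BERT recipe; the actual training would use a library such as transformers rather than this toy masker.

```python
import random

# Simplified sketch of BERT-style MLM masking on unlabeled target text.
# Assumption: standard 80/10/10 split among selected tokens, as in BERT.
# Only the masking step is shown, not the model or the training loop.

def mlm_mask(tokens, vocab, mask_prob=0.15, rng=None):
    rng = rng or random.Random(0)  # fixed seed for a reproducible example
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)             # the model must predict this token
            r = rng.random()
            if r < 0.8:
                inputs.append("[MASK]")    # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(rng.choice(vocab))  # 10%: random token
            else:
                inputs.append(tok)         # 10%: keep unchanged
        else:
            inputs.append(tok)
            labels.append(None)            # position not scored by the loss
    return inputs, labels

toks = "this comment is toxic".split()
masked, labels = mlm_mask(toks, vocab=toks, mask_prob=0.5)
# With this seed, only "is" gets masked:
print(masked)   # ['this', 'comment', '[MASK]', 'toxic']
print(labels)   # [None, None, 'is', None]
```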
However, it still remains behind the best-performing HateBERT model with MLM fine-tuning on the target corpus.
Our experiments highlighted several problems that render these approaches sub-optimal for the cross-corpora abuse detection task. While the pivots extracted by the pivot-based models fail to adequately capture the space shared across domains, the domain adversarial methods underperform substantially. The analysis of Masked Language Model fine-tuning of HateBERT on the target corpus showed general improvements over fine-tuning HateBERT only on the source corpus, suggesting that it helps adapt the model to target-specific language variations. The overall performance of all the approaches, however, indicates that building robust and portable abuse detection models is a challenging problem, far from being solved. Future work along the lines of domain adversarial training should explore methods that learn class boundaries which generalize well to the target corpora while aligning the source and target representation spaces. Such an alignment can be performed without target class labels by minimizing the intra-class domain discrepancy (Kang et al., 2019). Pivot-based approaches should explore pivot extraction methods that account for higher-level semantics of abusive language across source and target corpora.
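The intra-class discrepancy idea can be sketched with a class-conditional comparison of source and target feature means, in the spirit of Kang et al. (2019). This is a hedged, pure-Python illustration using a linear (mean-difference) variant; the original Contrastive Adaptation Network uses kernel MMD and clustering-based pseudo-labels, neither of which is reproduced here.

```python
# Sketch of a class-conditional (intra-class) domain discrepancy:
# average squared distance between per-class source means and per-class
# target means (target classes come from pseudo-labels, since target
# labels are unavailable in UDA). Linear simplification of kernel MMD.

def mean(vectors):
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def intra_class_discrepancy(src_feats, src_labels, tgt_feats, tgt_pseudo):
    classes = set(src_labels) & set(tgt_pseudo)
    total = 0.0
    for c in classes:
        s = mean([f for f, y in zip(src_feats, src_labels) if y == c])
        t = mean([f for f, y in zip(tgt_feats, tgt_pseudo) if y == c])
        total += sum((a - b) ** 2 for a, b in zip(s, t))
    return total / len(classes)

# Toy 2-D features (invented): each class mean is shifted by 1 between
# source and target, so the discrepancy per class is 1.0.
src = [[0.0, 0.0], [2.0, 0.0]]
tgt = [[1.0, 0.0], [3.0, 0.0]]
print(intra_class_discrepancy(src, [0, 1], tgt, [0, 1]))  # 1.0
```

Minimizing this quantity (while maximizing its inter-class counterpart) is what makes the alignment class-aware, in contrast to the class-agnostic alignment of HATN and AAD.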
(2020) discusses the effect of the number of unfrozen encoder layers only in the MLM fine-tuning step, but not in the supervised training step for the end task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported partly by the french PIA project \"Lorraine Universit\u00e9 d'Excellence\", reference ANR-15-IDEX-04-LUE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hate speech detection is not as easy as you may think: A closer look at model validation", "authors": [ { "first": "Aym\u00e9", "middle": [], "last": "Arango", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "P\u00e9rez", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19", "volume": "", "issue": "", "pages": "45--54", "other_ids": { "DOI": [ "10.1145/3331184.3331262" ] }, "num": null, "urls": [], "raw_text": "Aym\u00e9 Arango, Jorge P\u00e9rez, and Barbara Poblete. 2019. Hate speech detection is not as easy as you may think: A closer look at model validation. In Proceed- ings of the 42nd International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR'19, page 45-54, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter", "authors": [ { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" }, { "first": "Francisco Manuel Rangel", "middle": [], "last": "Pardo", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Sanguinetti", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "54--63", "other_ids": { "DOI": [ "10.18653/v1/S19-2007" ] }, "num": null, "urls": [], "raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Measuring the impact of sentiment for hate speech detection on twitter", "authors": [ { "first": "Nina", "middle": [], "last": "Bauwelinck", "suffix": "" }, { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" } ], "year": 2019, "venue": "Proceedings of HUSO 2019, The fifth international conference on human and social analytics", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Bauwelinck and Els Lefever. 2019. 
Measuring the impact of sentiment for hate speech detection on twitter. In Proceedings of HUSO 2019, The fifth international conference on human and social analytics, pages 17-22. IARIA, International Academy, Research, and Industry Association.
Association for Computational Linguistics.
Omnipress.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17", "volume": "", "issue": "", "pages": "512--515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Confer- ence on Web and Social Media, ICWSM '17, pages 512-515.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1180-1189, Lille, France. PMLR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Domain-adversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2016, "venue": "J. Mach. Learn. Res", "volume": "17", "issue": "1", "pages": "2096--2030", "other_ids": { "DOI": [ "https://dl.acm.org/doi/10.5555/2946645.2946704" ] }, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavio- lette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. 
Res., 17(1):2096-2030.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning, ICML'11", "volume": "", "issue": "", "pages": "513--520", "other_ids": { "DOI": [ "https://dl.acm.org/doi/10.5555/3104482.3104547" ] }, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Pro- ceedings of the 28th International Conference on Machine Learning, ICML'11, page 513-520, Madi- son, WI, USA. Omnipress.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "NIPS Deep Learning and Representation Learning Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. 
In NIPS Deep Learning and Representation Learning Workshop.
Association for Computational Linguistics.
In AAAI Conference on Artificial Intelligence.
2019 IST-Africa Week Conference (IST-Africa), pages 1-9.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Domain adaptation for parsing", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank. 2011. Domain adaptation for parsing. Ph.D. thesis, University of Groningen.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Annotating hate speech: Three schemes at comparison", "authors": [ { "first": "Fabio", "middle": [], "last": "Poletto", "suffix": "" }, { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Stranisci", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Sixth Italian Conference on Computational Linguistics", "volume": "2481", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Poletto, Valerio Basile, Cristina Bosco, Viviana Patti, and Marco Stranisci. 2019. Annotating hate speech: Three schemes at comparison. In Pro- ceedings of the Sixth Italian Conference on Com- putational Linguistics, Bari, Italy, November 13-15, 2019, volume 2481 of CEUR Workshop Proceedings. 
CEUR-WS.org.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Joint modelling of emotion and abusive language detection", "authors": [ { "first": "Santhosh", "middle": [], "last": "Rajamanickam", "suffix": "" }, { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4270--4279", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.394" ] }, "num": null, "urls": [], "raw_text": "Santhosh Rajamanickam, Pushkar Mishra, Helen Yan- nakoudakis, and Ekaterina Shutova. 2020. Joint modelling of emotion and abusive language detec- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4270-4279, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Neural unsupervised domain adaptation in NLP-A survey", "authors": [ { "first": "Alan", "middle": [], "last": "Ramponi", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6838--6855", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.603" ] }, "num": null, "urls": [], "raw_text": "Alan Ramponi and Barbara Plank. 2020. Neural unsu- pervised domain adaptation in NLP-A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online). 
International Committee on Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification", "authors": [ { "first": "Alexander", "middle": [], "last": "Rietzler", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Stabinger", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Opitz", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Engl", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4933--4941", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4933-4941, Mar- seille, France. European Language Resources Asso- ciation.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Knowledge distillation for bert unsupervised domain adaptation", "authors": [ { "first": "Minho", "middle": [], "last": "Ryu", "suffix": "" }, { "first": "Kichun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.11478" ] }, "num": null, "urls": [], "raw_text": "Minho Ryu and Kichun Lee. 2020. Knowledge distilla- tion for bert unsupervised domain adaptation. 
arXiv preprint arXiv:2010.11478.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning evolving and emerging topics in social media: A dynamic nmf approach with temporal regularization", "authors": [ { "first": "Ankan", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, WSDM '12", "volume": "", "issue": "", "pages": "693--702", "other_ids": { "DOI": [ "10.1145/2124295.2124376" ] }, "num": null, "urls": [], "raw_text": "Ankan Saha and Vikas Sindhwani. 2012. Learning evolving and emerging topics in social media: A dynamic nmf approach with temporal regularization. In Proceedings of the Fifth ACM International Con- ference on Web Search and Data Mining, WSDM '12, page 693-702, New York, NY, USA. Associa- tion for Computing Machinery.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "authors": [ { "first": "Kuniaki", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Kohei", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Yoshitaka", "middle": [], "last": "Ushiku", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Harada", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "3723--3732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Maximum classifier discrep- ancy for unsupervised domain adaptation. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723-3732.
predictive features for hate speech detection on Twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": { "DOI": [ "10.18653/v1/N16-2013" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Bridging the gaps: Multi task learning for domain transfer of hate speech detection", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" } ], "year": 2018, "venue": "Online Harassment. Human-Computer Interaction Series", "volume": "", "issue": "", "pages": "29--55", "other_ids": { "DOI": [ "10.1007/978-3-319-78583-7_3" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem, James Thorne, and Joachim Bingel. 2018. Bridging the gaps: Multi task learning for domain transfer of hate speech detection. In Golbeck J. (eds) Online Harassment. Human-Computer Interaction Series, pages 29-55, Cham.
Springer International Publishing.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Detection of Abusive Language: the Problem of Biased Datasets", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "602--608", "other_ids": { "DOI": [ "10.18653/v1/N19-1060" ] }, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "BERT post-training for review reading comprehension and aspect-based sentiment analysis", "authors": [ { "first": "Hu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2324--2335", "other_ids": { "DOI": [ "10.18653/v1/N19-1242" ] }, "num": null, "urls": [], "raw_text": "Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324-2335, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Towards generalisable hate speech detection: a review on obstacles and solutions", "authors": [ { "first": "Wenjie", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.08886" ] }, "num": null, "urls": [], "raw_text": "Wenjie Yin and Arkaitz Zubiaga. 2021. Towards generalisable hate speech detection: a review on obstacles and solutions. arXiv preprint arXiv:2102.08886.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural structural correspondence learning for domain adaptation", "authors": [ { "first": "Yftah", "middle": [], "last": "Ziser", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "400--410", "other_ids": { "DOI": [ "10.18653/v1/K17-1040" ] }, "num": null, "urls": [], "raw_text": "Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 400-410, Vancouver, Canada.
Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Pivot based language modeling for improved neural domain adaptation", "authors": [ { "first": "Yftah", "middle": [], "last": "Ziser", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1241--1251", "other_ids": { "DOI": [ "10.18653/v1/N18-1112" ] }, "num": null, "urls": [], "raw_text": "Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1241-1251, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "(Best viewed in color) PCA based visualization of HatEval \u2192Davidson in the adversarial approaches." }, "TABREF1": { "type_str": "table", "html": null, "num": null, "content": "", "text": "Statistics of the datasets used (average comment length is reported in terms of word numbers)." }, "TABREF3": { "type_str": "table", "html": null, "num": null, "content": "
", "text": "" }, "TABREF5": { "type_str": "table", "html": null, "num": null, "content": "
", "text": "Macro average F1 scores (mean\u00b1std-dev) on different source and target pairs for cross-corpora abuse detection (Hat : HatEval, Was : Waseem, Dav : Davidson). The best in each row is marked in bold." }, "TABREF6": { "type_str": "table", "html": null, "num": null, "content": "
Source \u2192Target | HBERT MLM on all 3 corpora | HBERT MLM on Source + Target | HBERT MLM on Target
Average | 61.4 | 61.0 | 62.9
Following Ben-David et al. (2020), after the encoder weights are learned during the MLM fine-tuning
", "text": "Hat \u2192Was 69.7\u00b10.8 68.9\u00b10.6 68.0\u00b11.0 Was \u2192Hat 57.2\u00b11.4 56.8\u00b11.1 56.5\u00b11.1 Dav \u2192Was 60.2\u00b10.7 58.8\u00b10.8 66.7\u00b10.8 Was \u2192Dav 63.4\u00b13.9 63.4\u00b13.9 67.1\u00b12.9 Hat \u2192Dav 66.6\u00b11.1 66.7\u00b12.1 67.8\u00b11.6 Dav \u2192Hat 51.4\u00b10.2 51.5\u00b10.1 51.4\u00b10.4" }, "TABREF7": { "type_str": "table", "html": null, "num": null, "content": "
: Macro average F1 scores (mean \u00b1 std-dev) for Masked Language Model fine-tuning of HateBERT (HBERT MLM) over different corpora combinations, before supervised fine-tuning on source; Hat : HatEval, Was : Waseem, Dav : Davidson. The best in each row is marked in bold.
", "text": "" }, "TABREF8": { "type_str": "table", "html": null, "num": null, "content": "
Source \u2192Target | PERL-BERT (frozen encoder layers) | PERL-HBERT (frozen encoder layers) | PERL-HBERT (with layer updates)
Average56.857.760.8
", "text": "also highlight the same. The corpus involves a mix of implicit as well as explicit abusive language. On the contrary, models trained over Waseem are generally unable to adapt well in cross-corpora settings. Since only tweet IDs were made available in Waseem, we observe that our crawled comments Hat \u2192Was 57.1\u00b11.8 63.2\u00b11.7 68.3\u00b10.8 Was \u2192Hat 55.3\u00b10.7 55.0\u00b10.9 57.8\u00b10.8 Dav \u2192Was 67.4\u00b11.0 65.9\u00b11.3 57.3\u00b13.1 Was \u2192Dav 48.3\u00b11.5 48.1\u00b13.7 64.4\u00b12.1 Hat \u2192Dav 62.6\u00b13.8 63.6\u00b10.9 66.1\u00b11.8 Dav \u2192Hat 50.3\u00b10.9 50.4\u00b10.6 51.1\u00b10.3" } } } }