|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:32:19.444878Z" |
|
}, |
|
"title": "Generalisability of Topic Models in Cross-corpora Abusive Language Detection", |
|
"authors": [ |
|
{ |
|
"first": "Tulika", |
|
"middle": [], |
|
"last": "Bose", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "F-54000", |
|
"settlement": "Nancy", |
|
"region": "Inria, LORIA", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "tulika.bose@loria.fr" |
|
}, |
|
{ |
|
"first": "Irina", |
|
"middle": [], |
|
"last": "Illina", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "F-54000", |
|
"settlement": "Nancy", |
|
"region": "Inria, LORIA", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "illina@loria.fr" |
|
}, |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Fohr", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "F-54000", |
|
"settlement": "Nancy", |
|
"region": "Inria, LORIA", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "dominique.fohr@loria.fr" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Rapidly changing social media content calls for robust and generalisable abuse detection models. However, the state-of-the-art supervised models display degraded performance when they are evaluated on abusive comments that differ from the training corpus. We investigate if the performance of supervised models for cross-corpora abuse detection can be improved by incorporating additional information from topic models, as the latter can infer the latent topic mixtures from unseen samples. In particular, we combine topical information with representations from a model tuned for classifying abusive comments. Our performance analysis reveals that topic models are able to capture abuse-related topics that can transfer across corpora, and result in improved generalisability.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Rapidly changing social media content calls for robust and generalisable abuse detection models. However, the state-of-the-art supervised models display degraded performance when they are evaluated on abusive comments that differ from the training corpus. We investigate if the performance of supervised models for cross-corpora abuse detection can be improved by incorporating additional information from topic models, as the latter can infer the latent topic mixtures from unseen samples. In particular, we combine topical information with representations from a model tuned for classifying abusive comments. Our performance analysis reveals that topic models are able to capture abuse-related topics that can transfer across corpora, and result in improved generalisability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "With the exponentially increased use of social networking platforms, concerns on abusive language has increased at an alarming rate. Such language is described as hurtful, toxic, or obscene, and targets individuals or a larger group based on common societal characteristics such as race, religion, ethnicity, gender, etc. The increased spread of such content hampers free speech as it can potentially discourage users from expressing themselves without fear, and intimidate them into leaving the conversation. Considering variations of online abuse, toxicity, hate speech, and offensive language as abusive language, this work addresses the detection of abusive versus non-abusive comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Automatic detection of abuse is challenging as there are problems of changing linguistic traits, subtle forms of abuse, amongst others (Vidgen et al., 2019) . Moreover, the performance of models trained for abuse detection are found to degrade considerably, when they encounter abusive comments that differ from the training corpus (Wiegand et al., 2019; Arango et al., 2019; Swamy et al., 2019 ; Karan and \u0160najder, 2018) . This is due to the varied sampling strategies used to build training corpus, topical and temporal shifts (Florio et al., 2020) , and varied targets of abuse across corpora. Since social media content changes rapidly, abusive language detection models with better generalisation can be more effective (Yin and Zubiaga, 2021) . To this end, a cross-corpora analysis and evaluation is important.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 156, |
|
"text": "(Vidgen et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 354, |
|
"text": "(Wiegand et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 375, |
|
"text": "Arango et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 394, |
|
"text": "Swamy et al., 2019", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 421, |
|
"text": "Karan and \u0160najder, 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 550, |
|
"text": "(Florio et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 747, |
|
"text": "(Yin and Zubiaga, 2021)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Topic models have been explored for generic cross-domain text classification (Jing et al., 2018; Zhuang et al., 2013; Li et al., 2012) , demonstrating better generalisability. Moreover, they can be learnt in an unsupervised manner and can infer topic mixtures from unseen samples. This inspires us to exploit topic model representations for crosscorpora abuse detection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 96, |
|
"text": "(Jing et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 117, |
|
"text": "Zhuang et al., 2013;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 134, |
|
"text": "Li et al., 2012)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, Caselli et al. (2021) have \"retrained\" BERT (Devlin et al., 2019) over large-scale abusive Reddit comments to provide the HateBERT model which has displayed better generalisability in cross-corpora experiments. Furthermore, Peinelt et al. (2020) show that combination of topic model and BERT representations leads to better performance at semantic similarity task. Taking these studies into account, we investigate if combining topic representation with contextualised HateBERT representations can result in better generalisability in cross-corpora abuse detection. Cross corpora evaluation on three common abusive language corpora supports and demonstrates the effectiveness of this approach. Besides, we bring some insights into how the association of unseen comments to abusive topics obtained from original training data can help in cross-corpora abusive language detection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 31, |
|
"text": "Caselli et al. (2021)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 255, |
|
"text": "Peinelt et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organised as follows: Section 2 describes the architecture of the combination of topic model and HateBERT. Section 3 presents our experimental settings. An analysis of the results obtained is present in Section 4, and Section 5 concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we leverage the Topically Driven Neural Language Model (TDLM) (Lau et al., 2017) to obtain topic representations, as it can employ pre-trained embeddings which are found to be more suitable for short Twitter comments (Yi et al., 2020 ). The original model of TDLM applies a Convolutional Neural Network (CNN) over wordembeddings to generate a comment embedding. This comment embedding is used to learn and extract topic distributions. Cer et al. (2018) show that transfer learning via sentence embeddings performs better than word-embeddings on a variety of tasks. Hence, we modify TDLM to accept the transformer based Universal Sentence Encoder (USE) (Cer et al., 2018) embeddings extracted from input comments, instead of the comment embeddings from CNN. The modified model is denoted as U-TDLM hereon. Refer to Appendix A.1 for the architecture of U-TDLM and also to Lau et al. 2017. U-TDLM is trained on the train set from the source corpus and is used to infer on the test set from a different target corpus. The topic distribution per comment c is given by", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 94, |
|
"text": "(Lau et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 247, |
|
"text": "(Yi et al., 2020", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 466, |
|
"text": "Cer et al. (2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 684, |
|
"text": "(Cer et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Topic Model and HateBERT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "T c = [p(t i |c)] i=1:k ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Topic Model and HateBERT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where k is the number of topics. T c is passed through a Fully Connected (FC) layer to obtain transformed representation T c . Besides, we first perform supervised fine-tuning of HateBERT 1 on the train set of the source corpus. The vector corresponding to the [CLS] token in the final layer of this fine-tuned HateBERT model is chosen as the Hate-BERT representation for a comment. It is transformed through an FC layer to obtain the C vector. Finally, in the combined model (HateBERT+U-TDLM), the concatenated vector [T c ; C] is passed through a final FC and a softmax classification layer. The readers are referred to Appendix A.2 for the architecture of the individual, and the combined models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Topic Model and HateBERT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Evaluation Set-up", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Topic Model and HateBERT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We perform experiments on three different publicly available abusive tweet corpora, namely, HatEval (Basile et al., 2019), Waseem (Waseem and Hovy, 2016) , and Davidson (Davidson et al., 2017) . We target a binary classification task with classes: abusive and non abusive, following the precedent of 1 Pre-trained model from https://osf.io/tbd58/ previous work on cross corpora analysis (Wiegand et al., 2019; Swamy et al., 2019; Karan and \u0160najder, 2018) . For HatEval, we use the standard partition of the shared task, whereas the other two datasets are randomly split into train (80%),development (10%), and test (10%). The statistics of the traintest splits of these datasets are listed in Table 1 We choose a topic number of 15 for our experiments based on the results for in-corpus performance and to maintain a fair comparison. Besides, the best model checkpoints are selected by performing early-stopping of the training using the respective development sets. The FC layers are followed by Rectified Linear Units (ReLU) in the individual as well as the combined models. In the individual models, the FC layers for transforming T c and the HateBERT representation have 10 and 600 hidden units, respectively. The final FC layer in the combined model has 400 hidden units. Classification performance is reported in terms of mean F1 score and standard deviation over five runs, with random initialisations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "(Waseem and Hovy, 2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 192, |
|
"text": "(Davidson et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 409, |
|
"text": "(Wiegand et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 429, |
|
"text": "Swamy et al., 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 454, |
|
"text": "Karan and \u0160najder, 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 693, |
|
"end": 700, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We remove the URLs from the Twitter comments, but retain Twitter handles as they can contribute to topic representations. 2 Hashtags are split into constituent words using the tool CrazyTokenizer 3 , and words are converted into lower-case. U-TDLM involves prediction of words from the comments based on topic representations. In this part, our implementation uses stemmed words and skips stopwords. All models are trained on the train set of the source corpus. The in-corpus performance of the models is obtained on the source corpora test sets, while the cross-corpora performance is obtained on target corpora test sets. It is shown in Table 2 that the cross-corpora performance degrades substantially as compared to the in-corpus performance, except for HatEval which indeed has a low in-corpus performance. HatEval test set is part of a shared task, and similar in-corpus performance have been reported in prior work (Caselli et al., 2021) . Overall, comparing the cross-corpora performances of all models, we can observe that the combined model (HateBERT + U-TDLM) either outperforms Hate-BERT or retains its performance. This hints that incorporating topic representations can be useful in cross-corpora abusive language detection. As an ablation study, we replaced U-TDLM features with random vectors to evaluate the combined model. Such a concatenation decreased the performance in the cross-corpora setting, yielding an average macro-F1 score of 59.4. This indicates that the topic representations improve generalisation along with HateBERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 922, |
|
"end": 944, |
|
"text": "(Caselli et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 639, |
|
"end": 646, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Pre-processing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We investigate the cases in . Some of the prominent topics from Waseem and HateEval associated with abuse, and the top words corresponding to these topics are provided in Table 3 and Train on Waseem \u2192Test on Davidson: In this case, U-TDLM shows poor performance due to the large number of False Negatives (#FN for U-TDLM: 1824), and less True Positives (#TP for U-TDLM: 266). The combined model, on the other hand, has higher True Positives compared to those obtained from HateBERT (#TP for HateBERT+U-TDLM: 1556, #TP for HateBERT: 1267). The count of True Negatives with the combined model remains similar to that in HateBERT (#TN for Hate-BERT + U-TDLM: 314, #TN for HateBERT: 340). This indicates that U-TDLM introduces some complementary information in the combined model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 178, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case-studies to Analyse Improvements from U-TDLM", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We analyse a few abusive comments in the test set of Davidson (target) in Table 4 , which are wrongly classified by HateBERT, but correctly detected as abusive by the combined model. The topical membership of these abusive comments from Davidson indicates that U-TDLM associates high", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 70, |
|
"text": "Davidson (target)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 81, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case-studies to Analyse Improvements from U-TDLM", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Abusive Comments in Target Source topics", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source \u2192Target", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When women are so proud that they don't like to cook; clean b*tch stop being lazy..It's not cute.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Waseem \u2192Davidson", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ya girl is a slimy ass h*e. get her under control and tell her to stop spraying bullshit out her mouth all day.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4, 12", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "No. Its wrong to try to change f*ggots; There is no \"therapy\"....sympathize like they are retards.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HatEval \u2192Davidson", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Naturally, when a shitty leftist rag talks trash about another shitty leftist rag, you better fall in line... weights to the relevant abuse-related topics from Waseem. As indicated in the first example, an abusive comment against women that discusses cooking, in Davidson, is mapped to the topics 4 (sexism) and 12 (cooking show) from Waseem. Similarly, the second comment gets high weight in the three topics 4, 9 and 12 due to its sexist content and use of a profane word. Other pairs of corpora that yield improved performance with the combined model also follow similar trends as above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3, 7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Train on HatEval \u2192Test on Davidson: In this case, while U-TDLM performs considerably well, the combined model only provides a slight improvement over HateBERT, as per Table 2 . U-TDLM has a higher TP when compared to both HateBERT and the combined model (#TP for U-TDLM: 1924, #TP for HateBERT+U-TDLM: 1106, #TP for Hate-BERT: 1076), with lower TN (#TN for U-TDLM: 130, #TN for HateBERT+U-TDLM: 373, #TN for HateBERT: 374).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 174, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "10", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Few abusive comments from Davidson that are correctly classified by U-TDLM alone are presented in Table 4 . The first comment for this case have high weights for the abuse-related topics 3 and 7 from HatEval due to the presence of the profane word \"f*ggot\". The second comment only gets a high weight for topic 10, which deals with politics. This is due to the word \"leftist\", which is associated with a political ideology. As per our analysis, we found that all of these source topics are highly correlated with the abusive labels in the source corpus of HatEval. As such, these comments from the target corpus of Davidson are correctly classified as abusive by U-TDLM.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 105, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "10", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An in-corpus and cross-corpora evaluation of Hate-BERT and U-TDLM has helped us confirm our perspective on generalisation in the abusive language detection task. A contextualised representation model like HateBERT can achieve great levels of performance on the abusive language detection task, only when the evaluation dataset does not differ from the training set. The performance of this model degrades drastically on abusive language comments from unseen contexts. Topic models like U-TDLM, which express comments as a mixture of topics learnt from a corpus, allow unseen comments to trigger abusive language topics. While topic space representations tend to lose the exact context of a comment, combining them with Hate-BERT representations can give modest improvements over HateBERT or at the least, retain the performance of HateBERT. These results should fuel interest and motivate further developments in the generalisation of abusive language detection models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Eg., the topic associated with @realDonaldTrump. 3 https://redditscore.readthedocs.io/ en/master/tokenizing.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported partly by the french PIA project \"Lorraine Universit\u00e9 d'Excellence\", reference ANR-15-IDEX-04-LUE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Hate speech detection is not as easy as you may think: A closer look at model validation", |
|
"authors": [ |
|
{ |
|
"first": "Aym\u00e9", |
|
"middle": [], |
|
"last": "Arango", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Poblete", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3331184.3331262" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aym\u00e9 Arango, Jorge P\u00e9rez, and Barbara Poblete. 2019. Hate speech detection is not as easy as you may think: A closer look at model validation. In Proceed- ings of the 42nd International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR'19, page 45-54, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elisabetta", |
|
"middle": [], |
|
"last": "Fersini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Debora", |
|
"middle": [], |
|
"last": "Nozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco Manuel Rangel", |
|
"middle": [], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuela", |
|
"middle": [], |
|
"last": "Sanguinetti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--63", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S19-2007" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hatebert: Retraining bert for abusive language detection in english", |
|
"authors": [ |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jelena", |
|
"middle": [], |
|
"last": "Mitrovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Granitzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.12472" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, and Michael Granitzer. 2021. Hatebert: Retraining bert for abusive language detection in english. arXiv preprint arXiv:2010.12472.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Chris Tar, Yun hsuan Sung, Brian Strope, and Ray Kurzweil", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Sheng Yi Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicole", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lyn Untalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rhomni", |
|
"middle": [], |
|
"last": "Limtiaco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "St", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Guajardo-C\u00e9spedes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP demonstration", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Lyn Untalan Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, Yun hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. In EMNLP demonstration, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automated hate speech detection and the problem of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Warmsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Macy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingmar", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "512--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Confer- ence on Web and Social Media, ICWSM '17, pages 512-515.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Time of your hate: The challenge of time in hate speech detection on social media", |
|
"authors": [ |
|
{ |
|
"first": "Komal", |
|
"middle": [], |
|
"last": "Florio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Polignano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierpaolo", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Applied Sciences", |
|
"volume": "", |
|
"issue": "12", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3390/app10124180" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Komal Florio, Valerio Basile, Marco Polignano, Pier- paolo Basile, and Viviana Patti. 2020. Time of your hate: The challenge of time in hate speech detection on social media. Applied Sciences, 10(12).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Cross-domain labeled LDA for cross-domain text classification", |
|
"authors": [ |
|
{ |
|
"first": "Baoyu", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenwei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deqing", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fuzhen", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheng", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE International Conference on Data Mining, ICDM 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "187--196", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICDM.2018.00034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baoyu Jing, Chenwei Lu, Deqing Wang, Fuzhen Zhuang, and Cheng Niu. 2018. Cross-domain la- beled LDA for cross-domain text classification. In IEEE International Conference on Data Mining, ICDM 2018, Singapore, November 17-20, 2018, pages 187-196. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Cross-domain detection of abusive language online", |
|
"authors": [ |
|
{ |
|
"first": "Mladen", |
|
"middle": [], |
|
"last": "Karan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "\u0160najder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--137", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mladen Karan and Jan \u0160najder. 2018. Cross-domain detection of abusive language online. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 132-137, Brussels, Belgium. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Topically driven neural language model", |
|
"authors": [ |
|
{ |
|
"first": "Jey Han", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "355--365", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017. Topically driven neural language model. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 355-365, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Topic correlation analysis for cross-domain text classification", |
|
"authors": [ |
|
{ |
|
"first": "Lianghao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoming", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingsheng", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "998--1004", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://dl.acm.org/doi/10.5555/2900728.2900870" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lianghao Li, Xiaoming Jin, and Mingsheng Long. 2012. Topic correlation analysis for cross-domain text classification. In Proceedings of the Twenty- Sixth AAAI Conference on Artificial Intelligence, AAAI'12, page 998-1004. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "2020. tBERT: Topic models and BERT joining forces for semantic similarity detection", |
|
"authors": [ |
|
{ |
|
"first": "Nicole", |
|
"middle": [], |
|
"last": "Peinelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Liakata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7047--7055", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.630" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicole Peinelt, Dong Nguyen, and Maria Liakata. 2020. tBERT: Topic models and BERT joining forces for semantic similarity detection. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7047-7055, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Studying generalisability across abusive language detection datasets", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [ |
|
"Durairaj" |
|
], |
|
"last": "Swamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anupam", |
|
"middle": [], |
|
"last": "Jamatia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bj\u00f6rn", |
|
"middle": [], |
|
"last": "Gamb\u00e4ck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "940--950", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K19-1088" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steve Durairaj Swamy, Anupam Jamatia, and Bj\u00f6rn Gamb\u00e4ck. 2019. Studying generalisability across abusive language detection datasets. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 940-950, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Challenges and frontiers in abusive content detection", |
|
"authors": [ |
|
{ |
|
"first": "Bertie", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebekah", |
|
"middle": [], |
|
"last": "Tromble", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Margetts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Third Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3509" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the NAACL Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "88--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-2013" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Detection of Abusive Language: the Problem of Biased Datasets", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Ruppenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Kleinbauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "602--608", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1060" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Minnesota. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Topic modeling for short texts via word embedding and document correlation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Yi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "IEEE Access", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "30692--30705", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ACCESS.2020.2973207" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Yi, B. Jiang, and J. Wu. 2020. Topic modeling for short texts via word embedding and document corre- lation. IEEE Access, 8:30692-30705.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Towards generalisable hate speech detection: a review on obstacles and solutions", |
|
"authors": [ |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arkaitz", |
|
"middle": [], |
|
"last": "Zubiaga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.08886" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenjie Yin and Arkaitz Zubiaga. 2021. Towards gener- alisable hate speech detection: a review on obstacles and solutions. arXiv preprint arXiv:2102.08886.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Concept learning for crossdomain text classification: A general probabilistic framework", |
|
"authors": [ |
|
{ |
|
"first": "Fuzhen", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peifeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongzhi", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IJCAI International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1960--1966", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://dl.acm.org/doi/abs/10.5555/2540128.2540409" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fuzhen Zhuang, Ping Luo, Peifeng Yin, Qing He, and Zhongzhi Shi. 2013. Concept learning for cross- domain text classification: A general probabilistic framework. In IJCAI International Joint Conference on Artificial Intelligence, pages 1960-1966.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"content": "<table><tr><td>Datasets Number of</td><td>Average</td><td>Abuse</td></tr><tr><td>comments</td><td>comment</td><td>%</td></tr><tr><td/><td>length</td><td/></tr><tr><td>Train Test</td><td/><td/></tr><tr><td>HatEval 9000 3000</td><td>21.3</td><td>42.1</td></tr><tr><td>Waseem 8720 1090</td><td>14.7</td><td>26.8</td></tr><tr><td>Davidson 19817 2477</td><td>14.1</td><td>83.2</td></tr><tr><td>.</td><td/><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": ".", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Statistics of the datasets used (average comment length is calculated in terms of word numbers).", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Train set</td><td colspan=\"2\">In-corpus performance HateBERT U-TDLM</td><td>Cross-corpus test set</td><td colspan=\"3\">Cross-corpora performance HateBERT U-TDLM HateBERT + U-TDLM</td></tr><tr><td>HatEval</td><td>53.9\u00b11.7</td><td>41.5\u00b10.6</td><td>Waseem Davidson</td><td>66.5\u00b12.2 59.2\u00b12.5</td><td>55.5\u00b12.6 64.4\u00b12.3</td><td>67.8\u00b12.4 60.4\u00b11.4</td></tr><tr><td colspan=\"2\">Waseem 86.1\u00b10.4</td><td>73.7\u00b11.4</td><td>HatEval Davidson</td><td>55.8\u00b11.4 59.8\u00b13.6</td><td>36.7\u00b10.0 28.2\u00b12.4</td><td>55.4\u00b10.7 64.8\u00b11.8</td></tr><tr><td colspan=\"2\">Davidson 93.7\u00b10.2</td><td>75.6\u00b10.8</td><td>HatEval Waseem</td><td>51.8\u00b10.2 66.6\u00b13.0</td><td>50.5\u00b11.3 48.7\u00b13.3</td><td>51.8\u00b10.3 68.5\u00b12.1</td></tr><tr><td colspan=\"2\">Average 77.9</td><td>63.6</td><td/><td>60.0</td><td>47.3</td><td>61.5</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "presents the in-corpus and cross-corpora evaluation of the HateBERT and U-TDLM models.", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Macro average F1 scores (mean\u00b1std-dev) for in-corpus and cross-corpora abuse detection. The best in each row for the cross-corpora performance is marked in bold.", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>which report rel-</td></tr><tr><td>atively large improvements, as compared to Hate-</td></tr><tr><td>BERT, either with HateBERT+U-TDLM (train on</td></tr><tr><td>Waseem, test on Davidson) or only with U-TDLM</td></tr><tr><td>(train on HateEval, test on Davidson)</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>, respectively. For better interpretation,</td></tr><tr><td>topic names are manually assigned based on the</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>: U-TDLM trained on Waseem's train set (topic</td></tr><tr><td>names are assigned manually for interpretation).</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td colspan=\"3\">: Abusive comments in the target corpus, correctly classified by HateBERT+U-TDLM (Waseem</td></tr><tr><td colspan=\"3\">\u2192Davidson) and U-TDLM (HatEval \u2192Davidson). \"Source topics\" : topics that are assigned high weights by</td></tr><tr><td colspan=\"3\">U-TDLM trained on Source.</td></tr><tr><td>Topic</td><td>Names</td><td>Top words</td></tr><tr><td>id</td><td/><td/></tr><tr><td>3</td><td>Explicit</td><td>men, c*ck, d*ck, woman,</td></tr><tr><td/><td>abuse 1</td><td>picture, sl*t, s*ck, guy</td></tr><tr><td>7</td><td>Explicit</td><td>b*tch, ho*, n*gger, girl-</td></tr><tr><td/><td>abuse 2</td><td>friend, f*ck, shit, s*ck,</td></tr><tr><td/><td/><td>dumb</td></tr><tr><td>10</td><td>Politics</td><td>therickwilson, anncoulter,</td></tr><tr><td/><td>related</td><td>c*nt, commies, tr*nny,</td></tr><tr><td/><td/><td>judgejeanine, keitholber-</td></tr><tr><td/><td/><td>mann, donaldjtrumpjr</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td>: U-TDLM trained on HatEval's train set (topic</td></tr><tr><td>names are assigned manually for interpretation).</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |