{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:39.150072Z" }, "title": "Understanding and Interpreting the Impact of User Context in Hate Speech Detection", "authors": [ { "first": "Edoardo", "middle": [], "last": "Mosca", "suffix": "", "affiliation": {}, "email": "edoardo.mosca@tum.de" }, { "first": "T", "middle": [ "U" ], "last": "Munich", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Maximilian", "middle": [], "last": "Wich", "suffix": "", "affiliation": {}, "email": "maximilian.wich@tum.de" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "", "affiliation": {}, "email": "grohg@in.tum.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "As hate speech spreads on social media and online communities, research continues to work on its automatic detection. Recently, recognition performance has been increasing thanks to advances in deep learning and the integration of user features. This work investigates the effects that such features can have on a detection model. Unlike previous research, we show that simple performance comparison does not expose the full impact of including contextualand user information. By leveraging explainability techniques, we show (1) that user features play a role in the model's decision and (2) how they affect the feature space learned by the model. Besides revealing that-and also illustrating why-user features are the reason for performance gains, we show how such techniques can be combined to better understand the model and to detect unintended bias.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "As hate speech spreads on social media and online communities, research continues to work on its automatic detection. Recently, recognition performance has been increasing thanks to advances in deep learning and the integration of user features. This work investigates the effects that such features can have on a detection model. Unlike previous research, we show that simple performance comparison does not expose the full impact of including contextualand user information. By leveraging explainability techniques, we show (1) that user features play a role in the model's decision and (2) how they affect the feature space learned by the model. Besides revealing that-and also illustrating why-user features are the reason for performance gains, we show how such techniques can be combined to better understand the model and to detect unintended bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Communication and information exchange between people is taking place on online platforms at a continuously increasing rate. While these means allow everyone to express themselves freely at any time, they are massively contributing to the spread of negative phenomenons such as online harassment and abusive behavior. 
Among those, which are all to discourage, online hate speech has attracted the attention of many researchers due to its deleterious effects (Munro, 2011; Williams et al., 2020; Duggan, 2017) .", "cite_spans": [ { "start": 458, "end": 471, "text": "(Munro, 2011;", "ref_id": "BIBREF21" }, { "start": 472, "end": 494, "text": "Williams et al., 2020;", "ref_id": "BIBREF36" }, { "start": 495, "end": 508, "text": "Duggan, 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The extremely large volume of online content and the high speed at which new one is generated exclude immediately the chance of content moderation being done manually. This realization has naturally captured the attention of the Machine Learning (ML) field, seeking to craft automatic and scalable solutions (MacAvaney et al., 2019; Waseem et al., 2017; .", "cite_spans": [ { "start": 308, "end": 332, "text": "(MacAvaney et al., 2019;", "ref_id": "BIBREF14" }, { "start": 333, "end": 353, "text": "Waseem et al., 2017;", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Methods for detecting hate speech and similar abusive behavior have been thus on the rise, consistently improving in terms of performance and generalization (Schmidt and Wiegand, 2017; Mishra et al., 2019b) . However, even the current state of the art still faces limitations in accuracy and is yet not ready to be deployed in practice. Hate speech recognition remains an extremely difficult task (Waseem et al., 2017) , in particular when the expression of hate is implicit and hidden behind figures of speech and sarcasm.", "cite_spans": [ { "start": 157, "end": 184, "text": "(Schmidt and Wiegand, 2017;", "ref_id": "BIBREF25" }, { "start": 185, "end": 206, "text": "Mishra et al., 2019b)", "ref_id": "BIBREF18" }, { "start": 397, "end": 418, "text": "(Waseem et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Alongside language features, recent works have considered utilizing user features as an additional source of knowledge to provide detection models with context information (Fehn Unsv\u00e5g and Gamb\u00e4ck, 2018; Ribeiro et al., 2018) . As a general trend, models incorporating context exhibit improved performance compared to their pure textbased counterparts (Mishra et al., 2018 (Mishra et al., , 2019a . Nevertheless, the effect, which these additional features have on the model, has not been interpreted or understood yet. So far, models have mostly been compared only in terms of performance metrics. The goal of this work is to shed light on the impact generated by including user features-or more in general context-into hate speech detection methods. Our methodology heavily relies on a combination of modern techniques coming from the field of eXplainable Artificial Intelligence (XAI).", "cite_spans": [ { "start": 172, "end": 203, "text": "(Fehn Unsv\u00e5g and Gamb\u00e4ck, 2018;", "ref_id": "BIBREF7" }, { "start": 204, "end": 225, "text": "Ribeiro et al., 2018)", "ref_id": "BIBREF22" }, { "start": 352, "end": 372, "text": "(Mishra et al., 2018", "ref_id": "BIBREF16" }, { "start": 373, "end": 396, "text": "(Mishra et al., , 2019a", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that adding user and social context to models is the reason for performance gains. 
We also explore the model's learned features space to understand how such features are leveraged for detection. At the same time, we discover that models incorporating user features suffer less from bias in the text. Unfortunately, those same models contain a new type of bias that originates from adding user information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A limited amount of research has focused on applying XAI techniques to the hate speech recognition case. For instance, Wang (2018) adapts a number of explainability techniques from the computer vision and applies them to a hate speech classifier trained on . Feature occlusion was used to highlight the most relevant words for the final classifier prediction and activation maximization selected the terms that the classifier captured and judged as relevant at a dataset-level. Vijayaraghavan et al. (2019) constructs an interpretable multi-modal detector that uses text alongside social and cultural context features. The authors leverage attention scores to quantify the relevance of different input features. Wich et al. (2020) applies posthoc explainability on a custom dataset in German to expose and estimate the impact of political bias on hate speech classifiers. More in detail, left-and right-wing political bias within the training data is visualized via DeepSHAP-based explanations (Lundberg and Lee, 2017).", "cite_spans": [ { "start": 478, "end": 506, "text": "Vijayaraghavan et al. (2019)", "ref_id": "BIBREF29" }, { "start": 712, "end": 730, "text": "Wich et al. (2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Explainability for Recognition Models", "sec_num": "2.1" }, { "text": "MacAvaney et al. (2019) combines together multiple simple classifiers to assemble a transparent model. Risch et al. (2020) reviews and compares several explainability techniques applied to hate speech classifiers. Their experimentation includes popular post-hoc approaches such as LIME (Ribeiro et al., 2016) and LRP (Bach et al., 2015) as well as self-explanatory detectors (Risch et al., 2020) .", "cite_spans": [ { "start": 103, "end": 122, "text": "Risch et al. (2020)", "ref_id": "BIBREF24" }, { "start": 286, "end": 308, "text": "(Ribeiro et al., 2016)", "ref_id": "BIBREF23" }, { "start": 317, "end": 336, "text": "(Bach et al., 2015)", "ref_id": "BIBREF1" }, { "start": 375, "end": 395, "text": "(Risch et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Explainability for Recognition Models", "sec_num": "2.1" }, { "text": "For our use case, we apply post-hoc explainability approaches (Lipton, 2018). We use external techniques to explain models that would otherwise be black-boxes (Arrieta et al., 2020) . In contrast, transparent models are interpretable thanks to their intuitive and simple design.", "cite_spans": [ { "start": 159, "end": 181, "text": "(Arrieta et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Explainability for Recognition Models", "sec_num": "2.1" }, { "text": "Models have been continuously improving since the first documented step towards automatic hate speech detection Spertus (1997) . The evolution of recognition approaches has been favored by advances in Natural Language Processing (NLP) research (Mishra et al., 2019b) . For instance, s.o.t.a detectors like Mozafari et al. (2020) exploit highperforming language models such as BERT (Devlin et al., 2019) . 
A different research branch took an alternative path and explored the inclusion of social context alongside text. These additional features are usually referred to with the terms user features, context features, or social features. Some tried incorporating the gender (Waseem, 2016) and the profile's geolocation and language (Gal\u00e1n-Garc\u00eda et al., 2016) . Others instead utilized the user's number of followers or friends (Fehn Unsv\u00e5g and Gamb\u00e4ck, 2018) . Modeling users' social and conversational interactions via their corresponding graph was also shown to be rewarding (Mishra et al., 2019b; Cecillon et al., 2019) . Ribeiro et al. (2018) creates additional features by measuring properties like betweenness and eigenvector centrality. Mishra et al. (2018) and Mishra et al. (2019a) instead fed the graph directly to the model either embedded as matrix or via using graph convolutional neural network (Hamilton et al., 2017) .", "cite_spans": [ { "start": 112, "end": 126, "text": "Spertus (1997)", "ref_id": "BIBREF27" }, { "start": 244, "end": 266, "text": "(Mishra et al., 2019b)", "ref_id": "BIBREF18" }, { "start": 306, "end": 328, "text": "Mozafari et al. (2020)", "ref_id": "BIBREF20" }, { "start": 381, "end": 402, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 673, "end": 687, "text": "(Waseem, 2016)", "ref_id": "BIBREF32" }, { "start": 731, "end": 758, "text": "(Gal\u00e1n-Garc\u00eda et al., 2016)", "ref_id": "BIBREF9" }, { "start": 827, "end": 858, "text": "(Fehn Unsv\u00e5g and Gamb\u00e4ck, 2018)", "ref_id": "BIBREF7" }, { "start": 977, "end": 999, "text": "(Mishra et al., 2019b;", "ref_id": "BIBREF18" }, { "start": 1000, "end": 1022, "text": "Cecillon et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1025, "end": 1046, "text": "Ribeiro et al. (2018)", "ref_id": "BIBREF22" }, { "start": 1144, "end": 1164, "text": "Mishra et al. (2018)", "ref_id": "BIBREF16" }, { "start": 1169, "end": 1190, "text": "Mishra et al. (2019a)", "ref_id": "BIBREF17" }, { "start": 1309, "end": 1332, "text": "(Hamilton et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Context Features for Hate Speech Detection", "sec_num": "2.2" }, { "text": "While previous work explored the usage of a wide range of context features (Fehn Unsv\u00e5g and Gamb\u00e4ck, 2018), detection models have only been compared in terms of performance metrics. Besides accuracy, researchers have not focused on other changes that such features could have on the model. Our work shows that indeed this addition entails a large impact on the recognition algorithm's behavior and substantially changes its characteristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Features for Hate Speech Detection", "sec_num": "2.2" }, { "text": "In this section, we describe in detail the different datasets and detection models that we include in our interpretability-driven analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Previous research has produced several datasets to support further developments in the hate speech detection area (Founta et al., 2018; Warner and Hirschberg, 2012) . Some became relatively popular to benchmark and test new ideas and improvements in recognition techniques. For our experimentation, we pick the DAVIDSON and the WASEEM (Waseem and Hovy, 2016) datasets. 
The choice was motivated by their variety of speech classes and popularity as detection benchmarks.", "cite_spans": [ { "start": 114, "end": 135, "text": "(Founta et al., 2018;", "ref_id": "BIBREF8" }, { "start": 136, "end": 164, "text": "Warner and Hirschberg, 2012)", "ref_id": "BIBREF31" }, { "start": 335, "end": 358, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "Both benchmarks consist of a collection of tweets coupled with classification tasks with three possible classes. DAVIDSON contains \u223c 25, 000 tweets of which 1, 430 are labeled as hate, 19, 190 as offensive, and 4, 163 as neither . As classification outcomes in WASEEM in-stead, we have racism, sexism, and neither. The three classes contain 3, 378, 1, 970, and 11, 501 tweets respectively (Waseem and Hovy, 2016) . We were not able to retrieve the remaining 65 of the original 16, 914 samples.", "cite_spans": [ { "start": 389, "end": 412, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "We follow the same preprocessing steps for both datasets. First, terms belonging to categories like url, email, percent, number, user, and time are annotated via a category token. For instance, \"341\" is replaced by \"\". After that, we apply word segmentation and spell correction based on Twitter word statistics. Both methods and statistics were provided by the ekphrasis 1 text preprocessing tool (Baziotis et al., 2017) .", "cite_spans": [ { "start": 406, "end": 429, "text": "(Baziotis et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "In addition to the tweets that represent the text (or content) component of our input features, we also retrieve information about the tweet's authors and their relationships. In a similar fashion as done in Mishra et al. 2018, we construct a community graph G = (V, E) where each node represents a user and two nodes are connected if at least one of the two users follows the other one. We were able to retrieve |V | = 6, 725 users and |E| = 19, 597 relationships for DAVIDSON, while for WASEEM we have |V | = 2, 024 and |E| = 9, 955.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "The respective average node degrees are 2, 914 and 4, 918 and the overall graphs' densities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "D = 2 \u2022 |E| |V |(|V | \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "are 0.00087 and 0.00486 respectively. We immediately notice that both graphs are very sparse. In particular, we have 3, 393 users not connected to anyone in DAVIDSON and 927 in WASEEM. For reference, Mishra et al. 2018achieves a graph density of 0.0075 on WASEEM, with only \u223c 400 authors being solitary, i.e. with no connections. 
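For concreteness, these graph statistics can be reproduced with a few lines of code. The sketch below is illustrative rather than the implementation used in the paper; it assumes a networkx graph built from the retrieved follower relationships, with users and follow_pairs as placeholder inputs.

```python
import networkx as nx

# Community graph: one node per user, an undirected edge whenever at least
# one of the two users follows the other. `users` and `follow_pairs` are
# placeholders for the retrieved ids and relationships.
G = nx.Graph()
G.add_nodes_from(users)          # e.g. |V| = 6,725 for DAVIDSON
G.add_edges_from(follow_pairs)   # e.g. |E| = 19,597 for DAVIDSON

V, E = G.number_of_nodes(), G.number_of_edges()
density = 2 * E / (V * (V - 1))  # D = 2|E| / (|V|(|V| - 1)), equal to nx.density(G)
solitary = sum(1 for n in G.nodes if G.degree(n) == 0)  # users with no connections
print(f"density = {density:.5f}, solitary users = {solitary}")
```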
We consider this difference reasonable, as data availability decreases considerably over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "3.1" }, { "text": "Our experimentation and findings are based on the comparison of two detection models: one that relies solely on text features and one that additionally incorporates context features. To better capture their behavioral differences, we build them to be relatively simple and to share the same text-processing part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection Models", "sec_num": "3.2" }, { "text": "1 https://github.com/cbaziotis/ekphrasis The first model, shown in figure 1, computes the three classification probabilities based only on the tweets' content. The input text is fed to the model as Bag of Words (BoW), which is then processed by two fully connected layers. We refer to this model as the text model. The second model leverages information coming from three input sources: the tweet's text, the user's vocabulary, and the follower network. The first input is identical to what is fed to the text model. The second is constructed from all the tweets of the author in the dataset and aims to model their overall writing style. Concretely, we merge the tweets' BoW representations, i.e. we apply a logical-OR to their corresponding vectors. The third is the author's follower network and describes their surrounding online community. On a more technical note, this can be extracted as a row from the adjacency matrix of our community graph described in section 3.1. Note that state-of-the-art hate speech detectors use similar context features (Mishra et al., 2018, 2019a). We refer to this model as the social model.", "cite_spans": [ { "start": 1054, "end": 1074, "text": "(Mishra et al., 2018", "ref_id": "BIBREF16" }, { "start": 1075, "end": 1098, "text": "(Mishra et al., , 2019a", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Detection Models", "sec_num": "3.2" }, { "text": "As sketched in figure 2, the different input sources are initially processed separately in the model's architecture. After the first layer, the intermediate representations from the different branches are concatenated and fed to two more layers to compute the final output. Note that the text and social models have the same dimensions for their final hidden layer and can be seen as equivalent networks working on different inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection Models", "sec_num": "3.2" }, { "text": "We now describe our methodology in detail. Recall that our models differ precisely in the usage of user features. As we will see shortly, comparing them beyond accuracy measurements sheds light on their different properties and hence on the potential impact of incorporating context features. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Analysis", "sec_num": "4" }, { "text": "We apply the same training and testing procedure to all models and datasets. We keep 60% of the data for training and split the remainder equally between validation and test set, i.e. 20% each. Tables 1 and 2 report our results in terms of F1 scores for WASEEM (Waseem and Hovy, 2016) and DAVIDSON respectively. To increase our confidence in their validity, we average the performance over five runs with randomly picked train/validation/test sets. We observe different trends for the two datasets. 
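This evaluation protocol can be sketched as follows. The snippet is illustrative rather than the code behind the reported numbers; build_social_model, X, and y are placeholders for the classifier factory and the preprocessed features and labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def run_once(build_model, X, y, seed):
    # 60% train, 20% validation, 20% test, re-drawn for every run.
    X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.6, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=seed)
    model = build_model()
    model.fit(X_tr, y_tr)   # the validation split is held out for model selection
    return f1_score(y_te, model.predict(X_te), average=None)   # per-class F1

# Per-class F1 averaged over five random splits, as reported in Tables 1 and 2.
mean_f1 = np.mean([run_once(build_social_model, X, y, seed) for seed in range(5)], axis=0)
```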
On WASEEM, the social model considerably outperforms our text model (by 4.3%). The performance gain is general and not restricted to any single class. Quite surprisingly, our text model performs better on racist tweets than on sexist ones, although the sexism class is almost twice as large. This suggests that sexism is, at least in this case, somewhat harder to detect from the tweet content alone. In contrast, our social model shows an impressive improvement on the sexism class (almost 13%), suggesting the presence of detectable patterns in sexist users and their social interactions. On DAVIDSON, we observe only a modest improvement (1%). Moreover, the jump in performance is restricted to the hate class, which contains only a small number of samples. We believe this difference between the two datasets is to be expected given the lower amount of user data available for DAVIDSON. Considering these results, we focus on applying our techniques to the WASEEM dataset in the remainder of this paper. Nevertheless, the respective results on DAVIDSON can be found in appendix A. While on both datasets we do not outperform the current state of the art ", "cite_spans": [ { "start": 271, "end": 294, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 204, "end": 218, "text": "Tables 1 and 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Training and Performance", "sec_num": "4.1" }, { "text": "We now apply a first post-hoc explainability method. For each feature, we calculate its corresponding Shapley value (Shapley, 1953; Lundberg and Lee, 2017). That is, we quantify the relevance that each feature has for the prediction of a specific output. Shapley values have been shown, both theoretically and empirically, to be an ideal estimator of feature relevance (Lundberg and Lee, 2017).", "cite_spans": [ { "start": 115, "end": 130, "text": "(Shapley, 1953;", "ref_id": "BIBREF26" }, { "start": 131, "end": 154, "text": "Lundberg and Lee, 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Shapley Values Estimation", "sec_num": "4.2" }, { "text": "As exact Shapley values are exponentially complex to determine, we use accurate approximation methods as done in (Lundberg and Lee, 2017; Strumbelj and Kononenko, 2014). Figure 3 shows concrete examples in which Shapley values are calculated for both models on two test tweets from WASEEM. For our social model, we consider the user vocabulary and the follower network as single features for simplicity. Notably, the context is used by the social model and can play a significant role in its prediction. Hence, we can confirm the context features to be the reason for the performance gains. We can empirically exclude that the differences between the text and the social model architectures justify the jump in performance. Figure 3: Example of feature contributions, computed via Shapley value approximation, for our text and social models. In (a) and (c) we use as input the tweet \"I think Arquette is a dummy who believes it. Not a Valenti who knowingly lies.\". The sexist tweet refers to the actress Patricia Arquette, who spoke in favour of gender equality, and the feminist writer Jessica Valenti. Some words are missing in the plot as our BoW dimension is limited during preprocessing. In (b) and (d), we use the racist tweet \"These girls are the equivalent of the irritating Asian girls a couple of years ago. Well done, 7. #MKR\". 
The hashtag refers to the Australian cooking show \"My Kitchen Rules\".", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 3", "ref_id": null }, { "start": 724, "end": 732, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Shapley Values Estimation", "sec_num": "4.2" }, { "text": "We have seen that detection models can benefit from the inclusion of context features. We now focus on understanding why this is the case. Shapley values and more in general feature attribution methods can quantify how much single features contribute to the prediction. Yet, alone, they do not give us any intuition to answer our why-question. We look at the feature space learned by our models, which can be considered a global explainability technique. For our text model, we remove the last layer and feed the tweets to the remaining architecture. The output is a 50-dimensional embedding for each tweet. We employ the t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van der Maaten and Hinton, 2008) to reduce the embeddings to two dimensions for visualization purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Space Exploration", "sec_num": "4.3" }, { "text": "The resulting plot, in figure 4d , shows all the tweets in a single cluster. Racist tweets look more concentrated in one area than sexist ones, suggest-ing that sexism is somewhat harder to detect for the model. This result is coherent with our per-class performance scores.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 32, "text": "figure 4d", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Feature Space Exploration", "sec_num": "4.3" }, { "text": "We apply the same procedure to the social model. In this case, we visualize the hidden layer of each separate branch as well as the final hidden layer analogous to the text model. Not surprisingly, the tweet branch (figure 4a) looks very similar to the feature space learned by our text model. The user's vocabulary branch (figure 4b) instead shows the samples distributed in well-separated clusters. Notably, racist tweets have been restricted to one cluster and we can also observe pure-sexist and pureneither clusters. The follower network branch (figure 4c) looks similar though cluster separation is not as strong. Once more, we notice racism more concentrated than sexism, which is considerably more mixed with regular tweets. To some extent, this result is in line with the notion of homophily among racist users (Mathew et al., 2019) . Intuitively, being able to divide users into different clusters based on their behavior should be helpful for classification at later layers. This is confirmed by the combined feature space plot (figure 4e). Indeed, tweets are now structured in multiple clusters instead of a single one as for our text model. 
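The projection pipeline behind these plots can be sketched as follows. The snippet is illustrative rather than the original implementation; hidden_model stands for the trained network truncated after its last hidden layer, X_test and y_test for the preprocessed test tweets and labels, and the label order is assumed.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# `hidden_model` (placeholder) returns the 50-dimensional representation of the
# final hidden layer for each input tweet, i.e. the classifier without its
# output layer; `X_test` and `y_test` (placeholders) are the test data.
embeddings = hidden_model.predict(X_test)                  # shape (n_tweets, 50)
coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)

labels = np.asarray(y_test)
for label, name in enumerate(["neither", "racism", "sexism"]):  # assumed mapping
    mask = labels == label
    plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=name)
plt.legend()
plt.show()
```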
Also in this case, we observe several pure or almost-pure groups.", "cite_spans": [ { "start": 820, "end": 841, "text": "(Mathew et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Space Exploration", "sec_num": "4.3" }, { "text": "The corresponding visualizations and results for DAVIDSON can be found in appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Space Exploration", "sec_num": "4.3" }, { "text": "Explaining a Novel Tweet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "We have seen how different explainability techniques convey different types of information on the examined model. Computing Shapley values and visualizing the learned feature space can also be used in combination as they complement each other. If used together, they can both quantify the relevance of each feature as well as show how certain types of features are leveraged by the model to better distinguish between classes. So far, our explanations are relative to the datasets used for model training and testing. However, to better understand a classifier it should also be tested beyond its test set. This can be sim-ply done by feeding the model with a novel tweet. Via artificially crafting tweets, we can check the model's behavior in specific cases. For instance, we can inspect how it reacts to specific sub-types of hate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "Let us consider the anti-Islamic tweet \"muslims are the worst, together with their god\". If fed to our model, it is classified as racist with a 75% confidence following our expectations. Figures 5a and 5c show explanations for the tweet. We can see that the word \"muslim\" plays a big role by looking at its corresponding Shapley value. At the same time, the projection of the novel tweet onto the feature space shows how the sample is collocated together with the other racist tweets by the text model.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 205, "text": "Figures 5a and 5c", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "If we now change our hypothetical tweet to be anti-black-\"black people are the worst, together with their slang\"-we observe a different model behavior (figures 5b and 5d). In fact, now the tweet is not classified as racist. No word has a substantial impact on the prediction. We can also notice a slight shift of the sample in the features space, away from the racism cluster. If changing the target of the hate changes the prediction, then the model/dataset probably contains bias against that target. Model interpretability further reveals how and embedding in the text model's latent space of an islamophobic and a anti-black racist tweets. The two sentences had, according to our text model, the 75% and 24% probability of being racist respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "its behavior reacts to different targets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "We run the same experiment with our social model. This time, it correctly classifies the antiblack tweet as racist (55% confidence). 
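A probe of this kind can be scripted directly. The sketch below uses the shap library's model-agnostic KernelExplainer as one possible implementation of the Shapley-value approximation from section 4.2 (the paper does not specify its exact tooling); encode_bow, background_bow, and model are placeholders.

```python
import shap

# Placeholders: `encode_bow` maps a raw tweet to the model's BoW vector,
# `background_bow` is a small sample of training vectors used as the baseline,
# and `model.predict_proba` returns the three class probabilities.
tweet = "black people are the worst, together with their slang"
x = encode_bow(tweet).reshape(1, -1)

explainer = shap.KernelExplainer(model.predict_proba, background_bow)
shap_values = explainer.shap_values(x, nsamples=500)

# With the classic list-output API, shap_values[c][0, i] is the contribution of
# BoW feature i to class c for this tweet; large positive values push the
# prediction towards that class.
```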
This suggests that text bias could be mitigated by using models that do not only rely on the text input. However, the social model is much more sensitive to changes in the user-derived features. To test this, we feed the model the same tweet and only change the author that generated it. For a fair comparison, we pick one random user with other racist tweets, one random user with other sexist tweets, and one random user with no hateful tweets in the dataset. We refer to these users as racist, sexist, and regular users respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "Our crafted tweet is classified as racist when coming from a racist user (64%). However, it is instead judged non-hateful in both the other cases (12% and 19% for a sexist and user with no hate background respectively). Evidently, racist tweets also need some contribution from the social features to be judged as racist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "A very informative explanation comes again from both the Shapley values and the feature space exploration (figure 6). On the left side, we can see the Shapley value for the racist and regular users. Results relative to the sexist user are analogous to the regular user and reported in the supplementary material (A.3). All the words have a similar contribution to the racism class in all cases. However, the difference in the authors plays a substantial role in the decision. Only the racist user positively contributes to the racism class. On the right side of 6, we can see the embedding in the latent space for each case. Different input authors cause the tweet to be embedded in different clusters. Only in the first one the model actually considers the possibility of the tweet being racist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "Hence, while adding user-derived features might mitigate the effects of bias in the text, it generates a new form of bias that could discriminate users based on their previous behavior and hinder the model from classifying correctly hateful content. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted Behavioral Analysis:", "sec_num": "4.4" }, { "text": "In our work, we investigated the effects of user features in hate speech detection. In previous studies, this was done by comparing models based on performance metric. We have shown that post-hoc explainability techniques provide a much deeper understanding of the models' behavior. In our case, when applied to two models that differ specifically on the usage of context features, the in-depth comparison reveals the impact that such additional features can have. The two utilized techniques-Shapley values estimation and learned feature space explorationconvey different kinds of information. The first one quantifies how each feature plays a role but does not tell us what is happening in the background. The second one illustrates the model's perception of the tweets but does not provide any quantitative information for the prediction. Furthermore, we have seen that artificially crafting and modifying a tweet can be useful to examine the models' behavior in particular scenarios. 
In concrete exam-ples, the two approaches worked as bias detectors present in the text as well as in the user features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "We believe that analyzing detection models is vital for understanding how certain features shape the way data is processed. Accuracy alone is by no means a sufficient metric to decide which model to prefer. Our work shows that even models that perform significantly better can potentially lead to new types of bias. We urge researchers in the field to compare recognition approaches beyond accuracy to avoid potential harm to affected users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Data scarcity is still a main issue faced by current researchers, especially when it comes to context features. We believe that larger and more complete datasets will improve our understanding of how certain features interact and will help future research in advancing both in accuracy and bias mitigation. Figure 7 shows the feature space learned by our text model on DAVIDSON. Overall, the distribution looks similar as the one of WASEEM visualized in figure 4d. We can notice that hate tweets are extremely sparse and mixed with the offensive ones. This is reflected by the poor model performance on the hate class, possibly caused by the conceptual overlap that these two classes have. On the other hand, non-harmful tweets are mostly concentrated in one area of the plot, confirming the satisfactory F1 scored achieved. Figure 8 shows the feature space learned by our social model on DAVIDSON. As done for WASEEM, we report the plots both for the single branches as well as for their combination. The tweet branch (figure 8a) has a similar structure to figure 7. However, hateful tweets are also concentrated in a small portion of the space. This reflects the improved performance that the social model had on the hate class. This suggests that the information coming from the other input sources reinforces the signal backpropagated to the tweet branch, resulting in a less chaotic mixture of hateful and offensive tweets. The user vocabulary (figure 8b) and the follower network branch (figure 8c) do not present the same characteristics as seen on WASEEM. In this case, we do not have the data points separated into multiple clusters. The same goes for the overall learned feature space ( figure 8d) , where all the tweets are contained in one single cloud. This is consistent with what we observed in terms of F1 Scores. In contrast to what occurred on WASEEM, user features did not cause a substantial impact on the feature space on DAVIDSON and thus did not produce a large leap in performance.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 7", "ref_id": "FIGREF6" }, { "start": 825, "end": 833, "text": "Figure 8", "ref_id": null }, { "start": 1697, "end": 1707, "text": "figure 8d)", "ref_id": null } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Figure 6 compares the model's behavior on the same tweet but with different authors, one racist and one regular. For completeness, figure 9 shows the corresponding plots-Shapley values and embedding onto the features space-for the same tweet when generated by a sexist user. The result is analogous to the one obtained with the regular user. 
Also in this case the tweet is not classified as racist (12% confidence). The estimated Shapley values show a substantial impact of the user vocabulary against the racism class. The embedding onto the latent space shows once more that changing the author caused the tweet to embed in a different cluster, hence excluding the possibility of the content being classified correctly. ", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 139, "text": "figure 9", "ref_id": null } ], "eq_spans": [], "section": "A.3 Complement to Figure 6", "sec_num": null } ], "back_matter": [ { "text": "This paper is based on a joined work in the context of Edoardo Mosca's master's thesis (Mosca, 2020) .", "cite_spans": [ { "start": 87, "end": 100, "text": "(Mosca, 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", "authors": [ { "first": "Alejandro", "middle": [], "last": "Barredo Arrieta", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "D\u00edaz-Rodr\u00edguez", "suffix": "" }, { "first": "Javier", "middle": [ "Del" ], "last": "Ser", "suffix": "" }, { "first": "Adrien", "middle": [], "last": "Bennetot", "suffix": "" }, { "first": "Siham", "middle": [], "last": "Tabik", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Barbado", "suffix": "" }, { "first": "Salvador", "middle": [], "last": "Garc\u00eda", "suffix": "" }, { "first": "Sergio", "middle": [], "last": "Gil-L\u00f3pez", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Molina", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Benjamins", "suffix": "" } ], "year": 2020, "venue": "Information Fusion", "volume": "58", "issue": "", "pages": "82--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alejandro Barredo Arrieta, Natalia D\u00edaz-Rodr\u00edguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Al- berto Barbado, Salvador Garc\u00eda, Sergio Gil-L\u00f3pez, Daniel Molina, Richard Benjamins, et al. 2020. Ex- plainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward re- sponsible AI. Information Fusion, 58:82-115.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Binder", "suffix": "" }, { "first": "Gr\u00e9goire", "middle": [], "last": "Montavon", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Klauschen", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" } ], "year": 2015, "venue": "PloS one", "volume": "", "issue": "7", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Mon- tavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wojciech Samek. 2015. On pixel-wise explana- tions for non-linear classifier decisions by layer-wise relevance propagation. 
PloS one, 10(7).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis", "authors": [ { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" }, { "first": "Nikos", "middle": [], "last": "Pelekis", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Doulkeridis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "747--754", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 747-754.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Abusive language detection in online conversations by combining contentand graph-based features", "authors": [ { "first": "No\u00e9", "middle": [], "last": "Cecillon", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Labatut", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Dufour", "suffix": "" }, { "first": "Georges", "middle": [], "last": "Linar\u00e8s", "suffix": "" } ], "year": 2019, "venue": "ICWSM International Workshop on Modeling and Mining Social-Media-Driven Complex Networks", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "No\u00e9 Cecillon, Vincent Labatut, Richard Dufour, and Georges Linar\u00e8s. 2019. Abusive language detec- tion in online conversations by combining content- and graph-based features. In ICWSM International Workshop on Modeling and Mining Social-Media- Driven Complex Networks, volume 2, page 8. Fron- tiers.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "11", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Online harassment 2017", "authors": [ { "first": "Maeve", "middle": [], "last": "Duggan", "suffix": "" } ], "year": 2017, "venue": "Pew Research Center", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maeve Duggan. 2017. Online harassment 2017. Pew Research Center.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Effects of User Features on Twitter Hate Speech Detection", "authors": [ { "first": "Elise", "middle": [], "last": "Fehn Unsv\u00e5g", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" } ], "year": 2018, "venue": "Proc. 2nd Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "75--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elise Fehn Unsv\u00e5g and Bj\u00f6rn Gamb\u00e4ck. 2018. The Ef- fects of User Features on Twitter Hate Speech Detec- tion. In Proc. 2nd Workshop on Abusive Language Online, pages 75-85.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Large scale crowdsourcing and characterization of twitter abusive behavior", "authors": [ { "first": "Antigoni-Maria", "middle": [], "last": "Founta", "suffix": "" }, { "first": "Constantinos", "middle": [], "last": "Djouvas", "suffix": "" }, { "first": "Despoina", "middle": [], "last": "Chatzakou", "suffix": "" }, { "first": "Ilias", "middle": [], "last": "Leontiadis", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Stringhini", "suffix": "" }, { "first": "Athena", "middle": [], "last": "Vakali", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Sirivianos", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Kourtellis", "suffix": "" } ], "year": 2018, "venue": "Proc. 11th ICWSM", "volume": "", "issue": "", "pages": "491--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Proc. 11th ICWSM, pages 491- 500.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Supervised machine learning for the detection of troll profiles in twitter social network: Application to a real case of cyberbullying", "authors": [ { "first": "Patxi", "middle": [], "last": "Gal\u00e1n-Garc\u00eda", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Gaviria De La Puerta", "suffix": "" }, { "first": "Carlos", "middle": [ "Laorden" ], "last": "G\u00f3mez", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Santos", "suffix": "" }, { "first": "Pablo", "middle": [ "Garc\u00eda" ], "last": "Bringas", "suffix": "" } ], "year": 2016, "venue": "Logic Journal of the IGPL", "volume": "24", "issue": "1", "pages": "42--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patxi Gal\u00e1n-Garc\u00eda, Jos\u00e9 Gaviria de la Puerta, Car- los Laorden G\u00f3mez, Igor Santos, and Pablo Garc\u00eda Bringas. 
2016. Supervised machine learning for the detection of troll profiles in twitter social network: Application to a real case of cyberbullying. Logic Journal of the IGPL, 24(1):42-53.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Inductive representation learning on large graphs", "authors": [ { "first": "Will", "middle": [], "last": "Hamilton", "suffix": "" }, { "first": "Zhitao", "middle": [], "last": "Ying", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The mythos of model interpretability. Queue", "authors": [ { "first": "", "middle": [], "last": "Zachary C Lipton", "suffix": "" } ], "year": 2018, "venue": "", "volume": "16", "issue": "", "pages": "31--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zachary C Lipton. 2018. The mythos of model inter- pretability. Queue, 16(3):31-57.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "M", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lundberg", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "4765--4774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in neural information processing systems, pages 4765-4774.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Visualizing data using t-sne", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of machine learning research", "volume": "", "issue": "11", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hate speech detection: Challenges and solutions", "authors": [ { "first": "Sean", "middle": [], "last": "Macavaney", "suffix": "" }, { "first": "", "middle": [], "last": "Hao-Ren", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Katina", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Russell", "suffix": "" }, { "first": "Ophir", "middle": [], "last": "Goharian", "suffix": "" }, { "first": "", "middle": [], "last": "Frieder", "suffix": "" } ], "year": 2019, "venue": "PloS one", "volume": "", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. 
PloS one, 14(8).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Spread of hate speech in online social media", "authors": [ { "first": "Binny", "middle": [], "last": "Mathew", "suffix": "" }, { "first": "Ritam", "middle": [], "last": "Dutt", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Animesh", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 10th ACM conference on web science", "volume": "", "issue": "", "pages": "173--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Binny Mathew, Ritam Dutt, Pawan Goyal, and Ani- mesh Mukherjee. 2019. Spread of hate speech in on- line social media. In Proceedings of the 10th ACM conference on web science, pages 173-182.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Author profiling for abuse detection", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Marco", "middle": [ "Del" ], "last": "Tredici", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1088--1098", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yan- nakoudakis, and Ekaterina Shutova. 2018. Author profiling for abuse detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1088-1098.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Abusive language detection with graph convolutional networks", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Marco", "middle": [ "Del" ], "last": "Tredici", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "", "issue": "", "pages": "2145--2150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yan- nakoudakis, and Ekaterina Shutova. 2019a. Abu- sive language detection with graph convolutional networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, pages 2145-2150.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Tackling online abuse: A survey of automated abuse detection methods", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.06024" ] }, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019b. Tackling online abuse: A survey of automated abuse detection methods. arXiv preprint arXiv:1908.06024.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Master's thesis, Technical University of Munich. 
Advised and supervised by Maximilian Wich and Georg Groh", "authors": [ { "first": "Edoardo", "middle": [], "last": "Mosca", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edoardo Mosca. 2020. Explainability of hate speech detection models. Master's thesis, Technical Univer- sity of Munich. Advised and supervised by Maxim- ilian Wich and Georg Groh.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media", "authors": [ { "first": "Marzieh", "middle": [], "last": "Mozafari", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Farahbakhsh", "suffix": "" }, { "first": "No\u00ebl", "middle": [], "last": "Crespi", "suffix": "" } ], "year": 2020, "venue": "Studies in Computational Intelligence", "volume": "881", "issue": "", "pages": "928--940", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marzieh Mozafari, Reza Farahbakhsh, and No\u00ebl Crespi. 2020. A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media. Studies in Computational Intelligence, 881 SCI:928- 940.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The protection of children online: a brief scoping review to identify vulnerable groups", "authors": [ { "first": "R", "middle": [], "last": "Emily", "suffix": "" }, { "first": "", "middle": [], "last": "Munro", "suffix": "" } ], "year": 2011, "venue": "Childhood Wellbeing Research Centre", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily R Munro. 2011. The protection of children on- line: a brief scoping review to identify vulnerable groups. Childhood Wellbeing Research Centre.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Characterizing and detecting hateful users on twitter", "authors": [ { "first": "Pedro", "middle": [ "H" ], "last": "Manoel Horta Ribeiro", "suffix": "" }, { "first": "Yuri", "middle": [ "A" ], "last": "Calais", "suffix": "" }, { "first": "", "middle": [], "last": "Santos", "suffix": "" }, { "first": "A", "middle": [ "F" ], "last": "Virg\u00edlio", "suffix": "" }, { "first": "Wagner", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "", "middle": [], "last": "Meira", "suffix": "" } ], "year": 2018, "venue": "Twelfth international AAAI conference on web and social media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manoel Horta Ribeiro, Pedro H Calais, Yuri A Santos, Virg\u00edlio AF Almeida, and Wagner Meira Jr. 2018. Characterizing and detecting hateful users on twit- ter. In Twelfth international AAAI conference on web and social media.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Explaining the predictions of any classifier", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proc. 22nd ACM SIGKDD Intl. Conf. Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "1135--1144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\" Explain- ing the predictions of any classifier. In Proc. 22nd ACM SIGKDD Intl. Conf. 
Knowledge Discovery and Data Mining, pages 1135-1144.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Offensive language detection explained", "authors": [ { "first": "Julian", "middle": [], "last": "Risch", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Ruff", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Krestel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying", "volume": "", "issue": "", "pages": "137--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Risch, Robin Ruff, and Ralf Krestel. 2020. Offensive language detection explained. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 137-143.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A survey on hate speech detection using natural language processing", "authors": [ { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2017, "venue": "Proc. 5th Intl. Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proc. 5th Intl. Workshop on Natural Language Processing for Social Media, pages 1-10.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A value for n-person games", "authors": [ { "first": "Lloyd", "middle": [ "S" ], "last": "Shapley", "suffix": "" } ], "year": 1953, "venue": "Contributions to the Theory of Games", "volume": "2", "issue": "", "pages": "307--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lloyd S Shapley. 1953. A value for n-person games. Contributions to the Theory of Games, 2(28):307-317.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Smokey: Automatic recognition of hostile messages", "authors": [ { "first": "Ellen", "middle": [], "last": "Spertus", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Innovative Applications of Artificial Intelligence (IAAI)", "volume": "", "issue": "", "pages": "1058--1065", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Spertus. 1997. Smokey: Automatic recognition of hostile messages. In Proceedings of Innovative Applications of Artificial Intelligence (IAAI), pages 1058-1065.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Explaining prediction models and individual predictions with feature contributions. Knowledge and information systems", "authors": [ { "first": "Erik", "middle": [], "last": "\u0160trumbelj", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Kononenko", "suffix": "" } ], "year": 2014, "venue": "", "volume": "41", "issue": "", "pages": "647--665", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik \u0160trumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and information systems, 41(3):647-665.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Interpretable Multi-Modal Hate Speech Detection", "authors": [ { "first": "Prashanth", "middle": [], "last": "Vijayaraghavan", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Deb", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2019, "venue": "Intl. Conf.
Machine Learning AI for Social Good Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prashanth Vijayaraghavan, Hugo Larochelle, and Deb Roy. 2019. Interpretable Multi-Modal Hate Speech Detection. In Intl. Conf. Machine Learning AI for Social Good Workshop.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Interpreting neural network hate speech classifiers", "authors": [ { "first": "Cindy", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proc. 2nd Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "86--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cindy Wang. 2018. Interpreting neural network hate speech classifiers. In Proc. 2nd Workshop on Abusive Language Online, pages 86-92.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Detecting hate speech on the world wide web", "authors": [ { "first": "William", "middle": [], "last": "Warner", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the second workshop on language in social media", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the second workshop on language in social media, pages 19-26.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" } ], "year": 2016, "venue": "Proc. First Workshop on NLP and Computational Social Science", "volume": "", "issue": "", "pages": "138--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem. 2016. Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter. In Proc. First Workshop on NLP and Computational Social Science, pages 138-142.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Understanding abuse: A typology of abusive language detection subtasks", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.09899" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. arXiv preprint arXiv:1705.09899.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL student research workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter.
In Proceedings of the NAACL student research workshop, pages 88-93.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Impact of politically biased data on hate speech classification", "authors": [ { "first": "Maximilian", "middle": [], "last": "Wich", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", "volume": "", "issue": "", "pages": "54--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Wich, Jan Bauer, and Georg Groh. 2020. Impact of politically biased data on hate speech classification. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 54-64.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Hate in the machine: Anti-black and anti-muslim social media posts as predictors of offline racially and religiously aggravated crime", "authors": [ { "first": "Matthew", "middle": [ "L" ], "last": "Williams", "suffix": "" }, { "first": "Pete", "middle": [], "last": "Burnap", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Javed", "suffix": "" }, { "first": "Han", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sefa", "middle": [], "last": "Ozalp", "suffix": "" } ], "year": 2020, "venue": "The British Journal of Criminology", "volume": "60", "issue": "1", "pages": "93--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew L Williams, Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the machine: Anti-black and anti-muslim social media posts as predictors of offline racially and religiously aggravated crime. The British Journal of Criminology, 60(1):93-117.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Architecture of the text model." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Architecture of the social model." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Compared to the s.o.t.a. - Mishra et al. (2019a) on WASEEM and Mozafari et al. (2020) on DAVIDSON - our results are comparable and thus satisfactory for our purposes." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "WASEEM tweets, colored by label, in the feature space learned by our text model (d) and social model (a,b,c for the independent branches, e combined)." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "Feature contributions (Shapley values w.r.t. the racism class)." }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Feature contributions (w.r.t. the racism class) and embeddings of the islamophobic tweet in the social model's latent space. The two pairs of plots correspond to two predictions made with different users as input: a racist one (a,b, 64%) and a regular one (c,d, 19%)." }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "DAVIDSON tweets, colored by label, in the feature space learned by the text model." }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "Latent space visualization of our social model on DAVIDSON, colored by label. The features are extracted from the single branches before the concatenation: tweet (a), user's vocabulary (b), follower network (c).
The last plot (d) instead shows the final learned feature space, after all branches are combined and processed together. Feature contributions (w.r.t. the racism class) and embeddings of the islamophobic tweet in the social model's latent space; this pair of plots refers to the prediction made with a sexist author as input." }, "TABREF1": { "num": null, "type_str": "table", "content": "", "html": null, "text": "F1 Scores on Waseem and Hovy (2016)." }, "TABREF3": { "num": null, "type_str": "table", "content": "", "html": null, "text": "F1 Scores on DAVIDSON." } } } }