{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:23.614116Z" }, "title": "Understanding and Detecting Dangerous Speech in Social Media", "authors": [ { "first": "Ali", "middle": [], "last": "Alshehri", "suffix": "", "affiliation": { "laboratory": "Natural Langauge Processing Lab", "institution": "The University of British Columbia", "location": {} }, "email": "" }, { "first": "El", "middle": [], "last": "Moatez", "suffix": "", "affiliation": { "laboratory": "Natural Langauge Processing Lab", "institution": "The University of British Columbia", "location": {} }, "email": "" }, { "first": "Billah", "middle": [], "last": "Nagoudi", "suffix": "", "affiliation": { "laboratory": "Natural Langauge Processing Lab", "institution": "The University of British Columbia", "location": {} }, "email": "" }, { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "", "affiliation": { "laboratory": "Natural Langauge Processing Lab", "institution": "The University of British Columbia", "location": {} }, "email": "muhammad.mageeed@ubc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Social media communication has become a significant part of daily activity in modern societies. For this reason, ensuring safety in social media platforms is a necessity. Use of dangerous language such as physical threats in online environments is a somewhat rare, yet remains highly important. Although several works have been performed on the related issue of detecting offensive and hateful language, dangerous speech has not previously been treated in any significant way. Motivated by these observations, we report our efforts to build a labeled dataset for dangerous speech. We also exploit our dataset to develop highly effective models to detect dangerous content. Our best model performs at 59.60% macro F1, significantly outperforming a competitive baseline.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Social media communication has become a significant part of daily activity in modern societies. For this reason, ensuring safety in social media platforms is a necessity. Use of dangerous language such as physical threats in online environments is a somewhat rare, yet remains highly important. Although several works have been performed on the related issue of detecting offensive and hateful language, dangerous speech has not previously been treated in any significant way. Motivated by these observations, we report our efforts to build a labeled dataset for dangerous speech. We also exploit our dataset to develop highly effective models to detect dangerous content. Our best model performs at 59.60% macro F1, significantly outperforming a competitive baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The proliferation of social media makes it necessary to ensure online safety. Unfortunately, offensive, hateful, aggressive, etc., language continues to be used online and put the well-being of millions of people at stake. In some cases, it has been reported that online incidents have caused not only mental and psychological trouble to some users but have indeed forced some to deactivate their accounts or, in extreme cases, even commit suicides (Hinduja and Patchin, 2010) . Previous work has focused on detecting various types of negative online behavior, but not necessarily dangerous speech. 
In this work, our goal is to bridge this gap by investigating dangerous content. More specifically, we focus on direct threats in Arabic Twitter. A threat can be defined as \"a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.\" 1 This definition highlights two main aspects: (1) the speaker's intention of committing an act, which (2) he/she believes to be unfavorable to the addressee (Fraser, 1998) . We especially direct our primary attention to threats of physical harm. We build a new dataset for training machine learning classifiers to detect dangerous speech. Clearly, resulting models can be beneficial in protecting online users and communities alike.", "cite_spans": [ { "start": 449, "end": 476, "text": "(Hinduja and Patchin, 2010)", "ref_id": "BIBREF13" }, { "start": 1075, "end": 1089, "text": "(Fraser, 1998)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The fact that social media users can create fake accounts on online platforms makes it possible for such users to employ hostile and dangerous language without worrying about facing effective social nor legal consequences. This continues to put the responsibility on platforms such as Facebook and Twitter to maintain safe environments for their users. These networks have related guidelines and invest in fighting negative and dangerous content. Twitter, for example, prohibits any form of violence including threats of physical harm and promotion of terrorism. 2 However, due to the vast volume of communication on these platforms, it is not easy to detect harmful content manually. Our work aims at developing automated models \u2020 Both authors contributed equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "1 https://en.oxforddictionaries.com/ definition/threat 2 https://help.twitter.com/en/ rules-and-policies/twitter-rules to help alleviate this problem in the context of dangerous speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our focus on Arabic is motivated by the wide use of social media in the Arab world (Lenze, 2017) . Relatively recent estimates indicate that there are over 11M monthly active users as of March 2017, posting over 27M tweets each day (Salem, 2017) . An Arabic country such as Saudi Arabia has the highest Twitter penetration level worldwide, with 37% (Iqbal, 2019) . The Arabic language also presents interesting challenges primarily due to the dialectical variations cutting across all its linguistic levels: phonetic, phonological, morphological, semantic and syntactic (Farghaly and Shaalan, 2009) . Our work caters for dialectal variations in that we collect data using multidialectal seeds (Section 3.3.). Overall, we make the following contributions: 1) We manually curate a multi-dialectal dictionary of physical harm threats that can be used to collect data for training dangerous language models.", "cite_spans": [ { "start": 83, "end": 96, "text": "(Lenze, 2017)", "ref_id": "BIBREF18" }, { "start": 232, "end": 245, "text": "(Salem, 2017)", "ref_id": "BIBREF21" }, { "start": 349, "end": 362, "text": "(Iqbal, 2019)", "ref_id": "BIBREF14" }, { "start": 570, "end": 598, "text": "(Farghaly and Shaalan, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "2) We use our lexicon to collect a large dataset of threatening speech from Arabic Twitter, and manually annotate a subset of the data for dangerous speech. Our datasets are freely available online. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "3) We investigate and characterize threatening speech in Arabic Twitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We train effective models for detecting dangerous speech in Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4)", "sec_num": null }, { "text": "The remainder of the paper is organized as follows: In Section 2., we review related literature. Building dangerous lexica used to collect our datasets is discussed in Section 3.3.. We describe our annotation in Section 4.1.. We present our models in Section 5., and conclude in Section 6..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4)", "sec_num": null }, { "text": "Detection of offensive language in natural languages has recently attracted the interest of multiple researchers. However, the space of abusive language is vast and has its own nuances. Waseem et al. (2017) classify abusive language along two dimensions: directness (the level to which it is directed to a specific person or organization or not) and explicitness (the degree to which it is explicit). Jay and Janschewitz (2008) categorize offensive language to three categories: Vulgar, Pornographic, and Hateful. The Hateful category includes offensive language such as threats as well as language pertaining to class, race, or religion, among others. In the literature, these concepts are sometimes confused or even ignored altogether. In the following, we explore some of the relevant work on each of these themes.", "cite_spans": [ { "start": 401, "end": 427, "text": "Jay and Janschewitz (2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "Offensive Language. The terms offensive language and abusive language are commonly used interchangeably. They are cover terms that usually include all types of undesirable language such as hateful, racist, obscene, and dangerous speech. We review some work looking at these types of language here, with no specific focus on any of its forms. GermEval 2018 is a shared task on the Identification of Offensive Language in German proposed by Wiegand et al. (2018) . Their dataset consists of 8, 500 annotated tweets with two labels, \"offensive\" and \"non-offensive\". Another relevant shared task is the OffensEval (Zampieri et al., 2019) , which focuses on identifying and categorizing offensive language in social media. Very recently, an Arabic offensive language shared task is included in the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4). 3", "cite_spans": [ { "start": 439, "end": 460, "text": "Wiegand et al. (2018)", "ref_id": "BIBREF26" }, { "start": 610, "end": 633, "text": "(Zampieri et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "Hate Speech. Hate speech is a type of language that is biased, hostile, and malicious targeting a person or a group of people because of some of their actual or perceived innate characteristics (Gitari et al., 2015) . This type of harmful language received the most attention in the literature. 
Burnap and Williams (2014) investigate the manifestation and diffusion of hate speech and antagonistic content in Twitter in relation to situations that could be classified as 'trigger' events for hate crimes. Their dataset consists of 450K tweets collected during a two weeks window in the immediate aftermath of Drummer Lee Rigby's murder in Woolwich, UK. In Waseem (2016) , issues of annotation reliability are discussed. Authors examine whether the expertise level of annotators (e.g expert or amateur) and/or the type of information provided to the annotators, can improve the classification of hate speech. For this purpose, they extend the dataset of (Waseem and Hovy, 2016) with a set of about 7K tweets annotated by two types of CrowdFlower users: expert and amateur. They find that hate speech detection models trained on expert annotations outperform those trained on amateur annotations. This suggests that hate speech can be implicit and thus harder to detect by humans and machines alike. Another work by (Davidson et al., 2017) Garibo, 2019) . This shared task addresses the problem of multilingual detection of hate speech against immigrants and women in Twitter.", "cite_spans": [ { "start": 194, "end": 215, "text": "(Gitari et al., 2015)", "ref_id": "BIBREF11" }, { "start": 656, "end": 669, "text": "Waseem (2016)", "ref_id": "BIBREF25" }, { "start": 953, "end": 976, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF23" }, { "start": 1314, "end": 1337, "text": "(Davidson et al., 2017)", "ref_id": "BIBREF5" }, { "start": 1338, "end": 1351, "text": "Garibo, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "Obscene Language. Obscene speech includes vulgar and pornographic speech. A few research papers have looked at this kind of speech in social media (Singh et al., 2016; Mubarak et al., 2017; Alshehri et al., 2018) . Mubarak et al. (2017) present an automated method to create and expand a list of obscene words, for the purpose of detecting obscene language. Abozinadah 2015 Racism and Sexism. Kwok and Wang (2013) create a balanced dataset comprising 24, 582 of 'racist' and 'non-racist' tweets. Waseem and Hovy (2016) collect a set of 136K hate tweets based on a list of common terms and slurs pertaining ethnic minorities, gender, sexuality, and religion. Afterwards, a random set of 16K tweets are selected and manually annotated with three labels: 'racist', 'sexist', or \"neither\". Gamb\u00e4ck and Sikdar (2017) introduce a deep-learning-based Twitter hate speech text classification model. Using data from Waseem and Hovy (2016) with about 6.5K tweets, the model classifies tweets into four categories: 'sexist', 'racist', 'both sexist and racist', and 'neither'. Clarke and Grieve (2017) , using the same list, explore differences among racist and sexist tweets along three dimensions: interactiveness, antagonism, and attitude and find an overall significant difference between them.", "cite_spans": [ { "start": 147, "end": 167, "text": "(Singh et al., 2016;", "ref_id": "BIBREF22" }, { "start": 168, "end": 189, "text": "Mubarak et al., 2017;", "ref_id": "BIBREF19" }, { "start": 190, "end": 212, "text": "Alshehri et al., 2018)", "ref_id": "BIBREF2" }, { "start": 215, "end": 236, "text": "Mubarak et al. 
(2017)", "ref_id": "BIBREF19" }, { "start": 786, "end": 811, "text": "Gamb\u00e4ck and Sikdar (2017)", "ref_id": "BIBREF10" }, { "start": 1065, "end": 1089, "text": "Clarke and Grieve (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "Dangerous Language. Little work has been dedicated to detection and classification of dangerous language and threats. They are usually part of work on abusive and hate speech. This is to say that dangerous language has only been indirectly investigated within the NLP community. However, there is some research that is not necessarily computational in nature. For example, Gales (2011) investigates the correlation between interpersonal stance and the realization of threats by analyzing a corpus of 470 authentic threats. Ultimately, the goal of Gale's work is to help predict violence before it occurs. Hardaker and McGlashan (2016) , on the other hand, investigates the language surrounding threats of rape on Twitter. In their corpus, the authors find that women were the prime target of rape threats. In the rest of this paper, we explore the space and language of threats in Arabic Twitter. We now describe our lexicon and datasets. ", "cite_spans": [ { "start": 373, "end": 385, "text": "Gales (2011)", "ref_id": "BIBREF9" }, { "start": 605, "end": 634, "text": "Hardaker and McGlashan (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "Verb Dialect English Verb Dialect English Verb Dialect English G,M,R exterminate G,M contuse all blow up E,L kill * E,G mark G,L split * E,G give all skin * E,G,L,R burst all execute E,G,R boil * E,G,L disentangle G,M,R exterminate M slash all kill G,M,R destroy ** E,G,L,R drink E sound G,L,M,R assassinate E,G,L,R rip off all divide all rape E,G,L,R distort G,R smash * G,L,R pluck G cut off G,M smash E,L,M assault G,L slap E,G,L,M eliminate all wound G,L skin all cut G cut off all hit E,G,L,R pluck all whip E,G shoot all break all burn all stab G hit E,L,M,R smash E,G,L,R make fly ** E,G,L,R erase E,G,L demolish E,G,M,R torture M destroy G run over E torture E,G,M,R slaughter all slaughter E,G kill E,G,M,R blast E,G,M,R stone E,G,L,M destroy G,L,R smash", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "We define dangerous language as a statement of an intention to inflict physical pain, injury, or damage on someone in retribution for something done or not. This definition excludes threats that do not reflect physical harm on the side of the receiver end of the threat. The definition also excludes tongue in cheek whose real intention is to tease. An example of this later category is a threat made in the context of sports where it is common among fans to tease one another using metaphorical, string language (see Example # 6 in Section 3.3.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dangerous Language", "sec_num": "3.1." }, { "text": "We came up with a list of 57 verbs in their basic form that can be used literally or metaphorically to indicate physical harm (see table 1 ). This list is by no means exhaustive, although we did our best to expand it as much as possible.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dangerous Lexica", "sec_num": "3.2." 
}, { "text": "As such, the list covers the frequent verbs used in the threatening domain in Arabic. 4 These verbs are used in one or more of the following varieties: Egyptian, Gulf, Levantine, Maghrebi, and MSA (see table 2 for more details). Most of these verbs (n=50 out of 57) literally indicate physical harm. Examples are ('to stap') and ('to de-skin'). The rest are used (sometimes metaphorically) to indicate threatening, such as ('to pluck') and ('to mark') usually with a body part such as ('face') or ('head'). Finally, some of the verbs are used idiomatically, such as ('to drink someone's blood') and ('to erase/eliminate from the face of the earth'). Multiword expressions in our seed list can be found in Table 3 . To be able to collect data, we used our manually curated list to construct threat phrases indicating physical harm such as ('I kill you') and ('He breaks him/it'). That is, each phrase consists of a physical harm verb, a singular or plural first or third person subject, and a plural or singular second or third person object. This gives us the following pattern:", "cite_spans": [ { "start": 86, "end": 87, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 705, "end": 712, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Dangerous Lexica", "sec_num": "3.2." }, { "text": "1st/3rd (SG / PL) + threat verb + 2nd/3rd (SG / PL) Some of the phrases only differ on the basis of spelling due to dialectical variations. For example, the body part ('face') can be spelled as or in the plural form depending on the dialect. Another example is the verb ('kill'), which can also be spelled as in Egyptian and some other Arabic dialects. Manual search of some of the seed tokens in twitter suggests that patterns involving 3rd person subject are almost always not threats. The following are two illustrating examples of this non-threatening use: 1) 'If he doesn't score, Messi kills happiness in some people'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dangerous Lexica", "sec_num": "3.2." }, { "text": "'Only a dear friend can break one's heart' Thus, we decided to limit our list of phrases to 'direct' dangerous threats, which are phrases involving a singular or plural first person subject and singular or plural second person object as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2)", "sec_num": null }, { "text": "1st (SG/PL) + threat verb + 2nd (SG/PL) Examples of these direct threats include ('We rape you') and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2)", "sec_num": null }, { "text": "('I burn you'). Less dangerous threats such as (\"We hurt you (all)\") and ('I push you') are also not considered. Our motivation for not including these latter phrases even though they involve direct threats is that they indicate less danger and (more crucially) are more likely to be used metaphorically in Arabic. This resulted in a set of 286 direct and dangerous phrases, which constitute our list of 'dangerous' seeds. We make the list of 286 direct threats phrases available to the research community. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2)", "sec_num": null }, { "text": "We use the constructed 'dangerous' seed list to search Twitter using the REST API for two weeks resulting in a dataset of 2.8M tweets involving 'direct' threats as shown in Table 4. We then extract user ids from all users who contributed the REST API data (n = 399K users) and crawled their timelines (n = 705M tweets). 
We then acquire 107.5M tweets from the timelines, each of which carry one or more items from our 'dangerous' seed list. Combining these two datasets (the REST API dataset and dataset based on the timelines) results in a dataset consisting of 110.3M tweets as shown in Table 4 . In this work, we focus on exploiting the REST API dataset exclusively, leaving the rest of the data to future research.", "cite_spans": [], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Dataset", "sec_num": "3.3." }, { "text": "We first randomly sample 1K tweets from our REST API dataset. 5 Two of the authors annotated each tweet for being a threat ('dangerous') or not ('safe'). This sample annotation resulted in a Kappa (\u03ba) score of 0.57, which is fair according to Landis and Koch's scale (Landis and Koch, 1977) . The two annotators then held several discussion sessions to improve their mutual understanding of the problem and define some instructions as to how to label the data. We also added another random sample of 4K tweets (for a total size of 5K) to the annotation pool. After extensive revisions of the disagreement cases by the two annotators, the \u03ba score for the whole dataset (5K) was found to be at 0.90. The annotated dataset has a total of 1, 375 tweets in the 'dangerous' class and 3, 636 in the 'non-dangerous' class. Our overall agreed-upon instructions for annotations include the following:", "cite_spans": [ { "start": 267, "end": 290, "text": "(Landis and Koch, 1977)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.1." }, { "text": "\u2022 Textual threats combined with pleasant emojis such as and are not dangerous, as opposed to threat combined with less pleasant emojis such as . Thus, tweet 3 below should be coded as 'safe' while tweet 4 should be tagged as 'dangerous'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.1." }, { "text": "3) @user 'It goes with logic that I kill you ' 4) @user @user @user 'Move forward [in front of me] or else I stab you '", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.1." }, { "text": "\u2022 Mitigated threats with question marks or epistemic modals are dangerous unless they are combined with positive language or emojis such as Example 5 below. Note that the word Touha in Example 5 is an informal, friendly form for Arabic names such as FatHi or MamdouH.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.1." }, { "text": "'I am thinking of killing you, Touha '", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) @user", "sec_num": null }, { "text": "\u2022 Threats related to sports are not dangerous. That is because it is common to use verbs like (\"slaughter\") and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) @user", "sec_num": null }, { "text": "(\"rape\") among fans of rival teams to describe wins and losses, as in the following example. 
6) @user 'It's actually better that we 'rape' you in your stadium, among your fans'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) @user", "sec_num": null }, { "text": "\u2022 Ambiguous threats such as threats consisting of one word (as in Example 7 below) should be coded as 'dangerous': ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) @user", "sec_num": null }, { "text": "The fact that 'dangerous' tweets are not frequent in the dataset suggests that this phenomenon of dangerous speech is relatively rare in the Twitter domain. To further investigate the commonality of such a phenomenon, we extract Table 8 : Results from our models on TEST.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "the timelines of the authors of tweets in the dangerous class in the annotated dataset. Table 6 shows some descriptive statistics of the occurrence of dangerous seeds in their timelines. We can see from Table 6 that timelines contain on average 2, 313 tweets for each user, and there are on average 3.97 tweets in each timeline containing a dangerous seed token. This represents \u223c 0.17% of the tweets for each user. The average number of dangerous tweets is higher (n = 6) for users in the 75th percentile as opposed to n = 1 in the 25th percentile.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 203, "end": 210, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "To further understand dangerous language, we also analyze all the 5, 011 tweets from our annotated dataset. We identify a number of patterns in the data, cutting across both the 'dangerous' and 'safe' classes. We explain each of these patterns next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "Conditional threats: One common threatening pattern involves conditional statements where the consequent involves a physical threat by the speaker toward the addressee, and the antecedent is a conditional phrase involving deterrence of an action that can possibly be carried out by the addressee or someone else. The following are two examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "13) @user 'I slaughter you if you (F) do anything' 14) @user 'If he transfers, I will stab you hardly in front of the crowds'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "It is clear from Examples 13 and 14 that the threats are directed to a twitter user mentioned in the tweet. So these tweets are potentially part of ongoing conversations between the person who posted the tweet and the user mentioned in the body of the tweet. As Table 9 shows, \u223c 71.2% of tweets in our annotated dataset (across the 'dangerous' and 'safe' classes) contain mentions of other Twitter users. This percentage is higher within the dangerous class (%= 78).", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 9", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "Threats accompanied with commands: Another common pattern involves a command accompanying the threat as in Example 15 below. 
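To make the per-user timeline statistics of Table 6 concrete, the following is a minimal pandas sketch of how such figures can be derived from crawled timelines; the column names ('user_id', 'text') and the two placeholder seeds are illustrative assumptions rather than the authors' actual schema or lexicon.

```python
import re
import pandas as pd

# Two transliterated placeholder seeds; the released list has 286 Arabic phrases.
SEEDS = ["aqtlk (I kill you)", "adbHk (I slaughter you)"]
SEED_PATTERN = "|".join(re.escape(s) for s in SEEDS)

def timeline_stats(timelines: pd.DataFrame) -> pd.DataFrame:
    """Summarize, per user, how many timeline tweets contain a dangerous seed,
    in the spirit of Table 6 (expects columns 'user_id' and 'text')."""
    flagged = timelines.assign(has_seed=timelines["text"].str.contains(SEED_PATTERN))
    per_user = flagged.groupby("user_id").agg(
        n_tweets=("text", "size"),
        n_dangerous=("has_seed", "sum"),
    )
    per_user["pct_dangerous"] = 100.0 * per_user["n_dangerous"] / per_user["n_tweets"]
    return per_user.describe(percentiles=[0.25, 0.5, 0.75])
```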
These kinds of threats are more common in the dangerous than the safe class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "4.2." }, { "text": "'I say get out before I hit your face'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15) @user", "sec_num": null }, { "text": "Threats accompanied with questions: Another less common pattern is threats in the form of questions. Metaphorical threats: Many of the tweets involve metaphorical use of the phrases in our annotated data. The target domain of the majority of these metaphorical uses is either sports or relationships. Words such as 'kill', 'rape', and 'slaughter' are used to indicate 'wining' in sport or 'burn' to mean 'pain' or 'longing' in romantic relationships. Examples 23-24 illustrate these cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15) @user", "sec_num": null }, { "text": "'I would like to tell my Manchester (football club) fans that we will rape them tomorrow'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "20)", "sec_num": null }, { "text": "'I will burn you with love and put off (the fire on you) with affection'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "21)", "sec_num": null }, { "text": "Emojis: Another interesting phenomenon (see Table 9 ) is the frequent use of emojis, which are found in about 40% of the annotated dataset. This is not surprising as it helps participants mitigate (and hence better disambiguate the nature of) their threats. Table 7 shows the top most frequent emojis used in our REST API data. It is evident that most of the used emojis do not indicate friendliness, but rather have a threatening nature. This is also true of using expressive interjections such as hahaha, which is more common in the non-dangerous than the dangerous class. Additionally, as mentioned above, some expressions involve use of 'body parts' such as eyes, head, face, nose, etc.. These are found to occur significantly higher in the 'dangerous' class.", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 51, "text": "Table 9", "ref_id": "TABREF12" }, { "start": 258, "end": 265, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "21)", "sec_num": null }, { "text": "Conversational context: Finally, Table 7 also shows the top 10 most frequent seeds in our REST API dataset. All of these seeds involve a first singular person subject and a singular second person object, which indicate that many of these tweets containing dangerous seeds are part of one-toone conversations on Twitter.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "21)", "sec_num": null }, { "text": "Dangerous speech data. We use our 5, 011 annotated tweet dataset for training deep learning models on dangerous speech. The dataset comprises 3, 570 'safe' tweets and 1, 389 'dangerous' tweets. We first remove all the seeds in our lexicon since these were used in collecting the data. We then keep only tweets with at least two words, obtaining 4, 445 tweets with 3, 225 'safe' labels and 1, 220 'dangerous' tweet (see Table 10 ). We split this dataset into 80% training, 10% development, and 10% test.", "cite_spans": [], "ref_spans": [ { "start": 419, "end": 427, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Deep Learning Models", "sec_num": "5." }, { "text": "Offensive speech data. 
In one of our settings, we also use the offensive dataset released via the Offensive Shared Task 2020. 6 This offensive content dataset consists of 8000 tweets (1, 590 'offensive' and 6, 410 'non-offensive'). We use the offensive class data to augment our train split. Hence, we evaluate only on our test split where tweets are restricted to our dangerous gold tweets in the annotated dataset. We run this experiment as a way to test the utility of exploiting offensive tweets for enhancing dangerous language representation based on the assumption that dangerous speech is a subset of offensive language. However, Models. For the purpose of training deep learning models for detecting dangerous speech, we exploit the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) model. For all our models, we use the BERT-Base Multilingual Cased (Multi-Cased) model. 7 It is trained on Wikipedia for 104 languages (including Arabic) with 12 layers, 12 attention heads, 768 hidden units each and 110M parameters. Additionally, we further fine-tune an off-the-shelf trained BERT Emotion (BERT-EMO) from AraNet (Abdul-Mageed et al., 2019) on our dangerous speech task. BERT-EMO is trained with Google's BERT-Base Multilingual Cased model on 8 emotion classes exploiting Arabic Twitter data. We train all BERT models for 20 epochs with a batch size of 32, maximum sequence size of 50 tokens and learning rate up to 2e \u22125 . We identify best results on the development set, and report final results on the blind test set. As our baseline, we use the majority class in our training split. Note that since our dataset is not balanced, the majority class baseline is competitive (63.97% macro F 1 score). Also, importantly, due to the imbalance in class distribution, the macro F 1 score (the harmonic mean of precision and recall) is our metric of choice as it is more balanced than accuracy.", "cite_spans": [ { "start": 805, "end": 826, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Models", "sec_num": "5." }, { "text": "Results & Discussion. As Table 8 shows, the results demonstrate that all the models outperform the baseline and succeed in detecting the dangerous speech with F 1 scores between 53.42% and 59.60%. We also observe that training on the offensive dataset did not improve the results. On the contrary, augmenting training data with the offensive task tweets cause deterioration to 53.52% F 1 for BERT and 54.11% F 1 for BERT-Emotion.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Deep Learning Models", "sec_num": "5." }, { "text": "The best model for detecting dangerous tweets is BERT-Emotion when fine-tuned on our gold dangerous dataset. It obtains an accuracy level of 77.97% and F 1 score of 59.60%. We note that both accuracy and F 1 are significantly higher then the the baseline. As mentioned earlier, since our dataset is highly imbalanced, F 1 , rather than accuracy, should be used as the metric of choice for evaluation. As such, our models are significantly better than our baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Models", "sec_num": "5." }, { "text": "We have described our efforts to collect and manually label a dangerous speech dataset from a range of Arabic varieties. 
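As a rough illustration of the modeling setup in Section 5. (multilingual cased BERT, 20 epochs, batch size 32, 50-token inputs, learning rate 2e-5, macro F1 as the headline metric), the sketch below uses the Hugging Face transformers Trainer; it is not the authors' implementation, and the tokenized train/dev datasets are assumed to be prepared elsewhere.

```python
from sklearn.metrics import f1_score
from transformers import BertForSequenceClassification, Trainer, TrainingArguments

MODEL_NAME = "bert-base-multilingual-cased"  # BERT-Base Multi-Cased, as used in the paper

def compute_metrics(eval_pred):
    """Report macro F1 alongside accuracy; macro F1 is the headline metric
    because the safe/dangerous classes are imbalanced."""
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {
        "macro_f1": f1_score(labels, preds, average="macro"),
        "accuracy": float((preds == labels).mean()),
    }

def build_trainer(train_ds, dev_ds):
    """train_ds / dev_ds are assumed to be datasets already tokenized with the
    matching tokenizer, truncated/padded to 50 tokens, with integer 'labels'."""
    model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    args = TrainingArguments(
        output_dir="dangerous_speech_bert",
        num_train_epochs=20,
        per_device_train_batch_size=32,
        learning_rate=2e-5,
    )
    return Trainer(model=model, args=args, train_dataset=train_ds,
                   eval_dataset=dev_ds, compute_metrics=compute_metrics)
```

Calling .train() and then .evaluate() on the held-out split would then give an accuracy and macro F1 comparison of the kind reported in Table 8.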
Our work shows that dangerous speech is rare online, thus making it difficult to find data for training machine learning classifiers. However, we were able to collect and annotate a sizeable dataset. To accelerate research, we will make our data available upon request. Another contribution we made is developing a number of models exploiting our data. Our best models are effective, and can be deployed for detecting the rare, yet highly serious, phenomenon of dangerous speech. For future work, we plan to further explore contexts of use of dangerous language in social media. We also plan to explore other deep learning methods on the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "The concept of frequency here is based on native speaker knowledge of the language. The list was developed by the 3 authors, all of whom are native speakers of Arabic with multidialectal fluency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/UBC-NLP/ara_ dangspeech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "'I wish to burn you and throw you to dogs'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/google-research/bert/ blob/master/multilingual.md.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Aranet: A deep learning toolkit for arabic social media", "authors": [ { "first": "M", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "C", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "A", "middle": [], "last": "Hashemi", "suffix": "" }, { "first": "E", "middle": [ "M B" ], "last": "Nagoudi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.13072" ] }, "num": null, "urls": [], "raw_text": "Abdul-Mageed, M., Zhang, C., Hashemi, A., and Nagoudi, E. M. B. (2019). Aranet: A deep learning toolkit for arabic social media. arXiv preprint arXiv:1912.13072.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Detection of abusive accounts with arabic tweets", "authors": [ { "first": "A", "middle": [], "last": "Abozinadah", "suffix": "" }, { "first": "M", "middle": [ "A A J J" ], "last": "", "suffix": "" } ], "year": 2015, "venue": "International Journal of Knowledge Engineering", "volume": "1", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abozinadah, A., M. A. a. J. J. (2015). Detection of abu- sive accounts with arabic tweets. International Journal of Knowledge Engineering, Vol. 1, No. 2.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Think before your click: Data and models for adult content in arabic twitter", "authors": [ { "first": "A", "middle": [], "last": "Alshehri", "suffix": "" }, { "first": "A", "middle": [], "last": "Nagoudi", "suffix": "" }, { "first": "A", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Abdul-Mageed", "middle": [], "last": "", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "The 2nd Text Analytics for Cybersecurity and Online Safety", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alshehri, A., Nagoudi, A., Hassan, A., and Abdul-Mageed, M. (2018). 
Think before your click: Data and models for adult content in arabic twitter. The 2nd Text Analyt- ics for Cybersecurity and Online Safety (TA-COS-2018), LREC.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hate speech, machine classification and statistical modelling of information flows on twitter: Interpretation and communication for policy decision making", "authors": [ { "first": "P", "middle": [], "last": "Burnap", "suffix": "" }, { "first": "M", "middle": [ "L" ], "last": "Williams", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burnap, P. and Williams, M. L. (2014). Hate speech, ma- chine classification and statistical modelling of informa- tion flows on twitter: Interpretation and communication for policy decision making.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Dimensions of abusive language on twitter. Association for Computational Linguistics", "authors": [ { "first": "I", "middle": [], "last": "Clarke", "suffix": "" }, { "first": "J", "middle": [], "last": "Grieve", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clarke, I. and Grieve, J. (2017). Dimensions of abusive language on twitter. Association for Computational Lin- guistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "T", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "D", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "M", "middle": [], "last": "Macy", "suffix": "" }, { "first": "I", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.04009" ] }, "num": null, "urls": [], "raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Arabic natural language processing: Challenges and solutions", "authors": [ { "first": "A", "middle": [], "last": "Farghaly", "suffix": "" }, { "first": "K", "middle": [], "last": "Shaalan", "suffix": "" } ], "year": 2009, "venue": "ACM Transactions on Asian Language Information Processing (TALIP)", "volume": "8", "issue": "4", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farghaly, A. and Shaalan, K. (2009). Arabic natural language processing: Challenges and solutions. 
ACM Transactions on Asian Language Information Process- ing (TALIP), 8(4):1-22.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Threatening revisited. Forensic linguistics", "authors": [ { "first": "B", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 1998, "venue": "", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fraser, B. (1998). Threatening revisited. Forensic linguis- tics, 5(2).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Identifying interpersonal stance in threatening discourse: An appraisal analysis. Discourse Studies", "authors": [ { "first": "T", "middle": [], "last": "Gales", "suffix": "" } ], "year": 2011, "venue": "", "volume": "13", "issue": "", "pages": "27--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gales, T. (2011). Identifying interpersonal stance in threat- ening discourse: An appraisal analysis. Discourse Stud- ies, 13(1):27-46.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using convolutional neural networks to classify hate-speech", "authors": [ { "first": "B", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" }, { "first": "U", "middle": [ "K" ], "last": "Sikdar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gamb\u00e4ck, B. and Sikdar, U. K. (2017). Using convolu- tional neural networks to classify hate-speech. In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 85-90.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A lexicon-based approach for hate speech detection", "authors": [ { "first": "N", "middle": [ "D" ], "last": "Gitari", "suffix": "" }, { "first": "Z", "middle": [], "last": "Zuping", "suffix": "" }, { "first": "H", "middle": [], "last": "Damien", "suffix": "" }, { "first": "J", "middle": [], "last": "Long", "suffix": "" } ], "year": 2015, "venue": "Journal of Multimedia and Ubiquitous Engineering", "volume": "10", "issue": "4", "pages": "215--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gitari, N. D., Zuping, Z., Damien, H., and Long, J. (2015). A lexicon-based approach for hate speech detection. In- ternational Journal of Multimedia and Ubiquitous Engi- neering, 10(4):215-230.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "real men don't hate women': Twitter rape threats and group identity", "authors": [ { "first": "C", "middle": [], "last": "Hardaker", "suffix": "" }, { "first": "M", "middle": [], "last": "Mcglashan", "suffix": "" } ], "year": 2016, "venue": "Journal of Pragmatics", "volume": "91", "issue": "", "pages": "80--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hardaker, C. and McGlashan, M. (2016). 'real men don't hate women': Twitter rape threats and group identity. Journal of Pragmatics, 91:80 -93.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bullying, cyberbullying, and suicide. Archives of suicide research", "authors": [ { "first": "S", "middle": [], "last": "Hinduja", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Patchin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "14", "issue": "", "pages": "206--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinduja, S. and Patchin, J. W. (2010). Bullying, cy- berbullying, and suicide. 
Archives of suicide research, 14(3):206-221.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Twitter revenue and usage statistics in 2019", "authors": [ { "first": "M", "middle": [], "last": "Iqbal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iqbal, M. (2019). Twitter revenue and usage statistics in 2019. November.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The pragmatics of swearing", "authors": [ { "first": "T", "middle": [], "last": "Jay", "suffix": "" }, { "first": "K", "middle": [], "last": "Janschewitz", "suffix": "" } ], "year": 2008, "venue": "Journal of Politeness Research. Language, Behaviour", "volume": "4", "issue": "2", "pages": "267--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay, T. and Janschewitz, K. (2008). The pragmatics of swearing. Journal of Politeness Research. Language, Behaviour, Culture, 4(2):267-288.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Locate the hate: Detecting tweets against blacks", "authors": [ { "first": "I", "middle": [], "last": "Kwok", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kwok, I. and Wang, Y. (2013). Locate the hate: Detecting tweets against blacks. In AAAI.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Landis", "suffix": "" }, { "first": "G", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landis, J. R. and Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Social media in the arab world: Communication and public opinion in the gulf states", "authors": [ { "first": "N", "middle": [], "last": "Lenze", "suffix": "" } ], "year": 2017, "venue": "European Journal of Communication", "volume": "32", "issue": "1", "pages": "77--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenze, N. (2017). Social media in the arab world: Com- munication and public opinion in the gulf states. Euro- pean Journal of Communication, 32(1):77-79.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Abusive language detection on arabic social media", "authors": [ { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "52--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mubarak, H., Darwish, K., and Magdy, W. (2017). Abu- sive language detection on arabic social media. 
In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 52-56.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multilingual detection of hate speech against immigrants and women in twitter at semeval-2019 task 5: Frequency analysis interpolation for hate in speech detection", "authors": [ { "first": "Oscar", "middle": [], "last": "Garibo", "suffix": "" }, { "first": ".", "middle": [ "O" ], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "460--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar Garibo, i. O. (2019). Multilingual detection of hate speech against immigrants and women in twitter at semeval-2019 task 5: Frequency analysis interpolation for hate in speech detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 460-463.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The arab social media report 2017: Social media and the internet of things: Towards datadriven policymaking in the arab world", "authors": [ { "first": "F", "middle": [], "last": "Salem", "suffix": "" } ], "year": 2017, "venue": "", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salem, F. (2017). The arab social media report 2017: So- cial media and the internet of things: Towards data- driven policymaking in the arab world. Vol. 7.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Behavioral analysis and classification of spammers distributing pornographic content in social media", "authors": [ { "first": "M", "middle": [], "last": "Singh", "suffix": "" }, { "first": "D", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "S", "middle": [], "last": "Sofat", "suffix": "" } ], "year": 2016, "venue": "Social Network Analysis and Mining", "volume": "6", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Singh, M., Bansal, D., and Sofat, S. (2016). Behav- ioral analysis and classification of spammers distributing pornographic content in social media. Social Network Analysis and Mining, 6(1):41, Jun.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter", "authors": [ { "first": "Z", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "D", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL student research workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful symbols or hate- ful people? predictive features for hate speech detec- tion on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Understanding abuse: A typology of abusive language detection subtasks", "authors": [ { "first": "Z", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "T", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "D", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "I", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waseem, Z., Davidson, T., Warmsley, D., and Weber, I. (2017). Understanding abuse: A typology of abusive language detection subtasks. 
CoRR, abs/1705.09899.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter", "authors": [ { "first": "Z", "middle": [], "last": "Waseem", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the first workshop on NLP and computational social science", "volume": "", "issue": "", "pages": "138--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waseem, Z. (2016). Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and compu- tational social science, pages 138-142.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Overview of the germeval 2018 shared task on the identification of offensive language", "authors": [ { "first": "M", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "M", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "J", "middle": [], "last": "Ruppenhofer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiegand, M., Siegel, M., and Ruppenhofer, J. (2018). Overview of the germeval 2018 shared task on the iden- tification of offensive language.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)", "authors": [ { "first": "M", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "S", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "P", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "S", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "N", "middle": [], "last": "Farra", "suffix": "" }, { "first": "R", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.08983" ] }, "num": null, "urls": [], "raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). Semeval-2019 task 6: Identi- fying and categorizing offensive language in social me- dia (offenseval). arXiv preprint arXiv:1903.08983.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "build a dataset of over 1M tweets comprising the most recent 50 tweets of 255 users who has participated in swearing hashtags as well as the most recent 50 tweets of users in their network. As feature input to their classifiers, the authors extracted basic statistical measures from each tweet and reported 96% accuracy of adult content detection. Alshehri et al. (2018) build a dataset of adult content in Arabic twitter and their distributors. The work identifies geographical distribution of targets of adult content and develops models for detecting spreaders of such content.Alshehri et al. (2018) report 79% accuracy on detecting adult content." }, "TABREF1": { "text": "Our list of dangerous verbs. All= all dialects. E= Egyptian. G= Gulf. L= Levantine. M= MSA. R= Maghrebi.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF3": { "text": "Distribution of threat verbs across Arabic dialects.", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF5": { "text": "Multiword expressions in our seed list.", "num": null, "type_str": "table", "html": null, "content": "
Dataset | # of tweets
REST API | 2.8M
Timelines | 107.5M
ALL | 110.3M
" }, "TABREF6": { "text": "Breakdown of our 'dangerous' dataset.", "num": null, "type_str": "table", "html": null, "content": "
 | Safe | Dangerous | Total
Safe | 3,570 | 52 | 3,622
Dangerous | 70 | 1,319 | 1,389
Total | 3,640 | 1,371 | 5,011
" }, "TABREF7": { "text": "Annotator Agreement of 5011-tweet sample.", "num": null, "type_str": "table", "html": null, "content": "
Measure | Value
Avg. # timeline tweets | 2,313
Avg. # dangerous tweets / user | 3.97
St. dev. | 3.64
25th percentile | 1
50th percentile | 4
75th percentile | 6
Minimum | 1
Maximum | 23
9) @user 'Don't talk to me in this way, or else I hit you! Talking of (marrying) four women!'
10) @user @user 'The war will begin. By God, we will burn you down, you fags, you pigs, you traitors'
11) @user 'A donkey will always be a donkey. You didn't learn the lesson. We have to hit you on the back of your heads like kids. Are you humans or animals?'
12) @user 'Give me your address so I can come to you, and not only kill you but also dissect you'
" }, "TABREF8": { "text": "Descriptive statistics of the timeline data of 1, 370 users who contributed tweets classified as 'dangerous' in our annotated dataset.", "num": null, "type_str": "table", "html": null, "content": "
Seed | English | Emoji
I slaughter you
I kill you
I rape you
I hit you
I torture you
I hit/give you
I lash you
I stab you
I hurt you
I burn you
" }, "TABREF9": { "text": "Top 10 most frequent 'dangerous' seeds and emojis in our REST API dataset.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF12": { "text": "The frequency of some textual phenomena in our Annotated data. threats occurs in about 5% of our dangerous data as compared to 2.8% in the safe class. Unlike the examples above, the reason behind most of the 'question' threats is not particularly clear as they tend to be short, sometimes of one word. Interpretation of these threats requires more context, beyond the level of the tweet itself.", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF13": { "text": "as we see inTable 8, this measure did not result in any improvements on top of our dangerous models. In fact, it leads to model deterioration.", "num": null, "type_str": "table", "html": null, "content": "
 | Train | Dev | Test
#Safe | 2,727 | 244 | 254
#Dangerous | 852 | 189 | 179
Total | 3,579 | 433 | 433
" }, "TABREF14": { "text": "Distribution of dangerous and safe classes in our annotated dataset after normalization by removing seeds and one-word tweets.", "num": null, "type_str": "table", "html": null, "content": "" } } } }