{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:28.413175Z"
},
"title": "Aggression and Misogyny Detection using BERT: A Multi-Task Approach",
"authors": [
{
"first": "Niloofar",
"middle": [],
"last": "Safi Samghabadi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sri City",
"location": {}
},
"email": "nsafisamghabadi@uh.edu"
},
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Pykl \u2666",
"suffix": "",
"affiliation": {},
"email": "srinivas.p@iiits.in"
},
{
"first": "Prerana",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {},
"email": "prerana.m@iiits.in"
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "Wipro Research Lab",
"institution": "",
"location": {}
},
"email": "amitava.das2@wipro.com"
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sri City",
"location": {}
},
"email": "tsolorio@uh.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent times, the focus of the NLP community has increased towards offensive language, aggression, and hate-speech detection. This paper presents our system for TRAC-2 shared task on \"Aggression Identification\" (sub-task A) and \"Misogynistic Aggression Identification\" (sub-task B). The data for this shared task is provided in three different languages-English, Hindi, and Bengali. Each data instance is annotated into one of the three aggression classes-Not Aggressive, Covertly Aggressive, Overtly Aggressive, as well as one of the two misogyny classes-Gendered and Non-Gendered. We propose an end-to-end neural model using attention on top of BERT that incorporates a multi-task learning paradigm to address both sub-tasks simultaneously. Our team, \"na14\", scored 0.8579 weighted F1-measure on the English sub-task B and secured 3 rd rank out of 15 teams for the task. The code and the model weights are publicly available at https://github.com/NiloofarSafi/TRAC-2.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent times, the focus of the NLP community has increased towards offensive language, aggression, and hate-speech detection. This paper presents our system for TRAC-2 shared task on \"Aggression Identification\" (sub-task A) and \"Misogynistic Aggression Identification\" (sub-task B). The data for this shared task is provided in three different languages-English, Hindi, and Bengali. Each data instance is annotated into one of the three aggression classes-Not Aggressive, Covertly Aggressive, Overtly Aggressive, as well as one of the two misogyny classes-Gendered and Non-Gendered. We propose an end-to-end neural model using attention on top of BERT that incorporates a multi-task learning paradigm to address both sub-tasks simultaneously. Our team, \"na14\", scored 0.8579 weighted F1-measure on the English sub-task B and secured 3 rd rank out of 15 teams for the task. The code and the model weights are publicly available at https://github.com/NiloofarSafi/TRAC-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media and the internet are overabundant with data. The number of users on the internet has increased by 83% from 2014 to 2019. In 2019, more than 500 million tweets and 4 billion Facebook messages were posted daily 2 . Social Media has become an important and influential means of communication as it is easily accessible and provides a lot of freedom to users. Some users misuse this by engaging in trolling, cyberbullying, or by sharing aggressive, hateful, misogynistic content. Aggressive words, abusive language, or hate-speech is used to harm the identity, status, mental health, or prestige of the victim (Beran and Li, 2005; Culpeper, 2011) . This type of anti-social behavior causes disharmony in society. Hence, it is becoming quite alarming, and it is crucial to address this problem. Aggression is a feeling of anger that results in hostile behavior and readiness to attack. According to Kumar et al. (2018c) , aggression can either be expressed in a direct, explicit manner (Overtly Aggressive) or an indirect, sarcastic manner (Covertly Aggressive). Hate-speech is used to attack a person or a group of people based on their color, gender, race, sexual orientation, ethnicity, nationality, religion (Nockleby, 2000) . Misogyny or Sexism is a subset of hate-speech (Waseem and Hovy, 2016) and targets the victim based on gender or sexuality (Davidson et al., 2017; Bhattacharya et al., 2020) . It is essential to identify aggression and hate-speech in social networks to protect online users against such attacks, but it is quite time-consuming to do so manually. Hence, social media companies and government agencies are focusing on building a system that can automate the identification process. However, it is difficult to draw a dis-1 These authors contributed equally. 2 https://blog.microfocus.com/how-muchdata-is-created-on-the-internet-each-day/ tinguishing line between acceptable content and aggressive/hateful content due to the subjectivity of the definitions and different perceptions of the same content by different people, which makes it harder to build an automated AI system. Facebook published its audit report 3 on civil rights, which explains its strategy to tackle abusive and hateful content. The report claims that building a complete automation system to detect hate-speech is not possible, and content moderation is unavoidable. This point brings many researchers to focus on building hate-speech/aggression detection systems since a large amount of such data is diffused in social networks. To this end, several workshops have been organized, including 'Abusive Language Online' (ALW) (Roberts et al., 2019) , 'Trolling, Aggression and Cyberbullying' (TRAC) (Kumar et al., 2018b) , and Semantic Evaluation (SemEval) shared task on Identifying Offensive Language in Social Media (OffensEval) . This paper presents our system for TRAC-2 Shared Task on \"Aggression Identification\" (sub-task A) and \"Misogynistic Aggression Identification\" (sub-task B), in which we propose a BERT (Devlin et al., 2018) based architecture to detect misogyny and aggression using a multi-task approach. The proposed model uses attention mechanism over BERT to get relative importance of words, followed by Fully-Connected layers, and a final classification layer for each sub-task, which predicts the class.",
"cite_spans": [
{
"start": 619,
"end": 639,
"text": "(Beran and Li, 2005;",
"ref_id": null
},
{
"start": 640,
"end": 655,
"text": "Culpeper, 2011)",
"ref_id": "BIBREF2"
},
{
"start": 907,
"end": 927,
"text": "Kumar et al. (2018c)",
"ref_id": "BIBREF12"
},
{
"start": 1220,
"end": 1236,
"text": "(Nockleby, 2000)",
"ref_id": "BIBREF17"
},
{
"start": 1285,
"end": 1308,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF28"
},
{
"start": 1361,
"end": 1384,
"text": "(Davidson et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 1385,
"end": 1411,
"text": "Bhattacharya et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 2632,
"end": 2654,
"text": "(Roberts et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 2705,
"end": 2726,
"text": "(Kumar et al., 2018b)",
"ref_id": "BIBREF11"
},
{
"start": 3024,
"end": 3045,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Hate-speech: The interest of NLP researchers in hatespeech, aggression, and sexism detection has increased recently. Kwok and Wang (2013) proposed a supervised ap-proach to detect anti-black hate-speech in social media platforms using Twitter data. They categorized the text into binary labels racist vs. non-racist and achieved a classification accuracy of 76%. Burnap and Williams (2015) utilized ensemble based classifier results to forecast cyberhate proliferation using statistical approaches. The classifier captured the grammatical dependencies between words in Twitter data to anticipate the behavior to give antagonistic responses. Nobata et al. (2016) curated a corpus of user comments for abusive language detection and resorted to machine learning based approaches to detect subtle hate-speech. Schmidt and Wiegand (2017) , give a detailed survey on hate-speech detection works. Gamb\u00e4ck and Sikdar (2017) used convolutional layers on word vectors to detect hate-speech. Other recent works (Zhang et al., 2018; Agrawal and Awekar, 2018; Dadvar and Eckert, 2018 ) also use deep learning based techniques to detect hate-speech. BERT Based approaches also have become popular recently (Nikolov and Radivchev, 2019; Mozafari et al., 2019; Risch et al., 2019) . Sexism: Recently, misogynistic and sexist comments, posts, or tweets on social media platforms have become quite predominant. Jha and Mamidi (2017) provided an analysis of sexist tweets and further categorize them as hostile, benevolent, or other. Sharifirad and Matwin (2019) also provided an in-depth analysis of sexist tweets and categorize them based on the type of harassment. Frenda et al. (2019) performed linguistic analysis to detect misogyny and sexism in tweets. Parikh et al. (2019) introduced the first work on multi-label classification for sexism detection and also provided the largest dataset on sexism categorization. They built a BERT based neural architecture with distributional and word level embeddings to perform the classification task. Aggression: The first Shared Task on Aggression Identification (Kumar et al., 2018a) aimed to identify aggressive tweets in social media posts and provided datasets in Hindi and English. Samghabadi et al. (2018) used lexical and semantic features along with logistic regression for the task and obtained 0.59 and 0.63 F1 scores on Hindi and English Facebook datasets, respectively. Orasan (2018) utilized machine learning (SVM, random forest) on word embeddings for aggressive language identification. Raiyani et al. (2018) used fully connected layers on highly pre-processed data. Aroyehun and Gelbukh (2018) used data augmentation along with deep learning for aggression identification and achieved 0.64 F1 score on the English dataset. Risch and Krestel (2018) also employed a similar technique and got 0.60 F1 score for English.",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "Kwok and Wang (2013)",
"ref_id": "BIBREF13"
},
{
"start": 363,
"end": 389,
"text": "Burnap and Williams (2015)",
"ref_id": "BIBREF1"
},
{
"start": 641,
"end": 661,
"text": "Nobata et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 807,
"end": 833,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF26"
},
{
"start": 891,
"end": 916,
"text": "Gamb\u00e4ck and Sikdar (2017)",
"ref_id": "BIBREF7"
},
{
"start": 1001,
"end": 1021,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 1022,
"end": 1047,
"text": "Agrawal and Awekar, 2018;",
"ref_id": null
},
{
"start": 1048,
"end": 1071,
"text": "Dadvar and Eckert, 2018",
"ref_id": "BIBREF3"
},
{
"start": 1193,
"end": 1222,
"text": "(Nikolov and Radivchev, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 1223,
"end": 1245,
"text": "Mozafari et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 1246,
"end": 1265,
"text": "Risch et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1394,
"end": 1415,
"text": "Jha and Mamidi (2017)",
"ref_id": "BIBREF8"
},
{
"start": 1516,
"end": 1544,
"text": "Sharifirad and Matwin (2019)",
"ref_id": "BIBREF27"
},
{
"start": 1650,
"end": 1670,
"text": "Frenda et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 1742,
"end": 1762,
"text": "Parikh et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 2093,
"end": 2114,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF10"
},
{
"start": 2217,
"end": 2241,
"text": "Samghabadi et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 2532,
"end": 2553,
"text": "Raiyani et al. (2018)",
"ref_id": "BIBREF20"
},
{
"start": 2769,
"end": 2793,
"text": "Risch and Krestel (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The datasets for this shared task are provided by (Bhattacharya et al., 2020) in three different languages: English, Hindi, and Bengali. For sub-task A, the data has been labeled with one out of three possible tags: Not Aggressive (NAG): Texts which are not aggressive. E.g. \"hats off brother\". Covertly Aggressive (CAG): Texts that express aggression in an indirect, sarcastic manner. E.g., \"You are not wrong, you are just ignorant.\".",
"cite_spans": [
{
"start": 50,
"end": 77,
"text": "(Bhattacharya et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Overtly Aggressive (OAG): Texts which express aggression in a direct, straightforward, and explicit way. E.g., \"Liberals are retards\". For sub-task B, there are two classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Gendered (GEN): Texts that target a person or a group of people based on gender, sexuality, or lack of fulfillment of stereotypical gender roles. E.g., \"Homosexuality should be banned\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Non-gendered (NGEN): Texts that are not gendered. E.g.. \"you are absolutely true bro...but even politicians supports them\". Although the perception of aggression and misogyny can vary from person to person, we found some annotations that are highly improbable. The following are some examples that are mislabeled as NAG:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"This lady from BJP is crazy this is how u react man such a foolish and ignorant lady\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"What a lousy moderator arnab is. Falthu show\",",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"Ha yaar bahut hi chutya movie tha.sab log keh raha tha badia movie tha isliye dekha bt bilkul jhaand tha\" (It was a stupid movie. Everyone was saying it is good so I saw but it is completely stupid)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"Brother puri movie bta di chutiya he kya\" (brother you spoiled the entire movie are you an idiot)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Some examples of comments mislabeled as NGEN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"true feminist is Cancer\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"Breif description but feminist is like urban terrorist and they will never understand\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"Feminists are the next threat to our country\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "\u2022 \"chutiya hai ye feminists\" (these feminists are idiots) Table 1 shows statistics over the train and validation data for both sub-tasks across all available languages. From this table, we can easily find out that for both sub-tasks A and B, the train and dev sets are highly skewed towards NAG and NGEN classes, respectively. Table 2 indicates the co-occurrence of sub-task A and sub-task B labels. NAG mostly co-occurs with NGEN. The ratio of GEN to NGEN in OAG is greater than that in NAG and CAG. Overall, in all three languages, we can observe that as the directness of aggression increases (NAG<CAG<OAG), the percentage of GEN examples also increases. In Hindi and Bengali, OAG examples are more likely to be tagged as GEN than NGEN. Based on these observations, we can say that these two sub-tasks are related.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 327,
"end": 334,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
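{
"text": "As a quick illustration of this trend, here is a minimal Python sketch (using the English train counts from Table 1; the variable names are ours, for illustration only) that computes the share of GEN examples within each aggression class:\n\n# GEN share per aggression class, English train split (counts from Table 1)\ncounts = {\n    'NAG': {'GEN': 134, 'NGEN': 3241},\n    'CAG': {'GEN': 35, 'NGEN': 418},\n    'OAG': {'GEN': 140, 'NGEN': 295},\n}\nfor label, c in counts.items():\n    share = c['GEN'] / (c['GEN'] + c['NGEN'])\n    print(f'{label}: {share:.1%} GEN')\n# NAG: 4.0% GEN, CAG: 7.7% GEN, OAG: 32.2% GEN -- the GEN share grows with the directness of aggression",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},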
{
"text": "As we saw that the sub-tasks are related to each other, we create a unified deep neural architecture, following a multitask approach. Figure 1 illustrates the overall architecture of our proposed model. Our proposed model consists of the following modules: BERT Layer: We pass the input sequence of tokens to the BERT model (Devlin et al., 2018) to extract contextualized information.",
"cite_spans": [
{
"start": 324,
"end": 345,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
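{
"text": "A minimal sketch of this feature-extraction step, assuming the HuggingFace transformers library (the paper does not name its implementation, and the variable names are ours):\n\nimport torch\nfrom transformers import BertModel, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nbert = BertModel.from_pretrained('bert-base-uncased')\n\nbatch = tokenizer(['hats off brother'], padding='max_length',\n                  truncation=True, max_length=200, return_tensors='pt')\nwith torch.no_grad():  # BERT stays frozen as a feature extractor (Section 4.1.)\n    hidden = bert(**batch).last_hidden_state  # (batch, 200, 768) contextualized token states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},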
{
"text": "Attention Layer: We feed the output of BERT layer to the attention mechanism proposed in Bahdanau et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": ". This layer computes the weighted sum of r = i \u03b1 i h i to aggregate hidden representations (h i ) of all tokens in a sequence to a single vector. To measure the relative importance of words, we calculate the attention weights \u03b1 i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 i = exp(score(h i , e)) \u03a3 i exp(score(h i , e))",
"eq_num": "(1)"
}
],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "where the score(.) function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(h i , e) = v T tanh(W h h i + b h )",
"eq_num": "(2)"
}
],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "where W h is the weight matrix, and v and b h are the parameters of the network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
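{
"text": "A minimal PyTorch sketch of Eqs. (1) and (2), i.e., additive attention pooling over the BERT token states (class and variable names are ours, not the authors'):\n\nimport torch\nimport torch.nn as nn\n\nclass AttentionPool(nn.Module):\n    def __init__(self, hidden_dim=768):\n        super().__init__()\n        self.proj = nn.Linear(hidden_dim, hidden_dim)  # W_h and b_h\n        self.v = nn.Linear(hidden_dim, 1, bias=False)  # v^T\n\n    def forward(self, h, mask=None):\n        # Eq. (2): score(h_i, e) = v^T tanh(W_h h_i + b_h)\n        scores = self.v(torch.tanh(self.proj(h))).squeeze(-1)\n        if mask is not None:  # ignore padded positions\n            scores = scores.masked_fill(mask == 0, float('-inf'))\n        alpha = torch.softmax(scores, dim=-1)  # Eq. (1)\n        return (alpha.unsqueeze(-1) * h).sum(dim=1)  # r = sum_i alpha_i h_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},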
{
"text": "We pass the output of the attention layer to Fully Connected (linear) layers for dimen-sion reduction. There are two linear layers with 500 and 100 neurons, respectively. Classification Layer: We feed the output of linear layers to two separate classification layers, one for predicting aggression class, and another for misogyny identification. For both cases, we use a linear layer with a softmax activation on top, which gives a probability score to the classes. The number of output neurons is three and two for sub-tasks A and B, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully-Connected Layers:",
"sec_num": null
},
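{
"text": "One possible end-to-end assembly of the modules above, continuing the sketches from earlier in this section (our assumption, not the authors' exact code; in particular, the ReLU activations between the linear layers are our choice):\n\nclass MultiTaskAggressionModel(nn.Module):\n    def __init__(self, bert, hidden_dim=768):\n        super().__init__()\n        self.bert = bert                       # frozen BERT feature extractor\n        self.attn = AttentionPool(hidden_dim)  # attention pooling (Eqs. 1-2)\n        self.fc = nn.Sequential(nn.Linear(hidden_dim, 500), nn.ReLU(),\n                                nn.Linear(500, 100), nn.ReLU())\n        self.head_a = nn.Linear(100, 3)  # sub-task A: NAG / CAG / OAG\n        self.head_b = nn.Linear(100, 2)  # sub-task B: GEN / NGEN\n\n    def forward(self, input_ids, attention_mask):\n        with torch.no_grad():  # BERT is not fine-tuned\n            h = self.bert(input_ids=input_ids,\n                          attention_mask=attention_mask).last_hidden_state\n        z = self.fc(self.attn(h, attention_mask))\n        return self.head_a(z), self.head_b(z)  # logits; softmax is applied in the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully-Connected Layers:",
"sec_num": null
},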
{
"text": "For pre-processing, we use the BERT tokenizer for text tokenization. Then, we truncate the posts to 200 tokens, and left-pad the shorter sequence with zeros. For initializing weights of the BERT layer, we use \"bert based uncased\" pre-trained weights for English and \"bert base multilingual cased\" for Hindi and Bengali. To compute the loss between predicted and actual labels, we use Binary Cross Entropy. We calculate the sum of losses for both sub-tasks A and B. Additionally, for addressing the imbalance problem in the corpora, we add information about class weights to the loss functions for both outputs. We update the network weights using Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e \u22125 ; however, we do not fine-tune the BERT layer. We train the model over 200 epochs using training data and save the best model based on the F1 score obtained on the validation set. We train our models on Nvidia Tesla P40 GPU having 24 GB memory, where each epoch takes around 1.5 minutes to be completed. The code and the model weights are publicly available 1 . Table 3 shows the weighted F1 score and accuracy of our system on all the sub-tasks. Weighted F1 score is used as the official metric to rank the participants by the organizers. Based on the table, misogyny is easier to detect as compared to aggression across all available languages. The possible reason could be its binary and relatively straightforward nature as compared to sub-task A, which includes three classes. Our best score is achieved on English subtask B, where we secured 3 rd rank out of 15 teams. Our system lags behind the best performance on EN-B (0.8715 F1), and BEN-B (0.9365 F1) by 0.0136 and 0.0159, respectively, which shows our system is competitive and comparable to them. Table 3 : Results of BERT model on all sub-tasks. Figure 2 illustrates the confusion matrices of sub-task A for all three languages. Overall, CAG examples are more likely to be wrongly predicted as NAG than OAG. This could be due to the lack of abusive or explicit words in CAG instances. We further investigate this possibility in Section 5.1. In Hindi, OAG-NAG confusion (100) is high and is significantly more than that in English and Bengali. The reason could be that for Hindi corpus, the majority of the train instances are tagged as NAG (56.35%), whereas in its test data, the majority of the instances are labeled as OAG (57.00%). Figure 3 shows the confusion matrices for sub-task B on all three languages. Similar to OAG-NAG, we can see that GEN-NGEN confusion for Hindi test data is higher than that in other languages. It can be explained by table 1, where we can see that for Hindi sub-task B, the distribution of classes across the test data is significantly different from the training and dev sets. Table 5 : Instances where predicted label seems more accurate than given label.",
"cite_spans": [
{
"start": 662,
"end": 683,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1076,
"end": 1083,
"text": "Table 3",
"ref_id": null
},
{
"start": 1774,
"end": 1781,
"text": "Table 3",
"ref_id": null
},
{
"start": 1824,
"end": 1832,
"text": "Figure 2",
"ref_id": null
},
{
"start": 2413,
"end": 2421,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2789,
"end": 2796,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setups",
"sec_num": "4.1."
},
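{
"text": "A sketch of this training configuration, continuing the model sketch above (the DataLoader and the exact class weights are our assumptions, and we use a class-weighted categorical cross-entropy as a close stand-in for the binary cross-entropy over one-hot labels reported in the paper):\n\nimport torch.optim as optim\n\n# Illustrative inverse-frequency class weights (English train split, Table 1)\nclass_weights_a = torch.tensor([1.0, 7.5, 7.8])  # NAG, CAG, OAG\nclass_weights_b = torch.tensor([1.0, 12.8])      # NGEN, GEN\nloss_fn_a = nn.CrossEntropyLoss(weight=class_weights_a)\nloss_fn_b = nn.CrossEntropyLoss(weight=class_weights_b)\n\nmodel = MultiTaskAggressionModel(bert)\noptimizer = optim.Adam(model.parameters(), lr=1e-5)\n\nfor epoch in range(200):\n    # train_loader: an assumed DataLoader yielding token ids, masks, and the two labels\n    for input_ids, attention_mask, y_a, y_b in train_loader:\n        logits_a, logits_b = model(input_ids, attention_mask)\n        loss = loss_fn_a(logits_a, y_a) + loss_fn_b(logits_b, y_b)  # summed sub-task losses\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setups",
"sec_num": "4.1."
},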
{
"text": "on all the sub-tasks. For sub-task A, the performance is least for CAG across all the languages, which shows that it is the most challenging aggression class to identify. OAG and CAG scores are least for English as compared to the other two languages because the percentage of training examples for those two classes is lower in English as compared to other languages. NAG is the easiest to detect in English and Bengali, whereas OAG is the easiest to detect in Hindi. With regards to sub-task B, the performance is better on NGEN than GEN for all the three languages. The difference between the F1 score on NGEN and GEN is significantly more in English than in Hindi and Bengali. This can be attributed to the lower percentage of GEN examples in English than in the other two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5."
},
{
"text": "We analyze the mistakes of our model on the validation set to see where it goes wrong. We found several instances where the actual tag is CAG, but our model classifies them as NAG. Some of those examples are listed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 \"Fat shaming is good. Why not?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 \"**Gay people rely on straight people to produce more gay people**\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 \"They have no right to live\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 \"Inko hospital bejo..ye mentally hille hue log han\" (Send them to hospital, they are mentally disturbed people.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 \"Bhai aap na sirf review kariye baki ki baatein na hi kare toh accha h ?\" (Brother you only do review, it's better of you don't talk about other things.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "From these examples, we can see that due to the indirect/sarcastic nature and lack of profanity in CAG, it is confused with NAG. This flags CAG as the most difficult class to detect. We also found some instances where the predicted labels seem more likely to be correct than the annotated labels. Table 5 shows such examples. In that, examples a-d are from sub-task A and are labeled as NAG, but as they include abusive and explicit words, the predicted label OAG seems more accurate. Examples e-g are labeled as GEN, but they are targeted towards a specific person not based on gender. So the model prediction NGEN is correct. Example h attacks a woman based on her gender, and hence the model predicts it as GEN.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1."
},
{
"text": "In this paper, we present our multi-task deep neural model to identify misogyny and aggression for three different corpora -English, Hindi, and Bengali. The analysis of the label co-occurrence across the two sub-tasks shows that aggression identification and misogyny identification are related. Analysis of the results shows that CAG is often confused with NAG and is the most challenging aggression class to detect. For future work, instead of employing BERT as a feature extractor, we plan to fine-tune it using the training data. We also plan to explore more sentiment features for better identification of the implicit forms of aggression (CAG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Agrawal, S. and Awekar, A. (2018). Deep learning for detecting cyberbullying across multiple social media platforms. In European Conference on Information Retrieval. Springer. Aroyehun, S. T. and Gelbukh, A. (2018). Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018). Bahdanau, D., Cho, K., et al. (2014) . Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv: 1409.0473. Beran, T. and Li, Q. (2005) . Cyber-harassment: A study of a new method for an old behavior. JECR, 32(3).",
"cite_spans": [
{
"start": 428,
"end": 450,
"text": "Cho, K., et al. (2014)",
"ref_id": null
},
{
"start": 564,
"end": 584,
"text": "T. and Li, Q. (2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliographical References",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Developing a multilingual annotated corpus of misogyny and aggression",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Dawer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.07428"
]
},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, S., Singh, S., Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020). Devel- oping a multilingual annotated corpus of misogyny and aggression. arXiv preprint arXiv:2003.07428.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making",
"authors": [
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2015,
"venue": "Policy & Internet",
"volume": "7",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burnap, P. and Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making. Pol- icy & Internet, 7(2).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Impoliteness: Using language to cause offence",
"authors": [
{
"first": "J",
"middle": [],
"last": "Culpeper",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Culpeper, J. (2011). Impoliteness: Using language to cause offence, volume 28. Cambridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cyberbullying detection in social networks using deep learning based models; a reproducibility study",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Eckert",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.08046"
]
},
"num": null,
"urls": [],
"raw_text": "Dadvar, M. and Eckert, K. (2018). Cyberbully- ing detection in social networks using deep learning based models; a reproducibility study. arXiv preprint arXiv:1812.08046.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated Hate Speech Detection and the Prob- lem of Offensive Language. In Proceedings of ICWSM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Online hate speech against women: Automatic identification of misogyny and sexism on twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ghanem",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Montes-Y G\u00f3mez",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Intelligent & Fuzzy Systems",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frenda, S., Ghanem, B., Montes-y G\u00f3mez, M., and Rosso, P. (2019). Online hate speech against women: Auto- matic identification of misogyny and sexism on twitter. Journal of Intelligent & Fuzzy Systems, 36(5).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using Convolutional Neural Networks to Classify Hate-speech",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "U",
"middle": [
"K"
],
"last": "Sikdar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamb\u00e4ck, B. and Sikdar, U. K. (2017). Using Convolu- tional Neural Networks to Classify Hate-speech. In Pro- ceedings of the First Workshop on Abusive Language On- line.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "When does a compliment become sexist? analysis and classification of ambivalent sexism using twitter data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jha, A. and Mamidi, R. (2017). When does a compliment become sexist? analysis and classification of ambivalent sexism using twitter data. In Proceedings of the Second Workshop on NLP and Computational Social Science.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Benchmarking Aggression Identification in Social Media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018a). Benchmarking Aggression Identification in So- cial Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018). Association for Computational Linguistics",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, et al., editors. (2018b). Proceedings of the First Workshop on Trolling, Aggression and Cyberbul- lying (TRAC-2018). Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Aggression-annotated corpus of hindi-english code-mixed data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Reganti",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Maheshwari",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Reganti, A. N., Bhatia, A., and Maheshwari, T. (2018c). Aggression-annotated corpus of hindi-english code-mixed data. CoRR, abs/1803.09402.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Locate the hate: Detecting Tweets Against Blacks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Twenty-Seventh AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, I. and Wang, Y. (2013). Locate the hate: Detecting Tweets Against Blacks. In Twenty-Seventh AAAI Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A bert-based transfer learning approach for hate speech detection in online social media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mozafari",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Farahbakhsh",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Crespi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Complex Networks and Their Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozafari, M., Farahbakhsh, R., and Crespi, N. (2019). A bert-based transfer learning approach for hate speech de- tection in online social media. In Proceedings of the In- ternational Conference on Complex Networks and Their Applications. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Nikolov-radivchev at SemEval-2019 task 6: Offensive tweet classification with BERT and ensembles",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Radivchev",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolov, A. and Radivchev, V. (2019). Nikolov-radivchev at SemEval-2019 task 6: Offensive tweet classification with BERT and ensembles. In Proceedings of the 13th International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Abusive Language Detection in Online User Content",
"authors": [
{
"first": "C",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., and Chang, Y. (2016). Abusive Language Detection in On- line User Content. In Proceedings of the 25th Interna- tional Conference on World Wide Web.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hate speech. Encyclopedia of the American constitution",
"authors": [
{
"first": "J",
"middle": [
"T"
],
"last": "Nockleby",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nockleby, J. T. (2000). Hate speech. Encyclopedia of the American constitution.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Aggressive language identification using word embeddings and sentiment features",
"authors": [
{
"first": "C",
"middle": [],
"last": "Orasan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orasan, C. (2018). Aggressive language identification us- ing word embeddings and sentiment features. Proceed- ings of the First Workshop on Trolling, Aggression and Cyberbullying.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-label categorization of accounts of sexism using a neural framework",
"authors": [
{
"first": "P",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Abburi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Badjatiya",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Chhaya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.04602"
]
},
"num": null,
"urls": [],
"raw_text": "Parikh, P., Abburi, H., Badjatiya, P., Krishnan, R., Chhaya, N., Gupta, M., and Varma, V. (2019). Multi-label cat- egorization of accounts of sexism using a neural frame- work. arXiv preprint arXiv:1910.04602.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fully connected neural network with advance preprocessor to identify aggression over Facebook and twitter",
"authors": [
{
"first": "K",
"middle": [],
"last": "Raiyani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gon\u00e7alves",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Quaresma",
"suffix": ""
},
{
"first": "V",
"middle": [
"B"
],
"last": "Nogueira",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raiyani, K., Gon\u00e7alves, T., Quaresma, P., and Nogueira, V. B. (2018). Fully connected neural network with ad- vance preprocessor to identify aggression over Facebook and twitter. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Aggression identification using deep learning and data augmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risch, J. and Krestel, R. (2018). Aggression identification using deep learning and data augmentation. In Proceed- ings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "hpiDEDIS at GermEval 2019: Offensive language identification using a german BERT model",
"authors": [
{
"first": "J",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stoll",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ziegele",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 15th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risch, J., Stoll, A., Ziegele, M., and Krestel, R. (2019). hpiDEDIS at GermEval 2019: Offensive language iden- tification using a german BERT model. In Proceedings of the 15th Conference on Natural Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [
"Kr."
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, S. M. and Zampieri, M. (2020). Evaluating aggression identification in social media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2020), Paris, France, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Proceedings of the Third Workshop on Abusive Language Online. Association for Computational Linguistics",
"authors": [
{
"first": "Sarah",
"middle": [
"T"
],
"last": "Roberts",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah T. Roberts, et al., editors. (2019). Proceedings of the Third Workshop on Abusive Language Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Ritual-uh at TRAC 2018 shared task: Aggression identification",
"authors": [
{
"first": "N",
"middle": [
"S"
],
"last": "Samghabadi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mave",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samghabadi, N. S., Mave, D., Kar, S., and Solorio, T. (2018). Ritual-uh at TRAC 2018 shared task: Aggres- sion identification. CoRR, abs/1807.11712.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Survey on Hate Speech Detection Using Natural Language Processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmidt, A. and Wiegand, M. (2017). A Survey on Hate Speech Detection Using Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "When a tweet is actually sexist. A more comprehensive classification of different online harassment categories and the challenges in NLP",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sharifirad",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharifirad, S. and Matwin, S. (2019). When a tweet is ac- tually sexist. A more comprehensive classification of dif- ferent online harassment categories and the challenges in NLP. CoRR, abs/1902.10584.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful symbols or hate- ful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Re- search Workshop.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Nakov, P., Rosenthal, S., Atanasova, P., Karadzhov, G., Mubarak, H., Derczynski, L., Pitenis, Z., and \u00c7\u00f6ltekin, c. (2020). SemEval-2020 Task 12: Multi- lingual Offensive Language Identification in Social Me- dia (OffensEval 2020). In Proceedings of SemEval.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tepper",
"suffix": ""
}
],
"year": 2018,
"venue": "Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Z., Robinson, D., and Tepper, J. (2018). Detect- ing Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network. In Lecture Notes in Com- puter Science. Springer Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Overall architecture of the proposed model.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Heatmap of confusion matrices of our best performing systems for sub-task A across all languages. Heatmap of confusion matrices of our best performing systems for sub-task B across all languages.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">language split total</td><td colspan=\"6\">NAG-GEN NAG-NGEN CAG-GEN CAG-NGEN OAG-GEN OAG-NGEN</td></tr><tr><td>English</td><td colspan=\"2\">train 4263 134 dev 1066 38</td><td>3241 798</td><td>35 9</td><td>418 108</td><td>140 26</td><td>295 87</td></tr><tr><td>Hindi</td><td colspan=\"2\">train 3984 32 dev 997 11</td><td>2213 567</td><td>79 26</td><td>750 185</td><td>550 115</td><td>260 93</td></tr><tr><td>Bengali</td><td colspan=\"2\">train 3826 129 dev 957 37</td><td>1949 485</td><td>129 31</td><td>769 187</td><td>454 123</td><td>395 94</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Data statistics.",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Class-wise F1 score for both sub-tasks across all three languages.",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>indicates the class-wise performance of our system</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Also Veere Di Wedding Fake Feminist Piece Of Shit... Maha Chutiyapay ki film he Kabir Singh... It's totally bullshit movie... (Kabir Singh is a very stupid film... it's totally bullshit movie...)",
"num": null
}
}
}
}