{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:23.744513Z"
},
"title": "Developing a Multilingual Annotated Corpus of Misogyny and Aggression",
"authors": [
{
"first": "Shiladitya",
"middle": [],
"last": "Bhattacharya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jawaharlal Nehru University",
"location": {
"addrLine": "2 Dr",
"settlement": "New Delhi"
}
},
"email": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Akanksha",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Panlingua Language Processing LLP",
"location": {
"settlement": "New Delhi"
}
},
"email": ""
},
{
"first": "Akash",
"middle": [],
"last": "Bhagat",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yogesh",
"middle": [],
"last": "Dawer",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bornini",
"middle": [],
"last": "Lahiri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"settlement": "Kharagpur"
}
},
"email": ""
},
{
"first": "Atul",
"middle": [
"Kr"
],
"last": "Ojha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Panlingua Language Processing LLP",
"location": {
"settlement": "New Delhi"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we discuss the development of a multilingual annotated corpus of misogyny and aggression in Indian English, Hindi, and Indian Bangla as part of a project on studying and automatically identifying misogyny and communalism on social media (the ComMA Project). The dataset is collected from comments on YouTube videos and currently contains a total of over 20,000 comments. The comments are annotated at two levels-aggression (overtly aggressive, covertly aggressive, and non-aggressive) and misogyny (gendered and non-gendered). We describe the process of data collection, the tagset used for annotation, and issues and challenges faced during the process of annotation. Finally, we discuss the results of the baseline experiments conducted to develop a classifier for misogyny in the three languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we discuss the development of a multilingual annotated corpus of misogyny and aggression in Indian English, Hindi, and Indian Bangla as part of a project on studying and automatically identifying misogyny and communalism on social media (the ComMA Project). The dataset is collected from comments on YouTube videos and currently contains a total of over 20,000 comments. The comments are annotated at two levels-aggression (overtly aggressive, covertly aggressive, and non-aggressive) and misogyny (gendered and non-gendered). We describe the process of data collection, the tagset used for annotation, and issues and challenges faced during the process of annotation. Finally, we discuss the results of the baseline experiments conducted to develop a classifier for misogyny in the three languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The proliferation in Social Networking (platforms and users) has transformed our communities and the manner in which we communicate. One of the widespread impact can be seen through the hate that has been vocalised through platforms like Facebook, Twitter, and YouTube, where content sharing and communication are integrated together. The hatefulness itself is not a novel discovery but the intensity and hostility lying in the expression is a matter of grave concern. Articulation of hatefulness is often strong enough to break down or weaken the community ties. As the impact of such articulation travels from online to offline domain, resultant reactions frequently lead to incidents like organised riot-like situations and unfortunate casualties to ultimately broaden the scope of marginalisation of individuals as well as communities. Mr. Nilesh Christopher in his August, 2019 article published in the online news portal Wired has reported how one particular platform named TikTok came in handy to spread caste-based atrocities in Tamil Nadu, India. Banaji et al. (2019) in a research report on the assessment of WhatsApp abuses in India says in one of its key findings \"... in the case of violence against a specific group (Muslims, Christians, Dalits, Adivasis, etc.) there exists widespread, simmering distrust, hatred, contempt and suspicion towards Pakistanis, Muslims, Dalits and critical or dissenting citizens.... 
WhatsApp users in these demographics are predisposed both to believe disinformation and to share misinformation about discriminated groups in face-to-face and WhatsApp networks." The report also observes that, with the sweeping spread of WhatsApp, newer forms of virtual violence against women have evolved as well: "Forms of WhatsApp- and smartphone-enabled violence against women in India include unsolicited sexts, sex tapes, rape videos, surveillance, violation of privacy, bullying, forced confrontation with pornographic material, blackmail and humiliation." Thus, it has become all the more important for scholars and researchers to take the initiative and find methods to identify and compile the sources and articulations of aggression. It is for this reason that we have initiated the building of a sizeable corpus comprising YouTube comments, in order to understand misogyny and aggression in user-generated posts and to identify those automatically. In recent times, there have been a large number of studies exploring different aspects of hateful and aggressive language and their computational modelling and automatic detection, such as toxic comments 1 (Thain et al., 2017) , trolling (Cambria et al., 2010; Kumar et al., 2014; Mojica de la Vega and Ng, 2018; Mihaylov et al., 2015) , flaming / insults (Sax, 2016; Nitin et al., 2012) , radicalization (Agarwal and Sureka, 2015; Agarwal and Sureka, 2017) , racism (Greevy and Smeaton, 2004; Greevy, 2004; Waseem, 2016) , online aggression (Kumar et al., 2018a) , cyberbullying (Xu et al., 2012; Dadvar et al., 2013) , hate speech (Kwok and Wang, 2013; Djuric et al., 2015; Burnap and Williams, 2015; Davidson et al., 2017; Malmasi and Zampieri, 2017; Malmasi and Zampieri, 2018; Waseem and Hovy, 2016) , abusive language (Waseem et al., 2017; Nobata et al., 2016; Mubarak et al., 2017) and offensive language (Wiegand et al., 2018; Zampieri et al., 2019) . 
Prior studies have explored the use of aggressive and hateful language on different platforms such as Twitter (Xu et al., 2012; Burnap and Williams, 2015; Davidson et al., 2017; Wiegand et al., 2018) , Wikipedia comments 1 , and Facebook posts (Kumar et al., 2018a) . Our present study is one of the first studies to make use of YouTube comments for computational modelling of aggression and misogyny (although there have been quite a few studies on pragmatic aspects of YouTube comments such as (Garc\u00e9s-Conejos Blitvich, 2010; Garc\u00e9s-Conejos Blitvich et al., 2013; Lorenzo-Dus et al., 2011; Bou-Franch et al., 2012) ). Some of the earlier studies on computational modelling of misogyny have focussed almost exclusively on tweets ((Menczer et al., 2015; Frenda et al., 2019; Hewitt et al., 2016; Fersini et al., 2018b; Fersini et al., 2018a; Sharifirad and Matwin, 2019) ). Also, all of these studies have focussed on either English or European languages like Italian and Spanish. And as such this is the first study on computational modelling of misogyny in two of India's largest languages -Hindi and Bangla. In the following sections, we will discuss the corpus collection and annotation for this study and the development of a baseline misogyny classifier for the two languages.",
"cite_spans": [
{
"start": 2588,
"end": 2608,
"text": "(Thain et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 2620,
"end": 2642,
"text": "(Cambria et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 2643,
"end": 2662,
"text": "Kumar et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 2663,
"end": 2694,
"text": "Mojica de la Vega and Ng, 2018;",
"ref_id": "BIBREF25"
},
{
"start": 2695,
"end": 2717,
"text": "Mihaylov et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 2738,
"end": 2749,
"text": "(Sax, 2016;",
"ref_id": "BIBREF30"
},
{
"start": 2750,
"end": 2769,
"text": "Nitin et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 2787,
"end": 2813,
"text": "(Agarwal and Sureka, 2015;",
"ref_id": "BIBREF0"
},
{
"start": 2814,
"end": 2839,
"text": "Agarwal and Sureka, 2017)",
"ref_id": "BIBREF1"
},
{
"start": 2849,
"end": 2875,
"text": "(Greevy and Smeaton, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 2876,
"end": 2889,
"text": "Greevy, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 2890,
"end": 2903,
"text": "Waseem, 2016)",
"ref_id": "BIBREF35"
},
{
"start": 2924,
"end": 2945,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF17"
},
{
"start": 2962,
"end": 2979,
"text": "(Xu et al., 2012;",
"ref_id": "BIBREF37"
},
{
"start": 2980,
"end": 3000,
"text": "Dadvar et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 3015,
"end": 3036,
"text": "(Kwok and Wang, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 3037,
"end": 3057,
"text": "Djuric et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 3058,
"end": 3084,
"text": "Burnap and Williams, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 3085,
"end": 3107,
"text": "Davidson et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 3108,
"end": 3135,
"text": "Malmasi and Zampieri, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 3136,
"end": 3163,
"text": "Malmasi and Zampieri, 2018;",
"ref_id": "BIBREF22"
},
{
"start": 3164,
"end": 3186,
"text": "Waseem and Hovy, 2016)",
"ref_id": "BIBREF33"
},
{
"start": 3206,
"end": 3227,
"text": "(Waseem et al., 2017;",
"ref_id": "BIBREF34"
},
{
"start": 3228,
"end": 3248,
"text": "Nobata et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 3249,
"end": 3270,
"text": "Mubarak et al., 2017)",
"ref_id": null
},
{
"start": 3294,
"end": 3316,
"text": "(Wiegand et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 3317,
"end": 3339,
"text": "Zampieri et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 3452,
"end": 3469,
"text": "(Xu et al., 2012;",
"ref_id": "BIBREF37"
},
{
"start": 3470,
"end": 3496,
"text": "Burnap and Williams, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 3497,
"end": 3519,
"text": "Davidson et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 3520,
"end": 3541,
"text": "Wiegand et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 3586,
"end": 3607,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF17"
},
{
"start": 3854,
"end": 3869,
"text": "Blitvich, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 3870,
"end": 3907,
"text": "Garc\u00e9s-Conejos Blitvich et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 3908,
"end": 3933,
"text": "Lorenzo-Dus et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 3934,
"end": 3958,
"text": "Bou-Franch et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 4072,
"end": 4095,
"text": "((Menczer et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 4096,
"end": 4116,
"text": "Frenda et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 4117,
"end": 4137,
"text": "Hewitt et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 4138,
"end": 4160,
"text": "Fersini et al., 2018b;",
"ref_id": "BIBREF10"
},
{
"start": 4161,
"end": 4183,
"text": "Fersini et al., 2018a;",
"ref_id": "BIBREF9"
},
{
"start": 4184,
"end": 4212,
"text": "Sharifirad and Matwin, 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The use of a wide range of aggressive and hateful content on social media becomes interesting as well as challenging to study in context to India which is a secular nation with religious as well as linguistic and cultural heterogeneity. The present work is being carried out within the 'Communal and Misogynistic Aggression in Hindi-English-Bangla (ComMA) project'. The broader aim of this project is to understand how communal and sexually threatening misogynistic content is linguistically and structurally constructed by the aggressors and harassers and how it is evaluated by the other participants in the discourse. We will use the methods of micro-level discourse analysis, which will be a combination of conversation analysis and the interactional model used for (im)politeness studies, in order to understand the construction and evaluation of aggression on social media. We will use the insights from this study to develop a system that could automatically identify if some textual content is sexually threatening or communal on social media. The system will use multiple supervised text classification models that would be trained using a dataset annotated at 2 levels with labels pertaining to sexual and communal aggression as well as its evaluation by the other participants. The dataset will contain data in two of the largest spoken Indian languages -Hindi and Bangla -as well as code-mixed content in three languages -Hindi, Bangla and English. It will be collected from both social media (like Facebook and Twitter) as well as comments on blogs and news/opinion websites. The research presented in this paper focusses on one part of the project -automatic identification of misogyny.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context of the Study: The ComMA Project",
"sec_num": "2."
},
{
"text": "For the purpose of the project, online sources laden with comments were carefully selected. In general, extensively used social media platforms were considered primary sources because of their massive footfall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources",
"sec_num": "3.1."
},
{
"text": "Other than social media we also looked at some other popular streaming and sharing platforms. These were namely",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources",
"sec_num": "3.1."
},
{
"text": "\u2022 Facebook",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources",
"sec_num": "3.1."
},
{
"text": "\u2022 Twitter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources",
"sec_num": "3.1."
},
{
"text": "The actual sources of information ranged from public posts, tweets, video blogs (vlogs), news coverage and so on. We have considered posts and discussion on current popular political issues related to feminine beauty and grooming related vlogs, discussions on the life-choices of female celluloid stars and national policy related debates pertaining to female empowerment. In the process of collection throughout, we have collected only the public posts and comments on them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 YouTube",
"sec_num": null
},
{
"text": "Given the desired output of the project and its requirements, conversations and opinions were selected on the basis of the points mentioned below",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Criteria for Conversations",
"sec_num": "3.2."
},
{
"text": "In order to prepare a considerable dataset for training and looking at the requirement, only those posts and/or conversations were selected which saw a large user engagement in terms of the comments received on them. On an average, we collected data from those posts/videos which had received a minimum of 100-150 comments. This not only ensured a higher volume of data but also more relevant kind of data since it was observed that there is a greater possibility of the presence of aggressive and misogynous comments in longer stretches of conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Volume of Conversation",
"sec_num": "3.2.1."
},
{
"text": "As we mentioned earlier, the choice of source materials was not random. Rather, a selection criterion was followed. After copious deliberations with the members, it was determined that we can only entertain those sources where misogyny is more likely to be expressed. A list of domains of possible source materials was considered and to name a few of those included the following - ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relevance Criterion",
"sec_num": "3.2.2."
},
{
"text": "India is a multilingual nation, therefore, it was not surprising to find content from any one source expressed in multiple languages. As such during the initial process of data collection from designated sources we needed to carefully separate content in different languages. Therefore, a language identification task was taken up with the native speakers A separate task was also carried out to separate Bangladeshi and Indian varieties of Bangla since the two varieties differ substantially in the choice of lexicon as well as morpho-syntactic structures. At this point of time, we included only the Indian variety of Bangla in the dataset since we did not have sufficient instances of the Bangladeshi variety to be useful in the present task and mixing up the two varieties would have only made the dataset noisier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language",
"sec_num": "3.2.3."
},
{
"text": "We are working to further expand the dataset and as we collect and annotate more instances of Bangladeshi variety of Bangla, we will include that in the future releases of the dataset. The code-mixed English-Hindi and English-Bangla comments were separated out. The process of identification involved carefully analysed linguistically relevant information such as peculiar lexical choice, unique phonetic representation of chosen lexical items and regional colloquial usage. This manual annotation of languages and varieties were used to develop an automatic language identification system for these languages. This system was developed using Support Vector Machines and uses word trigrams and character 5-grams for making the prediction about the language of the content. It achieved an F-score of 0.93 and has worked reasonably well for automatically classifying content into one of the languages before being sent to annotators or even misogyny and aggression classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language",
"sec_num": "3.2.3."
},
{
"text": "In this section, we present the detailed guidelines for annotating the text from social media with information about aggression and misogyny. It gives a description of these categories and the features and, how those were employed during the annotation process. All annotations have been carried out at the level where the annotation target was a complete post, a comment or any one unit of the discourse. We would like to mention here that all of the data are represented as they were from the actual posts/sources. The authors and the project members do not bear ill feeling to people/names mentioned in the examples. Also, we do not endorse such aggressive and misogynistic language as one may find in the examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Aggression Tagset 2",
"sec_num": "4."
},
{
"text": "The aggression annotation was carried out using the aggression tagset (discussed in (Kumar et al., 2018b) ). The tagset is reproduced in Table 1 . ",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Kumar et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Aggression Tagset 2",
"sec_num": "4."
},
{
"text": "Misogyny identification is a binary classification task and the labels that we use for the task (Table 2) as well as the detailed guidelines (as developed and used by the annotators) are discussed below.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 105,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "The Misogyny Tagset",
"sec_num": "5."
},
{
"text": "Gendered or Misogynous NGEN Non-gendered or Non-misogynous ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAG ATTRIBUTE GEN",
"sec_num": null
},
{
"text": "This refers to such cases where verbal aggression aimed towards",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gendered or Misogynous (GEN)",
"sec_num": "5.1."
},
{
"text": "\u2022 the stereotypical gender roles of the victim as well as the aggressor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gendered or Misogynous (GEN)",
"sec_num": "5.1."
},
{
"text": "\u2022 aggressive reference to one's sexuality and sexual orientation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gendered or Misogynous (GEN)",
"sec_num": "5.1."
},
{
"text": "\u2022 attacks the victim because of/by referring to her/his gender (includes homophobic and transgender attacks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gendered or Misogynous (GEN)",
"sec_num": "5.1."
},
{
"text": "\u2022 includes attack against the victim owing to not fulfilling gender roles assigned to them or fulfilling the roles assigned to another gender Some of the examples of this class are given below. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gendered or Misogynous (GEN)",
"sec_num": "5.1."
},
{
"text": "The text which is not gendered will be marked not gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-gendered or Non-Misogynous (NGEN)",
"sec_num": "5.2."
},
{
"text": "This tag was employed in rare instances where it was not possible to decide whether the text is GEN or NGEN. It was not included in the final tagged document. It only served as an intermediary tag for flagging and resolving really ambiguous and unclear instances. 4 For the sake of clarity and removing ambiguities in the annotation guidelines, an additional set of guidelines were formulated (as a result of discussion with the annotators). They are reproduced in the following sections.",
"cite_spans": [
{
"start": 264,
"end": 265,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unclear (UNC)",
"sec_num": "5.3."
},
{
"text": "The task relates to figuring out the 'intentionality' of the speaker (as manifested in the language used by her/im). You need to figure out if, something that is being said,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 arises out of an inherent bias of the speaker or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 an acceptance of that bias or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 propagates the bias (knowingly or unknowingly) or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 endorses the bias (again intentionally or unintentionally; or covertly or overtly)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "The task could be approached by looking at the text and trying to figure out if it",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 attacks the victim because of/by referring to her/his gender (includes homophobic and transgender attacks) or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "\u2022 includes attack against the victim owing to not fulfilling gender roles assigned to them or fulfilling the roles assigned to another gender",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Instructions",
"sec_num": "5.4."
},
{
"text": "Gendered does NOT mean any attack against women; it will be gendered only when the attack is BECAUSE of someone being a woman (or a man or a transgender or any of the countless gender identities). For example, In both (1) and 2, even though the attack is against a woman, the locus of attack may not be the gender. While in (2) the absence of a gender bias and misogyny is clear, in (1) it is little complicated because of the use of the last word and might be interpreted as gendered because of its use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack against Women",
"sec_num": "5.5."
},
{
"text": "One of the tests employed for resolving if a joke was gendered or not was to see if the gender of the target of the joke is changed, then the joke still works or not. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jokes",
"sec_num": "5.6."
},
{
"text": "A lot of times, for the lack of complete context, it was not clear if a comment was satire / sarcasm or not. Such unclear instances were initially tagged 'Unclear' and later a decision was arrived at through discussion among the annotators and, if required, based on voting. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satire/ sarcasm",
"sec_num": "5.7."
},
{
"text": "1. \"Jithna dethe hein is d Benchmark for Jithna lena hai... -A Father (of Daughters and a Son) #Dowry #Jehaz #Shukrana #Nazrana #weddingideas #weddingseason #wedding #weddingdress #weddinggift\"\"\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satire/ sarcasm",
"sec_num": "5.7."
},
{
"text": "The amount that we give is the benchmark",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satire/ sarcasm",
"sec_num": "5.7."
},
{
"text": "The idea expressed by the father in the above example is the dowry amount given by the bride's family works as the standard for accepting dowry when the son gets married. It could be a serious justification of dowry or a satirical take on those who accept dowries stating this reason. One of the ways of resolving such cases might be to look at hashtags and try to see the intention of the speaker. In this case, #Shukrana, #Nazrana etc seems to carry positive connotations. Also the tweet itself may look like a justification for dowry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satire/ sarcasm",
"sec_num": "5.7."
},
{
"text": "If the intent was not clear in case of poetry then it was marked 'Unclear' and was later resolved using a majority voting. However, in other instances, it was marked as perceived by the annotators. For example, In 1, the poetic verse is romantic in nature and talk about lovemaking. Such expressions can be gendered and express misogyny if they clearly represent lack of consent. Because the axis of consent is not clear here we do not mark it as Gendered. 2, however, is clearly gendered, despite being in verse (not really poetry, though) since the imagery of sex and sexual violence is unnecessarily invoked for attacking the victim.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Poetry / shayari",
"sec_num": "5.8."
},
{
"text": "In some cases, at the surface, speakers may seem to be speaking against a biased practice/behaviour but the arguments given by her/him may not actually be questioning those biases itself and might even be creating another kind of bias. Let us take a look at the following examples,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figuring out tacit intentions/underlying bias",
"sec_num": "5.9."
},
{
"text": "1. @ aajtak @ News18India @ sdtiwari Time has come that a debate on #Dowry should be organised on highest level. it is absolutely essential to abolish #Dowry from Hindu Society. A honest hard worker can't manage to satisfy Groom's demand, particularly when #Bride is highly educated. We the people of this nation #Abhinandan (welcome) the victory of our soldiers. Now we all should ensure a#SpecialStarus4Jawan. The one who is protecting our country by endangering their lives, has donated his life for the cause of the nation, this should definitely be done for him. Exempt soldiers from #Dowry Act.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figuring out tacit intentions/underlying bias",
"sec_num": "5.9."
},
{
"text": "In tweet 1 and 2, the dissatisfaction is because of the inability to afford the demands (and not because the 'demand' itself is discriminatory and biased). Its a financial argument for an inherently 'gender' issue since only women are supposed to give dowry. It also creates a distinction between 'educated' and 'uneducated' girls, thereby, implying that it is okay for uneducated girls to pay dowry. This creates another bias (which clearly doesn't exist for the other gender). Thus, even though the comment seems to be opposing a gendered practice like dowry; it doesn't actually oppose the underlying bias in a practice like this. While (3) looks like a support for protest against molestation, it reinforces the stereotype of women as sisters and daughters. Also molestation is a crime and it doesn't have to do anything with whether there are other women in someone's life or not. On the face of it, (4) may look like a religious comment. However, an underlying attempt is made here to present a gender issue as a religious issue. The speaker supports a practice which is biased against a specific gender (and religion is used as a smokescreen for propagating that bias). (5) reinforces the stereotypical gender associated with the use of a particular colour by a particular gender. (6) Puts gender issues vis-a-vis army which is not at all relevant or comparable and favours a certain kind of preferential treatment based on job. It supports dowry in certain cases (since dowry is not considered a gendered act by the speaker).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figuring out tacit intentions/underlying bias",
"sec_num": "5.9."
},
{
"text": "In general, abuses involving sex and sexual organs will be considered gendered since they emanate from an inherent gender bias. Let us take a look at the following examples - Even though there is no direct attack in (1), the abuse here arises out of an understanding about what is considered an homosexual act. The abuses used in (3) show the biased and misogynistic outlook of the speaker. Even though the attack is not because of the gender, it carries the connotations of attack against a specific gender as it reinforces the role of women as sexual objects. At the same time it propagates the stereotypical ideas of honor, masculinity, etc. Abuses like those in (4) and (5) evoke sexual imagery and are used for attacking someone, hence, gendered. In (6) the abuse is just an exclamation marker and therefore, not directed towards anyone. As such it is not gendered because of the use of this abuse (but see above for description of what makes it gendered).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abuses",
"sec_num": "5.10."
},
{
"text": "In a lot of cases of discussion around gender, it is the girls or the girls' side that are attacked -it is important to figure out the cases of blaming the victim for the problems they are facing (because of the patriarchal societal structure). For example, In this tweet, the speaker asserts that he is against dowry. However he still blames the parents of the girls for this kind of practice and at the same time also absolves the boys of any responsibility. Such cases of victim blaming is gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Victim blaming",
"sec_num": "5.11."
},
{
"text": "Describing a gendered act / incident / practice does not make the text gendered. In such cases, it will be gendered only if the speaker endorses the action or depicts an underlying bias. Let us take a look at the following examples - 2. Against the grain: In some parts of #Maharashtra, women get #dowry https://trib.al/gz1NTix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of an event / fact",
"sec_num": "5.12."
},
{
"text": "3. If the groom's family in China is unable to afford the bride prices, then he is not considered a good match. Learn more: https://buff.ly/2CUDzqv #China #marriagemarket #matchmaking #dowry #brideprices #culturepic.twitter.com/v8MxjGsQz2 4. People were often coupled in European countries according to class and, thus, economic advantage. Learn more here: https://buff.ly/2umI6Nu #economicadvantage #dowry #Europe #culture #marriagepic.twitter.com/Pas1rKavLk 5. \u091c\u092c \u092c\u0924\u0930\u094d \u0928 \u092e\u093e\u0902 \u091c \u0915\u0930 \u0906\u092f\u0940 \u0935\u094b \u0924\u094b \u0917\u093e\u0932\u094b\u0902 \u0928\u0947 \u092c\u0924\u093e\u092f\u093e..!! \u093f\u0915 \u092c\u0924\u0930\u094d \u0928 \u0915\u093e\u0901 \u091a \u0915\u093e \u0915\u094b\u0908 \u0906\u091c \u093f\u092b\u0930 \u0938\u0947 \u091f\u0942 \u091f\u093e \u0917\u092f\u093e..!! @Ya-davsAniruddh @Anjupra7743 @KaranwalTanu @AmbedkarManorma follow @Rana11639322",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of an event / fact",
"sec_num": "5.12."
},
{
"text": "When she came after cleaning dishes her cheeks revealed it all..!! that a glass dish has been broken again today..!!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of an event / fact",
"sec_num": "5.12."
},
{
"text": "(1) describes a biased situation. However there is no evidence to show that the speaker also endorses it. As such even though the situation being described is gendered, the tweet itself is not. (2) doesn't question the gender bias in the dowry system and acts as an underlying support for dowry. Irrespective of who pays the dowry to whom -its always biased against a specific gender. Since the speaker seems to be endorsing this view, it is gendered. In (3) even though it may look like the description of a practice, the underlying intention of the speaker is to support and justify the practice of dowry by giving a parallel example from a different context. (4) is presented as a covert support for the dowry system, which puts one specific gender in a very disadvantageous position and as such the tweet itself is gendered as well. In (5) even though the incident being described is gendered, the tweet is not a support for that. Thus, it will not be gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of an event / fact",
"sec_num": "5.12."
},
{
"text": "In some cases, gender bias might be mixed with other kinds of biases (like religious or regional). These cases, are marked as gendered. For example, 1. Arnab @republic is visibly anti ChristoROPcom-mieFascists. But the #MeToo / Libtard women hv wrapped him in their fingers. So in their appeasement he took anti Hindu stand on #Sabrimala . Appeased LGBTQ during Section 377. Vilified the accused in #MeToo b4 Court Trial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed bias",
"sec_num": "5.13."
},
{
"text": "In this case, religion seems to be the locus of attack. However, it attacks a lot of other instances of support for non-male rights, hence, biased for a specific gender.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed bias",
"sec_num": "5.13."
},
{
"text": "Let us take a look at the following examples -1. http://chng.it/DPFHRS9B4T.Please \u2026 sign this petition. For men and their families falsely accused in #DomesticViolence, #dowry and #498a by leeching women, there are no laws to give them a fair trial and no laws to punish leeching women. #MenCommission and #GenderEquality in laws needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Ambiguities",
"sec_num": "5.14."
},
{
"text": "2. Next surgical strike she along with her entire terrorist clan shd be dropped in #Napakistan #Disgusting she is. She also orchestrated fake #Asifa narrative. Shameless ppl dance on dead bodies..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Ambiguities",
"sec_num": "5.14."
},
{
"text": "3. Should we go for GENDER INJUSTICE here? #sabrimala was the same But as I respect my religion and its beliefs i fully support this ritual and i am fully satisfied with whatever rule is imposed. Jay matadi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Ambiguities",
"sec_num": "5.14."
},
{
"text": "(1) is a call to punish those who misuse the law and so apparently promoting gender equality. However, when it is accompanied by a call to form a men commission, it seems to be ignoring and undermining the issues that a woman faces. There are several laws that are misused by several people -however this is the one law intended to protect the women that causes the maximum uproar. However, having said this, the intention of the speaker does not seem to be biased. In such cases, the annotators may annotate based on their intuition on case-by-case basis or mark it as 'unclear' so that annotations by multiple annotators may be taken. In such cases, they must also include a comment describing the ambiguity. In (2), the question to settle is this -is the criticism BECAUSE the person being criticised is a man / woman or the criticism is directed somewhere else? In this case, the criticism doesn't seem to be directed at gender. However bringing in #Asifa and calling it fake shows a gender bias. Such cases also have to be handled as mentioned above. In (3), the stand taken by the speaker is not clear here and as such may be marked unclear",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Ambiguities",
"sec_num": "5.14."
},
{
"text": "The annotation was carried out by a total of 4 annotators -two among these were speakers of all the three languages -Bangla, Hindi and English, while the other 2 did not speak Bangla. All the annotators were either pursuing or completed a higher degree in Linguistics and expected to have a centrist or leftleaning political orientation. Each of the instances in the dataset was annotated by 2 annotators and in case of disagreement, third annotation was taken/resolved through discussions and deliberations. The issues that we face in annotation occur due to different level of understanding of the language in question or personal prejudices and bias over interpretation and so on. Basically, it involves the differing worldview of many individuals. The process of continuous discussions and sensitisation (especially towards gender issues) among the annotators helped us in taking care of different worldviews of the annotators and also ensuring that they share largely similar values while annotating. However, we also took care not to influence the annotations via each other's perspective as in tasks like these, it is necessary that annotators are not given strict guidelines for annotation and keep the option open for their own interpretation. Notwithstanding the personal interpretations, there were occasions where reaching a consensus was hard in this task. As the task involved more than one individuals, the inter-annotator agreement experiments and subsequent discussions helped the annotators in getting acquainted with each other's perspectives and worldviews and ensuring that a largely uniform annotation process of followed. Krippendorff's kappa coefficient is used to measure inter-annotator agreement which turns out to be 0.75. Although, in about 75 per cent or more cases the tags were unanimous, some data required special attention as different individuals tagged those cases differently. 
In such cases a three-way process was developed in the course of deliberations. This process is as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of the Dataset",
"sec_num": "6."
},
{
"text": "1. Counterexample method is used to test the comment: The annotators were given counterexamples to argue against their stand on specific instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of the Dataset",
"sec_num": "6."
},
{
"text": "2. Annotators' vote are examined: All the collaborators joined in conference to deliberate over the data in question. Independent members were also consulted in the process to get a different view. Native speakers took part to disambiguate examples or provide explanations for parts not understood. Finally, a vote on the most relevant interpretation was carried on to reach a consensus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of the Dataset",
"sec_num": "6."
},
{
"text": "3. UNC Tag: Instead of marking questionable data with GEN or NGEN, at times a less stringent approach was taken up. In this the annotators were asked to mark such data either as UNC or keep them untagged for a discussion later. This helped immensely in the smooth and timely flow of the annotation process, while a resolution was achieved later through discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of the Dataset",
"sec_num": "6."
},
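The agreement statistic reported in this section can be computed in a few lines. The sketch below implements Krippendorff's coefficient for nominal labels by hand (two coders per item, GEN/NGEN tags); the ratings shown are illustrative toy data, not the project's actual annotations.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of labels per item (each with >= 2 labels)."""
    n = sum(len(u) for u in units)
    # Observed disagreement: mismatched ordered pairs within each unit,
    # weighted by 1 / (m_u - 1), averaged over all pairable values.
    do = sum(
        sum(1 for a, b in permutations(u, 2) if a != b) / (len(u) - 1)
        for u in units
    ) / n
    # Expected disagreement from the pooled label distribution.
    counts = Counter(label for u in units for label in u)
    de = sum(
        counts[a] * counts[b] for a in counts for b in counts if a != b
    ) / (n * (n - 1))
    return 1.0 - do / de

# Three toy comments, two annotators each; they disagree on the third.
ratings = [["GEN", "GEN"], ["NGEN", "NGEN"], ["GEN", "NGEN"]]
print(round(krippendorff_alpha_nominal(ratings), 3))  # → 0.444
```

With perfect agreement the function returns 1.0; the 0.75 reported above would correspond to two-thirds-plus agreement over a large, roughly balanced sample.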
{
"text": "The final dataset contains a total of over 25,000 comments in the 3 languages -Hindi, Bangla and English. Figure 1 5 shows the share of data in each language. Overall, almost 3,000 (over 11%) are gendered/misogynistic and more than 23,000 are nongendered. The proportion of gendered comments in Hindi, Indian Bangla and Hindi-English code-mixed comments hovers around 10 -15%, while in English it is just around 4%. A language-wise break-up and comparison is given in Figure 2 . Almost half of these comments in Hindi, Indian Bangla and English are also annotated for 3 levels of aggres-",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 116,
"text": "Figure 1 5",
"ref_id": "FIGREF0"
},
{
"start": 468,
"end": 476,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Final Dataset",
"sec_num": "7."
},
{
"text": "5 One of our reviewers have pointed out that \"They are easier to read, can be printed out, and do not cause issues for people with colour blindness\". While we agree with the fact that it might be easier to print and 'read' the tables, we believe that figures serve an inherently different function in comparison to the tables. These are meant to be 'viewed' and not seen. The figures included in our paper intend to show the share of the different values and not necessarily to give a count of those numbers. In fact, we have included the tables to show the numbers. However, converting all the figures to tables will defeat the purpose of these figures: visualization. Hence, we have decided to retain the figures. The share of aggressive (taking together both overtly and covertly aggressive comments) comments in the dataset is around 45% of the total annotated dataset in Hindi and Indian Bangla, while it is around 20% in English. These are similar to what was reported in (Kumar et al., 2018b) . We also took a look at the co-occurrence of aggressive and gendered comments to see if most of the gendered/misogynous comments are also generally aggressive or not. Overall, it turns out that over 80% of the gendered comments are also aggressive; on the other hand, less than 30% of non-gendered comments are aggressive. These results shows that misogyny may be strongly correlated with aggression and even though a substantial proportion of non-gendered comments are also aggressive (in our dataset), a much larger proportion of gendered comments are aggressive. A languagewise break-up of proportion of aggression in gendered as well as non-gendered comments are given in Figure 4 and Figure 5 .",
"cite_spans": [
{
"start": 978,
"end": 999,
"text": "(Kumar et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1677,
"end": 1686,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1691,
"end": 1699,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Final Dataset",
"sec_num": "7."
},
{
"text": "Using a subset of the annotated dataset, we trained Support Vector Machine (SVM) for automatic identification of misogyny in Hindi, Bangla and English (in the Indian context). The statistics of dataset used for training and testing is given in Table 3 . We experimented with different combinations of word (uni, bi and tri) and character (2 -5) n-grams as features. We carried out a 10-fold cross validation and also experimented with the C-value of SVM ranging from 0.001 to 10. The best performing classifiers, along with their performance for each of the three languages is summarised in Table 4 As is evident from this, character and word n-grams prove to be quite a string baseline, which achieves an f-score close to 0.90 for Hindi and Bangla and for English it achieves an impressive score of 0.93.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 3",
"ref_id": "TABREF13"
},
{
"start": 591,
"end": 598,
"text": "Table 4",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Baseline Misogyny Classifier",
"sec_num": "8."
},
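The baseline described in this section can be sketched as follows. The feature set (word uni-/bi-/tri-grams plus character 2-5-grams) and the linear SVM follow the paper; everything else - the toy comments, the TF-IDF weighting, and the fixed C value - is illustrative and not a claim about the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the GEN / NGEN annotated comments.
comments = [
    "you people are disgusting", "nice video, thanks",
    "women belong in the kitchen", "great explanation",
    "she asked for it", "very informative content",
]
labels = ["GEN", "NGEN", "GEN", "NGEN", "GEN", "NGEN"]

# Word uni/bi/tri-grams and character 2-5-grams, as in the paper.
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
])
model = Pipeline([("features", features), ("svm", LinearSVC(C=1.0))])
model.fit(comments, labels)
print(model.predict(["thanks for the video"]))
```

In the paper's setup, the C value would instead be selected by 10-fold cross validation over the range 0.001-10, e.g. with scikit-learn's GridSearchCV.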
{
"text": "In this paper, we have discussed the development of a multilingual corpora in Hindi, Bangla, and English, annotated with the information about it being gendered or not. The total corpus consists of more than 25,000 comments from different YouTube videos annotated with this information. The dataset has been made publicly available for research purposes 6 . We also trained a baseline classifier on this dataset which gives a high f-score of over 0.87 for Hindi, 0.89 for Bangla and 0.93 for English dataset. We are currently working on expanding the dataset to include data from other platforms and domains and then test the classifier to see how well it performs across different kinds of dataset. Our goal is to have a dataset of at least 50,000 comments/units in each of the three languages and develop a multilingual classifier that can work reasonably well for different platforms/domains in automatically detecting misogyny over social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summing Up and the Way Ahead",
"sec_num": "9."
},
{
"text": "http://bit.ly/2FhLMVz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Disclaimer: We would like to mention here that all of the data / examples included in this section are represented as they were collected from the actual posts/sources. The authors of the paper do not bear ill feeling to people/ names mentioned in the examples. Also, we do not endorse such aggressive and misogynistic language as one may find in the examples and the research aims at only understanding and reducing such language usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are thankful to one of the reviewers for suggesting this translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One of our reviewers suggested that it might be a useful tag to retain in the final dataset. We would like to clarify that we had a rather long discussion about the need for retaining this tag. It was decided that if a substantial number of instances were tagged by the annotators as 'UNC' then we may retain it. However, only 8 -10 instances were annotated with this tag. Therefore, those cases were resolved via discussion among the annotators and the project staff instead of creating another tag, which has a negligible proportion in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset has been publicly released via a shared task on aggression and misogyny identificationhttps: //sites.google.com/view/trac2/shared-task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Facebook Research for an unrestricted research gift for carrying out this research. We would also like to thank our reviewers for their extensive reviews which have led to substantial improvements in the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "10."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using knn and svm based one-class classifier for detecting online radicalization on twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sureka",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Distributed Computing and Internet Technology",
"volume": "",
"issue": "",
"pages": "431--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agarwal, S. and Sureka, A. (2015). Using knn and svm based one-class classifier for detecting online radi- calization on twitter. In International Conference on Distributed Computing and Internet Technology, pages 431 -442. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Characterizing linguistic attributes for automatic classification of intent based racist/radicalized posts on tumblr microblogging website",
"authors": [
{
"first": "S",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sureka",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agarwal, S. and Sureka, A. (2017). Characterizing lin- guistic attributes for automatic classification of in- tent based racist/radicalized posts on tumblr micro- blogging website.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic identification and classification of misogynistic language on twitter",
"authors": [
{
"first": "M",
"middle": [],
"last": "Anzovino",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anzovino, M., Fersini, E., and Rosso, P. (2018). Auto- matic identification and classification of misogynis- tic language on twitter. In Max Silberztein, et al., editors, Natural Language Processing and Informa- tion Systems, pages 57-64, Cham. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Social Interaction in YouTube Text-Based Polylogues: A Study of Coherence",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bou-Franch",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lorenzo-Dus",
"suffix": ""
},
{
"first": "P",
"middle": [
"G"
],
"last": "Blitvich",
"suffix": ""
},
{
"first": ".-C",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Computer-Mediated Communication",
"volume": "17",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bou-Franch, P., Lorenzo-Dus, N., and Blitvich, P. G.-C. (2012). Social Interaction in YouTube Text- Based Polylogues: A Study of Coherence. Journal of Computer-Mediated Communication, 17(4):501- 521, 07.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making",
"authors": [
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2015,
"venue": "Policy & Internet",
"volume": "7",
"issue": "2",
"pages": "223--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burnap, P. and Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classi- fication and statistical modeling for policy and deci- sion making. Policy & Internet, 7(2):223-242.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Do not feel the trolls",
"authors": [
{
"first": "E",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Chandra",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cambria, E., Chandra, P., Sharma, A., and Hussain, A. (2010). Do not feel the trolls. In ISWC, Shang- hai.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving Cyberbullying Detection with User Context",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ordelman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jong",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "693--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dadvar, M., Trieschnigg, D., Ordelman, R., and de Jong, F. (2013). Improving Cyberbullying Detec- tion with User Context. In Advances in Information Retrieval, pages 693-696. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of ICWSM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hate Speech Detection with Comment Embeddings",
"authors": [
{
"first": "N",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djuric, N., Zhou, J., Morris, R., Grbovic, M., Ra- dosavljevic, V., and Bhamidipati, N. (2015). Hate Speech Detection with Comment Embeddings. In Proceedings of WWW.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the evalita 2018 task on automatic misogyny identification (AMI)",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics",
"volume": "2263",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fersini, E., Nozza, D., and Rosso, P. (2018a). Overview of the evalita 2018 task on automatic misogyny identification (AMI). In Tommaso Caselli, et al., editors, Proceedings of the Sixth Evalua- tion Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018), Turin, Italy, December 12-13, 2018, volume 2263 of CEUR Workshop Proceedings. CEUR-WS.org.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the task on automatic misogyny identification at ibereval 2018",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Anzovino",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing",
"volume": "2150",
"issue": "",
"pages": "214--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fersini, E., Rosso, P., and Anzovino, M. (2018b). Overview of the task on automatic misogyny iden- tification at ibereval 2018. In Paolo Rosso, et al., editors, Proceedings of the Third Workshop on Eval- uation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Con- ference of the Spanish Society for Natural Language Processing (SEPLN 2018), Sevilla, Spain, Septem- ber 18th, 2018, volume 2150 of CEUR Workshop Proceedings, pages 214-228. CEUR-WS.org.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online hate speech against women: Automatic identification of misogyny and sexism on twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ghanem",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Montes-Y G\u00f3mez",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Intelligent & Fuzzy Systems",
"volume": "36",
"issue": "5",
"pages": "4743--4752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frenda, S., Ghanem, B., Montes-y G\u00f3mez, M., and Rosso, P. (2019). Online hate speech against women: Automatic identification of misogyny and sexism on twitter. Journal of Intelligent & Fuzzy Systems, 36(5):4743-4752.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Relational work in anonymous, asynchronous communication: A study of (dis)affiliation in youtube",
"authors": [
{
"first": "Garc\u00e9s-Conejos",
"middle": [],
"last": "Blitvich",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lorenzo-Dus",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bou-Franch",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Handbook of Research on Discourse Behavior and Digital Communication",
"volume": "",
"issue": "",
"pages": "540--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garc\u00e9s-Conejos Blitvich, P., Lorenzo-Dus, N., and Bou-Franch, P. (2013). Relational work in anony- mous, asynchronous communication: A study of (dis)affiliation in youtube. In Istvan Kecskes et al., editors, Research Trends in Intercultural Pragmat- ics, pages 343-366. De Gruyter Mouton, Berlin. Garc\u00e9s-Conejos Blitvich, P. (2010). The youtubifica- tion of politics, impoliteness and polarization. In Rotimi Taiwo, editor, Handbook of Research on Dis- course Behavior and Digital Communication: Lan- guage Structures and Social Interaction, pages 540 -563. IGI Global, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Classifying racist texts using a support vector machine",
"authors": [
{
"first": "E",
"middle": [],
"last": "Greevy",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th annual international ACM SI-GIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "468--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greevy, E. and Smeaton, A. F. (2004). Classifying racist texts using a support vector machine. In Pro- ceedings of the 27th annual international ACM SI- GIR conference on Research and development in in- formation retrieval, pages 468 -469. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic text categorisation of racist webpages",
"authors": [
{
"first": "E",
"middle": [],
"last": "Greevy",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greevy, E. (2004). Automatic text categorisation of racist webpages. Ph.D. thesis, Dublin City Univer- sity.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The problem of identifying misogynist language on twitter (and other online social spaces)",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tiropanis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bokhove",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 8th ACM Conference on Web Science, WebSci '16",
"volume": "",
"issue": "",
"pages": "333--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hewitt, S., Tiropanis, T., and Bokhove, C. (2016). The problem of identifying misogynist language on twitter (and other online social spaces). In Proceed- ings of the 8th ACM Conference on Web Science, WebSci '16, page 333-335, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Accurately detecting trolls in slashdot zoo via decluttering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Spezzano",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Subrahmanian",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)",
"volume": "",
"issue": "",
"pages": "188--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, S., Spezzano, F., and Subrahmanian, V. (2014). Accurately detecting trolls in slashdot zoo via decluttering. In Proceedings of IEEE/ACM In- ternational Conference on Advances in Social Net- works Analysis and Mining (ASONAM), pages 188- 195.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Benchmarking Aggression Identification in Social Media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of TRAC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018a). Benchmarking Aggression Identifica- tion in Social Media. In Proceedings of TRAC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Aggression-annotated corpus of hindi-english code-mixed data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Reganti",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Maheshwari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Reganti, A. N., Bhatia, A., and Mahesh- wari, T. (2018b). Aggression-annotated corpus of hindi-english code-mixed data. In Nicoletta Cal- zolari (Conference chair), et al., editors, Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Locate the Hate: Detecting Tweets Against Blacks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, I. and Wang, Y. (2013). Locate the Hate: De- tecting Tweets Against Blacks. In Proceedings of AAAI.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On-line polylogues and impoliteness: The case of postings sent in response to the obama reggaeton youtube video",
"authors": [
{
"first": "N",
"middle": [],
"last": "Lorenzo-Dus",
"suffix": ""
},
{
"first": "P",
"middle": [
"G.-C"
],
"last": "Blitvich",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bou-Franch",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Pragmatics",
"volume": "43",
"issue": "10",
"pages": "2578--2593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenzo-Dus, N., Blitvich, P. G.-C., and Bou-Franch, P. (2011). On-line polylogues and impoliteness: The case of postings sent in response to the obama reggaeton youtube video. Journal of Pragmatics, 43(10):2578 -2593. Women, Power and the Media.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Detecting Hate Speech in Social Media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "467--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2017). Detecting Hate Speech in Social Media. In Proceedings of the In- ternational Conference Recent Advances in Natural Language Processing (RANLP), pages 467-472.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Challenges in discriminating profanity from hate speech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Experimental & Theoretical Artificial Intelligence",
"volume": "30",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2018). Challenges in discriminating profanity from hate speech. Journal of Experimental & Theoretical Artificial Intelligence, 30:1 -16.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Misogynistic Language on Twitter and Sexual Violence",
"authors": [
{
"first": "F",
"middle": [],
"last": "Menczer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fulper",
"suffix": ""
},
{
"first": "G",
"middle": [
"L"
],
"last": "Ciampaglia",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ferrara",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Flammini",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rowe",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the ACM Web Science Workshop on Computational Approaches to Social Modeling (ChASM). Association of Computing Machinery",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Menczer, F., Fulper, R., Ciampaglia, G. L., Ferrara, E., Ahn, Y., Flammini, A., Lewis, B., and Rowe, K. (2015). Misogynistic Language on Twitter and Sexual Violence. In Proceedings of the ACM Web Science Workshop on Computational Approaches to Social Modeling (ChASM). Association of Comput- ing Machinery, 1.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Finding opinion manipulation trolls in news community forums",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "G",
"middle": [
"D"
],
"last": "Georgiev",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ontotext",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning, CoNLL",
"volume": "",
"issue": "",
"pages": "310--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaylov, T., Georgiev, G. D., Ontotext, A., and Nakov, P. (2015). Finding opinion manipulation trolls in news community forums. In Proceedings of the Nineteenth Conference on Computational Natu- ral Language Learning, CoNLL, pages 310-314.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Modeling trolling in social media conversations",
"authors": [
{
"first": "L",
"middle": [
"G"
],
"last": "Mojica de la Vega",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mojica de la Vega, L. G. and Ng, V. (2018). Mod- eling trolling in social media conversations. In Pro- ceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Abusive language detection on Arabic social media",
"authors": [],
"year": null,
"venue": "Proceedings of ALW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abusive language detection on Arabic social media. In Proceedings of ALW.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Classification of flames in computer mediated communications",
"authors": [
{
"first": "",
"middle": [],
"last": "Nitin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Sharma",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chawla",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bhasinar",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal of Computer Applications",
"volume": "14",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin, Bansal, A., Sharma, S. M., Kumar, K., Ag- garwal, A., Goyal, S., Choudhary, K., Chawla, K., Jain, K., and Bhasinar, M. (2012). Classifica- tion of flames in computer mediated communica- tions. International Journal of Computer Applica- tions, 14(6).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Abusive Language Detection in Online User Content",
"authors": [
{
"first": "C",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., and Chang, Y. (2016). Abusive Language Detec- tion in Online User Content. In Proceedings of the 25th International Conference on World Wide Web, pages 145-153. International World Wide Web Con- ferences Steering Committee.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Flame Wars: Automatic Insult Detection",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sax",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sax, S. (2016). Flame Wars: Automatic Insult Detec- tion. Technical report, Stanford University.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "When a tweet is actually sexist. A more comprehensive classification of different online harassment categories and the challenges in NLP",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sharifirad",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharifirad, S. and Matwin, S. (2019). When a tweet is actually sexist. A more comprehensive classifica- tion of different online harassment categories and the challenges in NLP. CoRR, abs/1902.10584.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Wikipedia Talk Labels: Toxicity. 2",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Wulczyn",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thain, N., Dixon, L., and Wulczyn, E. (2017). Wikipedia Talk Labels: Toxicity. 2.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Understanding Abuse: A Typology of Abusive Language Detection Subtasks",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z., Davidson, T., Warmsley, D., and We- ber, I. (2017). Understanding Abuse: A Typology of Abusive Language Detection Subtasks. Proceed- ings of ALW.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z. (2016). Are You a Racist or Am I See- ing Things? Annotator Influence on Hate Speech Detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Sci- ence, pages 138-142, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of GermEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiegand, M., Siegel, M., and Ruppenhofer, J. (2018). Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language. In Proceedings of GermEval.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning from Bullying Traces in Social Media",
"authors": [
{
"first": "J.-M",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "K.-S",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bellmore",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, J.-M., Jun, K.-S., Zhu, X., and Bellmore, A. (2012). Learning from Bullying Traces in Social Me- dia. In Proceedings of NAACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technology (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). Predicting the type and target of offensive posts in social media. In Proceedings of the Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technology (NAACL-HLT).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Languages in the Dataset"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Misogyny in the Dataset sion. A language-wise break-up and comparison of aggressive comments in the dataset is given inFigure 3"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Aggression in the Dataset"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Co-occurrence of Misogyny and Aggression Figure 5: Co-occurrence of Non-gendered and Aggression"
},
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF5": {
"text": "1. @KaDevender \u092d\u0921\u0935\u093e \u0939\u0948 \u0938\u093e\u0932\u093e \u0916 \u0932\u0938\u094d\u0924\u093e\u0928\u0940 \u0914\u0930 \u092a\u093e\u093f\u0915\u0938\u094d\u0924\u093e\u0928\u0940 \u090f\u091c\u0947\u0902 \u091f \u0939\u0948 \u0914\u0930 \u092f\u0947 \u092a\u093e\u093f\u0915\u0938\u094d\u0924\u093e\u0928\u0940 \u0938\u094d\u0932\u0940\u092a\u0930 \u0938\u0947 \u0932 \u0915 \u092e\u0947\u0902 \u092c\u0930 \u0939\u0948 \u092f\u0939\u093e\u0901 \u091a\u0941 \u092a \u0915\u0947 \u092c\u0948 \u0920\u0940 \u0939\u0948 \u093f\u0915\u0938\u0940 \u093f\u0926\u0928 \u092c\u092e \u092c\u093e\u0902 \u0927 \u0915\u0947 \u0915\u0942 \u0926 \u091c\u093e\u090f\u0917\u0940 \u0914\u0930 \u0939\u091c\u093e\u0930\u094b \u092c\u0947 \u0917\u0941 \u0928\u093e\u0939\u094b \u0915 \u091c\u093e\u0928 \u0932\u0947 \u0932\u0947 \u0917\u0940 ..\u093f\u0928\u0926\u094b\u0930\u094d \u0937 \u093f\u0939\u0928\u094d\u0926\u0942 \u0915\u0947 \u092e\u093e\u0930\u0928\u0947 \u092a\u0947 \u0924\u093e\u0932\u0940 \u0924\u094b \u0905\u092d\u0940 \u092c\u091c\u093e\u0924\u0940 \u0939\u0948 ..\u0906\u092a \u0915\u093e \u091a\u0941 \u0938\u0947 \u0928\u094d\u0926\u0930\u094d He is a bastard, a Khalistani and Pakistani agent and she is a member of the Pakistani sleeper cell. She is hiding here and will jump with bomb anyday and kill thousands of innocent people...she appreciates the killing of innocent Hindus .. your sucker 2. Meye r maa eki character er chi daughter and mother are as same character, disgusting",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF7": {
"text": "1. \u0938\u092e\u091f\u0915\u0930 \u091a\u0942 \u093f\u095c\u092f\u094b\u0902 \u092e\u0947\u0902 \u091b\u0941 \u092a\u0928\u0947 \u0932\u0917\u093e \u0936\u093e\u092f\u0926 \u092e\u0948\u0902 \u0928\u0947 \u091c\u094b \u091a\u0942 \u092e \u0932\u092f\u093e \u0909\u0938\u0915\u094b \u092e\u0941 \u091d\u0915\u094b \u091a\u0941 \u092d\u0928\u0947 \u0932\u0917\u093e \u0936\u093e\u092f\u0926 \u092e\u0948\u0902 \u0928\u0947 \u091c\u092c \u0906\u0917\u094b\u0936 \u092e\u0947\u0902 \u092d\u0930 \u0932\u092f\u093e \u0909\u0938\u0915\u094b \u092e\u0941 \u091d\u0938\u0947 \u091c\u0932\u0924\u093e \u0939\u0948 \u0924\u0947 \u0930\u093e \u0915\u0902 \u0917\u0928 \u0936\u093e\u092f\u0926... \u092e\u0948\u0902 \u0928\u0947 \u091c\u094b \u0907\u0936\u094d\u0915 \u0915\u0930 \u0932\u092f\u093e \u0939\u0948 \u0909\u0938\u0915\u094b #\u0938\u093e\u093f\u0939\u092c\u0967 #\u091a\u0942 \u095c\u0940 #\u0915\u0902 \u0917\u0928 #\u093f\u0939\u0902 \u0926\u0940_\u0936\u092c\u094d\u0926 #\u0936\u092c\u094d\u0926\u093f\u0928 \u0927",
"type_str": "table",
"html": null,
"content": "<table><tr><td>She hid herself in her bangles probably, when I</td></tr><tr><td>kissed her, it started to prick me when I hugged</td></tr><tr><td>her, your bangle is envious of me probably because</td></tr><tr><td>I made love to you.</td></tr><tr><td>2. Ranuu goo Ranuu Lagboo tmr Karr Nunuu??</td></tr><tr><td>Himeshh Salmann nakii Sonuu??</td></tr></table>",
"num": null
},
"TABREF8": {
"text": "\u092d\u0947 \u0928\u091a\u094b\u0926 \u092f\u0947 \u0917\u0941 \u0932\u093e\u092c\u0940 \u092a\u0948\u0902 \u091f \u0915\u094c\u0928 \u092a\u0939\u0928\u0924\u093e \u0939\u0948 \u092c\u0947",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">women's freedom, even Hindu Women\" them-</td></tr><tr><td colspan=\"2\">selves says that they don't want to enter #Sabri-</td></tr><tr><td>mala &amp; respect the traditions!</td><td>#IslamEx-</td></tr><tr><td colspan=\"2\">posed https:// twitter.com/theskindoctor1 3/sta-</td></tr><tr><td>tus/1113435724269981696 \u2026\"</td><td/></tr><tr><td>5. Fuck man, who wears a pink trouser?</td><td/></tr><tr><td colspan=\"2\">6. \u0939\u092e \u0926\u0947 \u0936 \u0935\u093e\u0938\u0940 \u091c\u0935\u093e\u0928\u094b\u0902 \u0915\u0947 \u091c\u0924 \u0915\u093e #Abhinandan \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 . \u0905\u092c \u0939\u092e \u0938\u092c\u0915\u094b \u093f\u092e\u0932\u0915\u093e\u0930 #SpecialStarus4Jawan \u0938\u0941 \u093f\u0928 \u0936\u094d\u091a\u0924 \u0915\u0930\u0928\u093e \u0939\u094b\u0917\u093e. \u091c\u094b \u0905\u092a\u0928\u0947 \u091c\u093e\u0928 \u091c\u094b \u0916\u092e \u092e\u0947\u0902 \u0921\u093e\u0932\u0915\u0930</td></tr><tr><td colspan=\"2\">\u0926\u0947 \u0936 \u0915 \u0930\u0915\u094d\u0937\u093e \u0915\u0930 \u0930\u0939\u093e, \u0916\u0941 \u0926 \u0915\u094b \u0926\u0947 \u0936 \u0915\u0947 \u0932\u090f \u0938\u092e\u093f\u092a\u0930\u094d \u0924 \u0915\u0930</td></tr><tr><td colspan=\"2\">\u093f\u0926\u092f\u093e \u0939\u0948 , \u0909\u0938\u0915\u0947 \u0932\u090f \u092f\u0939 \u0924\u094b \u0939\u094b\u0928\u093e \u0939\u0940 \u091a\u093e\u093f\u0939\u090f. \u091c\u0935\u093e\u0928\u094b\u0902 \u0915\u094b #Dowry Act \u0938\u0947 \u092c\u093e\u0939\u0930 \u0915\u0930\u094b @ ani @ dna @ aaj-</td></tr><tr><td>takpic.twitter.com/ezmfDEzxXQ</td><td/></tr><tr><td>2. 
Don't Support #Dowry at all.Thre is no point</td><td/></tr><tr><td>to strt a rltnshp on exchnge of Bt also nd to</td><td/></tr><tr><td>tch society ,all failed marriages r nt due to</td><td/></tr><tr><td>#Dowry.So stop nmng every broken marriage as</td><td/></tr><tr><td>#FakeCases_498A_DV_125_377_376 Real suf-</td><td/></tr><tr><td>ferers nvr gts justice,help them stop misuse of</td><td/></tr><tr><td>#laws</td><td/></tr><tr><td>3. So according to you protesting against molesta-</td><td/></tr><tr><td>tion is a crime ? Sir Don't you have any daughters</td><td/></tr><tr><td>or sister? #BHU_\u0932\u093e\u0920\u0940\u091a\u093e\u091c\u0930\u094d #bhu_molestation</td><td/></tr><tr><td>4. When a thousand years old #Hindu tra-</td><td/></tr><tr><td>dition is followed in #Kerela then Mus-</td><td/></tr><tr><td>lims came forward to say that it oppresses</td><td/></tr></table>",
"num": null
},
"TABREF9": {
"text": "1. @USER This game sucks donkey balls 2. bitch calm down you pussy when yo ppl ain't around 3. \u0905\u092c\u0947 \u0913 \u0905\u092a\u0928\u0940 \u092c\u0939\u0928 \u0938\u0947 \u092a\u0948 \u0926\u093e \u0915 \u095c\u0947 \u0964 \u092d\u0921\u0941 \u0906 \u092c\u0928\u0928\u093e \u0939\u0948 \u0924\u094b \u092a\u092a\u094d\u092a\u0942 \u0915\u0947 \u0932\u0941 \u0902 \u0921 \u092a\u0930 \u092c\u0948 \u0920\u0964\u0924\u0947 \u0930\u0940 \u0905\u092e\u094d\u092e\u0940 \u0915\u093e \u092f\u093e\u0930 \u092e\u0924 \u0938\u092e\u091d \u092e\u0941 \u091d\u0947 \u091d\u094b\u092a\u0921\u0940 \u0915\u0947 \u0964 \u0938\u0902 \u0918\u0940 \u0906\u0924\u0902 \u0915\u0935\u093e\u0926\u0940 \u0939\u094b\u0924\u0947 \u0939\u0948 \u0915\u094d\u092f\u093e\u0964\u0905\u092a\u0928\u0940 \u092c\u0939\u0928 \u0915\u093e \u0939\u0932\u093e \u0932\u093e \u0915\u0930\u0928\u0947 \u0915\u0939\u0940\u0902 \u0914\u0930 \u091c\u093e \u0964\u0924\u0947 \u0930\u0947 \u091c\u0948 \u0938\u0947 10 \u0930\u094b\u091c \u093f\u0920\u0915\u093e\u0928\u0947 \u0932\u0917\u093e\u0924\u093e \u0939 \u0902",
"type_str": "table",
"html": null,
"content": "<table><tr><td>\u0964 \u0938\u092e\u091d\u093e \u0928\u092a\u0941 \u0902 \u0938\u0915</td></tr><tr><td>Hey you, a worm born out of your sister. If you</td></tr><tr><td>wish to be a fucker then go sit on Pappu's penis.</td></tr><tr><td>Do not think of me as your mother's boyfriend.</td></tr><tr><td>Those who belong to the Sangh are not terror-</td></tr><tr><td>ists. Go somewhere else to perform your sister's</td></tr><tr><td>halala. I deal with the likes of you everyday. Do</td></tr><tr><td>you understand you impotent.</td></tr><tr><td>4. \u092c\u0949\u0938\u0921\u0940\u0915\u0947 , \u092e\u0927\u0930\u091a\u0942 \u0924, \u0924\u0947 \u0930\u0940 \u092e\u093e\u0901 \u0915 , \u092c\u093f\u0939\u0928 \u0915 \u091b\u0942 \u091f, \u0930\u0902 \u0921\u0940 \u0915\u093e \u25cc\u094c\u0932\u0924, \u0916\u093e\u0928\u0926\u093e\u0928\u0940 \u0930\u0902 \u0921\u0940 \u0915\u093e \u25cc\u094c\u0932\u0924, \u0939\u0940\u0930\u093e\u092e\u0902 \u0921\u0940 \u0915\u093e \u093f\u092a\u0932\u094d\u093e, \u092d\u093e\u0926\u0935\u093e \u0932\u094c\u095c\u093e \u0932\u0941 \u0902 \u0921 \u0915\u092e\u0940\u0928\u093e, \u091b\u0942 \u091f \u0915\u0947 \u0922\u0915\u094d\u0915\u0928, \u093f\u091b\u092a\u0915\u0932\u0940 \u0915\u0947 \u0917\u093e\u0902 \u0921 \u0915\u0947 \u092a\u0938\u0940\u0928\u0947</td></tr><tr><td>Motherfucker, your mother's your sister's pussy,</td></tr><tr><td>son of a bitch, litter of heeramandi, pussy cap,</td></tr><tr><td>sweat of the anus of a lizard.</td></tr><tr><td>5. \u099a\u09b6\u09ae\u09be\u09aa\u09dc\u09be \u09ae\u09be\u09b8\u09c0\u09ae\u09be\u09b0 \u0997\u09c1\u09c7\u09a6\u09b0 \u09a8\u09be\u09ae\u09ac\u09cd\u09be\u09b0 \u099f\u09be \u09bf\u0995 \u099c\u09be\u09a8\u09c7\u09a4 \u09c7\u09aa\u09c7\u09b0\u09bf\u099b\u09c7\u09b2\u09a8?</td></tr><tr><td>Did you get the number of bespectacled aunty's</td></tr><tr><td>vagina?</td></tr><tr><td>6. 
\u092d\u0947 \u0928\u091a\u094b\u0926 \u092f\u0947 \u0917\u0941 \u0932\u093e\u092c\u0940 \u092a\u0948\u0902 \u091f \u0915\u094c\u0928 \u092a\u0939\u0928\u0924\u093e \u0939\u0948 \u092c\u0947</td></tr><tr><td>Fuck man, who wears a pink trouser.</td></tr></table>",
"num": null
},
"TABREF10": {
"text": "GOVT. JOB WAALE SE HO RHI H AND THEY ARE TAKING #DOWRY. BUT I AM AGAINST DOWRY, I JUST WANT HER ONLY. But govt.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Job is in b/w</td></tr><tr><td>Dowry is gifted by the bride's parents only. When</td></tr><tr><td>something is received without a price then why</td></tr><tr><td>shouldn't one take it? Now look at my case. The</td></tr><tr><td>one I love is getting married to a government em-</td></tr><tr><td>ployee and they are taking #DOWRY. BUT I AM</td></tr><tr><td>AGAINST DOWRY, I JUST WANT HER ONLY.</td></tr><tr><td>But govt. Job is in b/w</td></tr></table>",
"num": null
},
"TABREF12": {
"text": ".",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">LANGUAGE GEN NGEN TOTAL</td></tr><tr><td>Hindi</td><td>828</td><td>3,156</td><td>3,984</td></tr><tr><td>Bangla</td><td>871</td><td>2,955</td><td>3,826</td></tr><tr><td>English</td><td>393</td><td>3,870</td><td>4,263</td></tr></table>",
"num": null
},
"TABREF13": {
"text": "Training and Testing Dataset",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">Language Character n-gram Word n-gram F-Score</td></tr><tr><td>Hindi</td><td>3</td><td>3</td><td>0.87</td></tr><tr><td>Bangla</td><td>5</td><td>NA</td><td>0.89</td></tr><tr><td>English</td><td>2</td><td>NA</td><td>0.93</td></tr></table>",
"num": null
},
"TABREF14": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}