{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:09.933697Z"
},
"title": "Aggression Identification in Social Media: a Transfer Learning Based Approach",
"authors": [
{
"first": "Faneva",
"middle": [],
"last": "Ramiandrisoa",
"suffix": "",
"affiliation": {
"laboratory": "IRIT",
"institution": "Universit\u00e9 de Toulouse",
"location": {
"country": "France"
}
},
"email": "faneva.ramiandrisoa@irit.fr"
},
{
"first": "Josiane",
"middle": [],
"last": "Mothe",
"suffix": "",
"affiliation": {
"laboratory": "IRIT",
"institution": "Universit\u00e9 de Toulouse",
"location": {
"country": "France"
}
},
"email": "josiane.mothe@irit.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The way people communicate has changed in many ways with the rise of social media. One aspect of social media is the ability for information producers to hide their identity, fully or partially, during a discussion, which can lead to cyber-aggression and interpersonal aggression. Automatically monitoring user-generated content in order to help moderate it is thus a very active research topic. In this paper, we propose to use the transformer-based language model BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to identify aggressive content. Our model is also used to predict the level of aggressiveness. The evaluation in this paper is based on the dataset provided by the TRAC shared task (Kumar et al., 2018a). Compared to the other participants in this shared task, our model achieved the third best performance according to the weighted F1 measure on both the Facebook and Twitter collections.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The way people communicate has changed in many ways with the rise of social media. One aspect of social media is the ability for information producers to hide their identity, fully or partially, during a discussion, which can lead to cyber-aggression and interpersonal aggression. Automatically monitoring user-generated content in order to help moderate it is thus a very active research topic. In this paper, we propose to use the transformer-based language model BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to identify aggressive content. Our model is also used to predict the level of aggressiveness. The evaluation in this paper is based on the dataset provided by the TRAC shared task (Kumar et al., 2018a). Compared to the other participants in this shared task, our model achieved the third best performance according to the weighted F1 measure on both the Facebook and Twitter collections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the years, social media has become one of the key ways people communicate and share opinions (Pelicon et al., 2019). These platforms, such as Twitter or WhatsApp, have changed the way people communicate (D\u00e9cieux et al., 2019). Indeed, the ability to fully or partially hide their identity leads people to publish things that they probably would never say to someone face to face (Pelicon et al., 2019). Several studies have observed the proliferation of abusive language and an increase of aggressive and potentially harmful content on social media (Zhu et al., 2019). Although most forms of abusive language are not criminal, they can lead to a deterioration of public discourse and opinions, which can in turn generate a more radicalized society (Pelicon et al., 2019). Some studies focus on the automatic detection of abusive language as a first step. Different types of abusive content detection have been defined and studied, such as hate speech (Warner and Hirschberg, 2012), cyberbullying (Dadvar et al., 2013) and aggression (Kumar et al., 2018a). In parallel, different evaluation forums propose shared tasks to foster the development of systems that help abusive language detection. Among them, we can cite TRAC (Kumar et al., 2018a), GermEval (Stru\u00df et al., 2019) and SemEval-2019 Task 6 (Zampieri et al., 2019). The objective of SemEval-2019 Task 6 and GermEval is to detect offensive language in tweets, in English and German respectively. To solve these shared tasks, participants heavily rely on deep learning approaches as well as transfer learning using the transformer-based language model BERT (Devlin et al., 2019), with good success (Stru\u00df et al., 2019; Zampieri et al., 2019). As for the TRAC shared task, the objective is to detect aggression in Facebook and Twitter posts and comments. Deep learning approaches are also widely used in this shared task and achieved the best performance (Kumar et al., 2018a). However, no participant used transfer learning based on the BERT model, although this model achieved good performance on offensive language detection and on a wide range of Natural Language Processing (NLP) tasks. Indeed, the BERT model broke several records for how well models can handle language-based tasks. Moreover, to the best of our knowledge, the BERT model has never been used on the TRAC dataset in the literature. This observation motivated us to conduct this work and evaluate a BERT-based approach on the TRAC task. In this paper, we propose a model that uses a transfer learning technique based on the BERT model to address the problem of aggression identification in Facebook and Twitter content (more details in Section 3.). We evaluate the model on the dataset provided by the TRAC shared task. We also compare our model with those of the participants in the shared task. For this, we adopted the same rules as during the shared task (Kumar et al., 2018a). The rest of this paper is organized as follows: Section 2. presents related work in the area of offensive content detection and different existing shared tasks in this domain; Section 3. describes the methodology we propose for aggression detection; Section 4. describes in detail the TRAC dataset and the evaluation measures we use; Section 5. presents the results and discusses them; finally, Section 6. concludes this paper and presents some future work.",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Pelicon et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 208,
"end": 230,
"text": "(D\u00e9cieux et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 385,
"end": 407,
"text": "(Pelicon et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 554,
"end": 572,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 761,
"end": 783,
"text": "(Pelicon et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 964,
"end": 993,
"text": "(Warner and Hirschberg, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 1009,
"end": 1030,
"text": "(Dadvar et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1044,
"end": 1065,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 1233,
"end": 1254,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 1266,
"end": 1286,
"text": "(Stru\u00df et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 1291,
"end": 1303,
"text": "SemEval-2019",
"ref_id": null
},
{
"start": 1304,
"end": 1333,
"text": "Task 6 (Zampieri et al., 2019",
"ref_id": null
},
{
"start": 1625,
"end": 1646,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1667,
"end": 1687,
"text": "(Stru\u00df et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 1688,
"end": 1710,
"text": "Zampieri et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1924,
"end": 1945,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 2892,
"end": 2913,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Recent overviews of related work on the detection of abusive language are presented in (Schmidt and Wiegand, 2017) and (Mishra et al., 2019). (Schmidt and Wiegand, 2017) present a survey on hate speech detection using Natural Language Processing (NLP). The authors report that supervised learning approaches are predominantly used for this latter task, support vector machines (SVM) and recurrent neural networks being the most widespread. The authors also report that features are widely used for hate speech detection, such as simple surface features (e.g. bag of words, n-grams), word generalization (e.g. word embeddings) and knowledge-based features (e.g. ontologies). (Mishra et al., 2019) report a survey of automated abuse detection methods as well as a detailed overview of datasets that are annotated for abuse. The authors notice that many researchers have relied exclusively on text-based features for abuse detection, while recent state-of-the-art approaches rely on word-level Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). Within shared tasks on abusive language detection, participants heavily use deep learning techniques, which achieved good performance. This is the case for GermEval (Stru\u00df et al., 2019), SemEval-2019 Task 6 (Zampieri et al., 2019) and TRAC (Kumar et al., 2018a). GermEval (Stru\u00df et al., 2019) is a shared task that focuses on the detection of offensive language in German tweets. During this shared task, the best performing system on the various sub-tasks of the challenge used the transformer-based language model BERT (Devlin et al., 2019), which convinced us to consider BERT in our work as well. SemEval-2019 Task 6 (Zampieri et al., 2019) is a shared task that focused on the identification and classification of offensive language in social media, more precisely in English tweets. During SemEval-2019 Task 6, the transformer-based language model BERT (Devlin et al., 2019) was also widely used and achieved top performance; even when it did not achieve the best performance, it performed well overall. Finally, TRAC (Kumar et al., 2018a) is a shared task that focuses on aggression identification in both English and Hindi. The objective is to classify texts into three classes: Non-Aggressive (NAG), Covertly Aggressive (CAG), and Overtly Aggressive (OAG). Facebook posts and comments are provided for training and validation, while, for testing, two different sets, one from Facebook and one from Twitter, were provided. The best performance during the shared task was achieved with deep learning approaches, on both the Facebook and Twitter test sets (Kumar et al., 2018a). During this shared task, apart from deep learning approaches such as a CNN + LSTM architecture (Ramiandrisoa, 2020), participants considered classical machine learning methods (e.g. Random Forests) based on features, as in (Ramiandrisoa and Mothe, 2018; Arroyo-Fern\u00e1ndez et al., 2018; Risch and Krestel, 2018). However, no team used the BERT model for aggression detection and, to our knowledge, it has never been used on the TRAC dataset. In this paper, we propose to use this transformer-based language model for aggression detection on the TRAC dataset, since it achieved good results on other shared tasks, specifically on abusive language detection, and it has also advanced the state of the art for eleven Natural Language Processing (NLP) tasks (Devlin et al., 2019). In the next section, we describe the methodology we adopted as well as the TRAC dataset we used.",
"cite_spans": [
{
"start": 87,
"end": 114,
"text": "(Schmidt and Wiegand, 2017)",
"ref_id": "BIBREF19"
},
{
"start": 119,
"end": 140,
"text": "(Mishra et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 143,
"end": 169,
"text": "(Schmidt and Wiegand, 2017",
"ref_id": "BIBREF19"
},
{
"start": 691,
"end": 711,
"text": "(Mishra et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 1249,
"end": 1269,
"text": "(Stru\u00df et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 1292,
"end": 1315,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1325,
"end": 1346,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 1358,
"end": 1378,
"text": "(Stru\u00df et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 1607,
"end": 1628,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1946,
"end": 1967,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 2123,
"end": 2144,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 2685,
"end": 2706,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 2803,
"end": 2823,
"text": "(Ramiandrisoa, 2020)",
"ref_id": "BIBREF17"
},
{
"start": 2931,
"end": 2961,
"text": "(Ramiandrisoa and Mothe, 2018;",
"ref_id": "BIBREF16"
},
{
"start": 2962,
"end": 2992,
"text": "Arroyo-Fern\u00e1ndez et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 2993,
"end": 3017,
"text": "Risch and Krestel, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 3460,
"end": 3481,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Since related work shows that the transformer-based language model BERT (Devlin et al., 2019) achieves top performance on offensive language and hate speech detection, we decided to adopt it for the aggression detection problem. For a better understanding of our model, in this section we first provide a short description of the BERT model before describing our own model.",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "BERT, or Bidirectional Encoder Representations from Transformers, is a method for pre-training language representations that obtains state-of-the-art results on a wide range of NLP tasks. Using BERT involves two stages: pre-training and fine-tuning. During pre-training, a deep bidirectional representation is trained on unlabeled data by jointly conditioning on both left and right context in all layers. Pre-training is fairly expensive, but fortunately a number of pre-trained models were trained at Google on the same corpus, composed of BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words). These pre-trained BERT models are publicly available on GitHub 1 , so most NLP researchers do not need to pre-train their own model from scratch. Two sizes of pre-trained BERT models have been released: BERT-Base and BERT-Large. The BERT-Base model contains 12 layers of size 768, 12 self-attention heads and 110M parameters, while the BERT-Large model contains 24 layers of size 1024, 16 self-attention heads and 340M parameters. Compared to pre-training, fine-tuning is relatively inexpensive. Fine-tuning the BERT model consists of adding one additional output layer to the pre-trained model, then training it on labeled data from the downstream task to create a new model. With this method, there is no need for task-specific architecture modifications. In other words, fine-tuning is transfer learning of the pre-trained BERT model. More details on BERT can be found in (Devlin et al., 2019).",
"cite_spans": [
{
"start": 567,
"end": 585,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 1516,
"end": 1537,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT details",
"sec_num": "3.1."
},
{
"text": "In this work, we fine-tuned the BERT-Large model since it gives better performance than the BERT-Base model on a variety of tasks (Devlin et al., 2019). As BERT is a pre-trained model, it requires a specific format for the input data. As input, it requires three sequences (of the same length): a sequence of token IDs, a sequence of mask IDs and a sequence of segment IDs. In other words, we must convert all texts in our corpus into triplets of sequences. In the following, we detail how to transform a given text into a triplet of sequences, as illustrated in Figure 1 : 1) Break the text into a sequence of tokens using the BERT tokenizer. A maximum sequence length is fixed in order to have the same length for all sequences in the corpus, so longer sequences are truncated to the maximum sequence length minus two and shorter sequences are padded. In this paper, we set the maximum sequence length to 40 tokens because the maximum length of our preprocessed texts is 32 in the training set and 31 in the validation set. In other words, we do not cut any texts during training.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 560,
"end": 568,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "2) Add the token \"[CLS]\" at the beginning of the sequence of tokens and the token \"[SEP]\" at the end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "3) Convert each token in the sequence of tokens into an ID, also using the BERT tokenizer. The result of the conversion is the sequence of token IDs. 4) Pad the sequence of token IDs with 0 when its length is less than the maximum sequence length fixed in step 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "5) Build the sequence of mask IDs, which indicates which elements in the sequence of token IDs are real tokens and which are padding elements. The mask has 1 for real tokens and 0 for padding tokens. Figure 1 illustrates this process with an example. 6) Build the sequence of segment IDs, which contains only 0s because we classify a single text. See Figure 1 for an illustrative example.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 1",
"ref_id": null
},
{
"start": 361,
"end": 369,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "Figure 1: The sequence of token IDs, sequence of mask IDs and sequence of segment IDs obtained from a text. In this illustrative example, the maximum sequence length is set to 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "With regard to the output, a linear layer composed of three nodes is added, because there are three classes in the TRAC shared task dataset. During training, more precisely fine-tuning, we used a batch size of 8, the Adam optimizer with a learning rate of 2e-5, and 3 epochs. For the implementation, we used the library pytorch-pretrained-bert 2 . Training was carried out on an Nvidia GeForce GTX 1080 Ti GPU and took about 39 minutes in total. In the next sections, we report the evaluation framework and then the results of our fine-tuned BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2."
},
{
"text": "In this section, we detail the dataset we used in this paper to evaluate our model as well as how we preprocessed it for text cleaning; we also present the evaluation measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation framework",
"sec_num": "4."
},
{
"text": "The dataset used in this work is the one provided for the TRAC shared task (Kumar et al., 2018a), which is a subset of the dataset described in (Kumar et al., 2018b). It consists of randomly sampled English and Hindi Facebook and Twitter comments. In this study, we focus on the English part only, which is detailed in Table 1 . In the dataset, comments are annotated with 3 levels of aggression:",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 143,
"end": 164,
"text": "(Kumar et al., 2018b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.1.1."
},
{
"text": "\u2022 Non-Aggressive (NAG): this label is used for data that is generally not intended to be aggressive and is mostly used when wishing well to or supporting individuals or groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.1.1."
},
{
"text": "\u2022 Covertly Aggressive (CAG): this label is used for data that contains hidden aggression and sarcastic negative emotions, such as using metaphorical words to attack an individual or a group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.1.1."
},
{
"text": "\u2022 Overtly Aggressive (OAG): this label is used for data that contains open and direct aggression, such as a direct verbal attack pointed towards any group or individual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.1.1."
},
{
"text": "The dataset in the shared task was divided into three sets: training, validation and test. The training and validation sets are used to build models and are composed of comments from Facebook only. Considering English only, the training set is composed of 11,999 comments while the validation set is composed of 3,001 comments. For the test set, two collections were given: the first is composed of 916 comments crawled from Facebook and the second is composed of 1,257 comments crawled from Twitter. The collection built from Twitter is what the organizers named the surprise collection; the idea behind it is to test the generalization power of the developed models. Indeed, the models are trained on Facebook content but tested on both Facebook and Twitter content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.1.1."
},
{
"text": "In this section, we describe the preprocessing steps we applied to Facebook and Twitter comments in order to clean them before using them to train the model and, later, to evaluate it at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1.2."
},
{
"text": "Emoticon substitution: we used the emoji project on GitHub https://github.com/carpedm20/emoji 3 to map each emoticon's Unicode character to a substitute phrase. We then treat the substitute phrase as a regular English phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1.2."
},
{
"text": "Hashtag segmentation: hashtags are commonly used in social media such as Twitter, Instagram and Facebook. In order to detect whether a hashtag contains abusive or offensive words, we used an open-source word segmentation tool available on GitHub https://github.com/grantjenks/python-wordsegment 4 . One example would be \"#asshole\" segmented as \"asshole\", which is offensive in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1.2."
},
{
"text": "Misc.: we converted all texts to lowercase. All URLs are substituted by \"http\". Finally, we removed all digits, punctuation, email addresses and non-UTF-8 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1.2."
},
{
"text": "The evaluation metric used in this paper is the same measure as used in the TRAC shared task, namely the weighted F1. The weighted F1 is the average of the F1 (given by Equation 1) of each class label; it is a weighted average, weighted by the number of instances of each class label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measure",
"sec_num": "4.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F_1 = \\frac{2 R P}{R + P}",
"eq_num": "(1)"
}
],
"section": "Evaluation measure",
"sec_num": "4.2."
},
{
"text": "where P = TP / (TP + FP) is the precision, R = TP / (TP + FN) is the recall, TP denotes the true positives, FP the false positives, and FN the false negatives. Table 2 (resp. Table 3 ) summarizes our results on the Facebook (resp. Twitter) test set. In each table, we show the three best results from participants in the TRAC workshop and our model, which is the fine-tuned version of the large pre-trained BERT model. On the Facebook test set, the fine-tuned BERT model (our model) achieves a weighted F1 of 0.627, clearly exceeding the baseline; this ranks our model 3rd when compared to the participants of the TRAC shared task.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 181,
"text": "Table 2 (resp. Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation measure",
"sec_num": "4.2."
},
{
"text": "Systems / Weighted F1: Saroyehun (Aroyehun and Gelbukh, 2018) 0.642; EBSI-LIA-UNAM (Arroyo-Fern\u00e1ndez et al., 2018) 0.632; BERT-based model (ours) 0.627; DA-LD-Hildesheim (Modha et al., 2018) 0.618. On the Twitter test set, the fine-tuned BERT model (our model) achieves a weighted F1 of 0.595, clearly exceeding the baseline; this also ranks our model 3rd when compared to the TRAC shared task participants.",
"cite_spans": [
{
"start": 22,
"end": 50,
"text": "(Aroyehun and Gelbukh, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 77,
"end": 108,
"text": "(Arroyo-Fern\u00e1ndez et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 156,
"end": 176,
"text": "(Modha et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": null
},
{
"text": "3 accessed on February 4th, 2020. 4 accessed on February 4th, 2020. Systems / Weighted F1: vista.ue (Raiyani et al., 2018) 0.601; Julian (Risch and Krestel, 2018) 0.599; BERT-based model (ours) 0.595; saroyehun (Aroyehun and Gelbukh, 2018) 0.592. Table 3 : Results for the English task on the Twitter test set. Bold value is the best performance.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "(Raiyani et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 135,
"end": 160,
"text": "(Risch and Krestel, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 207,
"end": 235,
"text": "(Aroyehun and Gelbukh, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems",
"sec_num": null
},
{
"text": "In view of these results, our model generalizes easily from one social media platform to another. Indeed, our model is trained on Facebook comments and achieved good performance, the same 3rd rank, when tested on both Facebook and Twitter comments. It is worth noticing that the systems that outperform ours are not the same on the two collections, suggesting that they are less stable than ours. The next step is to test our model on other social media content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": null
},
{
"text": "Figures 2 and 3 present the confusion matrices of our model on the Facebook and Twitter test sets respectively. When analysing the results of our model according to the weighted F1 on both test sets, we can see that our model mislabelled several NAG instances as CAG. In general, our model shows better performance on classes with many training instances than on classes with fewer training instances, except for the CAG class. Our model has some difficulty identifying the CAG class: even though the OAG class has the smallest number of instances, the performance on the OAG class is better than on the CAG class, which has more instances. On the Facebook test set, CAG is the class on which our model performs worst, with an F1 score of 0.36, followed by the OAG class with an F1 score of 0.55 and NAG with 0.71. From Figure 2 , we can see that it is hard for our model to distinguish CAG from NAG, as it predicts 181 NAG instances as CAG. The same holds between OAG and NAG, where our model predicts 74 NAG instances as OAG. This second case may be due to the number of instances in the dataset used to train the model, because we have about 2 times more NAG cases than OAG cases. On the Twitter test set, the most problematic class to identify was also CAG, for which our model obtained an F1 score of 0.38, followed by OAG with an F1 score of 0.66 and NAG with 0.73. Figure 3 shows that our model has some difficulty distinguishing CAG not only from NAG but also from OAG.",
"cite_spans": [],
"ref_spans": [
{
"start": 1381,
"end": 1389,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.1."
},
{
"text": "This paper details the model we propose for aggression detection. It also reports the results we obtained on the TRAC English dataset (Facebook and Twitter based) (Kumar et al., 2018a). For this, we trained a neural network based classifier by fine-tuning the pre-trained BERT-Large model. The evaluation shows that our model is able to detect aggression in social media content and achieves the 3rd best result on both the Facebook and Twitter test sets, even though the model is trained on Facebook comments only. For future work, we plan to apply our model to the second edition of the TRAC shared task 5 . We also plan to improve our preprocessing step and to enlarge the training set with data augmentation techniques or external datasets, since this has been shown to be effective in (Aroyehun and Gelbukh, 2018). As for information representation, the Information Nutritional Label could be worth investigating as well, since it has been shown to be useful for representing information in various IR tasks (Fuhr et al., 2018; Lespag-",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF8"
},
{
"start": 799,
"end": 827,
"text": "(Aroyehun and Gelbukh, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1024,
"end": 1043,
"text": "(Fuhr et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 1044,
"end": 1051,
"text": "Lespag-",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
{
"text": "https://github.com/google-research/bert, accessed on February 4th, 2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sites.google.com/view/trac2/home, accessed on February 4th, 2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "We also plan to test our model on related collections, tasks, and sub-tasks in order to evaluate its robustness",
"authors": [
{
"first": "",
"middle": [],
"last": "Nol",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "nol et al., 2019), possibly combined with a key-phrase rep- resentation which is semantically richer than word repre- sentation (Mothe et al., 2018). We also plan to test our model on related collections, tasks, and sub-tasks in order to evaluate its robustness.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "While TRAC challenge has its proper ethical policies, detecting aggressive content from user's posts raises ethical issues",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethical issue. While TRAC challenge has its proper ethi- cal policies, detecting aggressive content from user's posts raises ethical issues that are beyond the scope of the paper. 7. Bibliographical References",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling",
"authors": [
{
"first": "S",
"middle": [
"T"
],
"last": "Aroyehun",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "90--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aroyehun, S. T. and Gelbukh, A. F. (2018). Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 90-97.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cyberbullying detection task: the EBSI-LIA-UNAM system (ELU) at coling'18 TRAC-1",
"authors": [
{
"first": "I",
"middle": [],
"last": "Arroyo-Fern\u00e1ndez",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Forest",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Carrasco-Ruiz",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Legeleux",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Joannette",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "140--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arroyo-Fern\u00e1ndez, I., Forest, D., Torres-Moreno, J., Carrasco-Ruiz, M., Legeleux, T., and Joannette, K. (2018). Cyberbullying detection task: the EBSI-LIA- UNAM system (ELU) at coling'18 TRAC-1. In Pro- ceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 140-149.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving cyberbullying detection with user context",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ordelman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "de Jong",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Information Retrieval -35th European Conference on IR Research",
"volume": "",
"issue": "",
"pages": "693--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dadvar, M., Trieschnigg, D., Ordelman, R., and de Jong, F. (2013). Improving cyberbullying detection with user context. In Advances in Information Retrieval -35th Eu- ropean Conference on IR Research, ECIR 2013, Moscow, Russia, March 24-27, 2013. Proceedings, pages 693- 696.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Social media and its role in friendship-driven interactions among young people: A mixed methods study",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "D\u00e9cieux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Heinen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Willems",
"suffix": ""
}
],
"year": 2019,
"venue": "YOUNG",
"volume": "27",
"issue": "",
"pages": "18--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D\u00e9cieux, J. P., Heinen, A., and Willems, H. (2019). So- cial media and its role in friendship-driven interactions among young people: A mixed methods study. YOUNG, 27(1):18-31.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An information nutritional label for online documents",
"authors": [
{
"first": "N",
"middle": [],
"last": "Fuhr",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Giachanou",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mothe",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Nejdl",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM SIGIR Forum",
"volume": "51",
"issue": "",
"pages": "46--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuhr, N., Giachanou, A., Grefenstette, G., Gurevych, I., Hanselowski, A., Jarvelin, K., Jones, R., Liu, Y., Mothe, J., Nejdl, W., et al. (2018). An information nutritional label for online documents. In ACM SIGIR Forum, vol- ume 51, pages 46-66. ACM New York, NY, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018a). Benchmarking aggression identifica- tion in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, Au- gust 25, 2018, pages 1-11.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Aggression-annotated corpus of hindi-english code-mixed data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Reganti",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Maheshwari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Reganti, A. N., Bhatia, A., and Maheshwari, T. (2018b). Aggression-annotated corpus of hindi-english code-mixed data. In Proceedings of the Eleventh Inter- national Conference on Language Resources and Evalu- ation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Information nutritional label and word embedding to estimate information check-worthiness",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lespagnol",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mothe",
"suffix": ""
},
{
"first": "M",
"middle": [
"Z"
],
"last": "Ullah",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM SIGIR conference on research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "941--944",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lespagnol, C., Mothe, J., and Ullah, M. Z. (2019). In- formation nutritional label and word embedding to esti- mate information check-worthiness. In ACM SIGIR con- ference on research and development in information re- trieval, pages 941-944.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tackling online abuse: A survey of automated abuse detection methods",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2019,
"venue": "CoRR",
"volume": "abs/1908.06024",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, P., Yannakoudakis, H., and Shutova, E. (2019). Tackling online abuse: A survey of automated abuse de- tection methods. CoRR, abs/1908.06024.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Filtering aggression from the multilingual social media feed",
"authors": [
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Modha, S., Majumder, P., and Mandl, T. (2018). Filtering aggression from the multilingual social media feed. In Proceedings of the First Workshop on Trolling, Aggres- sion and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 199-207.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic keyphrase extraction using graphbased methods",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mothe",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Ramiandrisoa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rasolomanana",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 33rd Annual ACM Symposium on Applied Computing",
"volume": "",
"issue": "",
"pages": "728--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mothe, J., Ramiandrisoa, F., and Rasolomanana, M. (2018). Automatic keyphrase extraction using graph- based methods. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing, pages 728-730.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Embeddia at semeval-2019 task 6: Detecting hate with neural network and transfer learning approaches",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pelicon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Martinc",
"suffix": ""
},
{
"first": "P",
"middle": [
"K"
],
"last": "Novak",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "604--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pelicon, A., Martinc, M., and Novak, P. K. (2019). Em- beddia at semeval-2019 task 6: Detecting hate with neu- ral network and transfer learning approaches. In Pro- ceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 604-610.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fully connected neural network with advance preprocessor to identify aggression over facebook and twitter",
"authors": [
{
"first": "K",
"middle": [],
"last": "Raiyani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gon\u00e7alves",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Quaresma",
"suffix": ""
},
{
"first": "V",
"middle": [
"B"
],
"last": "Nogueira",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "28--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raiyani, K., Gon\u00e7alves, T., Quaresma, P., and Nogueira, V. B. (2018). Fully connected neural network with advance preprocessor to identify aggression over facebook and twitter. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, Au- gust 25, 2018, pages 28-41.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "IRIT at TRAC 2018",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ramiandrisoa",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mothe",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramiandrisoa, F. and Mothe, J. (2018). IRIT at TRAC 2018. In Workshop on Trolling, Aggression and Cy- berbullying, in International Conference of Compu- tational Linguistics (TRAC@COLING 2018), Santa Fe, New Mexico, USA, 25/08/2018, pages 19-27, http://www.aclweb.org. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Aggression Identification in Posts -two machine learning approaches",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ramiandrisoa",
"suffix": ""
}
],
"year": 2020,
"venue": "Workshop on Machine Learning for Trend and Weak Signal Detection in Social Networks and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramiandrisoa, F. (2020). Aggression Identification in Posts -two machine learning approaches. In Workshop on Machine Learning for Trend and Weak Signal Detec- tion in Social Networks and Social Media.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Aggression identification using deep learning and data augmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "150--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risch, J. and Krestel, R. (2018). Aggression identification using deep learning and data augmentation. In Proceed- ings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 150-158.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, So-cialNLP@EACL 2017",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmidt, A. and Wiegand, M. (2017). A survey on hate speech detection using natural language process- ing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, So- cialNLP@EACL 2017, Valencia, Spain, April 3, 2017, pages 1-10.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Overview of germeval task 2, 2019 shared task on the identification of offensive language",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Stru\u00df",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Klenner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 15th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stru\u00df, J. M., Siegel, M., Ruppenhofer, J., Wiegand, M., and Klenner, M. (2019). Overview of germeval task 2, 2019 shared task on the identification of offensive language. In Proceedings of the 15th Conference on Natural Lan- guage Processing, KONVENS 2019, Erlangen, Germany, October 9-11, 2019.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Detecting hate speech on the world wide web",
"authors": [
{
"first": "W",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Warner, W. and Hirschberg, J. (2012). Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Media, pages 19-26, Montr\u00e9al, Canada, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval). In Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 75-86.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "R",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu, Y., Kiros, R., Zemel, R. S., Salakhutdinov, R., Ur- tasun, R., Torralba, A., and Fidler, S. (2015). Align- ing books and movies: Towards story-like visual ex- planations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vi- sion, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19-27.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "UM-IU@LING at SemEval-2019 task 6: Identifying offensive tweets using BERT and SVMs",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "788--795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu, J., Tian, Z., and K\u00fcbler, S. (2019). UM-IU@LING at SemEval-2019 task 6: Identifying offensive tweets us- ing BERT and SVMs. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 788- 795, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "https://github.com/shehzaadzd/pytorch-pretrained-BERT, accessed on February 04, 2020"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Heatmap of the confusion matrix of our model on Facebook test set."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Heatmap of the confusion matrix of our model on Twitter test set."
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Distribution of training, validation and testing data on English TRAC 2018 data collection."
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Results for the English task on Facebook test set. Bold value is the best performance."
}
}
}
}