{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:24.793845Z" }, "title": "Towards Toxic Positivity Detection", "authors": [ { "first": "Ishan", "middle": [], "last": "Sanjeev", "suffix": "", "affiliation": {}, "email": "ishan.sanjeev@research.iiit.ac.in" }, { "first": "Aditya", "middle": [], "last": "Srivatsa", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Radhika", "middle": [], "last": "Mamidi", "suffix": "", "affiliation": {}, "email": "radhika.mamidi@iiit.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Over the past few years, there has been a growing concern around toxic positivity on social media which is a phenomenon where positivity is used to minimize one's emotional experience. In this paper, we create a dataset for toxic positivity classification from Twitter and an inspirational quote website. We then perform benchmarking experiments using various text classification models and show the suitability of these models for the task. We achieved a macro F1 score of 0.71 and a weighted F1 score of 0.85 by using an ensemble model. To the best of our knowledge, our dataset is the first such dataset created.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Over the past few years, there has been a growing concern around toxic positivity on social media which is a phenomenon where positivity is used to minimize one's emotional experience. In this paper, we create a dataset for toxic positivity classification from Twitter and an inspirational quote website. We then perform benchmarking experiments using various text classification models and show the suitability of these models for the task. We achieved a macro F1 score of 0.71 and a weighted F1 score of 0.85 by using an ensemble model. To the best of our knowledge, our dataset is the first such dataset created.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Toxic positivity can be defined as the overgeneralization of a positive state of mind that encourages using positivity to suppress and displace any acknowledgement of stress and negativity (Sokal et al., 2020; Bosveld, 2021) . The popularity of the term \"toxic positivity\" peaked during the COVID 19 pandemic (refer to figure 1) where it was used to identify advice that focused on just looking at the positive at a time when people were hurting due to loss of life, loss of jobs and other traumatic events.", "cite_spans": [ { "start": 189, "end": 209, "text": "(Sokal et al., 2020;", "ref_id": "BIBREF23" }, { "start": 210, "end": 224, "text": "Bosveld, 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Toxic positivity results in one minimizing one's own negative feelings and suppressing negativity instead of acknowledging, processing and working through it. Some examples of toxic positivity include telling someone to focus on the positive aspects of a loss, telling someone that positive thinking will solve all their problems, suggesting that things could be worse and shaming someone for expressing negative emotions. This suppression of emotions is not only unhelpful but also leads to poorer recovery from the negative effects of the emotion. 
Accepting and working through one's emotions is the better route to take when dealing with negative emotions (Campbell-Sills et al., 2006).", "cite_spans": [ { "start": 660, "end": 689, "text": "(Campbell-Sills et al., 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Macro-level events like COVID-19 and climate change disasters have distressed many people in the past few years (Marazziti et al., 2021) . Social media is used by people with mental health issues, or those going through a tough time, to find community, support, advice and encouraging messages (Gowen et al., 2012) . However, it becomes important to be able to differentiate between messages that may help uplift an individual and those that may look positive but promote suppression of emotions and cause great harm to long-term recovery from negative emotions. The harms of toxic positivity are not limited to its deleterious mental health outcomes; it can also be used to uphold oppression, by making people ignore ongoing oppression and encouraging them to \"just be positive\".", "cite_spans": [ { "start": 112, "end": 136, "text": "(Marazziti et al., 2021)", "ref_id": "BIBREF18" }, { "start": 288, "end": 308, "text": "(Gowen et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we aim to create a dataset for toxic positivity and perform text classification using various transformer-based models to establish baseline results for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been studies that show the ineffectiveness and deleterious effects of emotion suppression. Gross and John (2003) showed that people who suppressed their emotions experienced more negative emotion while also expressing less positive emotion. They also showed that the use of suppression relates negatively to well-being. A study by Campbell-Sills et al. (2006) divided 60 participants diagnosed with anxiety and mood disorders into two groups. One group was given a rationale for suppressing their emotions while the other was given a rationale for accepting them. Suppression was found to be ineffective in reducing distress while watching an emotion-provoking film, and the suppression group showed a poorer recovery from the changes in negative affect after watching the film compared to the acceptance group. A similar observation holds for physical pain. Cioffi and Holloway (1993) divided participants into three groups during a cold-pressor pain induction (CPT), in which participants dip their hands in cold water for as long as tolerable. The first group was told to pay attention to the pain, the second was told to focus on their room at home as a distraction, and the third was told to suppress the sensations they felt. The group that focused on the pain recovered from it fastest, and the suppression group recovered slowest. Suppressing pain has thus been shown to have negative outcomes, while accepting it is observed to be a better strategy. Ford et al. (2018) , through longitudinal and lab studies, showed that habitually accepting mental experiences broadly predicted psychological health and reduced negative emotional response and experience. 
Hence toxic positivity, with its overemphasis on thinking positively and maintaining a positive state of mind, encourages emotion suppression rather than emotional acceptance, which has negative consequences for the person who engages in it.", "cite_spans": [ { "start": 102, "end": 123, "text": "Gross and John (2003)", "ref_id": "BIBREF12" }, { "start": 359, "end": 387, "text": "Campbell-Sills et al. (2006)", "ref_id": "BIBREF2" }, { "start": 946, "end": 972, "text": "Cioffi and Holloway (1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Lecompte-Van Poucke (2022) conducted a critical discourse analysis of toxic positivity as a discursive construct on Facebook. Two corpora of posts from organizations that promoted endometriosis awareness (an invisible chronic condition) were analyzed using systemic functional linguistics, pragma-dialectics and critical theory. The study showed that users on social media platforms often engage in toxic positivity, or forced positive discourse, inspired by the neoliberal \"positive thinking\" ideology, leading to a less inclusive online community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the field of NLP, there have been many papers focusing on hate speech detection using support vector machines (SVM), long short-term memory networks (LSTM), convolutional neural networks (CNN), transformers and other machine learning models (Wang et al., 2019b; Zhang et al., 2018; Ousidhoum et al., 2019; Basile et al., 2019) . These works use Twitter posts (tweets) to create datasets. YouTube and Reddit comments have also been used in some works (Mollas et al., 2022; Mandl et al., 2020) . There have also been recent efforts in hope speech detection (Palakodety et al., 2020) . The HopeEDI dataset (Chakravarthi, 2020) contains YouTube comments marked for hope and not-hope speech. There has been a shared task on this dataset, in which participants used various machine learning models for hope speech detection, such as multilingual transformer-based models, recurrent neural networks (RNN) and CNN-LSTMs (Chakravarthi and Muralidaran, 2021).", "cite_spans": [ { "start": 241, "end": 261, "text": "(Wang et al., 2019b;", "ref_id": "BIBREF25" }, { "start": 262, "end": 281, "text": "Zhang et al., 2018;", "ref_id": "BIBREF26" }, { "start": 282, "end": 305, "text": "Ousidhoum et al., 2019;", "ref_id": "BIBREF21" }, { "start": 306, "end": 326, "text": "Basile et al., 2019)", "ref_id": "BIBREF0" }, { "start": 450, "end": 471, "text": "(Mollas et al., 2022;", "ref_id": "BIBREF20" }, { "start": 472, "end": 491, "text": "Mandl et al., 2020)", "ref_id": "BIBREF17" }, { "start": 558, "end": 583, "text": "(Palakodety et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, to the best of our knowledge, there has been no prior work on creating datasets and classification models for toxic positivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We sourced our data from two sources: 
Twitter, and the inspirational quote website BrainyQuote 1 , one of the largest quotation websites.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Extraction and Pre-processing", "sec_num": "3.1" }, { "text": "We sourced data from BrainyQuote because we observed that many of the motivational quotes shared on Twitter were originally said by famous personalities; including popular quotes from a quotation website is therefore helpful. We built a web scraper using the Beautiful Soup 4 2 library in Python to extract a subset of quotations from the website.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Extraction and Pre-processing", "sec_num": "3.1" }, { "text": "For the Twitter data, we extracted tweets using the Twitter API 3 . We queried hashtags ranging from #MondayMotivation to #SundayMotivation, as well as hashtags such as #InspirationalQuotes, #Motivation, #SelfLove and #AdviceForSuccess. We also took quotes from widely followed inspirational or motivational Twitter accounts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Extraction and Pre-processing", "sec_num": "3.1" }, { "text": "After collecting the data, pre-processing was performed. Bylines of quotes were removed because they were not useful for annotation, and to ensure that there was no annotator bias. For tweets, hashtags and \"@\" tags were removed. The Twitter and BrainyQuote data were also manually filtered to remove sentences that were not inspirational, motivational or advisory in nature. Examples of the kind of data removed are given in Table 4 . A total of 4,250 quotes and tweets were collected for annotation after the data elimination and pre-processing steps. 4 (Footnotes: 1 http://www.brainyquote.com; 2 BeautifulSoup Documentation; 3 Twitter API Documentation; 4 Dataset Link)", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 453, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data Extraction and Pre-processing", "sec_num": "3.1" }, { "text": "Two annotators, both linguistics students, annotated the data for toxic positivity. An annotation workshop was conducted in which the annotators were sensitized to the topic of toxic positivity through the academic works described in the related work section and through examples of toxic positivity. The annotators were then asked to annotate 50 sentences separately, and their agreement was measured, yielding a Kappa score of 0.72. We used Cohen's Kappa coefficient to calculate inter-annotator agreement (Fleiss and Cohen, 1973) . The annotators then discussed their disagreements and came to a better understanding of the annotation guidelines. They annotated another 50 sentences and achieved an improved Kappa score of 0.76. They again discussed their disagreements. After this exercise, they were told to annotate the dataset separately without communicating with each other. The 100 sentences used for training the annotators were discarded and are not a part of this dataset of 4,250 sentences. 
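To make the agreement computation concrete, a minimal sketch using scikit-learn's implementation of Cohen's kappa is shown below; the two label lists are hypothetical stand-ins for the annotators' binary judgements, not our actual annotations:

```python
# Minimal sketch of the inter-annotator agreement computation: each list
# holds one annotator's toxic (1) / non-toxic (0) judgements over the same
# batch of sentences. The values below are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f'kappa = {kappa:.2f}')
```
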
It was observed that sentences with the following general characteristics were marked as toxic positive:", "cite_spans": [ { "start": 545, "end": 569, "text": "(Fleiss and Cohen, 1973)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Encouraging hiding or suppressing negative emotions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "-Example: \"A negative mind will never give you a positive life.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Encouraging focusing on positivity rather than processing negative emotions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "-Example: \"Every time I hear something negative, I will replace it with a positive thought.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Minimizing someone's negative feelings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "-Example: \"You cannot be lonely if you like the person you're alone with.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "A few categories of sentences or quotes emerged while we were studying the dataset, and we decided to annotate for them as well. The categories were as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Worldview: sentences that are philosophical, abstract and provide an insight into the worldview of the writer. Example: \"Things may come to those who wait, but only the things left by those who hustle.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Personal Experience: sentences that provide insights based on the writer's personal experience. Example: \"I always did something I was a little not ready to do. I think that's how you grow. When there's that moment of 'Wow, I'm not really sure I can do this,' and you push through those moments, that's when you have a breakthrough.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "\u2022 Advice: sentences that are more instructional in nature and provide straightforward recommendations and advice. Example: \"Do one thing every day that scares you.\" \u2022 Affirmation: First-person sentences that are used as affirmations. Example: \"I choose to make the rest of my life, the best of my life.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "The same annotators annotated the categories of sentences as well, and the same training process of annotating 100 sentences, 50 at a time, and discussing disagreements was followed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Annotation", "sec_num": "3.2" }, { "text": "Out of the 4,250 sentences, 512 were annotated as toxic positive, which constitutes 12% of the dataset. The remaining 3,738 sentences were non-toxic positive. 
Examples of toxic and non-toxic positive sentences are presented in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dataset Statistics", "sec_num": "3.3" }, { "text": "Worldview was the most common category of sentence, occurring 73.6% of the time, with advice occurring 16.7% of the time and the remaining categories each occurring less than 10% of the time in the dataset. Exact figures are presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dataset Statistics", "sec_num": "3.3" }, { "text": "It was also seen that 44% of the sentences belonging to the affirmation category were toxic positive. 21% of the sentences belonging to the advice category were toxic positive, while 14% and 8% of the sentences belonging to the personal experience and worldview categories respectively were toxic positive. We noticed that in our dataset, most affirmation sentences were focused on emotion suppression, and hence they were marked as toxic positive. The non-toxic positive affirmations focused on gratitude, having a growth mindset and self-acceptance, although they were fewer in number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Statistics", "sec_num": "3.3" }, { "text": "We obtained a Kappa score of 0.82 for the toxic positivity (toxic or non-toxic) annotation and a Kappa score of 0.74 for the category annotation (worldview, advice, personal experience or affirmation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Statistics", "sec_num": "3.3" }, { "text": "We used the following transformer-based models for text classification:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "\u2022 BERT: BERT (Devlin et al., 2019) is a transformer encoder with several encoder layers, each with several self-attention heads. It is trained using two tasks: Masked Language Modelling (MLM) and Next Sentence Prediction (NSP). MLM has been shown to help incorporate both the left and the right contexts into the bidirectional embeddings generated.", "cite_spans": [ { "start": 13, "end": 33, "text": "(Devlin et al., 2019", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "We have fine-tuned the \"bert-base-uncased\" model in our implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "\u2022 RoBERTa: RoBERTa (Liu et al., 2019) is a transformer-based encoder built by modifying the original BERT architecture. It utilizes more data, with longer average sequence lengths and larger batches. It is trained solely on MLM and makes use of dynamic masking (i.e. the set of masked tokens is subject to change during training). It performs better on the GLUE benchmark (Wang et al., 2019a) than BERT and XLNet. For the classifier, we have fine-tuned the \"roberta-base\" model.", "cite_spans": [ { "start": 353, "end": 373, "text": "(Wang et al., 2019a)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "\u2022 ALBERT: ALBERT (Lan et al., 2020) is yet another transformer encoder based on BERT, but aimed at being lighter than its predecessor. The core parameter reduction methods include factorizing the vocabulary embedding matrix into smaller sub-matrices and utilizing repeating layers distributed across groups for increased parameter sharing. These techniques help reduce the parameter count by almost 80% with minimal changes to overall performance. We have fine-tuned the \"albert-base-v2\" model in our implementation.", "cite_spans": [ { "start": 17, "end": 34, "text": "(Lan et al., 2020", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "We also experimented with an ensemble-based classifier, for which we additionally used the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "\u2022 XGBoost Random Forest Classifier: Random Forest Classifiers (Ho, 1995) are widely used for ensemble classification. They consist of a large number of decision trees, each set to only a subset of the overall feature-set of the data. This helps create numerous weak learners with relatively low correlation. The majority verdict of these weak learners tends to outperform an individual predictor tasked with the entire feature-set. In this paper, we have made use of the implementation of the Random Forest Classifier by XGBoost (Chen and Guestrin, 2016).", "cite_spans": [ { "start": 62, "end": 72, "text": "(Ho, 1995)", "ref_id": "BIBREF13" }, { "start": 687, "end": 712, "text": "(Chen and Guestrin, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "\u2022 Bayesian Optimization: Bayesian Optimization (Mockus, 1989) is a sequential global optimization strategy for black-box functions and is used for models across machine learning. It attempts to determine the prior distribution of the system (i.e. the model hyperparameters) that yields the optimal posterior distribution (i.e. the objective function) by iteratively testing the prior and updating the posterior accordingly. It provides a more computationally efficient yet fine-grained search than more exhaustive methods such as grid search. In our work, Bayesian optimization is used for tuning the hyperparameters (i.e. the number of tree estimators, the train subsample ratio, and the column subsample ratio) of the Random Forest Classifier. We make use of the implementation by the bayesian-optimization Python library (Fernando, 2014).", "cite_spans": [ { "start": 47, "end": 60, "text": "(Mockus, 1989", "ref_id": "BIBREF19" }, { "start": 819, "end": 835, "text": "(Fernando, 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "Table 3: Examples of toxic positive and non-toxic positive sentences in the dataset.
Sentence | Class
When people say there is a 'reason' for the depression, they insult the person who suffers, making it seem that those in agony are somehow at fault for not 'cheering up.' The fact is that those who suffer -and those who love them -are no more at fault for depression than a cancer patient is for a tumor. | Non-Toxic Positive
Just like it's not healthy to think overly negative thoughts, exaggeratedly positive thoughts can be equally detrimental. If you overestimate how much of a positive impact a particular change will have on your life, you may end up feeling disappointed when reality doesn't live up to your fantasy. | Non-Toxic Positive
Do what you feel in your heart to be right. | Non-Toxic Positive
The secret of getting ahead is getting started. | Non-Toxic Positive
Being positive is like going up a mountain. Being negative is like sliding down a hill. A lot of times, people want to take the easy way out, because it's basically what they've understood throughout their lives. | Toxic Positive
You must not under any pretense allow your mind to dwell on any thought that is not positive, constructive, optimistic, kind. | Toxic Positive
While you're going through this process of trying to find the satisfaction in your work, pretend you feel satisfied. Tell yourself you had a good day. Walk through the corridors with a smile rather than a scowl. Your positive energy will radiate. If you act like you're having fun, you'll find you are having fun. | Toxic Positive
You can't live a positive life with a negative mind and if you have a positive outcome you have a positive income and just to have more positivity and just to kind of laugh it off. | Toxic Positive", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Statistics", "sec_num": null }, { "text": "We experimented with three transformer models: BERT, RoBERTa, and ALBERT. Each of the classification models utilizes a pretrained Transformer encoder, i.e. BERT-Base, RoBERTa-Base, and ALBERT-Base. The pooled output layer from each encoder is passed through a dropout layer (p = 0.3) for further regularization and a linear layer (mapping from a vector size of 768 to the number of classification categories, i.e. 2). 
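The per-encoder pipeline can be sketched as follows (an illustrative PyTorch snippet, not our exact implementation; the class name and defaults are placeholders):

```python
# Illustrative sketch of one classification pipeline: a pretrained encoder
# whose pooled output passes through dropout (p = 0.3) and a linear layer
# mapping the 768-dimensional vector to the 2 classes. Names are placeholders.
import torch.nn as nn
from transformers import AutoModel

class ToxicPositivityClassifier(nn.Module):
    def __init__(self, encoder_name='bert-base-uncased', num_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.dropout = nn.Dropout(p=0.3)
        self.linear = nn.Linear(768, num_classes)

    def forward(self, input_ids, attention_mask):
        output = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = output.pooler_output  # pooled encoder representation
        return self.linear(self.dropout(pooled))  # logits; softmax applied next
```
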
A softmax function is applied to each of the size-2 vectors to obtain normalized likelihoods of the two classes. The results from these models are provided in Table 5. We also experimented with an ensemble-based classifier: an ensemble of three predictors with a random forest classifier on top (as shown in Figure 2). The predictors were the three transformer-based text classification models mentioned above.", "cite_spans": [], "ref_spans": [ { "start": 575, "end": 582, "text": "Table 5", "ref_id": "TABREF3" }, { "start": 743, "end": 751, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "The likelihoods from each of the predictors were concatenated and passed as features to an XGBoost Random Forest Classifier to generate an ensemble class prediction. After a Bayesian search for the classifier parameters on the validation set, the number of tree estimators was set to 149, the subsample ratio of the training samples to 0.50, and the subsample ratio of columns for each split to 0.33.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "Each of the Transformer encoder predictors was trained using the AdamW optimizer (\u03b2 1 = 0.9, \u03b2 2 = 0.999, \u03f5 = 1e-8) with Cross Entropy loss and a linear training scheduler. The encoder pipelines were trained with an initial learning rate of 2e-5 and the XGBoost ensemble classifier with a learning rate of 1.0. The predictors were trained for 6 epochs. The predictions from the epoch with the best validation weighted macro F1 score were utilized for the ensemble classification. The overall batch size for the pipeline was set to 16.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "The ensemble model generalized better than the individual models, producing the highest macro F1 score of 0.71 and a weighted F1 score of 0.85, as seen in Table 5 . As the toxic sentences comprise only a small portion of the data (14.5%), models performing well on non-toxic sentences tend to have inflated weighted F1 scores. Therefore, we opted for macro F1 as the main performance metric for this task.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "In this work, we created a dataset for toxic positivity detection. We scraped 4,250 sentences from Twitter and the inspirational quote website BrainyQuote, annotated them, and achieved a Kappa score of 0.82 for toxic positivity classification. We then performed experiments using transformer-based models for text classification. Our ensemble model gave the best results, achieving a macro F1 score of 0.71 and a weighted F1 score of 0.85. As more people turn to social media for help when they are going through a tough time, it becomes important for them to be able to differentiate between positive and toxic positive messages. Furthermore, being able to recognize toxic positivity is also important for chatbots and other automated systems that aim to provide mental health assistance. We hope that our work contributes to further research in this field. 
In the future, we plan to extend the study by introducing a larger dataset in English as well as other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter", "authors": [ { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" }, { "first": "Francisco Manuel Rangel", "middle": [], "last": "Pardo", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Sanguinetti", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "54--63", "other_ids": { "DOI": [ "10.18653/v1/S19-2007" ] }, "num": null, "urls": [], "raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Positive Vibes Only: The Downsides of a Toxic Cure-All", "authors": [ { "first": "Eva", "middle": [], "last": "Bosveld", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eva Bosveld. 2021. Positive Vibes Only: The Down- sides of a Toxic Cure-All.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Effects of suppression and acceptance on emotional responses of individuals with anxiety and mood disorders", "authors": [ { "first": "Laura", "middle": [], "last": "Campbell-Sills", "suffix": "" }, { "first": "David", "middle": [ "H" ], "last": "Barlow", "suffix": "" }, { "first": "Timothy", "middle": [ "A" ], "last": "Brown", "suffix": "" }, { "first": "Stefan", "middle": [ "G" ], "last": "Hofmann", "suffix": "" } ], "year": 2006, "venue": "Behaviour Research and Therapy", "volume": "44", "issue": "9", "pages": "1251--1263", "other_ids": { "DOI": [ "10.1016/j.brat.2005.10.001" ] }, "num": null, "urls": [], "raw_text": "Laura Campbell-Sills, David H. Barlow, Timothy A. Brown, and Stefan G. Hofmann. 2006. Effects of suppression and acceptance on emotional responses of individuals with anxiety and mood disorders. Be- haviour Research and Therapy, 44(9):1251-1263.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion", "authors": [ { "first": "Chakravarthi", "middle": [], "last": "Bharathi Raja", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media", "volume": "", "issue": "", "pages": "41--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi. 2020. 
HopeEDI: A mul- tilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Me- dia, pages 41-53, Barcelona, Spain (Online). Associ- ation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Findings of the shared task on hope speech detection for equality, diversity, and inclusion", "authors": [ { "first": "Vigneshwaran", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "", "middle": [], "last": "Muralidaran", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", "volume": "", "issue": "", "pages": "61--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Mural- idaran. 2021. Findings of the shared task on hope speech detection for equality, diversity, and inclu- sion. In Proceedings of the First Workshop on Lan- guage Technology for Equality, Diversity and Inclu- sion, pages 61-72, Kyiv. Association for Computa- tional Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "XGBoost: A scalable tree boosting system", "authors": [ { "first": "Tianqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16", "volume": "", "issue": "", "pages": "785--794", "other_ids": { "DOI": [ "10.1145/2939672.2939785" ] }, "num": null, "urls": [], "raw_text": "Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785-794, New York, NY, USA. ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Delayed costs of suppressed pain", "authors": [ { "first": "Delia", "middle": [], "last": "Cioffi", "suffix": "" }, { "first": "James", "middle": [], "last": "Holloway", "suffix": "" } ], "year": 1993, "venue": "Journal of Personality and Social Psychology", "volume": "64", "issue": "2", "pages": "274--282", "other_ids": { "DOI": [ "10.1037/0022-3514.64.2.274" ] }, "num": null, "urls": [], "raw_text": "Delia Cioffi and James Holloway. 1993. Delayed costs of suppressed pain. Journal of Personality and Social Psychology, 64(2):274-282.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bayesian Optimization: Open source constrained global optimization tool for Python", "authors": [ { "first": "Nogueira", "middle": [], "last": "Fernando", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nogueira Fernando. 2014. Bayesian Optimization: Open source constrained global optimization tool for Python.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Fleiss", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1973, "venue": "Educational and Psychological Measurement", "volume": "33", "issue": "3", "pages": "613--619", "other_ids": { "DOI": [ "10.1177/001316447303300309" ] }, "num": null, "urls": [], "raw_text": "Joseph L. Fleiss and Jacob Cohen. 1973. The equiva- lence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33(3):613-619.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The psychological health benefits of accepting negative emotions and thoughts: Laboratory, diary, and longitudinal evidence", "authors": [ { "first": "Q", "middle": [], "last": "Brett", "suffix": "" }, { "first": "Phoebe", "middle": [], "last": "Ford", "suffix": "" }, { "first": "Oliver", "middle": [ "P" ], "last": "Lam", "suffix": "" }, { "first": "Iris", "middle": [ "B" ], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Mauss", "suffix": "" } ], "year": 2018, "venue": "Journal of Personality and Social Psychology", "volume": "115", "issue": "6", "pages": "1075--1092", "other_ids": { "DOI": [ "10.1037/pspp0000157" ] }, "num": null, "urls": [], "raw_text": "Brett Q. Ford, Phoebe Lam, Oliver P. John, and Iris B. Mauss. 2018. The psychological health benefits of ac- cepting negative emotions and thoughts: Laboratory, diary, and longitudinal evidence. Journal of Person- ality and Social Psychology, 115(6):1075-1092.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Young adults with mental health conditions and social networking websites: Seeking tools to build community", "authors": [ { "first": "Kris", "middle": [], "last": "Gowen", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Deschaine", "suffix": "" }, { "first": "Darcy", "middle": [], "last": "Gruttadara", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Markey", "suffix": "" } ], "year": 2012, "venue": "Psychiatric Rehabilitation Journal", "volume": "35", "issue": "3", "pages": "245--250", "other_ids": { "DOI": [ "10.2975/35.3.2012.245.250" ] }, "num": null, "urls": [], "raw_text": "Kris Gowen, Matthew Deschaine, Darcy Gruttadara, and Dana Markey. 2012. Young adults with men- tal health conditions and social networking websites: Seeking tools to build community. 
Psychiatric Reha- bilitation Journal, 35(3):245-250.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being", "authors": [ { "first": "James", "middle": [ "J" ], "last": "Gross", "suffix": "" }, { "first": "Oliver", "middle": [ "P" ], "last": "John", "suffix": "" } ], "year": 2003, "venue": "Journal of Personality and Social Psychology", "volume": "85", "issue": "2", "pages": "348--362", "other_ids": { "DOI": [ "10.1037/0022-3514.85.2.348" ] }, "num": null, "urls": [], "raw_text": "James J. Gross and Oliver P. John. 2003. Indi- vidual differences in two emotion regulation pro- cesses: Implications for affect, relationships, and well-being. Journal of Personality and Social Psy- chology, 85(2):348-362.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Random decision forests", "authors": [ { "first": "Kam", "middle": [], "last": "Tin", "suffix": "" }, { "first": "", "middle": [], "last": "Ho", "suffix": "" } ], "year": 1995, "venue": "Proceedings of 3rd international conference on document analysis and recognition", "volume": "1", "issue": "", "pages": "278--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tin Kam Ho. 1995. Random decision forests. In Pro- ceedings of 3rd international conference on docu- ment analysis and recognition, volume 1, pages 278- 282. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Albert: A lite bert for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "you got this!\": A critical discourse analysis of toxic positivity as a discursive construct on facebook", "authors": [ { "first": "Margo", "middle": [], "last": "Lecompte-Van Poucke", "suffix": "" } ], "year": 2022, "venue": "Applied Corpus Linguistics", "volume": "2", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1016/j.acorp.2022.100015" ] }, "num": null, "urls": [], "raw_text": "Margo Lecompte-Van Poucke. 2022. \"you got this!\": A critical discourse analysis of toxic positivity as a discursive construct on facebook. 
Applied Corpus Linguistics, 2(1):100015.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german", "authors": [ { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Sandip", "middle": [], "last": "Modha", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2020, "venue": "Forum for Information Retrieval Evaluation", "volume": "2020", "issue": "", "pages": "29--32", "other_ids": { "DOI": [ "10.1145/3441501.3441517" ] }, "num": null, "urls": [], "raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german. In Forum for Information Re- trieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Climate change, environment pollution, covid-19 pandemic and mental health", "authors": [ { "first": "Donatella", "middle": [], "last": "Marazziti", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Cianconi", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Mucci", "suffix": "" }, { "first": "Lara", "middle": [], "last": "Foresi", "suffix": "" }, { "first": "Chiara", "middle": [], "last": "Chiarantini", "suffix": "" }, { "first": "Alessandra", "middle": [ "Della" ], "last": "Vecchia", "suffix": "" } ], "year": 2021, "venue": "Science of The Total Environment", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.scitotenv.2021.145182" ] }, "num": null, "urls": [], "raw_text": "Donatella Marazziti, Paolo Cianconi, Federico Mucci, Lara Foresi, Chiara Chiarantini, and Alessandra Della Vecchia. 2021. Climate change, environment pollu- tion, covid-19 pandemic and mental health. Science of The Total Environment, page 145182.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bayesian approach to global optimization", "authors": [ { "first": "Jonas", "middle": [], "last": "Mockus", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Mockus. 1989. Bayesian approach to global optimization. Kluwer Academic.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Ethos: a multi-label hate speech detection dataset. Complex Intelligent Systems", "authors": [ { "first": "Ioannis", "middle": [], "last": "Mollas", "suffix": "" }, { "first": "Zoe", "middle": [], "last": "Chrysopoulou", "suffix": "" } ], "year": null, "venue": "Stamatis Karlos, and Grigorios Tsoumakas. 2022", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s40747-021-00608-2" ] }, "num": null, "urls": [], "raw_text": "Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2022. Ethos: a multi-label hate speech detection dataset. 
Complex Intelligent Systems.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multilingual and multi-aspect hate speech analysis", "authors": [ { "first": "Nedjma", "middle": [], "last": "Ousidhoum", "suffix": "" }, { "first": "Zizheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Hongming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Dit-Yan", "middle": [], "last": "Yeung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4675--4684", "other_ids": { "DOI": [ "10.18653/v1/D19-1474" ] }, "num": null, "urls": [], "raw_text": "Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multi- lingual and multi-aspect hate speech analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4675- 4684, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Hope speech detection: A computational analysis of the voice of peace", "authors": [ { "first": "Shriphani", "middle": [], "last": "Palakodety", "suffix": "" }, { "first": "Ashiqur", "middle": [ "R" ], "last": "Khudabukhsh", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "1881--1889", "other_ids": { "DOI": [ "10.3233/FAIA200305" ] }, "num": null, "urls": [], "raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2020. Hope speech detection: A computational analysis of the voice of peace. ECAI 2020, page 1881-1889.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "It's okay to be okay too. why calling out teachers", "authors": [ { "first": "Laura", "middle": [], "last": "Sokal", "suffix": "" }, { "first": "Lesley", "middle": [ "Eblie" ], "last": "Trudel", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Babb", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Sokal, Lesley Eblie Trudel, and Jeff Babb. 2020. It's okay to be okay too. why calling out teachers' \"toxic positivity\" may backfire. EdCan, 60.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "the Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. 
In the Pro- ceedings of ICLR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Ynu_wb at HASOC 2019: Ordered neurons LSTM with attention for identifying hate speech and offensive language", "authors": [ { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yunxia", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Shengyan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaobing", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Working Notes of FIRE 2019 -Forum for Information Retrieval Evaluation", "volume": "2517", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Wang, Yunxia Ding, Shengyan Liu, and Xiaobing Zhou. 2019b. Ynu_wb at HASOC 2019: Ordered neurons LSTM with attention for identifying hate speech and offensive language. In Working Notes of FIRE 2019 -Forum for Information Retrieval Evalua- tion, Kolkata, India, December 12-15, 2019, volume 2517 of CEUR Workshop Proceedings, pages 191- 198. CEUR-WS.org.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Detecting hate speech on twitter using a convolution-gru based deep neural network", "authors": [ { "first": "Ziqi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Tepper", "suffix": "" } ], "year": 2018, "venue": "The Semantic Web", "volume": "", "issue": "", "pages": "745--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting hate speech on twitter using a convolution-gru based deep neural network. In The Semantic Web, pages 745-760, Cham. Springer Inter- national Publishing.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Worldwide Google Trends showing search interest of the term \"Toxic Positivity\".", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Schematic overview of the architecture of our model.", "uris": null, "num": null }, "TABREF0": { "text": "Distribution of toxic positive and non-toxic positive sentences.", "num": null, "content": "
Class | Number of sentences
Toxic Positive | 512
Non-Toxic Positive | 3738
", "type_str": "table", "html": null }, "TABREF1": { "text": "Distribution of the various types of sentences occurring in the dataset.", "num": null, "content": "
Type of sentence | Number of sentences
Worldview | 3128
Advice | 709
Personal Experience | 253
Affirmation | 160
", "type_str": "table", "html": null }, "TABREF2": { "text": "Examples of the text removed during dataset creation.", "num": null, "content": "
Removed Text | Source
Check out this new print for SPRING! #SpringForArt #ThisSpringBuyArt #gardeners #gardens #Inspire #InspirationalQuotes | Twitter
A future Metaverse, a social network for the people by the people, around jobs and finance in the decentralised world. Tomorrow's job fair in 3 dimensions at your fingertips. #MondayMotivation #cryptocurrency #blockchain #Crypto #jobseeker #Trader #Jobs #trading #ICO | Twitter
The failure of Lehman Brothers demonstrated that liquidity provision by the Federal Reserve would not be sufficient to stop the crisis; substantial fiscal resources were necessary. | BrainyQuote
Museums are managers of consciousness. They give us an interpretation of history, of how to view the world and locate ourselves in it. They are, if you want to put it in positive terms, great educational institutions. If you want to put it in negative terms, they are propaganda machines. | BrainyQuote
", "type_str": "table", "html": null }, "TABREF3": { "text": "Classification results of various models used on the dataset.", "num": null, "content": "
Model | Macro Precision | Weighted Precision | Macro Recall | Weighted Recall | Macro F1 | Weighted F1
BERT | 0.78 | 0.84 | 0.60 | 0.86 | 0.63 | 0.83
RoBERTa | 0.71 | 0.85 | 0.70 | 0.84 | 0.68 | 0.85
ALBERT | 0.71 | 0.83 | 0.65 | 0.85 | 0.67 | 0.84
Ensemble | 0.76 | 0.85 | 0.69 | 0.86 | 0.71 | 0.85
", "type_str": "table", "html": null } } } }