{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:21:04.079396Z" }, "title": "Offensive language identification in Dravidian code mixed social media text", "authors": [ { "first": "Sunil", "middle": [], "last": "Saumya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Information Technology Dharwad", "location": { "region": "Karnataka", "country": "India" } }, "email": "sunil.saumya@iiitdwd.ac.in" }, { "first": "Abhinav", "middle": [], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Technology Patna", "location": { "settlement": "Bihar", "country": "India" } }, "email": "" }, { "first": "Jyoti", "middle": [ "Prakash" ], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Technology Patna", "location": { "settlement": "Bihar", "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Hate speech and offensive language recognition on social media platforms has been an active field of research in recent years. In countries where English is not the native language, social media texts are mostly in code-mixed or script-mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The datasets used for this study are Tanglish (Tamil and English) code-mixed, Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1- to 6-gram character TF-IDF features are better suited to this task. 
The best-performing models were naive Bayes, logistic regression, and a vanilla neural network for the Tamil code-mixed, Malayalam code-mixed, and Malayalam script-mixed datasets, respectively, outperforming more popular transfer learning models such as BERT and ULMFiT as well as hybrid deep models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Hate speech and offensive language recognition on social media platforms has been an active field of research in recent years. In countries where English is not the native language, social media texts are mostly in code-mixed or script-mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The datasets used for this study are Tanglish (Tamil and English) code-mixed, Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1- to 6-gram character TF-IDF features are better suited to this task. The best-performing models were naive Bayes, logistic regression, and a vanilla neural network for the Tamil code-mixed, Malayalam code-mixed, and Malayalam script-mixed datasets, respectively, outperforming more popular transfer learning models such as BERT and ULMFiT as well as hybrid deep models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Hate speech is generally defined as any communication that humiliates or denigrates an individual or a group based on characteristics such as colour, ethnicity, sexual orientation, nationality, race, and religion. Due to the huge volume of user-generated content on the web, particularly on social networks such as Twitter and Facebook, the problem of detecting, and possibly restricting, hate speech on these platforms has become a very critical issue (Del Vigna12 et al., 2017). 
Unlike physical abuse, hate speech persists indefinitely on these social platforms and severely affects an individual's mental state, causing depression, sleeplessness, and even suicide (Ullmann and Tomalin, 2020).", "cite_spans": [ { "start": 459, "end": 485, "text": "(Del Vigna12 et al., 2017)", "ref_id": null }, { "start": 676, "end": 703, "text": "(Ullmann and Tomalin, 2020)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Owing to the high frequency of posts, manually detecting hate speech on social media is almost impossible. Recent research has indicated that automated hate speech detection is a more reliable solution. (Davidson et al., 2017) extracted N-gram TF-IDF features from tweets and used logistic regression to classify each tweet into hate, offensive, and non-offensive classes. Another model, for the detection of cyberbullying instances, was presented by (Kumari and Singh, 2020) with a genetic algorithm to optimize the distinguishing features of multimodal posts. (Agarwal and Sureka, 2017) used linguistic, semantic, and sentiment features to detect racist content. LSTM- and CNN-based models for recognising hate speech in social media posts were explored by (Kapil et al., 2020). (Badjatiya et al., 2017) exploited semantic word embeddings to classify each tweet into racist, sexist, and neither classes. Another deep learning model for the detection of hate speech was proposed by (Paul et al., 2020). 
However, most work on hate speech detection has been validated with English datasets only.", "cite_spans": [ { "start": 219, "end": 242, "text": "(Davidson et al., 2017)", "ref_id": "BIBREF13" }, { "start": 465, "end": 489, "text": "(Kumari and Singh, 2020)", "ref_id": "BIBREF26" }, { "start": 576, "end": 602, "text": "(Agarwal and Sureka, 2017)", "ref_id": "BIBREF0" }, { "start": 783, "end": 803, "text": "(Kapil et al., 2020)", "ref_id": "BIBREF24" }, { "start": 806, "end": 830, "text": "(Badjatiya et al., 2017)", "ref_id": "BIBREF1" }, { "start": 1008, "end": 1027, "text": "(Paul et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a country such as India, the majority of people on social media use at least two languages, primarily English and Hindi. Such texts are considered bilingual. In a bilingual setting, the entire post may be written in a single script, with words coming from both languages; this is termed code-mixed (or code-mix) text. A few popular code-mixed combinations in India are English and Hindi (Hinglish), Tanglish (Tamil and English) (Chakravarthi et al., 2020c), Manglish (Malayalam and English) (Chakravarthi et al., 2020a), Kanglish (Kannada and English) (Hande et al., 2020), and so on. The Tamil language is one of the world's longest-surviving classical languages, with a history tracing back to 600 BCE. Tamil literature is dominated by verse, particularly Sangam literature, composed of poems written between 600 BCE and 300 CE. A foremost Tamil author was the poet and philosopher Thiruvalluvar, who composed the Tirukkural, a collection of writings on ethics, politics, love, and morality widely considered the greatest work of Tamil literature. Tamil has the oldest extant literature among the Dravidian languages. All Dravidian languages evolved from the classical Tamil language Mahesan, 2019, 2020a,b). 
Even though these languages have their own scripts, code-mixed comments in them can still be found on the Internet (Chakravarthi, 2020b). Identifying hate content in such bilingual or code-mixed language is a very challenging task (Jose et al., 2020; Chakravarthi, 2020a). An automatic model trained in a monolingual context to detect hate posts may not yield the same results when tested on bilingual or code-mixed text (Puranik et al., 2021; Hegde et al., 2021; Yasaswini et al., 2021; Ghanghor et al., 2021b,a). This is because each system learns and recognises only the words in its vocabulary. When a new word is encountered that is not in the vocabulary, it is marked as an unknown token that contributes nothing to the model's estimation. Therefore, when tested on the same language written in other scripts, the model's performance decreases.", "cite_spans": [ { "start": 568, "end": 588, "text": "(Hande et al., 2020)", "ref_id": "BIBREF19" }, { "start": 1236, "end": 1259, "text": "Mahesan, 2019, 2020a,b)", "ref_id": null }, { "start": 1377, "end": 1398, "text": "(Chakravarthi, 2020b)", "ref_id": "BIBREF4" }, { "start": 1494, "end": 1513, "text": "(Jose et al., 2020;", "ref_id": "BIBREF23" }, { "start": 1514, "end": 1533, "text": "Chakravarthi, 2020a", "ref_id": "BIBREF3" }, { "start": 1692, "end": 1714, "text": "(Puranik et al., 2021;", "ref_id": "BIBREF31" }, { "start": 1715, "end": 1734, "text": "Hegde et al., 2021;", "ref_id": "BIBREF20" }, { "start": 1735, "end": 1758, "text": "Yasaswini et al., 2021;", "ref_id": null }, { "start": 1759, "end": 1784, "text": "Ghanghor et al., 2021b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The current study identifies hate content in Tanglish, Manglish, and Malayalam script-mixed tweets, validated with the dataset provided in the HASOC-Dravidian-CodeMix-FIRE2020 challenge (Chakravarthi et al., 2020b). 
The dataset proposed in the challenge was collected from Twitter. A variety of deep learning models have been examined in the current paper to distinguish offensive from non-offensive posts. Along with these, we also examined transfer learning models, namely BERT (Devlin et al., 2018a) and ULMFiT (Howard and Ruder, 2018), for the classification task.", "cite_spans": [ { "start": 491, "end": 513, "text": "(Devlin et al., 2018a)", "ref_id": "BIBREF15" }, { "start": 525, "end": 549, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the article is organized as follows: Section 2 presents an overview of the work proposed in the domain of hate or offensive speech. The task and dataset are described in Section 3. This is followed by the explanation of the proposed methodology in Section 4. The experimental results and discussion are presented in Sections 5 and 6. The paper concludes by highlighting the main findings in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hate speech identification in social media texts faces many challenges, such as code-mixed and script-mixed social media content. This section sheds light on a few state-of-the-art techniques presented to handle such issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "2" }, { "text": "Most of the analyses proposed for the detection of hate content were validated with monolingual datasets. It is relatively easy to build a monolingual model, since (i) data is readily accessible, (ii) the model learns a single language's vocabulary, and (iii) the unknown-token frequency is lower in the test data. 
(Davidson et al., 2017) worked on 25,000 tweets in English and reported that tweets containing racist and homophobic content were labelled hate speech, while tweets containing sexist content were labelled offensive. Other work on English data was proposed by (Waseem and Hovy, 2016), where n-gram features were extracted for classifying tweets into sexism, racism, and none classes.", "cite_spans": [ { "start": 301, "end": 324, "text": "(Davidson et al., 2017)", "ref_id": "BIBREF13" }, { "start": 559, "end": 582, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "2" }, { "text": "Apart from this, some works were reported on multilingual datasets where scripts of two or more languages are mixed. (Kumar et al., 2018) proposed a model for multilingual datasets containing aggressive and non-aggressive comments in English as well as Hindi from Facebook and Twitter. (Samghabadi et al., 2018) used ensemble learning based on various machine learning classifiers such as logistic regression and SVM, with word n-grams, character n-grams, word embeddings, and sentiment as the feature set. They found that combined word and character n-gram features performed better than any individual feature. (Srivastava et al., 2018) identified online social aggression in Facebook comments in a multilingual scenario and in Wikipedia toxic comments using stacked LSTM units followed by a convolution layer, with fastText as the word representation. They achieved 0.98 AUC for Wikipedia toxic comment classification and weighted F1 scores of 0.63 for the Facebook test set and 0.59 for the Twitter test set. (Mandl et al., 2020; Chakravarthi et al., 2020d) presented several models and their results for English, Hindi, and German datasets. They reported that the best model was a long short-term memory-based network that could better capture the multilingual context. Bohra et al. 
(2018) extended the earlier research on hate speech detection to code-mixed tweets in Hindi and English. (Kumari et al., 2021) presented a Convolutional Neural Network (CNN) and Binary Particle Swarm Optimization (BPSO) based model to classify multimodal posts with images and text into non-aggressive, medium-aggressive, and high-aggressive classes. Another multilingual context is code-mixing, where two languages are written in a single script. For example, (Chakravarthi and Muralidaran, 2021; Chakravarthi et al., 2021a; Suryawanshi and Chakravarthi, 2021) proposed code-mixed Dravidian datasets in Tamil, Malayalam, and Kannada. (Bohra et al., 2018) developed a Hinglish dataset from Twitter. They reported preliminary experimental results of Support Vector Machine (SVM) and Random Forest (RF) classifiers with n-gram and lexicon-based features, with an accuracy of 0.71.", "cite_spans": [ { "start": 118, "end": 138, "text": "(Kumar et al., 2018)", "ref_id": "BIBREF25" }, { "start": 605, "end": 630, "text": "(Srivastava et al., 2018)", "ref_id": "BIBREF35" }, { "start": 998, "end": 1018, "text": "(Mandl et al., 2020;", "ref_id": "BIBREF28" }, { "start": 1019, "end": 1046, "text": "Chakravarthi et al., 2020d)", "ref_id": "BIBREF12" }, { "start": 1261, "end": 1280, "text": "Bohra et al. (2018)", "ref_id": "BIBREF2" }, { "start": 1380, "end": 1401, "text": "(Kumari et al., 2021)", "ref_id": "BIBREF27" }, { "start": 1738, "end": 1773, "text": "Chakravarthi and Muralidaran, 2021;", "ref_id": "BIBREF8" }, { "start": 1774, "end": 1801, "text": "Chakravarthi et al., 2021a;", "ref_id": "BIBREF10" }, { "start": 1802, "end": 1837, "text": "Suryawanshi and Chakravarthi, 2021)", "ref_id": "BIBREF36" }, { "start": 1909, "end": 1928, "text": "(Bohra et al., 2018", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "2" }, { "text": "Research on hate and offensive language, as described above, has mainly been conducted in a monolingual setting. 
The current paper aims to propose a machine learning system for code-mixed and script-mixed datasets to identify hate content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "2" }, { "text": "The current study performs two tasks: (i) Task 1 involves developing an offensive and not-offensive classification system for script-mixed Malayalam comments, and (ii) Task 2 requires building a classifier to categorize Tanglish and Manglish (Tamil and Malayalam written using Roman characters) comments into offensive and not-offensive classes. Table 1 shows an overview of the datasets used in this analysis. As can be seen in Table 1 , there are three sets of data: in the first two sets, Malayalam code-mixed and Tamil code-mixed, the posts were written in a single (Roman) script, but in the last set, the posts were written in two different scripts (Malayalam script-mixed).", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 374, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 451, "end": 458, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Task and Data description", "sec_num": "3" }, { "text": "Three different models were developed to identify hate or offensive content in Dravidian posts: (i) conventional learning-based models, (ii) neural network-based models, and (iii) transfer learning-based models. In this section, we explain the working of each model in detail. A detailed diagram of the presented models is shown in Figure 1 . The results of the models are explained in Section 5.", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 337, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "For conventional machine learning-based classification, the current study explored the use of different N-gram TF-IDF word and character features. 
For characters, 1- to 6-gram character TF-IDF features were used, whereas for words, 1- to 3-gram word TF-IDF features were used. The extracted features were fed to classifiers such as Support Vector Machine (SVM), Logistic Regression (LR), Naive Bayes (NB), and Random Forest (RF). The detailed performance reports for word n-grams and character n-grams are shown in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional learning based models", "sec_num": "4.1" }, { "text": "Initially, the character n-gram TF-IDF features (1-6 grams) extracted in Section 4.1 were used as input to a vanilla neural network (VNN) model. For the vanilla neural network, four fully connected layers were sequenced, having 1024, 256, 128, and 2 neurons in the first, second, third, and fourth layers, respectively. We kept two neurons in the final (output) layer to classify each input as offensive or not-offensive. The final class was determined from the softmax probabilities of the output neurons. In the intermediate layers, the activation function was ReLU. The proposed vanilla neural network was trained with the cross-entropy loss function and the Adam optimizer. The training dropout was 0.3 and the batch size was 32.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural learning-based models", "sec_num": "4.2" }, { "text": "Subsequently, other deep learning models for offensive-class prediction were also developed. A hybrid attention-based Bi-LSTM and CNN network was built as shown in Figure 1 . The detailed working of CNN and attention-based Bi-LSTM networks for text classification can be seen in (Jang et al., 2020; Xu et al., 2020; Saumya et al., 2019). The input to the CNN was a character embedding, whereas the input to the Bi-LSTM was a word embedding. To prepare the character embedding, a one-hot vector representation of characters was used. Every input was padded to a maximum of 200 characters, with repetition. 
The vocabulary contained 70 unique characters. Therefore, a (200 \u00d7 70) dimensional embedding matrix was given as input to the CNN. To extract features in the convolution layer, 128 different filters for each of the 1-gram, 2-gram, 3-gram, and 4-gram sizes were used. The output of the first convolution layer was fed to a second convolution layer with similar filter dimensions. The features extracted from the CNN layers were then represented as a 128-dimensional vector using a dense layer.", "cite_spans": [ { "start": 283, "end": 302, "text": "(Jang et al., 2020;", "ref_id": "BIBREF22" }, { "start": 303, "end": 319, "text": "Xu et al., 2020;", "ref_id": "BIBREF42" }, { "start": 320, "end": 340, "text": "Saumya et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural learning-based models", "sec_num": "4.2" }, { "text": "To prepare the word-embedding input for the Bi-LSTM, we used FastText 1 , training it on the language-specific code-mixed Tamil and Malayalam text for the Tamil and Malayalam models, respectively. The skip-gram architecture was trained for ten epochs to extract the FastText embedding vectors. A maximum of 30 word-embedding vectors was given as input to the network in a time-step manner. Every word was represented by a 100-dimensional vector extracted from the embedding layer. A (30 \u00d7 100) dimensional matrix was then given as input to a 2-layer stacked Bi-LSTM, followed by an attention layer. Finally, the outputs of the attention-based Bi-LSTM and the CNN were concatenated and passed through a softmax layer to predict offensive and not-offensive text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural learning-based models", "sec_num": "4.2" }, { "text": "Hyperparameter tuning was done to optimize the performance of the proposed deep neural model. 
We conducted comprehensive experiments by adjusting the learning rate, batch size, optimizer, number of epochs, loss function, and activation function. The system performed best with a learning rate of 0.001, a batch size of 32, the Adam optimizer, 100 epochs, binary cross-entropy as the loss function, and ReLU activation within the internal layers of the network. At the output layer, the activation was softmax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural learning-based models", "sec_num": "4.2" }, { "text": "The current study used two different transfer models, BERT (Bidirectional Encoder Representations from Transformers) and ULMFiT (Universal Language Model Fine-tuning for Text Classification), to accomplish the given objectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer models", "sec_num": "4.3" }, { "text": "Two different variations of the BERT model 2 (Devlin et al., 2018b) are used in the current study: (i) BERT base (bert-base-uncased), and (ii) BERT multilingual (bert-base-multilingual-uncased). The BERT base model is trained for the English language using a masked language modelling technique, whereas BERT multilingual is trained for 102 languages with masked language modelling. We used the ktrain 3 library to develop the BERT-based models. Both BERT variations are uncased, meaning they do not distinguish between upper-case and lower-case words. In training the BERT models, we fixed the input text length at 30 words and used a batch size of 32 and a learning rate of 2e\u22125 to fine-tune the pre-trained model. A detailed description of the BERT model can be seen in (Sanh et al., 2019). The other transfer model used was ULMFiT, which can be applied to any task in NLP. To train the ULMFiT model, we used the fastai library 4 . 
The input and hyper-parameters were the same as those used for BERT.", "cite_spans": [ { "start": 41, "end": 63, "text": "(Devlin et al., 2018b)", "ref_id": "BIBREF16" }, { "start": 778, "end": 797, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Transfer models", "sec_num": "4.3" }, { "text": "This section presents the experimental results of the three models explained in Section 4. The results are reported in terms of precision, recall, and F1-score for the offensive and not-offensive classes. The weighted average of both classes is also presented. A particular model is identified as best if it reports the highest weighted average of precision, recall, and F1-score. Values in bold represent the highest value for a particular dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result", "sec_num": "5" }, { "text": "The conventional learning experiments were performed using character N-gram (1- to 6-gram) TF-IDF features. The results are shown in Table 2 for SVM, LR, NB, and RF. In the case of Tamil code-mixed text, the NB classifier performed best and achieved a precision, recall, and F1-score of 0.90. In the case of Malayalam code-mixed text, the LR classifier performed best with a precision, recall, and F1-score of 0.78. Similarly, in the case of Malayalam script-mixed text, the RF classifier performed best, with precision and recall of 0.95 and an F1-score of 0.94. Similar experiments were done for word TF-IDF features with 1- to 3-grams. The results are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 673, "end": 680, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Result", "sec_num": "5" }, { "text": "The results of the proposed neural-based models for Tamil code-mixed, Malayalam code-mixed, and Malayalam script-mixed text are listed in Table 4 . 
As can be seen from Table 4 , the vanilla neural network (VNN) model outperformed the attention-based Bi-LSTM-CNN for all three datasets. For Tamil code-mixed text, the VNN reported a precision, recall, and F1-score of 0.89, and for Malayalam code-mixed text it reported a precision, recall, and F1-score of 0.77. Similarly, for Malayalam script-mixed data, the proposed vanilla neural network reported a precision, recall, and F1-score of 0.95. Finally, the experimental results of the transfer models are shown in Table 5 . The table shows the results of three transfer models: BERT, BERT-multilingual, and ULMFiT. On Malayalam script-mixed text, the BERT-multilingual model achieved the highest precision, recall, and F1-score of 0.93. For Tamil code-mixed text too, BERT-multilingual performed better than the others, with a precision, recall, and F1-score of 0.86. The results of the BERT model were also comparable, with a precision, recall, and F1-score of 0.89, 0.84, and 0.86. For Malayalam code-mixed text, however, BERT's performance was highest, with a precision, recall, and F1-score of 0.76.", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 172, "end": 179, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 644, "end": 651, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Result", "sec_num": "5" }, { "text": "Of all the experimental models, conventional learning models with character 1- to 6-gram TF-IDF features showed the best results for two datasets, Tamil code-mixed and Malayalam code-mixed. For Tamil code-mixed, the best performance was reported by the NB model, with a precision, recall, and F1-score of 0.90. Similarly, for Malayalam code-mixed, the LR model performed best, with a precision, recall, and F1-score of 0.78. However, for Malayalam script-mixed, the Vanilla Neural Network (VNN) reported the best results, with a precision, recall, and F1-score of 0.95, 0.95, and 0.95, respectively. 
The receiver operating characteristic (ROC) area under the curve for the three best models is shown in Figures 2, 3, and 4 .", "cite_spans": [], "ref_spans": [ { "start": 690, "end": 709, "text": "Figures 2, 3, and 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Result Comparison and Discussion", "sec_num": "6" }, { "text": "The outcome of this comprehensive study was surprising: the performance of the more complex models, such as the Bi-LSTM-CNN hybrid model and the transfer models, was relatively low, even though such models have proven better for many NLP tasks, such as text classification and language modelling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Comparison and Discussion", "sec_num": "6" }, { "text": "The results indicate that character n-gram TF-IDF features play a very important role for code-mixed and script-mixed data. Secondly, and in the same sense, the performance of the transfer models is not encouraging. BERT, which is trained on the English language, treats most tokens of code-mixed and script-mixed data as unknown tokens, which affects model performance. BERT-multilingual, which is trained on 102 languages, first identifies the language of the input text and then loads its vocabulary. For code-mixed and script-mixed data, BERT-multilingual identified a single language and processed the text accordingly; in effect, the overall model performance was reduced. Moreover, it was found that the language identified by BERT-multilingual for the code-mixed and script-mixed datasets differed across runs. Consequently, the results even fluctuated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Comparison and Discussion", "sec_num": "6" }, { "text": "Hate speech identification in code-mixed and script-mixed contexts is one of the most challenging tasks in NLP. 
The current study presented extensive experiments utilizing various conventional learning, deep learning, and transfer learning models. The three datasets used in the study were Tamil code-mixed, Malayalam code-mixed, and Malayalam script-mixed. The results reported by all models clearly show that conventional learning models, along with the vanilla neural model, outperformed the more complex deep learning and transfer learning models. The character N-gram TF-IDF based Naive Bayes classifier performed best, with a weighted precision, recall, and F1-score of 0.90, for Tamil code-mixed text. The logistic regression classifier with character N-gram TF-IDF features performed best, with a weighted precision, recall, and F1-score of 0.78, for Malayalam code-mixed text. The Vanilla Neural Network with character N-gram TF-IDF features performed best, with a weighted precision of 0.95, recall of 0.95, and F1-score of 0.95, for Malayalam script-mixed text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://fasttext.cc/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/transformers/pretrained_models.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/amaiya/ktrain 4 https://nlp.fast.ai/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Characterizing linguistic attributes for automatic classification of intent based racist/radicalized posts on tumblr micro-blogging website", "authors": [ { "first": "Swati", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sureka", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.04931" ] }, "num": null, "urls": [], 
"raw_text": "Swati Agarwal and Ashish Sureka. 2017. Characterizing linguistic attributes for automatic classification of intent based racist/radicalized posts on tumblr micro-blogging website. arXiv preprint arXiv:1701.04931.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Deep learning for hate speech detection in tweets", "authors": [ { "first": "Pinkesh", "middle": [], "last": "Badjatiya", "suffix": "" }, { "first": "Shashank", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on WWW Companion", "volume": "", "issue": "", "pages": "759--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on WWW Companion, pages 759-760.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection", "authors": [ { "first": "Aditya", "middle": [], "last": "Bohra", "suffix": "" }, { "first": "Deepanshu", "middle": [], "last": "Vijay", "suffix": "" }, { "first": "Vinay", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Syed Sarfaraz Akhtar", "suffix": "" }, { "first": "", "middle": [], "last": "Shrivastava", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media", "volume": "", "issue": "", "pages": "36--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection. 
In Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pages 36-41.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion", "authors": [ { "first": "Chakravarthi", "middle": [], "last": "Bharathi Raja", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media", "volume": "", "issue": "", "pages": "41--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi. 2020a. HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 41-53, Barcelona, Spain (Online). Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Leveraging orthographic information to improve machine translation of under-resourced languages", "authors": [ { "first": "Chakravarthi", "middle": [], "last": "Bharathi Raja", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi. 2020b. Leveraging orthographic information to improve machine translation of under-resourced languages. Ph.D.
thesis, NUI Galway.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A sentiment analysis dataset for code-mixed Malayalam-English", "authors": [ { "first": "Navya", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Shardul", "middle": [], "last": "Jose", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "John", "middle": [ "Philip" ], "last": "Sherly", "suffix": "" }, { "first": "", "middle": [], "last": "McCrae", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip McCrae. 2020a. A sentiment analysis dataset for code-mixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France.
European Language Resources association.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Overview of the track on HASOC-Offensive Language Identification-DravidianCodeMix", "authors": [ { "first": "Anand", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "John", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "", "middle": [], "last": "Philip Mccrae", "suffix": "" }, { "first": "B", "middle": [], "last": "Premjith", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" } ], "year": 2020, "venue": "Working Notes of the Forum for Information Retrieval Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, M Anand Kumar, John Philip McCrae, Premjith B, Soman KP, and Thomas Mandl. 2020b. Overview of the track on HASOC-Offensive Language Identification-DravidianCodeMix. In Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2020).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "CEUR Workshop Proceedings", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CEUR Workshop Proceedings. In: CEUR-WS.org, Hyderabad, India.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion", "authors": [ { "first": "Vigneshwaran", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "", "middle": [], "last": "Muralidaran", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Muralidaran. 2021.
Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text", "authors": [ { "first": "Vigneshwaran", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Muralidaran", "suffix": "" }, { "first": "John", "middle": [ "Philip" ], "last": "Priyadharshini", "suffix": "" }, { "first": "", "middle": [], "last": "McCrae", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "202--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Muralidaran, Ruba Priyadharshini, and John Philip McCrae. 2020c. Corpus creation for sentiment analysis in code-mixed Tamil-English text. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France.
European Language Resources association.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Findings of the shared task on Machine Translation in Dravidian languages", "authors": [ { "first": "Ruba", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Shubhanker", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "John", "middle": [], "last": "Saldhana", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Philip Mccrae", "suffix": "" }, { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Parameswari", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Shubhanker Banerjee, Richard Saldhana, John Philip McCrae, Anand Kumar M, Parameswari Krishnamurthy, and Melvin Johnson. 2021a. Findings of the shared task on Machine Translation in Dravidian languages. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages.
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Findings of the shared task on Offensive Language Identification in Tamil, Malayalam, and Kannada", "authors": [ { "first": "Ruba", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Navya", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Jose", "suffix": "" }, { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Kumar Kumaresan", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Ponnusamy", "suffix": "" }, { "first": "V", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Sherly", "suffix": "" }, { "first": "John", "middle": [ "Philip" ], "last": "McCrae", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Navya Jose, Anand Kumar M, Thomas Mandl, Prasanna Kumar Kumaresan, Rahul Ponnusamy, Hariharan V, Elizabeth Sherly, and John Philip McCrae. 2021b. Findings of the shared task on Offensive Language Identification in Tamil, Malayalam, and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages.
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Overview of the Track on Sentiment Analysis for Dravidian Languages in Code-Mixed Text", "authors": [ { "first": "Ruba", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Vigneshwaran", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Shardul", "middle": [], "last": "Muralidaran", "suffix": "" }, { "first": "Navya", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Jose", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Sherly", "suffix": "" }, { "first": "", "middle": [], "last": "Mccrae", "suffix": "" } ], "year": 2020, "venue": "In Forum for Information Retrieval Evaluation", "volume": "2020", "issue": "", "pages": "21--24", "other_ids": { "DOI": [ "10.1145/3441501.3441515" ] }, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Vigneshwaran Muralidaran, Shardul Suryawanshi, Navya Jose, Elizabeth Sherly, and John P. McCrae. 2020d. Overview of the Track on Sentiment Analysis for Dravidian Languages in Code-Mixed Text. In Forum for Information Retrieval Evaluation, FIRE 2020, page 21-24, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.04009" ] }, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017.
Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hate me, hate me not: Hate speech detection on facebook", "authors": [], "year": 2017, "venue": "Proceedings of the First Italian Conference on Cybersecurity (ITASEC17)", "volume": "", "issue": "", "pages": "86--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Del Vigna12, Andrea Cimino23, Felice Dell'Orletta, Marinella Petrocchi, and Maurizio Tesconi. 2017. Hate me, hate me not: Hate speech detection on facebook. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), pages 86-95.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018a. Bert: Pre-training of deep bidirectional transformers for language understanding.
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018b. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2021a. IIITK@DravidianLangTech-EACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada", "authors": [ { "first": "Parameswari", "middle": [], "last": "Nikhil Kumar Ghanghor", "suffix": "" }, { "first": "Sajeetha", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "", "middle": [], "last": "Thavareesan", "suffix": "" } ], "year": null, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Kumar Ghanghor, Parameswari Krishnamurthy, Sajeetha Thavareesan, Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2021a. IIITK@DravidianLangTech-EACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "IIITK@LT-EDI-EACL2021: Hope Speech Detection for Equality, Diversity, and Inclusion in Tamil, Malayalam and English", "authors": [ { "first": "Rahul", "middle": [], "last": "Nikhil Kumar Ghanghor", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Ponnusamy", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Kumar Kumaresan", "suffix": "" }, { "first": "Sajeetha", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Kumar Ghanghor, Rahul Ponnusamy, Prasanna Kumar Kumaresan, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021b. IIITK@LT-EDI-EACL2021: Hope Speech Detection for Equality, Diversity, and Inclusion in Tamil, Malayalam and English. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, Online.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection", "authors": [ { "first": "Adeep", "middle": [], "last": "Hande", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media", "volume": "", "issue": "", "pages": "54--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adeep Hande, Ruba Priyadharshini, and Bharathi Raja Chakravarthi.
2020. KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 54-63, Barcelona, Spain (Online). Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "UVCE-IIITT@DravidianLangTech-EACL2021: Tamil Troll Meme Classification: You need to Pay more Attention", "authors": [ { "first": "Adeep", "middle": [], "last": "Siddhanth U Hegde", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Hande", "suffix": "" }, { "first": "Sajeetha", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhanth U Hegde, Adeep Hande, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. UVCE-IIITT@DravidianLangTech-EACL2021: Tamil Troll Meme Classification: You need to Pay more Attention. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.06146" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification.
arXiv preprint arXiv:1801.06146.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bi-lstm model to increase accuracy in text classification: combining word2vec cnn and attention mechanism", "authors": [ { "first": "Beakcheol", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Myeonghwi", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gaspard", "middle": [], "last": "Harerimana", "suffix": "" }, { "first": "Sang-Ug", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Jong Wook", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Applied Sciences", "volume": "10", "issue": "17", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beakcheol Jang, Myeonghwi Kim, Gaspard Harerimana, Sang-ug Kang, and Jong Wook Kim. 2020. Bi-lstm model to increase accuracy in text classification: combining word2vec cnn and attention mechanism. Applied Sciences, 10(17):5841.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Survey of Current Datasets for Code-Switching Research", "authors": [ { "first": "Navya", "middle": [], "last": "Jose", "suffix": "" }, { "first": "Shardul", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Sherly", "suffix": "" }, { "first": "", "middle": [], "last": "Mccrae", "suffix": "" } ], "year": 2020, "venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)", "volume": "", "issue": "", "pages": "136--141", "other_ids": { "DOI": [ "10.1109/ICACCS48705.2020.9074205" ] }, "num": null, "urls": [], "raw_text": "Navya Jose, Bharathi Raja Chakravarthi, Shardul Suryawanshi, Elizabeth Sherly, and John P. McCrae. 2020. A Survey of Current Datasets for Code-Switching Research.
In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pages 136-141.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Investigating deep learning approaches for hate speech detection in social media", "authors": [ { "first": "Prashant", "middle": [], "last": "Kapil", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Dipankar", "middle": [], "last": "Das", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14690" ] }, "num": null, "urls": [], "raw_text": "Prashant Kapil, Asif Ekbal, and Dipankar Das. 2020. Investigating deep learning approaches for hate speech detection in social media. arXiv preprint arXiv:2005.14690.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Trac-1 shared task on aggression identification: Iit (ism) @ coling'18", "authors": [ { "first": "Ritesh", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Guggilla", "middle": [], "last": "Bhanodai", "suffix": "" }, { "first": "Rajendra", "middle": [], "last": "Pamula", "suffix": "" }, { "first": "Maheshwar Reddy", "middle": [], "last": "Chennuru", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)", "volume": "", "issue": "", "pages": "58--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ritesh Kumar, Guggilla Bhanodai, Rajendra Pamula, and Maheshwar Reddy Chennuru. 2018. Trac-1 shared task on aggression identification: Iit (ism) @ coling'18.
In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 58-65.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Identification of cyberbullying on multi-modal social media posts using genetic algorithm", "authors": [ { "first": "Kirti", "middle": [], "last": "Kumari", "suffix": "" }, { "first": "Jyoti", "middle": [ "Prakash" ], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Transactions on Emerging Telecommunications Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kirti Kumari and Jyoti Prakash Singh. 2020. Identification of cyberbullying on multi-modal social media posts using genetic algorithm. Transactions on Emerging Telecommunications Technologies, page e3907.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-modal aggression identification using convolutional neural network and binary particle swarm optimization. Future Generation Computer Systems", "authors": [ { "first": "Kirti", "middle": [], "last": "Kumari", "suffix": "" }, { "first": "Jyoti", "middle": [ "Prakash" ], "last": "Singh", "suffix": "" }, { "first": "K", "middle": [], "last": "Yogesh", "suffix": "" }, { "first": "Nripendra P", "middle": [], "last": "Dwivedi", "suffix": "" }, { "first": "", "middle": [], "last": "Rana", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kirti Kumari, Jyoti Prakash Singh, Yogesh K Dwivedi, and Nripendra P Rana. 2021. Multi-modal aggression identification using convolutional neural network and binary particle swarm optimization.
Future Generation Computer Systems.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil", "authors": [ { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Sandip", "middle": [], "last": "Modha", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Bharathi Raja Chakravarthi ;", "middle": [], "last": "Malayalam", "suffix": "" }, { "first": "", "middle": [], "last": "Hindi", "suffix": "" }, { "first": "German", "middle": [], "last": "English", "suffix": "" } ], "year": 2020, "venue": "Forum for Information Retrieval Evaluation", "volume": "2020", "issue": "", "pages": "29--32", "other_ids": { "DOI": [ "10.1145/3441501.3441517" ] }, "num": null, "urls": [], "raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Identification of cyberbullying: A deep learning based multimodal approach. Multimedia Tools and Applications", "authors": [ { "first": "Sayanta", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Sriparna", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Hasanuzzaman", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "1--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sayanta Paul, Sriparna Saha, and Mohammed Hasanuzzaman. 2020. Identification of cyberbullying: A deep learning based multimodal approach.
Multimedia Tools and Applications, pages 1-20.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding", "authors": [ { "first": "Ruba", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Mani", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Vegupatti", "suffix": "" }, { "first": "", "middle": [], "last": "Mccrae", "suffix": "" } ], "year": 2020, "venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)", "volume": "", "issue": "", "pages": "68--72", "other_ids": { "DOI": [ "10.1109/ICACCS48705.2020.9074379" ] }, "num": null, "urls": [], "raw_text": "Ruba Priyadharshini, Bharathi Raja Chakravarthi, Mani Vegupatti, and John P. McCrae. 2020. Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding. In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pages 68-72.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always hope in Transformers", "authors": [ { "first": "Karthik", "middle": [], "last": "Puranik", "suffix": "" }, { "first": "Adeep", "middle": [], "last": "Hande", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Puranik, Adeep Hande, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always hope in Transformers.
In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Ritual-uh at trac 2018 shared task: Aggression identification", "authors": [ { "first": "Deepthi", "middle": [], "last": "Niloofar Safi Samghabadi", "suffix": "" }, { "first": "Sudipta", "middle": [], "last": "Mave", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Kar", "suffix": "" }, { "first": "", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying", "volume": "", "issue": "", "pages": "12--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niloofar Safi Samghabadi, Deepthi Mave, Sudipta Kar, and Thamar Solorio. 2018. Ritual-uh at trac 2018 shared task: Aggression identification. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 12-18.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01108" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Predicting the helpfulness score of online reviews using convolutional neural network", "authors": [ { "first": "Sunil", "middle": [], "last": "Saumya", "suffix": "" }, { "first": "Jyoti", "middle": [ "Prakash" ], "last": "Singh", "suffix": "" }, { "first": "Yogesh K", "middle": [], "last": "Dwivedi", "suffix": "" } ], "year": 2019, "venue": "Soft Computing", "volume": "", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunil Saumya, Jyoti Prakash Singh, and Yogesh K Dwivedi. 2019. Predicting the helpfulness score of online reviews using convolutional neural network. Soft Computing, pages 1-17.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Identifying aggression and toxicity in comments using capsule network", "authors": [ { "first": "Saurabh", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Prerna", "middle": [], "last": "Khurana", "suffix": "" }, { "first": "Vartika", "middle": [], "last": "Tewari", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)", "volume": "", "issue": "", "pages": "98--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saurabh Srivastava, Prerna Khurana, and Vartika Tewari. 2018. Identifying aggression and toxicity in comments using capsule network. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 98-105.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Findings of the shared task on Troll Meme Classification in Tamil", "authors": [ { "first": "Shardul", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardul Suryawanshi and Bharathi Raja Chakravarthi. 2021. Findings of the shared task on Troll Meme Classification in Tamil. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2019, "venue": "2019 14th Conference on Industrial and Information Systems (ICIIS)", "volume": "", "issue": "", "pages": "320--325", "other_ids": { "DOI": [ "10.1109/ICIIS47346.2019.9063341" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2019. Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 320-325.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2020, "venue": "2020 Moratuwa Engineering Research Conference (MERCon)", "volume": "", "issue": "", "pages": "272--276", "other_ids": { "DOI": [ "10.1109/MERCon50084.2020.9185369" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020a. Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts.
In 2020 Moratuwa Engineering Re- search Conference (MERCon), pages 272-276.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Word embedding-based Part of Speech tagging in Tamil texts", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)", "volume": "", "issue": "", "pages": "478--482", "other_ids": { "DOI": [ "10.1109/ICIIS51140.2020.9342640" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020b. Word embedding-based Part of Speech tag- ging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pages 478-482.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Quarantining online hate speech: technical and ethical perspectives", "authors": [ { "first": "Stefanie", "middle": [], "last": "Ullmann", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Tomalin", "suffix": "" } ], "year": 2020, "venue": "Ethics and Information Technology", "volume": "22", "issue": "1", "pages": "69--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Ullmann and Marcus Tomalin. 2020. Quar- antining online hate speech: technical and ethical perspectives. Ethics and Information Technology, 22(1):69-80.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL student research workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. 
Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Incorporating context-relevant concepts into convolutional neural networks for short text classification", "authors": [ { "first": "Jingyun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Qingbao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Ho-Fung Leung", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Neurocomputing", "volume": "386", "issue": "", "pages": "42--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingyun Xu, Yi Cai, Xin Wu, Xue Lei, Qingbao Huang, Ho-fung Leung, and Qing Li. 2020. Incorporating context-relevant concepts into convolutional neural networks for short text classification. Neurocomput- ing, 386:42-53.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. 
IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages", "authors": [ { "first": "Konthala", "middle": [], "last": "Yasaswini", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Puranik", "suffix": "" }, { "first": "Adeep", "middle": [], "last": "Hande", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Priyadharshini", "suffix": "" } ], "year": null, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konthala Yasaswini, Karthik Puranik, Adeep Hande, Ruba Priyadharshini, Sajeetha Thava- reesan, and Bharathi Raja Chakravarthi. 2021. IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages. In Proceedings of the First Workshop on Speech and Language Technolo- gies for Dravidian Languages. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Proposed hybrid attention-based Bi-LSTM and CNN network", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "ROC curve (area = 0.90) macro-average ROC curve (area = 0.90) Not-offensive (AUC = 0.90) Offensive (AUC = 0.90) Figure 2: ROC for Naive Bayes (Tamil code-mixed)", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "ROC for Logistic Regression (Malayalam code-mixed) ROC curve (area = 0.97) macro-average ROC curve (area = 0.93) Not-offensive (AUC = 0.92) Offensive (AUC = 0.94)", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "ROC for vanilla Neural Network (Malayalam script-mixed)", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "content": "
Language Class Not-offensive Offensive Total
Malayalam code-mixed Training 2047 1953 4000
Testing 473 478 951
Tamil code-mixed Training 2020 1980 4000
Testing 465 475 940
Malayalam script-mixed Training 2633 567 3200
Development 328 72 400
Testing 334 66 400
[Figure 1 residue: Dravidian code-mixed and script-mixed posts feed three branches, each predicting Offensive / Not-Offensive: (i) conventional learning models (SVM, LR, NB, RF); (ii) the attention-based Bi-LSTM-CNN model, combining a word channel (word embedding 30 x 100, Bi-LSTM 512 and 256, attention layer) and a character channel (character embedding 200 x 70, CNNs with 128 filters each for 1-, 2-, 3-, and 4-grams, dense 128) into a concatenated layer (640); (iii) transfer learning models (BERT, BERT-multilingual, ULMFiT)]
", "html": null, "num": null, "type_str": "table", "text": "Data statistic used in this study" }, "TABREF1": { "content": "
Models Class Tamil (Code-mixed) Malayalam (Code-mixed) Malayalam (Script-mixed)
Precision Recall F 1 -score Precision Recall F 1 -score Precision Recall F 1 -score
Offensive 0.87 0.90 0.88 0.83 0.69 0.75 0.97 0.56 0.71
SVM Not-offensive 0.89 0.86 0.88 0.73 0.86 0.79 0.92 1.00 0.96
Weighted Avg. 0.88 0.88 0.88 0.78 0.77 0.77 0.93 0.93 0.92
Offensive 0.88 0.89 0.89 0.81 0.72 0.77 0.91 0.30 0.45
LR Not-offensive 0.89 0.88 0.88 0.75 0.83 0.79 0.88 0.99 0.93
Weighted Avg. 0.89 0.89 0.89 0.78 0.78 0.78 0.88 0.88 0.85
Offensive 0.92 0.88 0.90 0.79 0.63 0.70 0.49 0.73 0.59
NB Not-offensive 0.88 0.92 0.90 0.69 0.83 0.75 0.94 0.85 0.89
Weighted Avg. 0.90 0.90 0.90 0.74 0.73 0.73 0.87 0.83 0.84
Offensive 0.85 0.90 0.88 0.78 0.70 0.74 0.96 0.71 0.82
RF Not-offensive 0.89 0.84 0.87 0.72 0.81 0.76 0.95 0.99 0.97
Weighted Avg. 0.87 0.87 0.87 0.75 0.75 0.75 0.95 0.95 0.94
", "html": null, "num": null, "type_str": "table", "text": "Results for the different classifiers with character 1 to 6-gram TF-IDF feature" }
Models Class Tamil (Code-mixed) Malayalam (Code-mixed) Malayalam (Script-mixed)
Precision Recall F 1 -score Precision Recall F 1 -score Precision Recall F 1 -score
Offensive 0.64 0.94 0.76 0.75 0.55 0.63 1.00 0.53 0.69
SVM Not-offensive 0.88 0.46 0.60 0.64 0.81 0.72 0.92 1.00 0.96
Weighted Avg. 0.76 0.70 0.68 0.69 0.68 0.67 0.93 0.92 0.91
Offensive 0.88 0.86 0.87 0.75 0.68 0.71 0.91 0.30 0.45
LR Not-offensive 0.86 0.88 0.87 0.70 0.77 0.74 0.88 0.99 0.93
Weighted Avg. 0.87 0.87 0.87 0.73 0.73 0.73 0.88 0.88 0.85
Offensive 0.67 0.82 0.74 0.68 0.62 0.65 0.47 0.83 0.60
NB Not-offensive 0.76 0.59 0.67 0.65 0.71 0.68 0.96 0.81 0.88
Weighted Avg. 0.72 0.71 0.70 0.67 0.66 0.66 0.88 0.81 0.83
Offensive 0.78 0.89 0.83 0.70 0.66 0.68 0.94 0.67 0.78
RF Not-offensive 0.87 0.75 0.80 0.67 0.71 0.69 0.94 0.99 0.96
Weighted Avg. 0.83 0.82 0.82 0.69 0.68 0.68 0.94 0.94 0.93
", "html": null, "num": null, "type_str": "table", "text": "Results for the different classifiers with word 1 to 3-gram TF-IDF feature" }, "TABREF3": { "content": "
Models Class Tamil (code-mixed) Malayalam (Code-mixed) Malayalam (script-mixed)
Precision Recall F1-score Precision Recall F1-score Precision Recall F1-score
Offensive 0.87 0.91 0.89 0.77 0.78 0.78 0.96 0.76 0.85
Vanilla NN Not-offensive 0.91 0.86 0.88 0.78 0.76 0.76 0.95 0.99 0.97
Weighted-Avg 0.89 0.89 0.89 0.77 0.77 0.77 0.95 0.95 0.95
Offensive 0.83 0.83 0.84 0.71 0.71 0.71 0.89 0.68 0.77
Attention-based BiLSTM-CNN Not-offensive 0.85 0.85 0.84 0.71 0.71 0.71 0.93 0.98 0.96
Weighted-Avg 0.84 0.84 0.84 0.71 0.71 0.71 0.93 0.93 0.92
", "html": null, "num": null, "type_str": "table", "text": "Results for the VNN and attention-based BiLSTM-CNN models" }, "TABREF4": { "content": "
Models Class Tamil (code-mixed) Malayalam (Code-mixed) Malayalam (script-mixed)
Precision Recall F1-score Precision Recall F1-score Precision Recall F1-score
Offensive 0.93 0.77 0.84 0.71 0.79 0.75 0.73 0.73 0.73
BERT Not-offensive 0.85 0.92 0.88 0.81 0.73 0.77 0.73 0.73 0.73
Weighted-Avg 0.89 0.84 0.86 0.76 0.76 0.76 0.73 0.73 0.73
BERT Offensive 0.85 0.87 0.86 0.75 0.68 0.72 0.95 0.97 0.96
Not-offensive 0.86 0.85 0.86 0.71 0.77 0.74 0.83 0.74 0.78
Multilingual Weighted-Avg 0.86 0.86 0.86 0.73 0.73 0.73 0.93 0.93 0.93
Offensive 0.72 0.63 0.67 0.30 0.51 0.38 0.57 0.68 0.62
ULMFiT Not-offensive 0.56 0.57 0.57 0.71 0.50 0.59 0.72 0.56 0.63
Weighted-Avg 0.65 0.60 0.62 0.59 0.50 0.52 0.66 0.61 0.63
", "html": null, "num": null, "type_str": "table", "text": "Results transfer models BERT and ULMFiT" } } } }