{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:32:18.678897Z"
},
"title": "Transformers to Fight the COVID-19 Infodemic",
"authors": [
{
"first": "Lasitha",
"middle": [],
"last": "Uyangodage",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of M\u00fcnster",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": "",
"affiliation": {
"laboratory": "Research Group in Computational Linguistics",
"institution": "University of Wolverhampton",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The massive spread of false information on social media has become a global risk especially in a global pandemic situation like COVID-19. False information detection has thus become a surging research topic in recent months. NLP4IF-2021 shared task on fighting the COVID-19 infodemic has been organised to strengthen the research in false information detection where the participants are asked to predict seven different binary labels regarding false information in a tweet. The shared task has been organised in three languages; Arabic, Bulgarian and English. In this paper, we present our approach to tackle the task objective using transformers. Overall, our approach achieves a 0.707 mean F1 score in Arabic, 0.578 mean F1 score in Bulgarian and 0.864 mean F1 score in English ranking 4 th place in all the languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The massive spread of false information on social media has become a global risk especially in a global pandemic situation like COVID-19. False information detection has thus become a surging research topic in recent months. NLP4IF-2021 shared task on fighting the COVID-19 infodemic has been organised to strengthen the research in false information detection where the participants are asked to predict seven different binary labels regarding false information in a tweet. The shared task has been organised in three languages; Arabic, Bulgarian and English. In this paper, we present our approach to tackle the task objective using transformers. Overall, our approach achieves a 0.707 mean F1 score in Arabic, 0.578 mean F1 score in Bulgarian and 0.864 mean F1 score in English ranking 4 th place in all the languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "By April 2021, coronavirus(COVID-19) pandemic has affected 219 nations around the world with 136 million total cases and 2.94 million deaths. With this pandemic situation, a rapid increase in social media usage was noticed. In measures, during 2020, 490 million new users joined indicating a more than 13% year-on-year growth (Kemp, 2021) . This growth is mainly resulted due to the impacts on day-to-day activities and information sharing and gathering requirements related to the pandemic.",
"cite_spans": [
{
"start": 326,
"end": 338,
"text": "(Kemp, 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a drawback of these exponential growths, the dark side of social media is further revealed during this COVID-19 infodemic (Mourad et al., 2020) . The spreading of false and harmful information resulted in panic and confusions which make the pandemic situation worse. Also, the inclusion of false information reduced the usability of a huge volume of data which is generated via social media platforms with the capability of fast propagation. To handle these issues and utilise social media data effectively, accurate identification of false information is crucial. Considering the high data generation in social media, manual approaches to filter false information require significant human efforts. Therefore an automated technique to tackle this problem will be invaluable to the community.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Mourad et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Targeting the infodemic that occurred with COVID-19, NLP4IF-2021 shared task was designed to predict several properties of a tweet including harmfulness, falseness, verifiability, interest to the general public and required attention. The participants of this task were required to predict the binary aspect of the given properties for the test sets in three languages: Arabic, Bulgarian and English provided by the organisers. Our team used recently released transformer models with the text classification architecture to make the predictions and achieved the 4 th place in all the languages while maintaining the simplicity and universality of the method. In this paper, we mainly present our approach, with more details about the architecture including an experimental study. We also provide our code to the community which will be freely available to everyone interested in working in this area using the same methodology 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Identifying false information in social media has been a major research topic in recent years. False information detection methods can be mainly categorised into two main areas; Content-based methods and Social Context-based methods (Guo et al., 2020) .",
"cite_spans": [
{
"start": 233,
"end": 251,
"text": "(Guo et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Content-based methods are mainly based on the different features in the content of the tweet. For example, Castillo et al. (2011) find that highly credible tweets have more URLs, and the textual content length is usually longer than that of lower credibility tweets. Many studies utilize the lexical and syntactic features to detect false information. For instance, Qazvinian et al. (2011) find that the part of speech (POS) is a distinguishable feature for false information detection. Kwon et al. (2013) find that some types of sentiments are apparent features of machine learning classifiers, including positive sentiments words (e.g., love, nice, sweet), negating words (e.g., no, not, never), cognitive action words (e.g., cause, know), and inferring action words (e.g., maybe, perhaps). Then they propose a periodic time-series model to identify key linguistic differences between true tweets and fake tweets. With the word embeddings and deep learning getting popular in natural language processing, most of the fake information detection methods were based on embeddings of the content fed into a deep learning network to perform the classification (Ma et al., 2016) .",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "Castillo et al. (2011)",
"ref_id": "BIBREF2"
},
{
"start": 487,
"end": 505,
"text": "Kwon et al. (2013)",
"ref_id": "BIBREF13"
},
{
"start": 1157,
"end": 1174,
"text": "(Ma et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Traditional content-based methods analyse the credibility of the single microblog or claim in isolation, ignoring the high correlation between different tweets and events. However, Social Contextbased methods take different tweets in a user profile or an event to identify false information. Many studies detect false information by analyzing users' credibility or stances (Mohammad et al., 2017). Since this shared task is mainly focused on the content of the tweet to detect false information, we can identify our method as a contentbased false information identification approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The task is about predicting several binary properties of a tweet on COVID-19: whether it is harmful, whether it contains a verifiable claim, whether it may be of interest to the general public, whether it appears to contain false information, etc. (Shaar et al., 2021) . The data has been released for three languages; English, Arabic and Bulgarian 2 . Following are the binary properties that the participants should predict for a tweet. (Devlin et al., 2019) provide pretrained multilingual language models that support more than 100 languages which will solve the multilingual issues of these tasks (Ranasinghe et al., 2020; Ranasinghe and Zampieri, 2021b, 2020).",
"cite_spans": [
{
"start": 249,
"end": 269,
"text": "(Shaar et al., 2021)",
"ref_id": "BIBREF28"
},
{
"start": 440,
"end": 461,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Transformer models take an input of a sequence and outputs the representations of the sequence. There can be one or two segments in a sequence which are separated by a special token [SEP] (Devlin et al., 2019) . In this approach we considered a tweet as a sequence and no [SEP] token is used. Another special token [CLS] is used as the first token of the sequence which contains a special classification embedding. For text classification tasks, transformer models take the final hidden state h of the [CLS] token as the representation of the whole sequence (Sun et al., 2019) . A simple softmax classifier is added to the top of the transformer model to predict the probability of a class c as shown in Equation 1 where W is the task-specific parameter matrix. In the classification task all the parameters from transformer as well as W are fine tuned jointly by maximising the log-probability of the correct label. The architecture of transformer-based sequence classifier is shown in Figure 1 .",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 558,
"end": 576,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 987,
"end": 995,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(c|h) = sof tmax(W h)",
"eq_num": "(1)"
}
],
"section": "Data",
"sec_num": "3"
},
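To make Equation 1 concrete, the following is a minimal PyTorch sketch of the text classification architecture described above, using the Hugging Face transformers library. It is an illustration rather than the authors' released code; the checkpoint name, class count and variable names are placeholders.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TransformerSequenceClassifier(nn.Module):
    """Softmax classifier over the [CLS] representation (Equation 1)."""

    def __init__(self, model_name="bert-base-cased", num_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # W in Equation 1: a task-specific matrix mapping h to class logits.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = outputs.last_hidden_state[:, 0]            # final hidden state of [CLS]
        return torch.softmax(self.classifier(h), -1)   # p(c|h) = softmax(Wh)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TransformerSequenceClassifier()
batch = tokenizer(["Example tweet about COVID-19."], return_tensors="pt",
                  truncation=True, padding=True)
probs = model(batch["input_ids"], batch["attention_mask"])
```

During fine-tuning, the cross-entropy loss on these probabilities updates both W and all encoder parameters jointly, as described above.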
{
"text": "Figure 1: Text Classification Architecture",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We considered the whole task as seven different classification problems. We trained a transformer model for each label mentioned in Section 3. This gave us the flexibility to fine-tune the classification model in to the specific label rather than the whole task. Given the very unbalanced nature of the dataset, the transformer models tend to overfit and predict only the majority class. Therefore, for each label we took the number of instances in the training set for the minority class and undersampled the majority class to have the same number of instances as the minority class. We then divided this undersampled dataset into a training set and a validation set using 0.8:0.2 split. We mainly fine tuned the learning rate and number of epochs of the classification model manually to obtain the best results for the development set provided by organisers in each language. We obtained 1e \u22125 as the best value for learning rate and 3 as the best value for number of epochs for all the languages in all the labels. The other configurations of the transformer model were set to a constant value over all the languages in order to ensure consistency between the languages. We used a batch-size of eight, Adam optimiser (Kingma and Ba, 2014) and a linear learning rate warm-up over 10% of the training data. The models were trained using only training data. We performed early stopping if the evaluation loss did not improve over ten evaluation rounds. A summary of hyperparameters and their values used to obtain the reported results are mentioned in Appendix - Table 3 . The optimized hyperparameters are marked with \u2021 and their optimal values are reported. The rest of the hyperparameter values are kept as constants. We did not use any language specific preprocessing techniques in order to have a flexible solution between the languages. We used a Nvidia Tesla K80 GPU to train the models. All the experiments were run for five different random seeds and as the final result, we took the majority class predicted by these different random seeds as mention in Hettiarachchi and Ranasinghe (2020b) . We used the following pretrained transformer models for the experiments.",
"cite_spans": [
{
"start": 2064,
"end": 2100,
"text": "Hettiarachchi and Ranasinghe (2020b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1563,
"end": 1570,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
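The sketch below illustrates the per-label undersampling, the 0.8:0.2 split and the majority vote over random seeds described above. It assumes the tweets for one label are loaded into a pandas DataFrame with a binary label column; the column names, seed values and helper names are our own and not taken from the shared-task code.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hyperparameter values reported in the paper (other settings kept constant).
CONFIG = {"learning_rate": 1e-5, "num_epochs": 3, "batch_size": 8, "warmup_ratio": 0.1}

def undersample(df, label_col="label", seed=777):
    """Downsample the majority class to the size of the minority class."""
    minority_size = df[label_col].value_counts().min()
    balanced = (df.groupby(label_col, group_keys=False)
                  .apply(lambda g: g.sample(n=minority_size, random_state=seed)))
    return balanced.sample(frac=1, random_state=seed)  # shuffle the balanced set

def make_splits(df, seed=777):
    """Split the undersampled data into 80% training and 20% validation."""
    return train_test_split(df, test_size=0.2, random_state=seed)

def majority_vote(predictions_per_seed):
    """Final label per instance = most frequent prediction across the seeds."""
    votes = pd.DataFrame(predictions_per_seed)   # rows: seeds, columns: instances
    return votes.mode(axis=0).iloc[0].astype(int).tolist()
```

One such balanced dataset, split and ensemble is built independently for each of the seven labels and each language.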
{
"text": "bert-base-cased -Introduced in Devlin et al. (2019) , the model has been trained on a Wikipedia dump of English using Masked Language Modelling (MLM) objective. There are two variants in English BERT, base model and the large model. Considering the fact that we built seven different models for each label, we decided to use the base model considering the resources and time.",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "roberta-base -Introduced in Liu et al. (2019) , RoBERTa builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger minibatches and learning rates. RoBERTa has outperformed BERT in many NLP tasks and it motivated us to use RoBERTa in this research too. Again we only considered the base model. bert-nultilingual-cased -Introduced in Devlin et al. (2019) , the model has been trained on a Wikipedia dump of 104 languages using MLM objective. This model has shown good performance in variety of languages and tasks. Therefore, we used this model in Arabic and Bulgarian.",
"cite_spans": [
{
"start": 28,
"end": 45,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 401,
"end": 421,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "AraBERT Recently language-specific BERT based models have proven to be very efficient at language understanding. AraBERT (Antoun et al., 2020 ) is such a model built for Arabic with BERT using scraped Arabic news websites and two publicly available Arabic corpora; 1.5 billion words Arabic Corpus (El-khair, 2016) and OSIAN: the Open Source International Arabic News Corpus (Zeroual et al., 2019) . Since AraBERT has outperformed multilingual bert in many NLP tasks in Arabic (Antoun et al., 2020) we used this model for Arabic in this task. There are two version in AraBERT; AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses presegmented text where prefixes and suffixes were Table 2 : Macro F1 between the InfoMiner submission and human annotations for test set in all the languages. Best System is the results of the best model submitted for each language as reported by the task organisers (Shaar et al., 2021) . splitted using the Farasa Segmenter (Abdelali et al., 2016) .",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Antoun et al., 2020",
"ref_id": "BIBREF1"
},
{
"start": 374,
"end": 396,
"text": "(Zeroual et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 476,
"end": 497,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 917,
"end": 937,
"text": "(Shaar et al., 2021)",
"ref_id": "BIBREF28"
},
{
"start": 976,
"end": 999,
"text": "(Abdelali et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 700,
"end": 707,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
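As a sketch of how the candidate models listed above could be loaded, the snippet below maps each language to plausible Hugging Face checkpoint identifiers. The identifiers (in particular the AraBERT one) are assumptions and may differ from the exact checkpoints used for the official submissions.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Candidate checkpoints per language; identifiers are assumptions, not the
# exact ones used for the official runs.
CANDIDATES = {
    "english":   ["bert-base-cased", "roberta-base"],
    "arabic":    ["bert-base-multilingual-cased", "aubmindlab/bert-base-arabertv2"],
    "bulgarian": ["bert-base-multilingual-cased"],
}

def load_classifier(checkpoint, num_labels=2):
    """Load a tokenizer and a binary sequence classification model."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_labels)
    return tokenizer, model
```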
{
"text": "When it comes to selecting the best model for each language, highest F1 score out of the evaluated models was chosen. Due to the fact that our approach uses a single model for each label, our main goal was to achieve good F1 scores using light weight models. The limitation of available resources to train several models for all seven labels itself was a very challenging task to the team but we managed to evaluate several.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As depicted in Table 1 , for English, bert-basecased model performed better than roberta-base model. For Arabic, arabert-v2-tokenized performed better than the other two models we considered. For Bulgarian, with the limited time, we could only train bert-multilingual model, therefore, we submitted the predictions from that for Bulgarian.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As shown in Table 2 , our submission is very competitive with the best system submitted in each language and well above the random baseline. Our team was ranked 4 th in all the languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We have presented the system by InfoMiner team for NLP4IF-2021-Fighting the COVID-19 Infodemic. We have shown that multiple transformer models trained on different labels can be successfully applied to this task. Furthermore, we have shown that undersampling can be used to prevent the overfitting of the transformer models to the majority class in an unbalanced dataset like this. Overall, our approach is simple but can be considered as effective since it achieved 4 th place in the leader-board for all three languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "One limitation in our approach is that it requires maintaining seven transformer models for the seven binary properties of this task which can be costly in a practical scenario which also restricted us from experimenting with different transformer types due to the limited time and resources. Therefore, in future work, we are interested in remodeling the task as a multilabel classification problem, where a single transformer model can be used to predict all seven labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
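To make this future direction concrete, the following is a minimal sketch of the multi-label reformulation, where a single encoder predicts all seven binary properties with one sigmoid output per label. It is our own illustration under assumed names, not an implemented system.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiLabelInfodemicClassifier(nn.Module):
    """One shared encoder with seven binary outputs (one per tweet property)."""

    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.heads = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.heads(h)  # raw logits; apply a sigmoid per label at inference

# Training would use an independent binary loss per label:
criterion = nn.BCEWithLogitsLoss()
```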
{
"text": "The GitHub repository is publicly available on https: //github.com/tharindudr/infominer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset can be downloaded from https:// gitlab.com/NLP4IF/nlp4if-2021",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the shared task organizers for making this interesting dataset available. We further thank the anonymous reviewers for their insightful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "A summary of hyperparameters and their values used to obtain the reported results are mentioned in Table 3 . The optimised hyperparameters are marked with \u2021 and their optimal values are reported. The rest of the hyperparameter values are kept as constants. ",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Farasa: A fast and furious segmenter for Arabic",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {
"DOI": [
"10.18653/v1/N16-3003"
]
},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for Arabic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demon- strations, pages 11-16, San Diego, California. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AraBERT: Transformer-based model for Arabic language understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic lan- guage understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Pro- cessing Tools, with a Shared Task on Offensive Lan- guage Detection, pages 9-15, Marseille, France. Eu- ropean Language Resource Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Information credibility on twitter",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Mendoza",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web, WWW '11",
"volume": "",
"issue": "",
"pages": "675--684",
"other_ids": {
"DOI": [
"10.1145/1963405.1963500"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, page 675-684, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ibrahim Abu El-khair. 2016. 1.5 billion words arabic corpus",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04033"
]
},
"num": null,
"urls": [],
"raw_text": "Ibrahim Abu El-khair. 2016. 1.5 billion words arabic corpus. In arXiv preprint arXiv:1611.04033.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The future of false information detection on social media: New perspectives and trends",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yasan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Lina",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yunji",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Zhiwen",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Comput. Surv",
"volume": "53",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3393880"
]
},
"num": null,
"urls": [],
"raw_text": "Bin Guo, Yasan Ding, Lina Yao, Yunji Liang, and Zhi- wen Yu. 2020. The future of false information detec- tion on social media: New perspectives and trends. ACM Comput. Surv., 53(4).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emoji powered capsule network to detect type and target of offensive posts in social media",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "474--480",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_056"
]
},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2019. Emoji powered capsule network to detect type and target of offensive posts in social media. In Pro- ceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 474-480, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BRUMS at SemEval-2020 task 3: Contextualised embeddings for predicting the (graded) effect of context in word similarity",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "142--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2020a. BRUMS at SemEval-2020 task 3: Contextualised embeddings for predicting the (graded) effect of context in word similarity. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 142-149, Barcelona (online). International Commit- tee for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "InfoMiner at WNUT-2020 task 2: Transformerbased covid-19 informative tweet extraction",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy Usergenerated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "359--365",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.49"
]
},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2020b. InfoMiner at WNUT-2020 task 2: Transformer- based covid-19 informative tweet extraction. In Proceedings of the Sixth Workshop on Noisy User- generated Text (W-NUT 2020), pages 359-365, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "TransWiC at SemEval-2021 Task 2: Transformerbased Multilingual and Cross-lingual Word-in-Context Disambiguation",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fifteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2021. TransWiC at SemEval-2021 Task 2: Transformer- based Multilingual and Cross-lingual Word-in- Context Disambiguation. In Proceedings of the Fif- teenth Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Comparing approaches to Dravidian language identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Tharindu Ranasinghe, and Marcos Zampieri. 2021. Comparing approaches to Dravid- ian language identification. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Va- rieties and Dialects, pages 120-127, Kiyv, Ukraine. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "users join social every second (and other key stats to know",
"authors": [
{
"first": "",
"middle": [],
"last": "Simon",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Kemp. 2021. 15.5 users join so- cial every second (and other key stats to know). https://blog.hootsuite.com/ simon-kemp-social-media/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Prominent features of rumor propagation in online social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jung",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE 13th International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "1103--1108",
"other_ids": {
"DOI": [
"10.1109/ICDM.2013.61"
]
},
"num": null,
"urls": [],
"raw_text": "S. Kwon, M. Cha, K. Jung, W. Chen, and Y. Wang. 2013. Prominent features of rumor propagation in online social media. In 2013 IEEE 13th Inter- national Conference on Data Mining, pages 1103- 1108.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rumor detection by exploiting user credibility information, attention and multi-task learning",
"authors": [
{
"first": "Quanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1173--1179",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1113"
]
},
"num": null,
"urls": [],
"raw_text": "Quanzhi Li, Qiong Zhang, and Luo Si. 2019. Rumor detection by exploiting user credibility information, attention and multi-task learning. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1173-1179, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. In arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Detecting rumors from microblogs with recurrent neural networks",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Sejeong",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "Bernard",
"middle": [
"J"
],
"last": "Jansen",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Meeyoung",
"middle": [],
"last": "Cha",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16",
"volume": "",
"issue": "",
"pages": "3818--3824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, page 3818-3824. AAAI Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stance and sentiment in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Trans. Internet Technol",
"volume": "17",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3003433"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Trans. Internet Technol., 17(3).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Critical impact of social networks infodemic on defeating coronavirus covid-19 pandemic: Twitter-based study and research directions",
"authors": [
{
"first": "Azzam",
"middle": [],
"last": "Mourad",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Srour",
"suffix": ""
},
{
"first": "Haidar",
"middle": [],
"last": "Harmanai",
"suffix": ""
},
{
"first": "Cathia",
"middle": [],
"last": "Jenainati",
"suffix": ""
},
{
"first": "Mohamad",
"middle": [],
"last": "Arafeh",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Network and Service Management",
"volume": "17",
"issue": "4",
"pages": "2145--2155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Azzam Mourad, Ali Srour, Haidar Harmanai, Cathia Jenainati, and Mohamad Arafeh. 2020. Critical im- pact of social networks infodemic on defeating coro- navirus covid-19 pandemic: Twitter-based study and research directions. IEEE Transactions on Network and Service Management, 17(4):2145-2155.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Offensive language identification in Greek",
"authors": [
{
"first": "Zesis",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5113--5119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zesis Pitenis, Marcos Zampieri, and Tharindu Ranas- inghe. 2020. Offensive language identification in Greek. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 5113- 5119, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rumor has it: Identifying misinformation in microblogs",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Vahed Qazvinian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosengren",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dragomir",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1589--1599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian, Emily Rosengren, Dragomir R. Radev, and Qiaozhu Mei. 2011. Rumor has it: Iden- tifying misinformation in microblogs. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589-1599, Edinburgh, Scotland, UK. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language Identification in Code-switched YouTube Comments",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Tharindu Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Gupte",
"suffix": ""
},
{
"first": "Ifeoma",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nwogu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Sarthak Gupte, Marcos Zampieri, and Ifeoma Nwogu. 2020. WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language Identification in Code-switched YouTube Comments. In Proceedings of the 12th annual meeting of the Forum for Information Retrieval Evaluation.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BRUMS at SemEval-2020 task 12: Transformer based multilingual offensive language identification in social media",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1906--1915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Hansi Hettiarachchi. 2020. BRUMS at SemEval-2020 task 12: Transformer based multilingual offensive language identification in social media. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1906- 1915, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans",
"authors": [
{
"first": "Diptanu",
"middle": [],
"last": "Tharindu Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ororbia",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fifteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Diptanu Sarkar, Marcos Zampieri, and Alex Ororbia. 2021. WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans. In Pro- ceedings of the Fifteenth Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multilingual offensive language identification with cross-lingual embeddings",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5838--5844",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.470"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2020. Multilingual offensive language identification with cross-lingual embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5838-5844, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "MUDES: Multilingual Detection of Offensive Spans",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2021a. MUDES: Multilingual Detection of Offensive Spans. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics (Demonstrations).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multilingual Offensive Language Identification for Low-resource Languages",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2021b. Multilingual Offensive Language Identification for Low-resource Languages. ACM Transactions on Asian and Low-Resource Language Information Pro- cessing (TALLIP).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Marcos Zampieri, and Hansi Hettiarachchi. 2019. BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification. In Proceed- ings of the 11th annual meeting of the Forum for In- formation Retrieval Evaluation.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Findings of the NLP4IF-2021 shared task on fighting the COVID-19 infodemic and censorship detection",
"authors": [
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Martino",
"suffix": ""
},
{
"first": "Wajdi",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Feldman",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, NLP4IF@NAACL' 21, Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaden Shaar, Firoj Alam, Giovanni Da San Martino, Alex Nikolov, Wajdi Zaghouani, Preslav Nakov, and Anna Feldman. 2021. Findings of the NLP4IF- 2021 shared task on fighting the COVID-19 info- demic and censorship detection. In Proceedings of the Fourth Workshop on Natural Language Process- ing for Internet Freedom: Censorship, Disinforma- tion, and Propaganda, NLP4IF@NAACL' 21, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In Chinese Computational Linguistics, pages 194- 206, Cham. Springer International Publishing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "End-to-end open-domain question answering with BERTserini",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Aileen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luchen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4013"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 72-77, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "OSIAN: Open source international Arabic news corpus -preparation and integration into the CLARIN-infrastructure",
"authors": [
{
"first": "Imad",
"middle": [],
"last": "Zeroual",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Goldhahn",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Eckart",
"suffix": ""
},
{
"first": "Abdelhak",
"middle": [],
"last": "Lakhouaja",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "175--182",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4619"
]
},
"num": null,
"urls": [],
"raw_text": "Imad Zeroual, Dirk Goldhahn, Thomas Eckart, and Ab- delhak Lakhouaja. 2019. OSIAN: Open source inter- national Arabic news corpus -preparation and inte- gration into the CLARIN-infrastructure. In Proceed- ings of the Fourth Arabic Natural Language Process- ing Workshop, pages 175-182, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"text": "Verifiable Factual Claim: Does the tweet contain a verifiable factual claim? II False Information: To what extent does the tweet appear to contain false information? III Interest to General Public: Will the tweet have an effect on or be of interest to the general public? IV Harmfulness: To what extent is the tweet harmful to the society? V Need of Verification: Do you think that a professional fact-checker should verify the claim in the tweet?",
"type_str": "table",
"html": null,
"content": "<table><tr><td>VI Harmful to Society: Is the tweet harmful for</td></tr><tr><td>the society?</td></tr><tr><td>VII Require attention: Do you think that this</td></tr><tr><td>tweet should get the attention of government</td></tr><tr><td>entities?</td></tr><tr><td>4 Architecture</td></tr><tr><td>The main motivation for our architecture is the re-</td></tr><tr><td>cent success that the transformer models had in vari-</td></tr><tr><td>ous natural language processing tasks like sequence</td></tr><tr><td>classification (Ranasinghe and Hettiarachchi, 2020;</td></tr><tr><td>Ranasinghe et al., 2019; Pitenis et al., 2020), token</td></tr><tr><td>classification (Ranasinghe and Zampieri, 2021a;</td></tr><tr><td>Ranasinghe et al., 2021), language detection (Jauhi-</td></tr><tr><td>ainen et al., 2021), word context prediction (Het-</td></tr><tr><td>tiarachchi and Ranasinghe, 2020a, 2021) question</td></tr><tr><td>answering (Yang et al., 2019) etc. Apart from pro-</td></tr><tr><td>viding strong results compared to RNN based ar-</td></tr><tr><td>chitectures (Hettiarachchi and Ranasinghe, 2019;</td></tr><tr><td>Ranasinghe et al., 2019), transformer models like</td></tr><tr><td>BERT</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "Macro F1 between the algorithm predictions and human annotations for development set in all the languages. Results are sorted from Mean F1 score for each language.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Model</td><td>I</td><td>II</td><td>III</td><td>IV</td><td>V</td><td>VI</td><td>VII</td><td>Mean</td></tr><tr><td/><td>Best System</td><td colspan=\"8\">0.835 0.913 0.978 0.873 0.882 0.908 0.889 0.897</td></tr><tr><td>English</td><td>InfoMiner</td><td colspan=\"8\">0.819 0.886 0.946 0.841 0.803 0.884 0.867 0.864</td></tr><tr><td/><td colspan=\"9\">Random Baseline 0.552 0.480 0.457 0.473 0.423 0.563 0.526 0.496</td></tr><tr><td/><td>Best System</td><td colspan=\"8\">0.843 0.762 0.890 0.799 0.596 0.912 0.663 0.781</td></tr><tr><td>Arabic</td><td>InfoMiner</td><td colspan=\"8\">0.852 0.704 0.774 0.743 0.593 0.698 0.588 0.707</td></tr><tr><td/><td colspan=\"9\">Random Baseline 0.510 0.444 0.487 0.442 0.476 0.584 0.533 0.496</td></tr><tr><td/><td>Best System</td><td colspan=\"8\">0.887 0.955 0.980 0.834 0.819 0.678 0.706 0.837</td></tr><tr><td>Bulgarian</td><td>InfoMiner</td><td colspan=\"8\">0.786 0.749 0.419 0.599 0.556 0.303 0.631 0.578</td></tr><tr><td/><td colspan=\"9\">Random Baseline 0.594 0.502 0.470 0.480 0.399 0.498 0.528 0.496</td></tr></table>"
}
}
}
}